WO2021260910A1 - AI integration system, AI integration device, and AI integration program - Google Patents


Info

Publication number
WO2021260910A1
Authority
WO
WIPO (PCT)
Prior art keywords
control, data, vehicle, learning, unit
Application number
PCT/JP2020/025175
Other languages
English (en), Japanese (ja)
Inventors
紀俊 川口, 一真 千々和, 匠 星
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to JP2022532199A (JP7414995B2)
Priority to PCT/JP2020/025175 (WO2021260910A1)
Publication of WO2021260910A1 publication Critical patent/WO2021260910A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Definitions

  • The present disclosure relates to an AI integrated system that integrates a plurality of AIs (Artificial Intelligences) performing control based on various input information, and to an AI integrated device and an AI integrated program used therein.
  • AI is installed in various in-vehicle devices; for example, AI is used to control an in-vehicle camera 8 that detects human bodies and to control a steering device.
  • For example, Patent Document 1 discloses a technique for controlling, from a higher-level device, a plurality of lower-level devices equipped with AI.
  • This technique makes it possible for a user to easily construct, via a GUI, a neural network mounted on a plurality of devices, and to control, from a higher-level device, a plurality of devices equipped with neural networks.
  • The present disclosure solves the above-mentioned problems, and its object is to enable AIs that perform various controls to be controlled appropriately after being integrated into one system.
  • To that end, the AI integrated system includes a sensor processing unit that receives, via at least one of a sensor and an external network, detection information indicating characteristics of the environment in which a controlled device operates, and that holds a plurality of trained models for generating control signals to control the controlled device; and a control unit that selects one of the trained models based on at least one of the detection information and the generated control signal, the controlled device being controlled using the selected trained model.
  • Another AI integrated system includes a plurality of sensor processing units, each of which receives, via at least one of a sensor and a communication network, detection information indicating characteristics of the environment in which the corresponding controlled device operates, and controls that device using a trained model that generates control signals for it; and a learning unit that causes the trained model of each of the plurality of sensor processing units to undergo additional learning.
  • The AI integrated device includes a control unit that selects, from a plurality of trained models that receive, via at least one of a sensor and an external network, detection information indicating characteristics of the environment in which a controlled device operates and generate control signals for controlling the controlled device, one trained model based on at least one of the detection information and the generated control signal.
  • Another AI integrated device includes a control unit that preferentially performs additional learning on at least one of the trained models that, corresponding to each of a plurality of controlled devices, receive detection information indicating characteristics of the operating environment via at least one of a sensor and a communication network and generate control signals for controlling those devices.
  • The AI integrated program selects, from a plurality of trained models that receive, via at least one of a sensor and an external network, detection information indicating characteristics of the environment in which a controlled device operates and generate control signals for controlling the controlled device, one trained model suited to the environment, based on at least one of the detection information and the generated control signal, and controls the controlled device with it.
  • Another AI integrated program preferentially performs additional learning on at least one of the trained models that, corresponding to each of a plurality of controlled devices, receive detection information indicating characteristics of the operating environment via at least one of a sensor and a communication network and generate control signals for controlling those devices.
  • According to the present disclosure, the effect that the controlled devices are controlled more appropriately than before is obtained.
  • FIG. 1 is a system configuration diagram for explaining an AI integrated system. FIG. 2 is a system configuration diagram showing the various AIs integrated into that AI integrated system. FIG. 3 is a system configuration diagram for explaining another AI integrated system. FIG. 4 is a system configuration diagram showing the various AIs integrated into the AI integrated system shown in FIG. 3. FIG. 5 is a system configuration diagram for explaining the control unit of the AI integrated system shown in FIG. 3. FIG. 6 is a system configuration diagram for explaining the configuration of the AI integrated system shown in FIG. 3. FIG. 7 is a schematic diagram for explaining the structure of a sensor processing unit. FIG. 8 is a diagram for explaining the learning performed by an AI and the trained model acquired by that learning.
  • Embodiment 1. Hereinafter, AI (Artificial Intelligence) is referred to as a "learning model", a "learned model", or a "learner".
  • The "learner" refers to a control device, composed of software or an LSI, that can determine output information with respect to input information by using a known learning method.
  • The "learning model" refers to the software or LSI at the stage before the correspondence between the input information and the output information constituting the above-mentioned "learner" has been determined, and the "learned model" refers to the software or LSI at the stage where that correspondence has been determined; however, configurations other than these may be used as long as intelligence is acquired through a learning process.
  • FIG. 1 is a system configuration diagram for explaining the AI integrated system 1 in the first embodiment of the present disclosure.
  • the AI integrated system 1 as used herein refers to a system that performs processing or operation according to an application by integrating and operating a plurality of devices controlled by using AI.
  • FIG. 1 shows an example of the AI integrated system 1.
  • the AI integrated system 1 is a system in which the industrial robots M1 and M2 and the conveyor device M3 are integrated.
  • The industrial robot M1 includes an arm unit Arm1, a capture unit Hand1, and an imaging unit Camera1; it identifies a plurality of parts Parts1 carried by a conveyor while imaging them with the imaging unit Camera1, and drives the arm unit Arm1 and the capture unit Hand1 to capture the parts Parts1.
  • Similarly, the industrial robot M2 includes an arm unit Arm2, a capture unit Hand2, and an imaging unit Camera2; it identifies a plurality of parts Parts2 carried by the conveyor while imaging them with the imaging unit Camera2, and drives the arm unit Arm2 and the capture unit Hand2 to capture the parts Parts2.
  • the conveyor device M3 includes a conveyor Conveyor and a switching unit Switch.
  • The conveyor Conveyor carries a plurality of parts Parts along a single passage that branches into two passages partway along its length, and the switching unit Switch, located at the branch of the conveyor Conveyor, directs the carried parts toward either the industrial robot M1 or the industrial robot M2 under predetermined conditions.
  • FIG. 2 is a system configuration diagram showing various AIs integrated into the AI integrated system 1.
  • the arm node Arm1 of the industrial robot M1 is controlled by the AI AI1a
  • the capture unit Hand1 is controlled by the AI AI1h
  • the imaging unit Camera1 is controlled by the AI AI1c.
  • the arm node Arm2 of the industrial robot M2 is controlled by AI2a, which is AI
  • the capture unit Hand2 is controlled by AI2h, which is AI
  • the imaging unit Camera2 is controlled by AI2c, which is AI.
  • the conveyor device M3 is controlled by AI3 which is AI.
  • AI1a, AI1h, and AI1c of the industrial robot M1, AI2a, AI2h, and AI2c of the industrial robot M2, and AI3 of the conveyor device M3 perform control so that the industrial robots M1 and M2 and the conveyor device M3 operate in coordination with one another.
  • However, these AIs are each trained individually, before integration, on the control of the industrial robot M1 or M2 or the conveyor device M3 on which they are mounted; they are not trained on control in the environment where each device actually operates after being integrated as the AI integrated system 1. There is therefore a concern that each device is not guaranteed to operate properly as a system.
  • Hence, an AI that causes each of these AIs to perform learning for control suitable for the system will be described.
  • In FIG. 2(b), the AI that causes such a plurality of AIs to perform learning is shown as AIsv.
  • FIG. 3 is a system configuration diagram for explaining another AI integrated system 1.
  • FIG. 3 shows the configuration of the vehicle 2 as the AI integrated system 1.
  • Here, the vehicle 2 is an automobile (more specifically, a completed vehicle).
  • the AI integrated system 1 is also simply referred to as a “system”.
  • The vehicle body 3, which is the body of the vehicle 2, is equipped with tires 4, doors 5, headlights 6, electronic mirrors 7, an in-vehicle camera 8, a radar 9, and a transmission 10.
  • the in-vehicle camera 8 and the radar 9 are devices for detecting an object around the vehicle 2.
  • the in-vehicle camera 8 is an image pickup device.
  • The radar 9 uses electromagnetic waves; for example, it is a laser radar such as LiDAR (light detection and ranging) or a millimeter-wave radar.
  • the drive device 11 is a device that generates a driving force for driving the vehicle 2, such as an engine or a motor.
  • the braking device 12 is a device or a deceleration mechanism that generates a braking force for decelerating or stopping the vehicle 2, such as a mechanical brake or a power regenerative brake.
  • the steering device 13 is a steering device for changing the traveling direction of the vehicle 2.
  • the shock absorber 14 is a suspension device that generates a damping force for relaxing and cushioning stresses such as vibration and inertial force generated in the vehicle 2.
  • the driving device 11 and the braking device 12 apply a driving force and a braking force to the tire 4. Further, the steering device 13 gives a drag force toward the traveling direction by changing the direction of the tire 4. Further, the shock absorber 14 applies a damping force between the tire 4 and the vehicle body 3 in order to relieve the stress generated between the vehicle 2 and the road surface.
  • The UI device 15 is a device for the UI (User Interface), such as an instrument panel including a meter unit, a car navigation system, or an information terminal device, that displays and notifies the occupants of the situation around the vehicle 2 and enables operation of the various devices mounted on the vehicle 2.
  • the door 5, the headlight 6, and the electronic mirror 7 operate by receiving an operation from the passenger via the UI device 15.
  • The recognition device 16 detects and recognizes objects inside and outside the vehicle 2, and acquires information, by controlling the in-vehicle camera 8, the radar 9, a GPS (Global Positioning System) device (not shown), and the like, individually or in conjunction with one another.
  • the recognition device 16 may control the headlight 6 to improve the accuracy of detection and recognition of an object or the like by imaging with the vehicle-mounted camera 8.
  • The transmission device 17 controls the transmission 10, which forms a path related to drive transmission, such as a gearbox and a differential.
  • the above-mentioned drive device 11, braking device 12, steering device 13, shock absorber 14, UI device 15, recognition device 16, and transmission device 17 may be mounted as the same device.
  • other devices mounted on the vehicle 2 may be included in the AI integrated system 1.
  • FIG. 4 is a system configuration diagram showing various AIs integrated into the AI integrated system 1 shown in FIG.
  • the AI integrated system 1 integrates an AI for controlling each of the drive device 11, the braking device 12, the steering device 13, the shock absorber 14, the UI device 15, the recognition device 16, and the transmission device 17.
  • Each AI is a learner Lm using machine learning, and is, for example, a trained model trained using a neural network.
  • Each AI is composed of software or LSI.
  • The AI that controls the drive device 11 is the drive control unit 21, the AI that controls the braking device 12 is the braking control unit 22, the AI that controls the steering device 13 is the steering control unit 23, and the AI that controls the shock absorber 14 is the buffer control unit 24.
  • the AI that controls the UI device 15 is the UI control unit 25
  • the AI that controls the recognition device 16 is the recognition control unit 26
  • the AI that controls the transmission device 17 is the transmission control unit 27.
  • FIG. 5 is a system configuration diagram for explaining the control unit 30 of the AI integrated system 1 shown in FIG.
  • the AI integrated system 1 includes a control unit 30.
  • the control unit 30 is also treated as an AI integrated device.
  • The control unit 30 also handles the AI integrated program, the AI integrated circuit, and the AI integrated data.
  • The control unit 30 causes the drive control unit 21, the braking control unit 22, the steering control unit 23, the buffer control unit 24, the UI control unit 25, the recognition control unit 26, and the transmission control unit 27 to perform integrated learning. Further, the control unit 30 estimates the driving scene Ds, which is the environment in which the vehicle 2 travels, evaluates the control of each AI integrated into the AI integrated system 1, and switches the trained model that each AI uses for control. Details of the processing in the control unit 30 will be described later.
  • In this embodiment, the driving scene Ds is treated as an example of the environment in which the vehicle 2 is placed; however, the environment here is not limited to states or situations related to the driving of the vehicle 2, and also includes those related to the control of in-vehicle devices, such as the behavior, physical condition, and safety status of the occupants, and the state of aging deterioration of the vehicle 2.
  • FIG. 6 is a system configuration diagram for explaining the configuration of the AI integrated system 1 shown in FIG.
  • the AI integrated system 1 includes a control unit 30 and a plurality of sensor processing units 31A, 31B, ..., 31N.
  • the control unit 30 connects to a plurality of sensor processing units 31 to transmit and receive various signals.
  • To the AI integrated system 1 are connected various sensors 32A, 32B, ..., 32N mounted on the vehicle 2, a communication device 41 connected to the in-vehicle network 40, and various in-vehicle devices Vd mounted on the vehicle 2. These sensors 32, the communication device 41, and the in-vehicle devices Vd are connected via the signal transmission path 42.
  • the in-vehicle network 40 is connected to vehicle-to-vehicle communication, ground-to-vehicle communication, and a wide-area communication network or an Internet network laid as social infrastructure.
  • The sensor processing unit 31 connected to the sensor 32A is referred to as the sensor processing unit 31A, and the sensor processing unit 31 connected to the sensor 32B is referred to as the sensor processing unit 31B. The correspondence between sensors 32 and sensor processing units 31 does not have to be one-to-one; it may be one-to-many, many-to-one, or many-to-many.
  • The various sensors 32A, 32B, ..., 32N are devices mounted on the vehicle 2, and detect and output various information related to the vehicle 2, such as traveling speed information, load information applied to the suspension, and temperature information.
  • Some of the plurality of sensors 32 detect, for at least one of the vehicle body 3 and the individual wheels, dynamic information related to driving control, such as velocity and acceleration, angles and angular velocities including roll, pitch, and yaw, vibration amplitude and frequency, and positive and negative torque (that is, driving force and braking force), as well as static information related to driving conditions, such as temperature, humidity, illuminance, and weight, in at least one of the interior of the vehicle body 3, the exterior of the vehicle body 3, and the vehicle body 3 itself, and output the detected information to the corresponding sensor processing unit 31.
  • Others of the sensors 32 acquire information related to the recognition of surrounding conditions, such as imaging by the in-vehicle camera 8, object detection by the radar 9, position detection using GPS, and data communication using the communication device, and output the acquired information to the corresponding sensor processing unit 31.
  • The vehicle 2, as a system 1 integrating various devices, recognizes the state of the own vehicle 2 based on the information obtained from the various sensors 32 and the communication described above, and associates the action Act to be taken with the recognized state St. By handling the data set (action Act, state St), it interacts with the environment in which the own vehicle 2 is placed.
  • the vehicle 2 in which all the various devices are integrated is referred to as a completed vehicle.
  • the state St of the own vehicle 2 is derived from various information obtained via the sensor 32 and communication described above, for example, position information, distance information to the target, speed information, and the like.
  • the information obtained from various sensors 32 and communication is collectively referred to as detection information Si.
  • the action Act refers to the control of various in-vehicle devices Vd mounted on the vehicle 2 and the notification or provision of various information to the passengers.
  • the sensor processing unit 31 is connected to the vehicle-mounted device Vd to be controlled so as to be capable of bidirectional signal transmission.
  • the sensor processing unit 31 is mounted on the vehicle-mounted device Vd to be controlled or another device connected to the signal transmission path 42.
  • The sensor processing unit 31 generates a control signal Cs based on the detection information Si input from the various sensors 32 described above and via communication, and transmits the generated control signal Cs to the in-vehicle device Vd to be controlled.
  • the sensor processing unit 31 sequentially outputs the input detection information Si and the control signal Cs generated for controlling the in-vehicle device Vd to the control unit 30.
  • The control unit 30 includes a storage unit 30m, and can hold in it, temporarily or until erased, various information such as input information, derived information, and setting information.
  • the control unit 30 holds the related information Ri associated with the detection information Si, the control signal Cs, and the sensor processing unit 31 of the output source.
  • Information indicating the sensor processing unit 31 of the output source is attached to the detection information Si and the control signal Cs input to the control unit 30. This is achieved either by the sensor processing unit 31 attaching output-source information when it outputs the detection information Si and the control signal Cs, or by a device on the signal transmission path 42 attaching information on the output-source sensor processing unit 31 so as to associate the detection information Si with the control signal Cs.
  • Thus, the above-mentioned data set (action Act, state St) can be treated as a data set (control signal Cs, detection information Si) in which the detection information Si and the control signal Cs are associated with each other; a sketch of such a record follows.
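  • For illustration only, here is a minimal Python sketch of how such a data set (control signal Cs, detection information Si), tagged with its output source, might be represented; the class and field names (ControlRecord, source_unit) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Mapping

@dataclass(frozen=True)
class ControlRecord:
    """One (detection information Si, control signal Cs) pair, i.e. (state St, action Act)."""
    source_unit: str                      # output-source sensor processing unit, e.g. "31A"
    detection_info: Mapping[str, float]   # Si, e.g. {"speed_kmh": 52.0}
    control_signal: Mapping[str, float]   # Cs, e.g. {"drive_force": 0.35}

# The control unit 30 would keep such records (together with the related
# information Ri) in its storage unit 30m, keyed by the output source.
record = ControlRecord("31A", {"speed_kmh": 52.0}, {"drive_force": 0.35})
```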
  • a neural network is treated as an example of AI implemented by the sensor processing unit 31.
  • The sensor processing unit 31 holds a plurality of pieces of neural network data and, by switching among them and setting one, generates the control signal Cs corresponding to the detection information Si to control the in-vehicle device Vd under its control.
  • the neural network data will be referred to as NN data Nd.
  • The sensor processing unit 31A holds the learner Lm, the NN data NdA1, NdA2, ..., NdAn, and the teacher data TdA. Similarly, the sensor processing unit 31B holds the learner Lm, the NN data NdB1, NdB2, ..., NdBn, and the teacher data TdB, and so on up to the sensor processing unit 31N.
  • Each of the plurality of NN data Nd held by the sensor processing unit 31 is a trained model. A trained model is obtained by training a learning model, constructed as a neural network, using data sets of input detection information Si and output information related to control (that is, the control signal Cs), namely the teacher data Td.
  • the teacher data Td is also called correct answer data.
  • the learning of the plurality of NN data Nd held by the sensor processing unit 31 is performed prior to the integration of the sensor processing unit 31 into the vehicle 2.
  • the learning model of AI may be constructed by a machine learning method other than the neural network, for example, reinforcement learning.
  • FIG. 7 is a schematic diagram for explaining the configuration of the sensor processing unit 31.
  • The sensor processing unit 31 includes an AI (that is, a learner Lm), a storage device (not shown) for holding the plurality of NN data Nd, a setting unit (not shown) for setting one of the plurality of NN data Nd in the AI, and a communication unit (not shown) for communicating with other devices inside and outside the vehicle.
  • the plurality of NN data Nd included in the sensor processing unit 31 is a learned model in which learning with the learner Lm using the teacher data group has converged before the sensor processing unit 31 is integrated into the vehicle 2.
  • The AI of the sensor processing unit 31A is realized by a neural network and has a perceptron.
  • Each of the plurality of NN data NdA1, NdA2, ..., NdAn is parameter data for configuring the network of the perceptron, and the setting unit sets one of the NN data NdA1, NdA2, ..., NdAn in the perceptron of the AI. That is, the configuration of the perceptron can be changed by the plurality of NN data NdA1, NdA2, ..., NdAn.
  • Thus, depending on the set NN data Nd, the AI of the sensor processing unit 31A can generate different output information (that is, control signals Cs) for the same input information (that is, detection information Si), as the following sketch illustrates.
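  • The parameter-swapping idea (one fixed network structure, multiple NN data Nd as alternative parameter sets) can be sketched in PyTorch as follows; the network shape, file names, and input features are illustrative assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

# A fixed perceptron structure; each NN data Nd is a complete parameter set for it.
perceptron = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Hypothetical NN data for two driving scenes, stored as state dicts on disk.
nn_data = {
    "NdA1_urban":   torch.load("nd_a1_urban.pt"),    # assumed file
    "NdA2_highway": torch.load("nd_a2_highway.pt"),  # assumed file
}

def set_nn_data(name: str) -> None:
    """Setting unit: load one NN data Nd into the AI's perceptron."""
    perceptron.load_state_dict(nn_data[name])

set_nn_data("NdA1_urban")
si = torch.tensor([[52.0, 0.1, 0.0, 0.3]])  # detection information Si (assumed features)
cs = perceptron(si)                         # control signal Cs; differs per loaded Nd
```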
  • By setting in its AI the NN data Nd adapted to the environment in which the vehicle 2 is placed, each sensor processing unit 31 can generate highly adaptive (in other words, environment-interactive) control signals Cs based on the detection information Si input according to that environment, and thereby control the in-vehicle device Vd under its control.
  • In the present embodiment, the configuration of the perceptron, that is, the control characteristic of the AI, is changed by changing the parameter data. However, each sensor processing unit 31 may have a perceptron of a different configuration, the control characteristics of the AI may be changed in another way, or the perceptron may be provided in the learner Lm. That is, the arrangement of the perceptron may be designed arbitrarily.
  • FIG. 8 is a diagram for explaining the learning performed by AI and the trained model acquired by the learning.
  • FIG. 8 deals with, for example, a learning model for controlling the braking device 12.
  • The information B1 input in the learning process of the learning model includes the traveling speed of the vehicle 2, the traveling point (or traveling position), the target point (or target distance or target position), the target speed, and the like.
  • The input information B2 serving as the teacher data Td in the learning process includes the traveling speed of the vehicle 2, the traveling point (or traveling position), the target point (or target distance or target position), the target speed, and the braking force (or braking amount or braking time), and the like.
  • The learning model trained using the teacher data Td can estimate an appropriate braking force (or braking amount or braking time) for the input information B1 and output it as the output information C, as sketched below.
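  • As a hedged sketch of this supervised learning, the following PyTorch snippet maps the input information B1 (speed, position, target position, target speed) to the output information C (braking force) using teacher data Td; the data values, network size, and hyperparameters are placeholders, not values from the patent.

```python
import torch
import torch.nn as nn

# Teacher data Td: (speed, position, target_position, target_speed) -> braking force.
# The rows are placeholders standing in for measured or simulated data.
B = torch.tensor([[80.0, 0.0, 200.0, 0.0],
                  [50.0, 0.0, 120.0, 10.0]])
C = torch.tensor([[0.9], [0.5]])            # appropriate braking force (normalized)

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(1000):                       # train until the loss converges
    opt.zero_grad()
    loss = loss_fn(model(B), C)
    loss.backward()
    opt.step()

# The converged parameters are one NN data Nd; repeating this per driving scene Ds
# with scene-specific teacher data yields NdB1, NdB2, and so on.
torch.save(model.state_dict(), "nd_b1_urban.pt")
```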
  • The data acquired for learning may be various information related to the braking control Bc obtained in an actually traveling vehicle 2.
  • Various information related to the braking control Bc includes the running state (for example, the load applied to each tire 4, the inclination of the vehicle body 3, and the inertial moment acting on the vehicle body 3) obtained or derived by using the sensor 32.
  • data derived by constructing a model of the vehicle 2 and the environment and performing simulation analysis on the constructed model may be handled.
  • Unique information of the vehicle 2 (for example, the weight of the vehicle body 3, the center of gravity at rest, two-wheel or four-wheel drive, and front-wheel or rear-wheel drive) may also be handled.
  • By using the above-mentioned input information B1 and B2 and the output information C, the AI of the sensor processing unit 31 whose control target is the braking device 12 can appropriately determine the braking control Bc in an operation such as bringing the vehicle to a stop.
  • The sensor processing unit 31 holds NN data Nd as a plurality of trained models corresponding to various driving scenes Ds.
  • the various driving scenes Ds are distinguished by information on factors that characterize the various environments in which the finished vehicle travels, such as traffic regulations, driving locations, road conditions, climate, and temperature.
  • the teacher data Td used in the learning process is prepared in advance to include information on the elements that characterize this environment.
  • the driving scene Ds will be described later.
  • the in-vehicle device Vd mounted on the vehicle 2 can adjust the operation of its own device to the environment by the sensor processing unit 31 switching and applying these NN data Nd.
  • The AI of the sensor processing unit 31 is used, for example, for braking control of the braking device 12 in an automobile equipped with an automatic driving function or a driving support function applicable to various environments. This makes it possible for the completed-vehicle manufacturer to secure the desired reliability for the in-vehicle devices Vd mounted on the completed vehicle.
  • Environments in which the vehicle 2 travels as a completed vehicle include, for example, urban or mountainous areas traveled at relatively low speed, highways traveled at relatively high speed, suburban traffic networks with loose traffic restrictions, paved roads on which the grip of the tires 4 is effective, and unpaved roads on which the grip of the tires 4 is difficult to use.
  • Regarding the detection information Si in these environments: in urban areas, for example, there are many traveling lanes, signs, traffic lights, other vehicles, pedestrians, bicycles, and obstacles such as buildings that obstruct visibility of the surroundings. In mountainous areas, there are frequent and severe undulations, slopes, and curves in the traveling lane. On expressways, positional relationships with surrounding traveling vehicles, changes between light and darkness when entering and exiting tunnels, lane changes including entering and leaving the expressway, and traffic restrictions and signs are prominent.
  • Examples of weather and climatic conditions include heavy rain, in which it is difficult to recognize information from images captured by the in-vehicle camera 8; strong wind, in which it is difficult to drive according to steering control; and conditions that worsen the road surface, such as puddles, freezing, and deep snow.
  • That is, the detection information Si in the environment in which the vehicle 2 travels is affected, in a complex manner, by various traffic regulations, surrounding and road-surface conditions, and various weather and climatic conditions, as well as by combinations thereof.
  • Driving scenes Ds subject to such multiple influences, for example a general urban road in fine weather, an unpaved suburban road in rainy weather, or a highway in heavy snowfall, are composed by rearranging and combining the elements that characterize the environment.
  • the sensor processing unit 31 inputs these plurality of elements as detection information Si.
  • the plurality of elements that characterize the environment may be information derived by simulation, or may be information acquired when the actual vehicle 2 is driven in the actual environment.
  • the AI of the sensor processing unit 31 that controls various in-vehicle devices Vd according to the environment in which the vehicle 2 is placed performs learning before being integrated into the vehicle 2.
  • a plurality of teacher data Td (hereinafter referred to as teacher data group) are prepared in advance for each driving scene Ds expressing an environment including various features.
  • a trained model is acquired by training a training model using the teacher data group for each of these running scenes Ds.
  • the trained model corresponding to one running scene Ds is one NN data Nd, and the sensor processing unit 31 holds a plurality of NN data Nd.
  • the sensor processing unit 31 performs control using the NN data Nd for which the switching instruction Sw has been given from the control unit 30 based on the detection information Si to be input.
  • the switching instruction Sw of the control unit 30 will be described later.
  • The switching of the NN data Nd in the sensor processing unit 31 need not take place immediately after the switching instruction Sw is received from the control unit 30; it may be performed at a timing at which the control of the sensor processing unit 31 or the operation of the in-vehicle device Vd is stable, as sketched below.
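  • A minimal sketch of such deferred switching, reusing the perceptron and NN-data dictionary from the earlier sketch; the class name, the stability flag, and the method names are hypothetical.

```python
class SensorProcessingUnit:
    """Sensor processing unit 31 with deferred application of the switching instruction Sw."""

    def __init__(self, perceptron, nn_data):
        self.perceptron = perceptron
        self.nn_data = nn_data          # e.g. {"NdA1": state_dict, "NdA2": state_dict}
        self.pending_switch = None      # an Sw received but not yet applied

    def on_switch_instruction(self, name: str) -> None:
        # Record the instruction Sw; it need not be applied immediately.
        self.pending_switch = name

    def step(self, si, is_stable: bool):
        # Apply the pending NN data Nd only when control, or the operation of the
        # in-vehicle device Vd, is judged stable.
        if self.pending_switch is not None and is_stable:
            self.perceptron.load_state_dict(self.nn_data[self.pending_switch])
            self.pending_switch = None
        return self.perceptron(si)      # generate the control signal Cs from Si
```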
  • For example, in an urban area, the rate of change of speed is moderated, and the detection targets of the in-vehicle camera 8 and the in-vehicle radar 9 are aimed at relatively small objects (for example, pedestrians, bicycles, and traffic signals), for which the sensitivity is particularly increased. In other scenes, the sensitivity of the in-vehicle camera 8 and the in-vehicle radar 9 is particularly increased toward relatively large objects (for example, cliffs and slopes along curves). The in-vehicle devices Vd can be controlled in this way.
  • Further, even for the same maneuver, for example making a right turn, the sensor processing unit 31 can generate and output control signals Cs that differ, depending on the traveling environment, in the amount of control, the period (or time), the rate (or speed) of change, and the priority relative to other controls.
  • The control unit 30 stores in advance information for estimating the various driving scenes Ds, information indicating the association between the plurality of NN data Nd held by each sensor processing unit 31 and the driving scenes Ds, and the related information Ri associating the detection information Si and the control signal Cs with each sensor processing unit 31. This related information Ri is generated based on the information related to the learning performed for each AI before integration into the system 1, that is, the data sets of the input information B1 and B2 and the output information C shown in FIG. 8(a). Further, the control unit 30 holds information on the NN data Nd currently set in each sensor processing unit 31.
  • the control unit 30 includes an evaluation unit 30e and a selection unit 30s.
  • the evaluation unit 30e of the control unit 30 will be described.
  • Based on the input detection information Si, the evaluation unit 30e extracts or derives elements that characterize the environment in which the vehicle 2 is placed, and evaluates whether control by the NN data Nd set in each sensor processing unit 31 is appropriate.
  • The evaluation unit 30e evaluates, using the above-mentioned related information Ri, the data sets of detection information Si and control signals Cs output by each sensor processing unit 31 after integration into the vehicle 2. That is, the evaluation unit 30e evaluates the control of the NN data Nd set in each AI using the data sets of the state St (that is, the input detection information Si) and the action Act (that is, the output control signal Cs) obtained when the vehicle 2 interacts with the environment.
  • FIG. 9 is a schematic diagram for explaining a process in which the evaluation unit 30e of the control unit 30 evaluates the NN data Nd based on the data set of the detection information Si and the control signal Cs.
  • the NN data NdA1 set in the sensor processing unit 31A is a learned model acquired for controlling the drive device 11 (for example, an engine or a motor) in traveling in an urban area.
  • the NN data NdA2 set in the sensor processing unit 31A is a learned model acquired for controlling the drive device 11 (for example, an engine or a motor) in traveling on a highway.
  • the NN data NdB1 set in the sensor processing unit 31B is a learned model acquired for controlling the braking device 12 (for example, the braking device) in traveling in an urban area. Further, the NN data NdB2 set in the sensor processing unit 31B is a learned model acquired for controlling the braking device 12 (for example, the braking device) in traveling on a highway.
  • the NN data NdC1 set in the sensor processing unit 31C is a learned model acquired for controlling the steering device 13 (for example, the steering device) in traveling in an urban area. Further, the NN data NdC2 set in the sensor processing unit 31C is a learned model acquired for controlling the steering device 13 (for example, the steering device) in traveling on a highway.
  • the NN data NdD1 set in the sensor processing unit 31D is a learned model acquired for controlling the shock absorber 14 (for example, the suspension device) in traveling in an urban area.
  • the NN data NdD2 set in the sensor processing unit 31D is a learned model acquired for controlling the shock absorber 14 (for example, a suspension device) in traveling on a highway.
  • The evaluation unit 30e evaluates whether the NN data Nd set in each sensor processing unit 31 is suitable for the environment, based on the data sets of detection information Si and control signals Cs input from the sensor processing units 31A to 31D after integration into the vehicle 2.
  • FIG. 9(a) shows the relationship between the traveling speed v of the vehicle 2 and the traveling instability degree Is, derived from vibration and inclination, when the sensor processing unit 31A controls the drive device 11 (for example, an engine or a motor) using the NN data NdA1 and the NN data NdA2.
  • For the traveling instability degree Is, data sets of information related to the control of the in-vehicle device Vd to be evaluated and information from the sensors 32 may be used; for example, the frictional force and load (or impact force) of each tire 4, and the kinetic energy and moment of inertia of the vehicle 2, may be used.
  • the control for the drive device 11 is referred to as a drive control Dc.
  • According to the learning process, the NN data NdA1 can keep the traveling instability degree Is lower than the NN data NdA2 when the traveling speed v is 40 km/h to 60 km/h, while the NN data NdA2 can keep it lower than the NN data NdA1 when the traveling speed v is 80 km/h to 100 km/h.
  • When, based on the information held by the control unit 30 or input from the sensor processing unit 31A, the NN data NdA1 is set in the sensor processing unit 31A, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 40 km/h to 60 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdA1 is appropriate.
  • Likewise, when the NN data NdA2 is set in the sensor processing unit 31A, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 80 km/h to 100 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdA2 is appropriate.
  • FIG. 9(b) shows the relationship between the traveling speed v of the vehicle 2 and the traveling instability degree Is when the sensor processing unit 31B controls the braking device 12 (for example, a brake) using the NN data NdB1 and the NN data NdB2.
  • The traveling instability degree Is is the same as in FIG. 9(a).
  • The control of the braking device 12 is referred to as the braking control Bc.
  • According to the learning process, the NN data NdB1 can keep the traveling instability degree Is lower than the NN data NdB2 when the traveling speed v is 40 km/h to 60 km/h, while the NN data NdB2 can keep it lower than the NN data NdB1 when the traveling speed v is 80 km/h to 100 km/h.
  • When, based on the information held by the control unit 30 or input from the sensor processing unit 31B, the NN data NdB1 is set in the sensor processing unit 31B, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 40 km/h to 60 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdB1 is appropriate.
  • Likewise, when the NN data NdB2 is set in the sensor processing unit 31B, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 80 km/h to 100 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdB2 is appropriate.
  • FIG. 9(c) shows the relationship between the traveling speed v of the vehicle 2 and the traveling instability degree Is when the sensor processing unit 31C controls the steering device 13 using the NN data NdC1 and the NN data NdC2.
  • The traveling instability degree Is is the same as in FIG. 9(a).
  • The control of the steering device 13 is referred to as the steering control Sc.
  • According to the learning process, the NN data NdC1 can keep the traveling instability degree Is lower than the NN data NdC2 when the traveling speed v is 40 km/h to 60 km/h, while the NN data NdC2 can keep it lower than the NN data NdC1 when the traveling speed v is 80 km/h to 100 km/h.
  • When, based on the information held by the control unit 30 or input from the sensor processing unit 31C, the NN data NdC1 is set in the sensor processing unit 31C, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 40 km/h to 60 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdC1 is appropriate.
  • Likewise, when the NN data NdC2 is set in the sensor processing unit 31C, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 80 km/h to 100 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdC2 is appropriate.
  • FIG. 9(d) shows the relationship between the traveling speed v of the vehicle 2 and the traveling instability degree Is when the sensor processing unit 31D controls the shock absorber 14 (for example, a suspension device) using the NN data NdD1 and the NN data NdD2.
  • The traveling instability degree Is is the same as in FIG. 9(a).
  • The control of the shock absorber 14 is referred to as the buffer control Cc.
  • According to the learning process, the NN data NdD1 can keep the traveling instability degree Is lower than the NN data NdD2 when the traveling speed v is 40 km/h to 60 km/h, while the NN data NdD2 can keep it lower than the NN data NdD1 when the traveling speed v is 80 km/h to 100 km/h.
  • When, based on the information held by the control unit 30 or input from the sensor processing unit 31D, the NN data NdD1 is set in the sensor processing unit 31D, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 40 km/h to 60 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdD1 is appropriate.
  • Likewise, when the NN data NdD2 is set in the sensor processing unit 31D, and the data set of the detection information Si and the control signal Cs shows that the traveling speed v is 80 km/h to 100 km/h and that the traveling instability degree Is is kept low, the evaluation unit 30e evaluates that the control by the NN data NdD2 is appropriate.
  • In this way, based on the data set of the detection information Si and the control signal Cs obtained when the in-vehicle device Vd to be controlled is controlled using one of the plurality of NN data Nd held by each sensor processing unit 31, the evaluation unit 30e evaluates whether the NN data Nd set in that sensor processing unit 31 is suitable, and evaluates the stability of the operation and state of the in-vehicle device Vd under the control of that NN data Nd.
  • The evaluation unit 30e may further determine and evaluate whether the combination of the NN data Nd currently set across the plurality of sensor processing units 31 is functioning appropriately. A sketch of the basic evaluation logic follows.
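  • A minimal sketch of the evaluation logic illustrated in FIG. 9: given which NN data Nd is set and a batch of (speed, instability) observations derived from the (Si, Cs) data sets, check whether the set Nd keeps the instability low in its intended speed band. The speed bands match the example values in the text; the threshold and names are assumptions.

```python
# Speed bands in which each NN data Nd is expected to keep instability low
# (example values from the text: 40-60 km/h for NdA1, 80-100 km/h for NdA2).
EXPECTED_BAND = {"NdA1": (40.0, 60.0), "NdA2": (80.0, 100.0)}
INSTABILITY_LIMIT = 0.3  # assumed threshold for "kept low"

def evaluate(set_nd: str, observations: list[tuple[float, float]]) -> bool:
    """observations: (traveling speed v, instability degree Is) pairs derived from
    the (detection information Si, control signal Cs) data sets of one unit."""
    lo, hi = EXPECTED_BAND[set_nd]
    in_band = [inst for v, inst in observations if lo <= v <= hi]
    # The control is judged appropriate if instability stays low within the band.
    return bool(in_band) and max(in_band) <= INSTABILITY_LIMIT
```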
  • the selection unit 30s of the control unit 30 will be described.
  • Based on the data set of the input detection information Si and the control signal Cs, the selection unit 30s estimates which of the above-mentioned plurality of driving scenes Ds the current environment is close to, or which driving scene Ds is suitable.
  • Information used by the selection unit 30s to estimate the driving scene Ds includes, for example, the traveling speed, vibration, inclination, kinetic energy, and moment of inertia of the vehicle 2, the frictional force and load (or impact force) applied to the tires 4, the slope or curvature (that is, the degree of R) of the road, and the temperature or climate.
  • the selection unit 30s may estimate the driving scene Ds using the information obtained from the vehicle-mounted network 40.
  • control unit 30 holds in advance the information of the elements that characterize the environment and the information of the driving scene Ds associated with the information of the elements.
  • However, the detection information Si obtained from the actual environment rarely matches exactly the element information constituting the driving scenes Ds held in advance. Therefore, among the elements that characterize the environment, those related to driving safety or reliability may be given high priority in advance, and the selection unit 30s may estimate the driving scene Ds by comprehensively weighting the high-priority elements. Alternatively, a plurality of candidate driving scenes Ds that are similar may be estimated, and the driving scene Ds corresponding to the control with the highest driving safety may be adopted.
  • the control having the highest traveling safety is, for example, one in which the control range of the traveling speed is low, or one in which the traveling control is performed based on the detection of a surrounding object or the recognition of the surrounding situation.
  • The selection unit 30s selects, from the NN data Nd held by each sensor processing unit 31, the NN data Nd suitable for the estimated driving scene Ds. The selection unit 30s may also select NN data Nd obtained by learning with a teacher data group whose features are close to or correspond to the actual environment, based on the data set of the input detection information Si and the control signal Cs, or may select NN data Nd using information obtained from the in-vehicle network 40.
  • The selection unit 30s then issues a switching instruction Sw to each sensor processing unit 31 so that the selected NN data Nd is set.
  • Regardless of the evaluation content of the evaluation unit 30e, the selection unit 30s may also, based on the data set of the detection information Si and the control signal Cs input from a given sensor processing unit 31, newly estimate a driving scene Ds from the plurality of driving scenes Ds, select another NN data Nd suitable for that driving scene Ds in place of the NN data Nd currently set in that sensor processing unit 31, and issue a switching instruction Sw to that sensor processing unit 31 to set the other NN data Nd. The sketch below puts these steps together.
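  • Putting the selection unit 30s together as a sketch: estimate the driving scene Ds from safety-weighted environment elements, pick the NN data Nd associated with that scene for each unit, and issue the switching instruction Sw. The scene tables, weights, and scoring scheme are illustrative assumptions.

```python
# Driving scenes Ds characterized by element ranges; safety-related elements are
# given higher weight when scoring (both tables are illustrative).
SCENES = {
    "urban":   {"speed_kmh": (0.0, 60.0),   "curvature": (0.0, 1.0)},
    "highway": {"speed_kmh": (70.0, 120.0), "curvature": (0.0, 0.2)},
}
WEIGHTS = {"speed_kmh": 2.0, "curvature": 1.0}  # speed treated as safety-related
SCENE_TO_ND = {"urban":   {"31A": "NdA1", "31B": "NdB1"},
               "highway": {"31A": "NdA2", "31B": "NdB2"}}

def estimate_scene(si: dict) -> str:
    """Score each scene by how many (weighted) elements of Si fall in its ranges."""
    def score(ranges: dict) -> float:
        return sum(w for k, w in WEIGHTS.items()
                   if k in si and ranges[k][0] <= si[k] <= ranges[k][1])
    return max(SCENES, key=lambda s: score(SCENES[s]))

def select_and_switch(si: dict, units: dict) -> None:
    scene = estimate_scene(si)
    for unit_id, unit in units.items():
        unit.on_switch_instruction(SCENE_TO_ND[scene][unit_id])  # instruction Sw
```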
  • FIG. 10 is a flowchart for explaining the processing in the control unit 30 in the AI integrated system 1.
  • FIG. 10A shows the processing in the evaluation unit 30e.
  • In processing Sp81a, the evaluation unit 30e receives the data sets of the detection information Si and the control signal Cs from each sensor processing unit 31.
  • In processing Sp82a, the evaluation unit 30e evaluates the control by the NN data Nd of each sensor processing unit 31 corresponding to the data sets input in processing Sp81a.
  • In processing Sp83a, the evaluation unit 30e determines, based on the evaluation in processing Sp82a, whether the control by each NN data Nd is appropriate.
  • If the control is appropriate, the process returns to processing Sp81a.
  • If not, the process proceeds to processing Sp84a.
  • In processing Sp84a, the processing of FIG. 10(b) is performed.
  • FIG. 10B shows the processing in the selection unit 30s.
  • In processing Sp81b, the selection unit 30s receives the detection information Si from each sensor processing unit 31.
  • In processing Sp82b, the selection unit 30s estimates the driving scene Ds in which the vehicle 2 travels, based on the detection information Si input in processing Sp81b.
  • In processing Sp83b, the selection unit 30s selects, for each sensor processing unit 31, the NN data Nd suitable for the driving scene Ds estimated in processing Sp82b.
  • In processing Sp84b, the selection unit 30s transmits to each sensor processing unit 31 a switching instruction Sw for switching to the NN data Nd selected in processing Sp83b.
  • The processes of FIGS. 10(a) and 10(b) may be performed independently of each other. Furthermore, each individual process may be started regardless of the progress of the subsequent processes; for example, whenever a data set is transmitted from a sensor processing unit 31, it is input sequentially. A sketch of these two independent flows follows.
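  • The two flows of FIG. 10 might run as independent loops, for example one thread per flow; a sketch under that assumption, with hypothetical helper callables.

```python
import queue
import threading

datasets: queue.Queue = queue.Queue()  # (Si, Cs) data sets arriving from the units

def evaluation_flow(evaluate, on_inappropriate):
    while True:
        ds = datasets.get()            # Sp81a: input a data set as it arrives
        if not evaluate(ds):           # Sp82a/Sp83a: evaluate the set NN data Nd
            on_inappropriate(ds)       # Sp84a: hand over to the selection flow

def selection_flow(estimate_scene, select_nd, send_switch, si_source):
    while True:
        si = si_source()               # Sp81b: input detection information Si
        scene = estimate_scene(si)     # Sp82b: estimate the driving scene Ds
        nd = select_nd(scene)          # Sp83b: select suitable NN data Nd
        send_switch(nd)                # Sp84b: transmit the switching instruction Sw

# Each flow may be started regardless of the other's progress, e.g.:
# threading.Thread(target=evaluation_flow, args=(ev, cb), daemon=True).start()
```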
  • As described above, the AI integrated system 1 integrates a plurality of devices that perform operations for various purposes with a plurality of AIs that control those devices, each AI being mounted in a sensor processing unit 31.
  • Each of the sensor processing units 31 inputs the detection information Si, which is information on the environment in which the system 1 is placed, and generates and outputs control signals Cs for controlling the device to be controlled.
  • Each sensor processing unit 31 holds a plurality of NN data Nd, and controls by setting one of the NN data Nd to AI.
  • the plurality of NN data Nd is a trained model in which the learning model is trained according to the environment in which the system 1 is placed.
  • The control unit 30 (evaluation unit 30e) evaluates the control that each sensor processing unit 31 applies to its controlled device with the NN data Nd set in the AI. Further, the control unit 30 (selection unit 30s) selects, for each sensor processing unit 31, the NN data Nd corresponding to the estimated driving scene Ds and instructs each sensor processing unit 31 to switch to and set the selected NN data Nd. As a result, when a plurality of AIs controlling devices that perform operations for various purposes are integrated into the AI integrated system 1, NN data Nd suitable for the environment in which the system 1 is placed can be set in each sensor processing unit 31, and each device operating for its various purposes can be controlled appropriately.
  • The AI integrated system 1 also covers, for example, an industrial robot (that is, factory automation), a monitoring system (or monitoring device), an air conditioning system (or air conditioning device), home electronics, and other systems that integrate multiple AIs.
  • The control unit 30 evaluates the control of each AI and sets an appropriate trained model so that the system 1 does not fall into an unstable state due to the control of the plurality of integrated AIs. Therefore, even if the AIs controlling the devices that perform operations for various purposes are trained individually, the system 1 can operate stably when a plurality of AIs controlled by trained models are integrated into it.
  • the control unit 30 of the AI integrated system 1 may select and set one of a plurality of trained models for one sensor processing unit corresponding to one in-vehicle device Vd.
  • FIG. 11 is a schematic diagram for explaining a first modification of the control unit 30.
  • FIG. 12 is a schematic diagram for explaining a second modification of the control unit 30.
  • the control unit 30 of the AI integrated system 1 may include at least one of the sensor processing units 31.
  • Alternatively, the control unit 30 of the AI integrated system 1 may be located on a server external to the vehicle 2 and exchange information with the sensor processing units 31 of the vehicle 2 and the like via communication over the in-vehicle network 40.
  • Embodiment 2. In the first embodiment, prior to integrating the various in-vehicle devices Vd into the vehicle 2, the AI of each sensor processing unit 31 controlling an in-vehicle device Vd was trained for each of a plurality of driving scenes Ds, and a plurality of NN data Nd corresponding to the driving scenes Ds were acquired. Then, after the in-vehicle devices Vd were integrated into the vehicle 2, each sensor processing unit 31 switched among and set the acquired NN data Nd according to the environment in which the vehicle 2 travels. In the second embodiment, integrated learning is performed on the NN data Nd set in each sensor processing unit 31 according to the environment in which the vehicle 2 is placed.
  • The plurality of NN data Nd held by each sensor processing unit 31 are trained models acquired by learning at the stage before integration into the final product (that is, the completed vehicle), and there is no guarantee that they can perform appropriate control of the controlled in-vehicle devices Vd in the integrated state as the final product. That is, even if each sensor processing unit 31 holds a plurality of pre-trained NN data Nd and can switch among them to control the in-vehicle device Vd under its control, when integrated into the system 1 of a completed vehicle traveling in a real environment, it may not be possible to confirm whether the NN data Nd set by the switching instruction Sw can continue to control each in-vehicle device Vd stably.
  • Here, high robustness in control refers to the following property of the AI of the sensor processing unit 31 (that is, of the learning in the learning process) in the environment in which the vehicle 2 actually travels after each sensor processing unit 31 has been integrated into the vehicle 2: even when detection information Si containing features not included in the teacher data Td of the learning process is input, when detection information Si containing a combination of features not present together in the teacher data Td is input, or when detection information Si containing disturbances not considered in the learning process is input, the controlled in-vehicle device Vd is not left in an unstable operating state but can quickly transition to a stable operating state and continue to be controlled. A sketch of such additional learning follows.
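  • A hedged sketch of such additional learning: fine-tuning the currently set NN data Nd on (Si, Cs) data sets newly collected after integration, reusing the PyTorch model from the earlier sketches. The small learning rate and epoch count are assumptions meant to preserve the pre-trained behavior.

```python
import torch
import torch.nn as nn

def additional_learning(perceptron: nn.Module, new_si: torch.Tensor,
                        new_cs: torch.Tensor, epochs: int = 50) -> None:
    """Fine-tune the set NN data Nd on data gathered in the actual environment,
    so that control adapts to conditions not covered by the teacher data Td."""
    opt = torch.optim.SGD(perceptron.parameters(), lr=1e-4)  # small lr assumed
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(perceptron(new_si), new_cs)
        loss.backward()
        opt.step()
```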
  • FIG. 13 is a schematic diagram for explaining a control region for expressing the detection information Si and the control signal Cs handled in the control by AI in the second embodiment of the present disclosure in two dimensions.
  • The control unit 30 holds, in association with each sensor processing unit 31, information on a controllable region, represented by the ranges of detection information Si and control signals Cs within which the in-vehicle device Vd can theoretically be controlled stably, based on teacher data Td obtained by analysis with a simulation simulating the completed vehicle or by actual measurement using an actual completed vehicle.
• For example, the control region of the sensor processing unit 31A, whose control target is the drive device 11, handles the speed and acceleration of the vehicle 2 as the detection information Si, and the driving force Df as the control signal Cs.
• The control region of the sensor processing unit 31B, whose control target is the braking device 12, handles the slope and frictional force of the road surface (that is, the degree of grip between each tire 4 and the road surface) as the detection information Si, and the braking force Bf as the control signal Cs.
• The control region of the sensor processing unit 31C, whose control target is the steering device 13, handles changes in the direction and position of the vehicle 2 (for example, the amount or rate of change) as the detection information Si, and the steering reaction Sr (for example, steering amount or steering speed) as the control signal Cs.
• The control region of the sensor processing unit 31D, whose control target is the shock absorber 14, handles the vibration and stress changes (for example, the amount or rate of change) of the vehicle 2 as the detection information Si, and the buffer reaction Cr (for example, buffer amount or buffer rate) as the control signal Cs.
• The ellipses A1 to A4, B1 to B4, C1 to C4, and D1 to D4 in the control regions represent the NN data NdA1 to NdA4, NdB1 to NdB4, NdC1 to NdC4, and NdD1 to NdD4 held by the sensor processing units 31A, 31B, 31C, and 31D, respectively, and schematically show the range of the detection information Si and control signals Cs handled by each NN data Nd.
• For example, comparing the ellipse A1 corresponding to the driving scene Ds in the urban area with the ellipse A2 corresponding to the driving scene Ds on the highway, the ellipse A2 controls the driving force so as to maintain the speed and acceleration of the vehicle 2 in a higher numerical range than the ellipse A1. Temporary low-speed driving such as entering or exiting the expressway, and slow driving such as in traffic congestion, are assumed to be included in the driving scene Ds and teacher data Td corresponding to the urban area or the expressway, respectively.
• Comparing the ellipse A1 with the ellipse A3 corresponding to the driving scene Ds in the mountainous area, the range in which the speed and acceleration of the vehicle 2 are maintained is the same, but the ellipse A3 controls the driving force in a higher numerical range. Comparing the ellipse A1 with the ellipse A4 corresponding to the driving scene Ds on the unpaved road in the suburbs, the ellipse A4 controls the driving force in a higher numerical range while maintaining the speed and acceleration.
• Hereinafter, this elliptical frame is defined as the controllable area Ia.
• Similarly, for the sensor processing unit 31B, the range of the input detection information Si and the generated control signals Cs corresponds to the ellipse B1, the controllable area Ia for the driving scene Ds in the urban area; the ellipse B2, the controllable area Ia for the driving scene Ds on the highway; the ellipse B3, the controllable area Ia for the driving scene Ds in the mountainous area; and the ellipse B4, the controllable area Ia for the driving scene Ds on the unpaved road.
• Likewise, for the sensor processing unit 31C, the range of the input detection information Si and the generated control signals Cs corresponds to the ellipses C1, C2, C3, and C4, the controllable areas Ia for the driving scenes Ds in the urban area, on the highway, in the mountainous area, and on the unpaved road, respectively.
• Likewise, for the sensor processing unit 31D, the range of the input detection information Si and the generated control signals Cs corresponds to the ellipses D1, D2, D3, and D4, the controllable areas Ia for the same four driving scenes Ds, respectively.
• When the in-vehicle devices Vd and sensor processing units 31 are integrated into the vehicle 2, each of the NN data NdA1, NdA2, ..., NdD4 is expected to adapt to the environment in which the vehicle 2 is placed and to control traveling within the controllable area Ia acquired by the learning before integration.
• Therefore, as shown in FIG. 13, the control unit 30 holds in advance the controllable area Ia of each NN data Nd as information represented by data sets of the detection information Si and the control signal Cs.
• The evaluation unit 30e can use the information of the controllable area Ia to evaluate the stability of the operating state of the in-vehicle device Vd and to evaluate whether control with the currently set NN data Nd is functioning properly. Further, the selection unit 30s can use the information of the controllable area Ia to select the NN data Nd suited to the current driving scene Ds and issue the switching instruction Sw.
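• To make this concrete, the following is a minimal Python sketch, not taken from the patent, of how an evaluation unit might test whether a (detection information, control signal) operating point lies inside an elliptical controllable area Ia, and how a selection unit might pick the NN data whose area contains the current point. All names (ControllableArea, select_nn_data) and the ellipse parameters are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControllableArea:
    """Elliptical controllable area Ia in the two-dimensional (Si, Cs) plane."""
    scene: str        # driving scene Ds this NN data was trained for
    center_si: float  # ellipse center on the detection-information axis
    center_cs: float  # ellipse center on the control-signal axis
    radius_si: float  # semi-axis along Si
    radius_cs: float  # semi-axis along Cs

    def contains(self, si: float, cs: float) -> bool:
        # Standard ellipse membership test: normalized squared distance <= 1.
        dx = (si - self.center_si) / self.radius_si
        dy = (cs - self.center_cs) / self.radius_cs
        return dx * dx + dy * dy <= 1.0

def evaluate_stability(area: ControllableArea, si: float, cs: float) -> bool:
    """Evaluation-unit role: is the current control point inside Ia?"""
    return area.contains(si, cs)

def select_nn_data(areas: dict, si: float, cs: float) -> Optional[str]:
    """Selection-unit role: pick the NN data whose Ia contains the point."""
    for name, area in areas.items():
        if area.contains(si, cs):
            return name
    return None  # no held trained model covers the current operating point

# Toy example: ellipses A1 (urban) and A2 (highway) for the drive device.
areas = {
    "NdA1": ControllableArea("urban", 40.0, 0.3, 20.0, 0.2),
    "NdA2": ControllableArea("highway", 100.0, 0.6, 30.0, 0.3),
}
print(select_nn_data(areas, si=95.0, cs=0.55))  # -> NdA2
```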
  • FIG. 14 is a schematic diagram for explaining the mutual influence in the control of each in-vehicle device Vd.
• FIG. 14 shows a case where, when the vehicle 2 travels in a mountainous area, the sensor processing units 31A, 31B, 31C, and 31D perform control using the NN data NdA3, NdB3, NdC3, and NdD3, respectively.
• The solid-line arrows Eab, Eac, and Ead indicate that the control of the sensor processing unit 31A using the NN data NdA3 mutually influences the control of the other sensor processing units 31 using the NN data NdB3, NdC3, and NdD3.
• For example, suppose that the NN data NdB3, NdC3, and NdD3 perform control at the points Ub, Uc, and Ud within their respective controllable areas Ia. In this case, the evaluation unit 30e evaluates that the control of each NN data Nd is functioning appropriately.
• On the other hand, suppose that control is performed at the points Fa, Fb, Fc, and Fd. At this time, the points Fa and Fb are included in the controllable areas Ia, but the points Fc and Fd are not, so the evaluation unit 30e evaluates that the control of the NN data NdC3 and NdD3 is not functioning appropriately.
• However, even when control deviates from the controllable area Ia, there is a possibility that the vehicle 2 can be maintained in a stable running state. This is because the control system including the trained models is expected to have a certain degree of robustness. That is, owing to this robustness, the NN data Nd of the sensor processing unit 31 is expected to be able to control the in-vehicle device Vd to be controlled even in the region of the broken-line elliptical frame beyond the controllable area Ia. Hereinafter, this broken-line elliptical frame is defined as the robust control area Ra.
• Such control in the robust control area Ra deals with temporary or time-varying changes in running characteristics, such as the air pressure of the tires 4 or the brake pressure of the vehicle 2, and the weight of the vehicle body 3 due to loading or the wind pressure acting on it. However, the robust control area Ra is an uncertain region, and it is difficult to give the control system exactly the robustness intended by the design. Therefore, there may be a non-adaptive region Na that cannot be handled by the robustness of the NN data Nd held by the sensor processing unit 31.
• That is, each in-vehicle device Vd may be controlled in the uncertain robust control area Ra outside the controllable areas Ia of the NN data NdA3, NdB3, NdC3, and NdD3. When control takes place in this uncertain robust control area Ra, it is not necessarily properly coordinated within the system 1 of the vehicle 2: the result of control in the robust control area Ra of one NN data Nd may drive another NN data Nd into control in its non-adaptive region Na, falling into a situation where that other NN data Nd can no longer perform control.
• Such a non-adaptive region Na can be predicted to some extent, even before the in-vehicle devices Vd are integrated into the vehicle 2, by evaluating the range in which each NN data Nd of the sensor processing units 31 can perform control. However, the integrated learning that narrows the non-adaptive region Na cannot be performed on the individual NN data Nd before integration into the vehicle 2 (that is, in the process of acquiring the NN data Nd as trained models).
  • FIG. 15 is a schematic diagram for explaining the robustness of the control by AI.
• In FIG. 15, the robust control areas Ra of the NN data NdA3, NdB3, NdC3, and NdD3 that control the respective in-vehicle devices Vd are expanded so as not to fall into control in the non-adaptive region Na shown in FIG. 14. By expanding the robust control areas Ra, the possibility increases that control by another NN data Nd remains within a robust control area Ra. As a result, each NN data Nd can quickly bring the control of its controlled object back into the controllable area Ia, and stable control is more likely to continue.
• For example, FIG. 15 shows that, as a result of control by the NN data NdC3 taking place in the robust control area Ra, the control by the NN data NdD3 falls within the expanded robust control area Ra. As a result, at the timing of the next control, the NN data NdC3 and NdD3 can quickly return to control within their controllable areas Ia.
  • the control unit 30 includes a learning unit 30a.
• As shown in FIG. 13, the control unit 30 holds in advance information on the controllable areas Ia of the data sets of the plurality of NN data NdA1, NdA2, ..., NdD4 in each sensor processing unit 31. When the NN data Nd set in a sensor processing unit 31 performs control outside its controllable area Ia and then promptly returns to control within the controllable area Ia, the learning unit 30a registers the data sets corresponding to the control outside the controllable area Ia as the robust control area Ra of that NN data Nd, adding them to and updating the information of the controllable area Ia held in advance.
• Whether control has promptly returned can be determined by whether the operating states of the various in-vehicle devices Vd constituting the vehicle 2 are stable or have transitioned to a stable state, or by the evaluation unit 30e evaluating stability over a period predetermined by design.
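• As a minimal sketch of this bookkeeping, under the assumption that region information is held as explicit data sets (the patent prescribes no data structure, and all names here are hypothetical), the learning unit's update could look like this: points observed outside Ia are kept pending and promoted to the robust control area Ra only if control returns to Ia within a design-defined number of steps.

```python
class BoxArea:
    """Toy stand-in for a controllable area Ia: a box in the (Si, Cs) plane."""
    def __init__(self, si_lo, si_hi, cs_lo, cs_hi):
        self.bounds = (si_lo, si_hi, cs_lo, cs_hi)

    def contains(self, si, cs):
        si_lo, si_hi, cs_lo, cs_hi = self.bounds
        return si_lo <= si <= si_hi and cs_lo <= cs <= cs_hi

class RegionLog:
    """Promotes quick excursions outside Ia to the robust control area Ra."""
    def __init__(self, area, max_excursion_steps=5):
        self.area = area
        self.max_excursion_steps = max_excursion_steps
        self.pending = []       # data sets observed outside Ia, not yet judged
        self.robust_area = []   # data sets confirmed as belonging to Ra

    def observe(self, si, cs):
        if self.area.contains(si, cs):
            if 0 < len(self.pending) <= self.max_excursion_steps:
                # Control returned to Ia promptly: record the excursion as Ra.
                self.robust_area.extend(self.pending)
            self.pending.clear()  # slow recovery: discard without promoting
        else:
            self.pending.append((si, cs))

log = RegionLog(BoxArea(0, 100, 0.0, 1.0))
for si, cs in [(50, 0.5), (120, 1.2), (110, 1.1), (60, 0.6)]:
    log.observe(si, cs)
print(log.robust_area)  # -> [(120, 1.2), (110, 1.1)]
```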
• FIG. 16 is a schematic diagram for explaining a state in which robustness in control by AI is expanded. As shown in FIG. 16, if the robust control areas Ra of the NN data Nd are expanded not only for the driving scene Ds in the mountainous area but also for the other driving scenes Ds, the traveling of the vehicle 2 can be controlled more stably.
  • FIG. 17 is a schematic diagram showing an overlapping portion of the AI control area corresponding to the traveling scene Ds.
• As shown in FIG. 17, an overlapping control area Da may exist in the region combining the controllable areas Ia and robust control areas Ra of the plural NN data Nd, corresponding to different driving scenes Ds, held by a sensor processing unit 31. As for the handling of the data sets included in such an overlapping control area Da, for example, the NN data Nd corresponding to a driving scene Ds that is consistent with, or has affinity to, the driving scene Ds of the currently set NN data Nd may be used.
  • FIG. 18 is a schematic diagram for explaining the relationship between the vehicle 2 as the system 1, the AI as a subsystem that controls various in-vehicle devices Vd, and the control unit 30.
• The control unit 30 treats each pair of a sensor processing unit 31 and the in-vehicle device Vd it controls as a subsystem. Each subsystem uses the NN data Nd acquired by learning before integration and, after integration into the system 1 that is the vehicle 2, receives the detection information Si as input, generates the control signal Cs, and fulfills its intended purpose.
• Here, completed vehicles differ in vehicle type (for example, the weight of the vehicle body 3, the center of gravity, the vehicle width, the wheelbase, and so on), in specifications even within the same vehicle type (for example, hybrid engine or motor drive, two-wheel or four-wheel drive, engine displacement, and so on), and in equipment (for example, tires 4, wheels, headlights 6, and so on). That is, each completed vehicle as the system 1 has different properties (characteristics determined by various parameters such as vehicle type, specifications, and equipment), and thus each completed vehicle has a plurality of unique parameters. Therefore, in order to perform integrated learning by interlocking the NN data Nd that control the various in-vehicle devices Vd, it is necessary to consider the influence of the plurality of unique parameters of the vehicle 2 on the in-vehicle devices Vd.
• Even if the controllable areas Ia of the NN data Nd acquired by learning before integration are the same, if the properties of the vehicle 2 differ, the region into which the data sets transition under control will also differ. Therefore, it is desirable that the controllable area Ia of each NN data Nd be appropriately reshaped for each vehicle 2 (that is, for each completed vehicle), and as a result the robust control area Ra can also be expected to expand appropriately.
• Therefore, the integrated learning after integration is performed for the purpose of reshaping the controllable areas Ia of the NN data Nd and further expanding the robust control areas Ra based on the reshaped controllable areas Ia.
• The integrated learning of the NN data Nd after integration is performed by driving the completed vehicle in an actual environment or in an environment simulating a certain driving scene Ds. These environments are assumed to include elements equivalent to the characteristic elements included in the driving scenes Ds of the learning before integration.
• When performing integrated learning, the selection unit 30s estimates the driving scene Ds based on the input detection information Si, selects, from the plurality of NN data Nd held by each sensor processing unit 31, the NN data Nd corresponding to the estimated driving scene Ds, and sets it in each sensor processing unit 31. Alternatively, a person may perform the estimation of the driving scene Ds and the selection and setting of the NN data Nd in place of the selection unit 30s.
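• The following minimal Python sketch, with hypothetical thresholds and feature names, illustrates this flow: the selection unit estimates the driving scene Ds from simple detection information Si and then issues a switching instruction Sw naming the NN data Nd to set.

```python
def estimate_scene(speed_kmh: float, road_paved: bool) -> str:
    """Toy scene estimator; a real system would use far richer detection info."""
    if not road_paved:
        return "unpaved"
    if speed_kmh >= 80:
        return "highway"
    return "urban"

def switching_instruction(scene: str, held_nn_data: dict) -> str:
    """Return the identifier of the NN data to set (the instruction Sw)."""
    return held_nn_data[scene]

# NN data held by one sensor processing unit, keyed by driving scene Ds.
held = {"urban": "NdA1", "highway": "NdA2", "mountain": "NdA3", "unpaved": "NdA4"}
scene = estimate_scene(speed_kmh=100.0, road_paved=True)
print(switching_instruction(scene, held))  # -> NdA2
```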
• The NN data Nd held by each sensor processing unit 31 can be switched between an additional-learning-incompatible mode, in which the trained model is not changed, and an additional-learning-compatible mode, in which the trained model can be changed by additional learning. In normal operation, the NN data Nd of each sensor processing unit 31 is set to the additional-learning-incompatible mode. Hereinafter, the instruction from the learning unit 30a to switch to the additional-learning-compatible mode is referred to as AL, and the instruction from the learning unit 30a to switch to the additional-learning-incompatible mode is referred to as NL.
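• A minimal sketch of the two modes, assuming a simple flag on each sensor processing unit (hypothetical names; the patent does not specify an API), might look like the following; training steps are silently ignored while the model is frozen in the NL mode.

```python
from enum import Enum

class LearningMode(Enum):
    NL = "additional-learning-incompatible"  # trained model is frozen
    AL = "additional-learning-compatible"    # trained model may be updated

class SensorProcessingUnit:
    def __init__(self, nn_data_id: str):
        self.nn_data_id = nn_data_id
        self.mode = LearningMode.NL  # frozen during normal operation
        self.updates = 0

    def set_mode(self, mode: LearningMode):
        self.mode = mode

    def train_step(self, data_set):
        if self.mode is not LearningMode.AL:
            return  # ignore additional learning while frozen
        self.updates += 1  # stand-in for updating the trained model

unit = SensorProcessingUnit("NdA2")
unit.train_step("batch-1")      # ignored: unit is in the NL mode
unit.set_mode(LearningMode.AL)  # instruction AL from the learning unit
unit.train_step("batch-2")      # applied
print(unit.updates)             # -> 1
```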
• First, an example will be described in which the learning unit 30a switches the NN data Nd set in each sensor processing unit 31 to the additional-learning-compatible mode, and the NN data Nd of the sensor processing units 31 perform integrated learning independently and simultaneously.
• Here, the expressway is treated as the driving scene Ds in which integrated learning is performed.
• Two vehicles 2A and 2B with different properties are dealt with as the vehicles 2.
  • the vehicle 2A is a vehicle 2 having a large displacement, a high horsepower, and a heavy weight of the vehicle body 3 (for example, a vehicle having a displacement of 4000 cc or more).
  • the vehicle 2B is a vehicle 2 having a small displacement, a low horsepower, and a light weight of the vehicle body 3 (for example, a vehicle having a displacement of 660 cc or less).
• On the expressway, the vehicles 2A and 2B accelerate to maintain traveling speed or to overtake other vehicles, change lanes or decelerate for curves near junctions, and steer to change the direction of travel.
• Here, three in-vehicle devices Vd are dealt with, namely the drive device 11, the braking device 12, and the steering device 13, together with the sensor processing units 31A, 31B, and 31C that control them. The NN data Nd corresponding to the expressway set in the sensor processing units 31A, 31B, and 31C are referred to as NN data NdA2, NdB2, and NdC2, respectively. The NN data NdA2, NdB2, and NdC2 control their respective in-vehicle devices Vd and perform integrated learning independently and all at once.
• In this case, because the weight of the vehicle 2A is heavy, the NN data NdB2a that controls the braking device 12 of the vehicle 2A must control the braking device 12 with a large braking force exceeding the controllable area Ia in order to decelerate sufficiently before entering a curve. Likewise, the NN data NdC2a that controls the steering device 13 of the vehicle 2A determines that traveling safety cannot be ensured by control within the controllable area Ia because the traveling speed on entering the curve is too high and, independently of the control of the other NN data Nd, controls the steering device 13 with a large steering amount and steering speed exceeding the controllable area Ia. As a result, the grip of the tires 4 becomes less effective under the strong braking and sudden steering, and the vehicle 2A may slip.
• Also, in order to accelerate the vehicle body 3, which is heavier than that of the vehicle 2B, the NN data NdA2a that controls the drive device 11 of the vehicle 2A performs control in a higher driving-force range than the NN data NdA2b that controls the drive device 11 of the vehicle 2B. As a result, the fluctuation of the controlled quantity for the driving force (that is, its amount or rate of change) becomes steep. Therefore, it is considered that, as learning proceeds, the control of the NN data NdA2a develops a stronger tendency to increase the driving force than that of the NN data NdA2b.
• Similarly, in order to decelerate the vehicle body 3, which is heavier than that of the vehicle 2B, the NN data NdB2a that controls the braking device 12 of the vehicle 2A performs control in a higher braking-force range than the NN data NdB2b that controls the braking device 12 of the vehicle 2B. As a result, the fluctuation of the controlled quantity for the braking force (that is, its amount or rate of change) becomes steep. Therefore, it is considered that, as learning proceeds, the control of the NN data NdB2a develops a stronger tendency to increase the braking force than that of the NN data NdB2b.
• Further, in order to steer the vehicle body 3, which is heavier and has a longer wheelbase than that of the vehicle 2B, the steering-angle control of the NN data NdC2a that controls the steering device 13 of the vehicle 2A becomes more complicated than that of the NN data NdC2b that controls the steering device 13 of the vehicle 2B. This is because the heavier the vehicle body 3, the greater the inertia in the traveling direction, which makes it harder to stabilize the feedback control toward the new target traveling direction. Therefore, it is considered that, as learning proceeds, the control of the NN data NdC2a develops a stronger tendency to change the steering angle frequently than that of the NN data NdC2b.
• On the other hand, because the weight of the vehicle 2B is light, the NN data NdB2b that controls the braking device 12 of the vehicle 2B can decelerate the vehicle sufficiently before it enters a curve, so the NN data NdB2b can control the braking device 12 within the controllable area Ia. Likewise, the steering device 13 can be controlled within the controllable area Ia. As a result, the tires 4 of the vehicle 2B keep good grip under appropriate braking and steering, and the vehicle does not slip. Since the vehicle 2B has a small displacement, a light vehicle body 3, and a short wheelbase, the fluctuations of the controlled quantities (that is, their amounts or rates of change) are gentler than in the control of the vehicle 2A.
• That is, when the NN data Nd that are trained models before integration are used as-is to control the vehicle 2A, the NN data Nd attempt to control the vehicle 2A based on the prior learning process (or the teacher data Td), and because the fluctuations of the vehicle 2A's operation in the actual environment are steep, the probability of control departing from the controllable area Ia acquired from the prior teacher data Td and taking place in the robust control area Ra is relatively large. This makes it difficult to transition quickly from control in the robust control area Ra to control in the controllable area Ia. In such a case, even if integrated learning is performed, it is unlikely that the controllable area Ia of the pre-integration trained model can be reshaped into a controllable area Ia suited to the vehicle 2A.
• In contrast, when the NN data Nd that are trained models before integration are used to control the vehicle 2B, the NN data Nd control the vehicle 2B based on the prior learning process (or teacher data Td), and since the fluctuation of the vehicle 2B's operation in the actual environment (that is, the fluctuation of the data sets indicating the state of the vehicle 2 with respect to the control signal Cs) is gentle, the probability of departing from the controllable area Ia acquired from the prior teacher data Td and taking control in the robust control area Ra is considered relatively small. This facilitates a quick transition from control in the robust control area Ra to control in the controllable area Ia. In such a case, it is highly possible that, by performing integrated learning, the controllable area Ia of the pre-integration trained model can be reshaped into a controllable area Ia suited to the vehicle 2B.
• Next, an example will be described in which the learning unit 30a considers the control priority P, has the sensor processing unit 31 with the highest priority P switch its currently set NN data Nd to the additional-learning-compatible mode, and has the NN data Nd of the sensor processing units 31 perform integrated learning in order.
  • FIG. 19 is a schematic diagram for explaining the degree of convergence of integrated learning in each NN data Nd when the control priority P is taken into consideration. It is assumed that the highway is treated as the traveling scene Ds of the vehicle 2.
• In FIGS. 19(a1) to 19(a3), integrated learning is performed with the control priority P in the order of drive control Dc, braking control Bc, and steering control Sc.
• FIG. 19(a1) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the drive device 11 when the sensor processing unit 31A performs the drive control Dc using the NN data NdA2.
• The control stability evaluation Se can be determined by the evaluation unit 30e based on an evaluation value derived from information obtained from the detection information Si. The evaluation unit 30e determines that learning has converged when the fluctuation of the evaluation value becomes equal to or less than the convergence determination value Conv.
• The stability evaluation Se of the control of the drive device 11 and the braking device 12 can be determined by the evaluation unit 30e using, for example, information such as the amount and rate of change of the traveling speed, the degree to which the target traveling speed is achieved over the target mileage, and fuel efficiency. The stability evaluation Se of the control of the steering device 13 can be determined using, for example, information such as the degree of arrival at the target traveling track over the target mileage and the posture of the vehicle 2 on that track. The stability evaluation Se of the control of the shock absorber 14 can be determined using information such as the moment of inertia acting on the vehicle 2, the load applied to each tire 4, the inclination of the vehicle 2, and the vibration of the vehicle 2.
• In FIG. 19(a1), the fluctuation of the evaluation value converges to the convergence determination value Conv or less when the number of transmissions of the control signal Cs exceeds 10M. Let Ta1 be the number of transmissions of the control signal Cs at this time.
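• A minimal sketch of such a convergence judgment, under the assumption that "fluctuation" means the spread of the evaluation value over a sliding window (the patent does not define it precisely; the window size and values below are hypothetical), is:

```python
from collections import deque

def make_convergence_checker(conv: float, window: int = 10):
    """Reports convergence once the fluctuation of the stability evaluation
    value stays at or below the convergence determination value Conv over a
    sliding window of recent transmissions of the control signal Cs."""
    history = deque(maxlen=window)

    def check(evaluation_value: float) -> bool:
        history.append(evaluation_value)
        if len(history) < window:
            return False
        return (max(history) - min(history)) <= conv

    return check

check = make_convergence_checker(conv=0.05, window=10)
values = [0.8, 0.5, 0.3, 0.2, 0.15, 0.12, 0.11] + [0.10] * 12
for n, se in enumerate(values, start=1):
    if check(se):
        print(f"converged after {n} transmissions of the control signal Cs")
        break
```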
• FIG. 19(a2) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the braking device 12 when the sensor processing unit 31D performs the braking control Bc using the NN data NdD2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less when the number of transmissions of the control signal Cs exceeds 10M; let Ta2 be the number of transmissions at this time.
• FIG. 19(a3) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the steering device 13 when the sensor processing unit 31B performs the steering control Sc using the NN data NdB2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less before the number of transmissions of the control signal Cs reaches 10M; let Ta3 be the number of transmissions at this time.
• FIG. 20 is another schematic diagram for explaining the degree of convergence of integrated learning in each NN data Nd when the control priority P is taken into consideration. The expressway is again treated as the traveling scene Ds of the vehicle 2. In FIGS. 20(b1) to 20(b3), integrated learning is performed with the control priority P in the order of steering control Sc, drive control Dc, and braking control Bc.
• FIG. 20(b1) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the steering device 13 when the sensor processing unit 31B performs the steering control Sc using the NN data NdB2. Here, 1M in the number of transmissions of the control signal Cs denotes a predetermined number of transmissions, 10M denotes 10 times 1M, and 100M denotes 100 times 1M. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less when the number of transmissions of the control signal Cs exceeds 100M; let Tb1 be the number of transmissions at this time.
• FIG. 20(b2) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the drive device 11 when the sensor processing unit 31A performs the drive control Dc using the NN data NdA2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less when the number of transmissions of the control signal Cs exceeds 100M; let Tb2 be the number of transmissions at this time.
• FIG. 20(b3) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the braking device 12 when the sensor processing unit 31D performs the braking control Bc using the NN data NdD2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less when the number of transmissions of the control signal Cs exceeds 100M; let Tb3 be the number of transmissions at this time.
• As described above, in FIGS. 20(b1) to 20(b3), the NN data NdA2, NdB2, and NdD2 require more transmissions of the control signal Cs before the fluctuation of the evaluation value converges than in FIGS. 19(a1) to 19(a3). That is, with this priority P, the NN data NdA2, NdB2, and NdD2 can eventually control the in-vehicle devices Vd to be controlled stably, but because many data sets from periods when control was not performed properly are included, the acquired controllable areas Ia and robust control areas Ra may be less appropriate than in the cases of FIGS. 19(a1) to 19(a3).
  • FIG. 21 is yet another schematic diagram for explaining the degree of convergence of integrated learning in each NN data Nd when the control priority P is taken into consideration. It is assumed that the highway is treated as the traveling scene Ds of the vehicle 2. In FIGS. 21 (c1) to 21 (c4), integrated learning is performed in the order of buffer control Cc, drive control Dc, braking control Bc, and steering control Sc as control priority P.
• FIG. 21(c1) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the shock absorber 14 when the sensor processing unit 31C performs the buffer control Cc using the NN data NdC2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less before the number of transmissions of the control signal Cs reaches 10M; let Dc1 be the number of transmissions at this time.
• FIG. 21(c2) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the drive device 11 when the sensor processing unit 31A performs the drive control Dc using the NN data NdA2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less before the number of transmissions of the control signal Cs reaches 10M; let Dc2 be the number of transmissions at this time.
• FIG. 21(c3) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the braking device 12 when the sensor processing unit 31D performs the braking control Bc using the NN data NdD2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less before the number of transmissions of the control signal Cs reaches 10M; let Dc3 be the number of transmissions at this time.
• FIG. 21(c4) shows the transition of the control stability evaluation Se with respect to the number of transmissions of the control signal Cs to the steering device 13 when the sensor processing unit 31B performs the steering control Sc using the NN data NdB2. The notions of the number of transmissions of the control signal Cs and the control stability evaluation Se are the same as in FIG. 19(a1). The fluctuation of the evaluation value converges to the convergence determination value Conv or less before the number of transmissions of the control signal Cs reaches 10M; let Dc4 be the number of transmissions at this time.
• As described above, in FIGS. 21(c2) to 21(c4), the NN data NdA2, NdB2, and NdD2 converge with fewer transmissions of the control signal Cs than in FIGS. 19(a1) to 19(a3). That is, with this priority P, the NN data NdA2, NdB2, and NdD2 can stably control the in-vehicle devices Vd to be controlled. Further, compared with the cases of FIGS. 19(a1) to 19(a3), a larger proportion of appropriately controlled data sets is included, so the acquired controllable areas Ia and robust control areas Ra can be expected to be more appropriate than in those cases.
• The evaluation unit 30e can also determine that the integrated learning of an NN data Nd has failed to converge when, for example, the number of transmissions of the control signal Cs reaches a predetermined number, or a predetermined time elapses, before the fluctuation of the evaluation value converges to the convergence determination value Conv or less. The predetermined number and the predetermined time may be set in advance in the control unit 30 for each driving scene Ds and each sensor processing unit 31, may be derived by the control unit 30 based on the properties of the vehicle 2 and the driving scene Ds through integrated learning in the own vehicle 2, or may be acquired via the in-vehicle network 40.
• In that case, the learning unit 30a may cause the selection unit 30s to issue the reselection instruction Rs and the switching instruction Sw of the NN data Nd.
• The difference in the degree of convergence of integrated learning among the drive control Dc, the braking control Bc, and the steering control Sc in FIGS. 19(a1) to (a3) and FIGS. 20(b1) to (b3) stems from the priority P of the integrated learning, that is, from whether the drive control Dc, the braking control Bc, and the steering control Sc were performed in this order.
• The priority P of the three control items Ci (the drive control Dc, the braking control Bc, and the steering control Sc) is considered as follows. If the braking control Bc and the steering control Sc are performed while the drive control Dc is not properly performed and the traveling speed is very high due to excessive acceleration, the frictional force between the tires 4 and the road surface (that is, the grip of the tires 4) may be overcome by the inertial force of the vehicle body 3, causing a slip. It is therefore difficult to keep the vehicle 2 in a stable state while the drive control Dc is not properly performed. Hence, among the three control items Ci, performing the drive control Dc appropriately has the highest priority.
• Similarly, if the steering control Sc is performed in preference to the braking control Bc in a state where the drive control Dc is properly performed and the traveling speed is appropriate, the frictional force between the tires 4 and the road surface (that is, the grip of the tires 4) may still be overcome by the inertial force of the vehicle body 3, causing a slip. It is therefore difficult to keep the vehicle 2 in a stable state if the steering control Sc is performed while the braking control Bc is not properly performed. Hence, among the three control items Ci, performing the braking control Bc appropriately has the next-highest priority.
• In FIG. 21(c1), the integrated learning of the buffer control Cc is given priority, for the following reason.
• While the vehicle 2 is traveling, it is desirable that the inclination (or posture) of the vehicle body 3 and the position of the center of gravity be controlled against the inertial force so that each tire 4 contacts the road surface with an even load. With such control, the position of the center of gravity of the vehicle 2 fluctuates less and each tire 4 more easily contacts the road surface with an even load. As a result, the frictional force between each tire 4 and the road surface is less easily impaired, and a state in which the grip is effective is more easily maintained.
• The shock absorber 14 can change the position of the center of gravity of the vehicle 2 (that is, the balance between the front/rear/left/right inclination and the weight of the vehicle body 3) by changing the degree of cushioning (that is, the amount or speed of change) for each tire 4. By appropriately controlling the degree of cushioning, the frictional force (that is, grip) of the tires 4, which transmit force to the road surface against inertial forces acting in directions different from the traveling direction of the vehicle 2, can be further improved. For example, when the vehicle 2 accelerates, it tilts rearward under the inertial force; by operating the shock absorbers 14 to stiffen the cushioning of the rear wheels against this inertial force (that is, to keep the rear wheels from sinking), the inertial force is prevented from concentrating on the rear wheels and destroying their frictional force, and as a result the forward rotational force of the tires 4 is more easily transmitted to the road surface.
• Similarly, when the vehicle 2 curves while traveling, it tilts in the direction opposite to the curve due to the inertial force; by operating the shock absorbers 14 to stiffen the cushioning of the wheels on the outside of the curve (that is, to keep the outer wheels from sinking), the inertial force is prevented from concentrating on the outer wheels and destroying their frictional force. That is, the inertial force can be distributed appropriately to the left and right wheels, and as a result, the reaction force of the tires 4 for changing the traveling direction is more easily transmitted to the road surface.
• When the vehicle 2 decelerates into a curve, it tilts forward, and the increased load on the front wheels raises their frictional force (that is, the reaction force for changing the traveling direction) in the curve. On the other hand, if too much load is applied to the front wheels (in other words, if the vehicle 2 tilts forward too much), the frictional force of the rear wheels is weakened (that is, the rear wheels tend to slip). It is desirable that the shock absorbers 14 adjust the degree of cushioning of the front and rear wheels on the outside of the curve so that the rear wheels do not slip due to such an excessive effect.
• By performing the buffer control Cc suited to the vehicle 2 through integrated learning, the inclination of the vehicle body 3 can be suppressed and each tire 4 can be brought into contact with the road surface with an even load. As a result, decreases in the frictional force between each tire 4 and the road surface are suppressed and the grip remains effective. Therefore, giving the buffer control Cc priority over the drive control Dc, the braking control Bc, and the steering control Sc makes it easier to ensure the running stability of the vehicle 2.
• Here, the priority P of the drive control Dc, the braking control Bc, and the steering control Sc for the vehicles 2A and 2B, which have different properties, is considered. Since the displacement, horsepower, and weight of the vehicle body 3 are larger in the vehicle 2A than in the vehicle 2B, the fluctuations of the controlled quantities are correspondingly larger, and it is difficult for the NN data Nd acquired by learning before integration to perform appropriate control. On the other hand, since the vehicle 2B has a smaller and lighter vehicle body 3 and wheels than the vehicle 2A, it is hard to ensure stability when traveling on an uneven road surface. The buffer control Cc here may, for example, change the strength of the cushioning of the shock absorber 14 for each tire 4, or change the responsiveness of the cushioning operation, according to the inclination of the vehicle body 3, the size of the unevenness the tires 4 ride over, the magnitude of the inertial force acting in a curve, and so on.
• If such differences in properties are not taken into account, each AI after integration may not be able to fulfill its purpose in the system 1. That is, even if integrated learning is performed, the controllable areas Ia of the NN data Nd may not be reshaped properly, and the operation of the system 1 may remain unstable.
• In that case, the learning unit 30a performs the stability evaluation Se shown in FIGS. 19 to 21 and, when it determines that control by the NN data Nd undergoing integrated learning is not functioning properly for the environment, feeds this back into the process of deriving the control priority P. By changing the control priority P and performing integrated learning again, excess or deficiency in the control of the in-vehicle devices Vd to be controlled can be suppressed.
• The results of such stability evaluations Se are accumulated as information indicating the characteristics of each in-vehicle device Vd's response to control, and the influence of the properties of the vehicle 2 or the characteristics of the driving scene Ds can be analyzed. This information can be used for the teacher data Td in learning before integration, as an evaluation index in the learning process, for estimating the driving scene Ds in integrated learning after integration, and for deriving the control priority P.
  • FIG. 22 is a schematic diagram for explaining an example of a mechanism in which a physical quantity related to the vehicle 2 affects the control item Ci.
• As the control items Ci, the drive control Dc, the braking control Bc, the steering control Sc, and the buffer control Cc are dealt with here. As various parameters related to the vehicle 2, the vehicle body 3, the vehicle width, the wheelbase, the traveling speed, the weight (or mass), and the center of gravity are dealt with here.
• The weight (or mass) is determined by the vehicle body 3 of the completed vehicle, and the center of gravity is determined by the vehicle body 3, the vehicle width, the wheelbase, and so on. Passengers and loads may also be treated as parameters affecting the weight (or mass) and the center of gravity.
• The control results of the drive control Dc, the braking control Bc, the steering control Sc, and the buffer control Cc of the control items Ci influence one another.
• The physical quantities constituting the control model Ca of each control item Ci are the frictional force Fb, the load (or impact force) Fp, the kinetic energy E, and the moment of inertia I. A control model Ca is determined for each control item Ci; for example, the control model Ca of the drive control Dc is the control model CaA, and the control model Ca of the braking control Bc is the control model CaB. For the load (or impact force) Fp and the kinetic energy E, either the quantity itself or its volatility may be dealt with.
• The control model Ca of the drive control Dc can be expressed to include at least the frictional force Fb acting on the vehicle 2 and the kinetic energy E. The control model Ca of the braking control Bc can likewise be expressed to include at least the frictional force Fb and the kinetic energy E. The control model Ca of the steering control Sc can be expressed to include at least the frictional force Fb and the moment of inertia I (or the kinetic energy E). The control model Ca of the buffer control Cc can be expressed to include at least the load (or impact force) Fp acting on the vehicle 2. The frictional force Fb and the load (or impact force) Fp can be regarded as acting on each tire 4 of the vehicle 2.
• Therefore, the drive control Dc and the braking control Bc are affected by fluctuations of two quantities, the frictional force Fb and the kinetic energy E. The steering control Sc is affected by fluctuations of the frictional force Fb and the moment of inertia I (or kinetic energy E). The buffer control Cc is affected by fluctuations of one quantity, the load (or impact force) Fp.
  • the frictional force Fb can be expressed by, for example, the mathematical formula (1).
  • the load or impact force Fp can be expressed by, for example, the mathematical formula (2).
  • the kinetic energy E can be expressed by, for example, the mathematical formula (3).
  • the moment of inertia I can be expressed by, for example, the mathematical formula (4).
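• The formulas (1) to (4) are not reproduced in this text (they appear as images in the original publication). Based only on the surrounding description (a quantity N appears inside Fb; Fp varies with E; E varies with the traveling speed v and the moment of inertia I; I varies with the center-of-gravity position d), one plausible reconstruction is sketched below; the friction coefficient \mu, normal force N, mass m, momentum p, and angular velocity \omega are assumptions, not quantities confirmed by the patent.

```latex
F_b = \mu N                                        \tag{1}
F_p = \frac{\Delta p}{\Delta t}                    \tag{2}
E   = \frac{1}{2} m v^2 + \frac{1}{2} I \omega^2   \tag{3}
I   = m d^2                                        \tag{4}
```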
  • the control model Ca may be included in the learning model, or may be applied to a general control circuit or control program not limited to machine learning.
• The quantity N included in the frictional force Fb fluctuates over very short times depending on the load or impact force Fp applied to each tire 4. The load or impact force Fp applied to each tire 4 fluctuates depending on the kinetic energy E of the traveling vehicle 2. The kinetic energy E of the traveling vehicle 2 fluctuates depending on the traveling speed v of the vehicle 2 and the moment of inertia I. The moment of inertia I fluctuates depending on the position d of the center of gravity of the vehicle 2.
• The drive control Dc and the braking control Bc change the traveling speed v of the vehicle 2, and the buffer control Cc changes the position d of the center of gravity of the vehicle 2. The steering control Sc can be treated as small enough to change neither the traveling speed nor the center of gravity of the vehicle 2.
• Therefore, both the load (or impact force) Fp, which is expressed in terms of the kinetic energy E, and the frictional force Fb, which is expressed in terms of the load (or impact force) Fp, fluctuate with the drive control Dc and the braking control Bc. The steering control Sc, whose control model Ca includes the moment of inertia I and thus directly receives the fluctuation of the center-of-gravity position d, is strongly influenced by the buffer control Cc, which varies the center-of-gravity position. The drive control Dc, the braking control Bc, and the steering control Sc, whose control models Ca include the kinetic energy E and thus directly receive the fluctuation of the traveling speed v, are strongly influenced by the drive control Dc and the braking control Bc, which vary the traveling speed v.
• When the drive control Dc and the braking control Bc are compared, the drive control Dc increases the kinetic energy E while the braking control Bc decreases it, so the influence the drive control Dc exerts on the other control items Ci is stronger.
  • the steering control Sc is strongly influenced by the drive control Dc, the braking control Bc, and the buffer control Cc.
  • the braking control Bc is strongly influenced by the drive control Dc and the buffer control Cc.
  • the drive control Dc is strongly influenced by the braking control Bc and the buffer control Cc.
• The buffer control Cc is affected by the drive control Dc and the braking control Bc, and the influence of the drive control Dc is stronger than that of the braking control Bc. In addition, the drive control Dc, the braking control Bc, and the steering control Sc are affected by the moment of inertia I, and the moment of inertia I is affected by the steering control Sc. Therefore, the degree to which each control item Ci affects the others, that is, the control priority P, is in the order buffer control Cc > drive control Dc > braking control Bc > steering control Sc.
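• This ordering can be reproduced mechanically from the influence relations above. The following minimal Python sketch (the weights and the neglect of the steering control's weaker influence through I are assumptions made for illustration) scores each control item by how strongly it drives the others minus how strongly it is driven:

```python
# influence[a] is the set of control items that item a influences.
influence = {
    "Cc": {"Dc", "Bc", "Sc"},  # buffer control shifts the center of gravity d
    "Dc": {"Bc", "Sc", "Cc"},  # drive control raises the kinetic energy E
    "Bc": {"Dc", "Sc", "Cc"},  # braking control lowers the kinetic energy E
    "Sc": set(),               # Sc's weaker influence via I is neglected here
}
# Relative strengths from the text: Cc influences all; Dc is stronger than Bc.
strength = {"Cc": 3.0, "Dc": 2.0, "Bc": 1.0, "Sc": 0.0}

def priority_order(influence, strength):
    score = {}
    for item, targets in influence.items():
        drives = len(targets) * strength[item]
        driven = sum(strength[src] for src, tgts in influence.items() if item in tgts)
        score[item] = drives - driven
    return sorted(score, key=score.get, reverse=True)

print(priority_order(influence, strength))  # -> ['Cc', 'Dc', 'Bc', 'Sc']
```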
• The control model Ca of the vehicle 2 is not limited to the above formulas (1) to (4). Various physical quantities, including expressions of external forces such as momentum, potential, frictional force, and air resistance using Lagrangian or Hamiltonian functions, may be derived by computation on a machine capable of high-speed calculation (that is, a computing device or quantum computer specialized for the analysis of each control model Ca). The amount of computation may also be reduced by treating the analysis as one of momentum.
• Further, a plurality of prior analysis results for each control model Ca may be held in, for example, the control unit 30, a sensor processing unit 31, an in-vehicle device Vd, or a server on the in-vehicle network 40, in order to improve the efficiency or speed of the analysis computation.
• Based on the control priority P derived above, the learning unit 30a first sets the NN data Nd of the sensor processing unit 31D, whose control target is the shock absorber 14, to the additional-learning-compatible mode, and causes integrated learning of the buffer control Cc suited to the vehicle 2 after integration. Next, the learning unit 30a sets the NN data Nd of the sensor processing unit 31A, whose control target is the drive device 11, to the additional-learning-compatible mode, and causes integrated learning of the drive control Dc suited to the vehicle 2 after integration. The learning unit 30a then sets the NN data Nd of the sensor processing unit 31B, whose control target is the braking device 12, to the additional-learning-compatible mode, and causes integrated learning of the braking control Bc suited to the vehicle 2 after integration. Finally, the learning unit 30a sets the NN data Nd of the sensor processing unit 31C, whose control target is the steering device 13, to the additional-learning-compatible mode, and causes integrated learning of the steering control Sc suited to the vehicle 2 after integration. In this way, by considering the control priority P, the integrated learning of the NN data Nd for each control item Ci can be switched in chronological order.
  • FIG. 23 is a schematic diagram for explaining a process of integrated learning of a plurality of AIs integrated in the AI integrated system 1.
• The two axes shown in FIG. 23 are the control items Ci corresponding to the AIs on the vertical axis and the time axis Co, a time-series quantity, on the horizontal axis. The control items Ci on the vertical axis are arranged in order of the control priority P in the environment in which the system 1 operates (for example, the traveling scene Ds). The horizontal axis may be, for example, a count used for calculation or tallying, as long as it can be handled as a time series. Here, the buffer control Cc, the drive control Dc, the braking control Bc, the steering control Sc, the transmission control Tc, the recognition control Rc, the UI control Ui, and the battery control Ec are arranged in this order.
• At the time T1 at which integrated learning is started, the learning unit 30a causes the NN data NdD of the sensor processing unit 31D, corresponding to the buffer control Cc with the highest priority P, to perform the integrated learning Ld1.
• The evaluation unit 30e evaluates the progress of the integrated learning Ld1 started at the time T1 based on information such as its degree of convergence or the stability evaluation Se, and the learning unit 30a determines, based on this evaluation, whether to switch to the integrated learning of another AI.
• Depending on the characteristics of the devices integrated into the system 1 and of the AIs to be controlled, the integrated learning Ld1 may be continued until it converges; it is also possible to have the learning unit 30a decide to switch to the integrated learning of another AI according to the progress of learning obtained from the stability evaluation Se or the like, even if the learning has not converged. Further, the learning unit 30a may be made to finish the integrated learning of one AI and switch to the integrated learning of the next AI sequentially, in descending order of the control priority P.
• Next, at the time T2, the learning unit 30a causes the NN data NdA of the sensor processing unit 31A, corresponding to the drive control Dc with the second-highest priority P, to perform the integrated learning La1.
• The evaluation unit 30e evaluates the progress of the integrated learning La1 started at the time T2 based on information such as its degree of convergence or the stability evaluation Se, and the learning unit 30a determines, based on this evaluation, whether to switch to the integrated learning of another AI. Here, the learning unit 30a decides to switch to the integrated learning of another AI according to the progress of learning obtained from the stability evaluation Se or the like, even though the integrated learning La1 has not converged.
• At the time T3, the learning unit 30a causes the NN data NdD of the sensor processing unit 31D to perform the integrated learning Ld2 again. That is, the learning unit 30a can decide to switch from the integrated learning Ld1 to the integrated learning La1 at the time T2, and also to switch back to the integrated learning Ld2 of the buffer control Cc according to the progress of the integrated learning La1. In other words, based on the above policy, the learning unit 30a can let the integrated learning y of another AI with lower priority P proceed partway before the integrated learning x with higher priority P has finished, and then advance the higher-priority integrated learning x again; in this way, the trained models of a plurality of AIs can be changed in parallel while the control priority P is taken into account.
• At the time T4, the learning unit 30a causes the NN data NdB of the sensor processing unit 31B, corresponding to the braking control Bc with the third-highest priority P, to perform the integrated learning Lb1. The evaluation unit 30e evaluates the progress of the integrated learning Lb1 started at the time T4 based on information such as its degree of convergence or the stability evaluation Se, and the learning unit 30a determines, based on this evaluation, whether to switch to the integrated learning of another AI. At the time T5, the learning unit 30a causes the NN data NdA of the sensor processing unit 31A, corresponding to the drive control Dc, to perform the integrated learning La2 again. Thereafter, in the same way, the learning unit 30a causes the NN data NdD of the sensor processing unit 31D corresponding to the buffer control Cc, the NN data NdA of the sensor processing unit 31A corresponding to the drive control Dc, the NN data NdB of the sensor processing unit 31B corresponding to the braking control Bc, and the NN data NdC of the sensor processing unit 31C corresponding to the steering control Sc to perform integrated learning.
• By performing the integrated learning Ld1, La1, Ld2, Lb1, La2, Lc1, Ld3, La3, Lb2, and Lc2 in this order, the learning of the NN data Nd can be completed sequentially while the control priority P is reflected.
• Similarly, from the time T11 to the time T17, the learning unit 30a performs integrated learning on the NN data Nd of the sensor processing units 31 corresponding to the transmission control Tc, the recognition control Rc, the UI control Ui, and the battery control Ec, which have low control priority P. For these, integrated learning may also be run in parallel: for example, while the integrated learning Lf1 on the NN data NdF is in progress, the integrated learning Lg1 on the NN data NdG of the sensor processing unit 31G corresponding to the UI control Ui and the integrated learning Lh1 on the NN data NdH of the sensor processing unit 31H corresponding to the battery control Ec may be started.
• In this way, the learning unit 30a can divide integrated learning among a plurality of AIs. This makes it possible to change the trained models of the plurality of AIs not only by one-way additional learning (that is, integrated learning) that follows the strength of their influence, but also by two-way additional learning (that is, integrated learning) that reflects the results of their mutual influence.
  • integrated learning one-sided additional learning
  • two-way additional learning that is, integrated learning
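• The interleaving just described can be pictured with the following Python sketch. Everything in it (the task table, the progress-discounted selection rule, the 40 % learning bursts) is an assumption introduced here for illustration; it is a schematic of priority-aware switching in the spirit of FIG. 23, not the disclosed implementation.

```python
# Toy sketch: interleave integrated-learning bursts across several AIs by
# control priority P. All names and numbers here are illustrative only.

def run_integrated_learning(tasks, evaluate):
    """tasks: dict name -> {"priority": P, "progress": 0..1}.
    evaluate(name, progress) -> updated progress (standing in for the
    evaluation unit's convergence estimate). Repeatedly picks the
    unconverged task whose priority, discounted by how far it has already
    progressed, is largest, so a lower-priority learning (La1) can run
    partway inside a higher-priority one (Ld1 ... Ld2)."""
    schedule = []
    while any(t["progress"] < 1.0 for t in tasks.values()):
        name = max((n for n, t in tasks.items() if t["progress"] < 1.0),
                   key=lambda n: tasks[n]["priority"] * (1.0 - tasks[n]["progress"]))
        tasks[name]["progress"] = evaluate(name, tasks[name]["progress"])
        schedule.append(name)
    return schedule

tasks = {"Ld (buffer Cc)":  {"priority": 3.0, "progress": 0.0},
         "La (drive Dc)":   {"priority": 2.0, "progress": 0.0},
         "Lb (braking Bc)": {"priority": 1.0, "progress": 0.0}}
# toy evaluation: each burst advances learning by 40 %
print(run_integrated_learning(tasks, lambda n, p: min(1.0, p + 0.4)))
```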
  • FIG. 24 is a flowchart for explaining the processing in the control unit 30 in the AI integrated system 1.
  • FIG. 24 shows a process in which the control unit 30 causes the NN data Nd of each sensor processing unit 31 to perform integrated learning.
  • the selection unit 30s estimates the traveling scene Ds in which the vehicle 2 travels based on the detection information Si input in the processing Sp81b of FIG. 10B.
  • the learning unit 30a derives the priority P of control in the NN data Nd of each sensor processing unit 31 based on the traveling scene Ds estimated by the processing Sp161.
  • the learning unit 30a causes the NN data Nd of each sensor processing unit 31 to perform integrated learning based on the control priority P derived in the processing Sp162.
• the evaluation unit 30e evaluates the control performed by each NN data Nd based on the data sets input during the integrated learning of the NN data Nd in the process Sp163.
• the evaluation unit 30e determines whether or not the integrated learning in each NN data Nd has converged, based on the evaluation in the process Sp164. If it has not converged, the process returns to Sp161 and the control priority P continues to be derived for each NN data Nd. Alternatively, based on the evaluation in the process Sp164, the learning unit 30a may proceed to the process Sp163 and switch the execution of the integrated learning in each NN data Nd as shown in FIG. 23, or the process may return to Sp162 so that the control priority P in each NN data Nd is derived again according to the characteristics of the traveling scene Ds.
• the series of processes shown in FIG. 24 may be performed independently of the processes shown in FIGS. 10(a) and 10(b). Further, each process of FIG. 24 may be started regardless of the progress of the next process; for example, while the control priority P is being derived for each NN data Nd, individual NN data Nd may be subjected to integrated learning based on an already derived priority P. The loop sketch below restates this flow.
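• For readability, the FIG. 24 flow can be restated as a loop. The sketch below is a toy restatement under assumed names (ControlUnit, estimate_scene, derive_priority and so on are not identifiers from this disclosure), with random placeholders standing in for the real estimation, learning and evaluation.

```python
import random

class SensorUnit:
    def __init__(self, name):
        self.name = name
        self.converged = False

class ControlUnit:
    """Toy stand-in for the control unit 30; all behaviour is invented."""
    def estimate_scene(self):                        # Sp161 (selection unit 30s)
        return random.choice(["urban", "highway"])
    def derive_priority(self, scene, units):         # Sp162 (learning unit 30a)
        # a real system would condition the priorities on the scene Ds
        return {u.name: random.random() for u in units}
    def integrated_learning(self, unit):             # Sp163
        unit.converged = unit.converged or random.random() > 0.5
    def all_converged(self, units):                  # Sp164 (evaluation unit 30e)
        return all(u.converged for u in units)

def supervise(ctl, units, max_rounds=20):
    for _ in range(max_rounds):
        scene = ctl.estimate_scene()
        prio = ctl.derive_priority(scene, units)
        for u in sorted(units, key=lambda u: prio[u.name], reverse=True):
            ctl.integrated_learning(u)               # may be switched as in FIG. 23
        if ctl.all_converged(units):                 # converged for every NN data Nd
            return True
        # otherwise loop back: re-estimate the scene (Sp161) or, in the
        # variant described above, only re-derive the priorities (Sp162)
    return False

random.seed(0)
print(supervise(ControlUnit(), [SensorUnit(n) for n in "ABCD"]))
```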
• after the integrated learning of each NN data Nd has converged, the learning unit 30a may keep each NN data Nd in the additional-learning non-compatible mode at all times, or may set it to the additional-learning compatible mode.
• NN data Nd for which integrated learning has been performed in one vehicle 2 may also be acquired by another vehicle 2 through communication and used there.
• furthermore, the NN data Nd that have undergone integrated learning may be compared and analyzed to extract differences in their information; the extracted difference information can be fed back to improve problems in the control by each NN data Nd, or reflected in the control model Ca when deriving the control priority P, so that the control by each NN data Nd becomes more stable, safe and reliable. In this way, across the various environments in which the vehicles 2 travel, the vehicles 2 can complement one another for environments to which an individual vehicle 2 is not yet adapted, and a plurality of AIs can be adapted to the system 1 of each vehicle 2 more accurately and more quickly, without each vehicle 2 having to keep performing integrated learning on its own after integrating the plurality of AIs into its AI integrated system 1.
• a server connected to the in-vehicle network 40 may communicate with the control units 30 mounted on a plurality of vehicles 2 and update the various information and the NN data Nd held by each control unit 30 and sensor processing unit 31; a minimal sketch of such sharing follows.
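• A minimal sketch of that sharing, assuming an invented store-and-fetch interface (FleetServer, upload and download are hypothetical names; a real deployment would add versioning, validation and security):

```python
class FleetServer:
    """Toy server that relays NN data Nd between vehicles 2."""
    def __init__(self):
        self.store = {}   # sensor-unit id -> (reporting vehicle, NN data Nd)

    def upload(self, vehicle_id, unit_id, nn_data):
        # a vehicle reports NN data refined by its own integrated learning
        self.store[unit_id] = (vehicle_id, nn_data)

    def download(self, unit_id):
        # another vehicle fetches the shared NN data for the same unit
        entry = self.store.get(unit_id)
        return entry[1] if entry else None

server = FleetServer()
server.upload("vehicle-A", "31B", b"...serialized NN data NdB...")
print(server.download("31B"))
```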
  • the control unit 30 causes a plurality of AIs to perform integrated learning in consideration of the control priority P.
  • the control unit 30 causes the NN data Nd held by each sensor processing unit 31 integrated in the system 1 to perform integrated learning.
• the control unit 30 causes integrated learning to be performed on the NN data Nd corresponding to the traveling scene Ds estimated by its selection unit 30s.
• among the NN data Nd set in the sensor processing units 31 corresponding to the traveling scene Ds, the control unit 30 (learning unit 30a) has integrated learning performed in an order based on the control priority P.
• the control priority P here may be derived by the learning unit 30a based on, for example, the relative strength of the influence the control items Ci exert on one another, the causal relationships between control items Ci via the physical quantities (that is, the parameters of the system 1) that appear when each control item Ci is expressed by the control model Ca, and the stability evaluation Se of the operation that the controlled devices perform under the control of the NN data Nd of each sensor processing unit 31; alternatively, it may be set by a person or via a network.
• the NN data Nd set in the plurality of AIs can thereby be ordered and made to perform integrated learning in the environment in which the system 1 operates.
• since the control item Ci that is the basis for maintaining a stable operating state in the environment in which the system 1 operates (in other words, the critical control item Ci) is made to function properly first, the other control items Ci can then function more appropriately, and as a result it becomes easier to adapt the plurality of AIs to the system 1 after integration.
• an AI integrated into the system 1 can thus adapt the trained model it acquired before integration to the system 1 through integrated learning. It can be said that an AI whose control has been changed to suit the system 1 has higher robustness in its control than before integration into the system 1.
• when the configuration of the system 1 changes, for example, the control unit 30 has the integrated learning performed again, so that the system 1 can maintain a stable operating state.
• the learning unit 30a has the NN data Nd of the sensor processing unit 31 that handles an added or changed sensor 32 perform integrated learning preferentially, which can be expected to make the system 1 operate stably again. This leads to improved robustness of the control by an AI whose control is changed to suit the system 1.
• likewise, when an NN data Nd is changed, the system 1 can be operated stably through integrated learning of the changed NN data Nd by the control unit 30. At this time, not only the changed NN data Nd but also other NN data Nd may undergo integrated learning based on the control priority P; the control priority P may be derived anew by the control unit 30, or an already derived priority may be used.
• even when an in-vehicle device Vd or a sensor processing unit 31 falls into an abnormal state, the control unit 30 responds with integrated learning, whereby the possibility of continuing to operate the system 1 is obtained. This improves the robustness of control over the operation of the devices in the system 1.
• in the above, the learning unit 30a derives the priority P for the control performed by each NN data Nd and uses it for integrated learning; however, the priority P may also be derived for at least one of the in-vehicle devices Vd themselves and their operation, and used for integrated learning.
• the second embodiment can be applied in such cases as well.
• As described above, in the second embodiment, the control unit 30 derives the priority P for having the individual NN data Nd (that is, the AIs) used in the respective sensor processing units 31 perform integrated learning, and executes the learning of each NN data Nd based on the derived priority P. Consequently, each AI, a model trained before integration into the system 1 to control its in-vehicle device Vd, can undergo integrated learning in an orderly manner, and as a result the AI of each sensor processing unit 31 can be controlled more appropriately. This leads to improved robustness of the control (in other words, the autonomous control) that the AI acquires through learning.
• In the second embodiment, the control priority P is derived using the control model Ca expressing each control item Ci; in the third embodiment, it is derived based on the learning process of each control item Ci.
• FIG. 25 is a schematic diagram for explaining a process of determining the control priority P for the integrated learning of NN data Nd in the third embodiment of the present disclosure.
• the learning unit 30a can acquire information regarding the learning that the NN data Nd held by each sensor processing unit 31 underwent before integration into the vehicle 2.
• this pre-integration learning information concerns the learning process of the NN data Nd in various driving scenes Ds; examples of evaluation items include the degree of contribution of control in each driving scene Ds for each NN data Nd, the degree of convergence of learning, and the degree of variation in the learning process when the seed used for learning is changed.
• the degree of contribution of control for each driving scene Ds can be derived from the influence on the driving of the vehicle 2 when the in-vehicle device Vd controlled by the NN data Nd operates appropriately (for example, when the control of the in-vehicle device Vd and its result are close to the teacher data Td corresponding to the driving scene Ds) and when it does not operate appropriately (for example, when they are not close to the teacher data Td), as well as from the stability evaluation of the driving after control.
• the degree of convergence of learning can be derived from, for example, the number of control signals Cs transmitted until learning is determined to have converged, the number of teacher data (or data sets) used for learning, and the like.
  • the degree of variation in the learning process when the seeds used for learning are changed can be derived from, for example, the relationship between the number of seeds and the variation in the learning process.
• the evaluation unit 30e evaluates the NN data Nd of each sensor processing unit 31 before integration into the vehicle 2 against these evaluation items (the degree of contribution of control, the degree of convergence of learning, and the degree of variation in the learning process) for each driving scene Ds.
• the evaluation unit 30e derives an evaluation value Val for each driving scene Ds based on the evaluation results and extracts those with a high evaluation value Val; in deriving Val, the weighting may be changed for each evaluation item.
• the priority P is then set in descending order of the evaluation value Val, and integrated learning is performed. For the other driving scenes Ds, evaluation, extraction and determination of the priority P are performed in the same way, and integrated learning is performed on each NN data Nd; the scoring is sketched below.
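• The scoring can be sketched as a weighted sum. The weights, scales and toy numbers below are assumptions for illustration; only the three evaluation items and the descending-Val ordering come from the description above.

```python
def evaluation_value(contribution, convergence, seed_variation,
                     w=(0.5, 0.3, 0.2)):
    """Per-scene score; higher is better. Variation across random seeds is
    penalised, because a learning process that stays stable when the seed
    changes is preferred."""
    return w[0] * contribution + w[1] * convergence - w[2] * seed_variation

records = {  # toy pre-integration records for one driving scene Ds
    "NdA (drive Dc)":    (0.9, 0.8, 0.1),
    "NdB (braking Bc)":  (0.7, 0.9, 0.3),
    "NdC (steering Sc)": (0.8, 0.6, 0.2),
}
vals = {nd: evaluation_value(*r) for nd, r in records.items()}
priority = sorted(vals, key=vals.get, reverse=True)  # descending Val -> priority P
print(priority)
```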
• As described above, in the third embodiment, the evaluation unit 30e derives the evaluation value Val from information on the pre-integration learning process of the NN data Nd of each sensor processing unit 31, for example the degree of contribution of control in each NN data Nd, and the control unit 30 can determine the control priority P of each control item Ci based on the derived evaluation value Val and have integrated learning performed. The control priority P may also be derived in combination with the method of the second embodiment.
• In the fourth embodiment, control items Ci extracted as in the third embodiment are further combined for integrated learning.
• FIG. 26 is a schematic diagram for explaining another process of determining the control priority P for the integrated learning of NN data Nd in the fourth embodiment of the present disclosure.
• the learning unit 30a has extracted four control items Ci (the buffer control Cc, the drive control Dc, the braking control Bc and the steering control Sc) from among the control items Ci carried by the various sensor processing units 31 integrated in the vehicle 2.
• the learning unit 30a combines two or more of the four extracted control items Ci (that is, the drive control Dc, the braking control Bc, the steering control Sc and the buffer control Cc) on a trial basis and has them perform integrated learning.
• In FIG. 26(a), for each of the six patterns obtained by combining two of the four extracted control items Ci, the learning unit 30a sets the two combined control items Ci to the additional-learning compatible mode and the other control items Ci to the additional-learning non-compatible mode, and has trial integrated learning performed.
• the evaluation unit 30e evaluates the learning process when trial integrated learning is performed on the six combined patterns.
• for the evaluation of the learning process, for example, the degree of variation in the output of each NN data Nd (that is, in the control signal Cs) or the stability evaluation Se of control shown in FIGS. 19 to 21 can be used.
• based on the evaluation of the trial integrated learning, the learning unit 30a determines which NN data Nd should be combined preferentially, and has the combined NN data Nd perform integrated learning until their learning converges.
• In FIG. 26(b), for each of the twelve patterns that consider the order of combining two of the four extracted control items Ci, the learning unit 30a sets the two combined control items Ci, in their given order, to the additional-learning compatible mode and the other control items Ci to the additional-learning non-compatible mode, and has trial integrated learning performed.
• the evaluation unit 30e evaluates the learning process when trial integrated learning is performed on the twelve combined patterns. Based on this evaluation, the learning unit 30a determines, taking order into account, which NN data Nd should be combined preferentially, and has the combined NN data Nd perform integrated learning in sequence until their learning converges.
• FIGS. 26(a) and 26(b) are only examples; the learning unit 30a may combine a plurality of NN data Nd from various control items Ci for trial integrated learning, determine the priority P, and then have them perform integrated learning.
• As described above, in the fourth embodiment, the learning unit 30a performs trial integrated learning on, for example, a plurality of patterns combining the NN data Nd of the sensor processing units 31, and the evaluation unit 30e evaluates these trials. The learning unit 30a can then determine the priority P of the combinations of NN data Nd based on the evaluation by the evaluation unit 30e and have the NN data Nd perform integrated learning. The control priority P may also be derived in combination with the methods of the second and third embodiments; a combinatorial sketch follows.
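• The pattern counts follow from elementary combinatorics, as the sketch below checks; the scoring function is a placeholder for the evaluation unit 30e (for example output variation or the stability evaluation Se), and its numbers are invented.

```python
from itertools import combinations, permutations

items = ["Dc", "Bc", "Sc", "Cc"]          # the four extracted control items Ci

def evaluate_trial(pair):
    # placeholder score; pretend drive + braking couple most strongly
    scores = {frozenset(("Dc", "Bc")): 0.9, frozenset(("Dc", "Sc")): 0.7}
    return scores.get(frozenset(pair), 0.5)

# FIG. 26(a): unordered pairs; the paired items get the additional-learning
# compatible mode (AL), all other control items Ci stay non-compatible (NL)
unordered = list(combinations(items, 2))
assert len(unordered) == 6
print("learn first:", max(unordered, key=evaluate_trial))

# FIG. 26(b): ordered pairs, when the order of the combination matters
ordered = list(permutations(items, 2))
assert len(ordered) == 12
```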
  • the neural network according to the above-described embodiments 1 to 4 is generally composed of an input layer composed of a plurality of neurons, an intermediate layer (hidden layer) composed of a plurality of neurons, and an output layer composed of a plurality of neurons.
  • the intermediate layer may be one layer or two or more layers.
• FIG. 27 is a schematic diagram for explaining an example of a three-layer neural network. In the three-layer neural network shown in FIG. 27, when a plurality of inputs are given to the input layer (X1-X3), the values are multiplied by the weights W1 (w11-w16) and passed to the intermediate layer, whose outputs are in turn multiplied by the weights W2 and emitted from the output layer; the outputs therefore change with the values of the weights. A minimal forward pass is sketched below.
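• A minimal forward pass for such a network, assuming 3 inputs, a 2-neuron intermediate layer and 3 outputs (the layer sizes, the tanh activation and the random weights are illustrative assumptions, not values taken from FIG. 27):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # w11..w16: input layer -> intermediate layer
W2 = rng.normal(size=(2, 3))   # weights W2: intermediate layer -> output layer

def forward(x):
    hidden = np.tanh(x @ W1)   # inputs multiplied by the weights W1
    return hidden @ W2         # hidden values multiplied by the weights W2

print(forward(np.array([0.2, -0.5, 1.0])))
```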
  • FIG. 28 is a hardware configuration example for implementing the technique of the present disclosure shown in each of the above-described embodiments and modifications.
• the hardware is composed of at least a CPU as an arithmetic unit, a main storage device such as a memory, and an auxiliary storage device such as a hard disk or an optical disk.
• the hardware configuration is not limited to the example of FIG. 28; for instance, a communication device for connecting to an external network may further be provided.
• As described above, the AI integrated system includes a control unit that, from among a plurality of trained models which receive as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and an external network, and which generate a control signal for controlling the controlled device, selects one trained model based on the input detection information; and a sensor processing unit that controls the controlled device using the selected trained model. The control unit may select one of the plurality of trained models based on at least one of the input detection information and the control signal.
• Further, the AI integrated system includes a plurality of sensor processing units, corresponding to a plurality of controlled devices, each of which performs control using a trained model that receives as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and a communication network, and generates a control signal for controlling the controlled device; and a learning unit that causes the trained models of the plurality of sensor processing units to perform additional learning. The learning unit may cause at least one of the plurality of sensor processing units to perform additional learning.
• Further, the AI integrated device includes a control unit that, from among a plurality of trained models which receive as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and an external network, and which generate a control signal for controlling the controlled device, selects one trained model based on the input detection information and performs control. The control unit may select one of the plurality of trained models based on at least one of the input detection information and the control signal.
• Further, the AI integrated device includes a control unit that preferentially causes additional learning to be performed for at least one of the trained models which, corresponding to each of a plurality of controlled devices, receive as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and a communication network, and generate a control signal for controlling the controlled device.
• Further, the AI integrated program, from among a plurality of trained models which receive as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and an external network, and which generate a control signal for controlling the controlled device, selects one model suited to the environment based on the input detection information and controls the controlled device. The AI integrated program may select one of the plurality of trained models based on at least one of the input detection information and the control signal.
• Further, the AI integrated program preferentially causes additional learning to be performed for at least one of the trained models which, corresponding to each of a plurality of controlled devices, receive as input detection information indicating the characteristics of the environment in which the controlled device operates, input via at least one of a sensor and a communication network, and generate a control signal for controlling the controlled device.
• 1 AI integrated system, 2 vehicle, 3 car body, 4 tire, 5 door, 6 headlight, 7 electronic mirror, 8 in-vehicle camera, 9 radar, 10 transmission, 11 drive device, 12 braking device, 13 steering device, 14 shock absorber, 15 UI device, 16 recognition device, 17 transmission device, 21 drive control unit, 22 braking control unit, 23 steering control unit, 24 buffer control unit, 25 UI control unit, 26 recognition control unit, 27 transmission control unit, 30 control unit (supervision unit), 30e evaluation unit, 30s selection unit, 30a learning unit, 30m storage unit, 31, 31A, 31B, ..., 31N sensor processing unit, 32, 32A, 32B, ..., 32N sensor, 40 in-vehicle network, 41 communication device, 42 signal transmission path, 101 subsystem, Nd, NdA, NdA1, NdA2, ..., NdB, NdB1, ..., NdN NN data, Lm learner, Si detection information, Cs control signal, Sw switching instruction, Rs reselection instruction, AL additional-learning compatible mode, NL additional-learning non-compatible mode, Ds, DsA, DsB, ..., DsN driving scene, Ca, CaA, CaB, ..., CaN control model, Se stability evaluation, P control priority, Ci control item, Dc drive control, Bc braking control, Sc steering control, Cc buffer control, Ui UI control, Rc recognition control, Tc transmission control, Ia controllable area, Ra robust control area, Na non-adaptive area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention concerns an artificial intelligence (AI) integration system (1) comprising an integration unit (30) and a sensor processing unit (31). From among a plurality of trained models (Nd) that receive as input detection information (Si), input via a sensor and/or an external network, indicating a characteristic of the environment in which a device to be controlled operates, and that generate a control signal (Cs) for controlling the device to be controlled, the integration unit selects one trained model on the basis of the detection information (Si) and/or the generated control signal (Cs). The sensor processing unit uses the selected trained model (Nd) to control the device to be controlled.
PCT/JP2020/025175 2020-06-26 2020-06-26 Système d'intégration d'ia, dispositif d'intégration d'ia et programme d'intégration d'ia WO2021260910A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022532199A JP7414995B2 (ja) AI integrated system, AI integrated device and AI integrated program
PCT/JP2020/025175 WO2021260910A1 (fr) 2020-06-26 2020-06-26 Système d'intégration d'ia, dispositif d'intégration d'ia et programme d'intégration d'ia

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/025175 WO2021260910A1 (fr) 2020-06-26 2020-06-26 Système d'intégration d'ia, dispositif d'intégration d'ia et programme d'intégration d'ia

Publications (1)

Publication Number Publication Date
WO2021260910A1 true WO2021260910A1 (fr) 2021-12-30

Family

ID=79282164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/025175 WO2021260910A1 (fr) 2020-06-26 2020-06-26 Système d'intégration d'ia, dispositif d'intégration d'ia et programme d'intégration d'ia

Country Status (2)

Country Link
JP (1) JP7414995B2 (fr)
WO (1) WO2021260910A1 (fr)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002104104A (ja) Cooperative control device for automobile
US20220001858A1 (en) * 2018-11-13 2022-01-06 Nec Corporation Dangerous scene prediction device, dangerous scene prediction method, and dangerous scene prediction program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018173121A1 (fr) * 2017-03-21 2018-09-27 株式会社Preferred Networks Server device, trained model provision program, trained model provision method, and trained model provision system
WO2019193660A1 (fr) * 2018-04-03 2019-10-10 株式会社ウフル Machine-learned model switching system, edge device, machine-learned model switching method, and program
WO2020090251A1 (fr) * 2018-10-30 2020-05-07 日本電気株式会社 Object recognition device, object recognition method, and object recognition program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IBUKI, TAKERO ET AL.: "Development of a deep learning basis that enables the creation and offering of AI models in a short period of time", NTT DOCOMO TECHNICAL JOURNAL, vol. 25, no. 4, 31 January 2018 (2018-01-31), pages 12 - 18 *

Also Published As

Publication number Publication date
JPWO2021260910A1 (fr) 2021-12-30
JP7414995B2 (ja) 2024-01-16

Similar Documents

Publication Publication Date Title
US20220379920A1 (en) Trajectory planning method and apparatus
US10235881B2 (en) Autonomous operation capability configuration for a vehicle
US10824155B2 (en) Predicting movement intent of objects
CN113386795A Intelligent decision-making and local trajectory planning method for an autonomous vehicle and decision system thereof
CN114391088B Trajectory planner
Ferrara et al. Second order sliding mode control of vehicles with distributed collision avoidance capabilities
EP3814909A2 Using divergence to conduct log-based simulations
CN113835421B Method and apparatus for training a driving behavior decision model
KR101876063B1 Road surface determination method based on vehicle data
CN109733474A Intelligent vehicle steering control system and method based on piecewise affine hierarchical control
CN112577506B Local path planning method and system for automatic driving
US11300968B2 (en) Navigating congested environments with risk level sets
CN110794851A Vehicle remote control safety protection method and device, and unmanned vehicle
CA3155591C Functional safety in autonomous driving
US20240010198A1 (en) Methods and Systems for Adjusting Vehicle Behavior Based on Ambient Ground Relative Wind Speed Estimations
EP4219253A1 Electromechanical braking method and electromechanical braking device
Ali et al. Minimizing the inter-vehicle distances of the time headway policy for urban platoon control with decoupled longitudinal and lateral control
WO2023010043A1 Complementary control system for an autonomous vehicle
CN117980212A Optimization-based planning system
Ali et al. Urban platooning using a flatbed tow truck model
WO2021260910A1 (fr) Système d'intégration d'ia, dispositif d'intégration d'ia et programme d'intégration d'ia
CN115712950A Automatic driving decision-making method for a semi-trailer truck
US20220242441A1 (en) Systems and methods for updating the parameters of a model predictive controller with learned operational and vehicle parameters generated using simulations and machine learning
CN111857112A Automobile local path planning method and electronic device
KR102616457B1 Apparatus for generating air suspension operation planning for an autonomous vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20941742

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022532199

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20941742

Country of ref document: EP

Kind code of ref document: A1