WO2005091214A1 - Vehicle information processing system, vehicle information processing method, and program - Google Patents
Vehicle information processing system, vehicle information processing method, and program
- Publication number
- WO2005091214A1 (PCT/JP2005/003916; JP2005003916W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- inference
- learning
- presentation
- presented
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- the present invention relates to a vehicle information processing system that presents an appropriate presentation target to a presentee using a Bayesian network model.
- a stochastic framework is effective here. It can assign a degree of certainty to multiple candidates (for example, a 60% possibility of interest in soccer-related products alongside a possibility of interest in travel) and handle them under uncertainty. Then, if the page the user was viewing just before was, for example, about Korea, the probabilities of interest in World Cup soccer-related information, travel-related information, and cooking-related information are calculated, and the objects most likely to be of interest are provided. By calculating these probabilities while taking into account a number of factors (for example, hobbies answered in a questionnaire) and the interdependencies between them (if sports are a hobby, the user is likely to have a taste for soccer), more accurate predictions can be made.
- Bayesian nets have recently attracted attention in various fields as an information processing model for performing probability calculations based on the dependence of a plurality of factors.
- a Bayesian network is a network-like probabilistic model defined by three factors: (1) random variables, (2) conditional dependencies between the random variables, and (3) conditional probabilities.
- (1) is represented as a node.
- (2) is represented as a directed link between nodes. The node at the end of a link is called a child node, and the node at the origin of the link is called a parent node.
- (3) is the conditional probability that a child node takes a certain value when its parent node takes a certain value.
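The three factors above can be sketched as simple data structures. This is a minimal illustration with hypothetical nodes and values, not a representation taken from the patent itself.

```python
# (1) Random variables become nodes.
nodes = ["hobby", "interest"]

# (2) Conditional dependencies become directed links (parent -> child).
links = [("hobby", "interest")]  # "hobby" is the parent, "interest" the child

# (3) Conditional probability tables: P(child = value | parent = value).
cpt = {
    ("interest", "soccer"): {"sports": 0.6, "cooking": 0.1},
    ("interest", "travel"): {"sports": 0.2, "cooking": 0.3},
}

# P(interest = "soccer" | hobby = "sports")
p = cpt[("interest", "soccer")]["sports"]
print(p)  # 0.6
```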
- the above-described recommendation system can be realized using a Bayesian network. Specifically, based on statistical data such as survey results and purchase histories obtained from a huge number of customers of widely varying age, gender, lifestyle, and so on, the relationship between customer attributes and the objects of customer interest is expressed with high accuracy as a Bayesian net model. Using that model, the system infers the objects of high interest to a customer from the customer's attributes and the situation at that time, and recommends those objects to the customer based on the inference results.
- a Bayesian net model is created based on statistical data obtained from a wide variety of customers. Conditions such as customer attributes that are given when obtaining a recommendation target are therefore considered to have only a small effect on the recommendation. For example, if the condition for finding a recommendation target is "the customer's attribute is father", the data obtained from customers with the father attribute is only a fraction of the total statistical data from which the Bayesian net model was created. The effect of the father condition on such a model is smaller than on a model created from statistical data acquired only from customers with the father attribute.
- research is therefore being conducted to fully reflect the conditions given when seeking a recommendation in the recommendation results, and thereby to make more accurate recommendations.
- an object of the present invention is to provide a vehicle information processing system that can more appropriately determine a presentation target to be presented to a presentee.
- An information processing system for a vehicle according to the present invention is an information processing system that presents a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model.
- the system includes: a model storage means that stores a plurality of Bayesian net models that differ according to the presented condition, which is a condition relating to the presented side that receives the presentation; a model determining means for determining, from the plurality of models stored in the model storage means, a model corresponding to the presented condition as an inference application model; a presentation target inference means that reads the inference application model determined by the model determining means from the model storage means and obtains a presentation target by stochastic inference using the read model; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- the model storage means may store a plurality of Bayesian net models that differ according to the attributes of the presentee, and the model determining means may determine a model corresponding to the attribute of the presentee as the inference application model.
- the model storage means may store a plurality of Bayesian net models that differ depending on the situation in which the presentation target is presented, and the model determining means may determine a model corresponding to that situation as the inference application model.
- the vehicle information processing system of the present invention may further include a model-selection model storage means that stores a model for model selection, which is applied to stochastic inference for obtaining the inference application model from the plurality of models based on the attributes of the presentee and the situation in which the presentation target is presented. The model determining means may then determine the inference application model by performing inference using the model for model selection read from the model-selection model storage means, based on the attributes of the presentee and the situation in which the presentation target is presented.
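One way to read this claim is that the model-selection model yields a probability over the stored models given the attribute and situation, and the model with the highest probability becomes the inference application model. The sketch below illustrates that reading; the probability table, attribute values, and situation values are hypothetical assumptions, not values from the patent.

```python
# Hypothetical P(model | attribute, situation) for three stored models A, B, C.
selection_model = {
    ("father", "driving"): {"A": 0.7, "B": 0.1, "C": 0.2},
    ("mother", "driving"): {"A": 0.1, "B": 0.7, "C": 0.2},
    ("father", "stopped"): {"A": 0.4, "B": 0.1, "C": 0.5},
}

def decide_inference_model(attribute, situation):
    """Return the stored model with the highest selection probability."""
    posterior = selection_model[(attribute, situation)]
    return max(posterior, key=posterior.get)

print(decide_inference_model("father", "driving"))  # A
print(decide_inference_model("father", "stopped"))  # C
```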
- the vehicle information processing system of the present invention may include a response receiving means that accepts a response of the presentee when the presentation means presents a presentation target obtained from the inference application model.
- the information presentation system of the present invention may include a learning target information storage means that stores, in association with the inference application model applied to the stochastic inference by the presentation target inference means, learning models: the inference application model itself and other models, among the plurality stored in the model storage means, that should be affected by the inference result obtained using the inference application model.
- the model learning means may use the response received by the response receiving means to learn, as learning models, the models associated with the inference application model in the learning target information storage means.
- the learning target information storage means may store reflection parameters, each indicating the degree to which a response is reflected in the learning of a learning model, with one reflection parameter set for each of the plurality of learning models corresponding to one inference application model.
- the model learning means may read the reflection parameter associated with the learning model to be learned from the learning target information storage means, and perform a learning process that reflects the response in the learning model at a degree corresponding to the read reflection parameter.
- the vehicle information processing system of the present invention may include a general learning data acquisition means that acquires general learning data, used in learning to bring a model that has become specialized for each presented condition through learning by the model learning means back toward a general model. The model learning means may use the general learning data acquired by the general learning data acquisition means to learn the models stored in the model storage means.
- a general learning reflection parameter storage means may be provided that stores a general learning reflection parameter indicating the degree to which the general learning data is reflected in model learning. The model learning means may then read the general learning reflection parameter from the general learning reflection parameter storage means and perform a learning process that reflects the general learning data in model learning at a degree corresponding to the read parameter.
- An information processing system for a vehicle may include information presenting devices each provided with the presentation means, and a center device connected to the information presenting devices through communication. The center device may collect the responses received from the presentees when presentation targets are presented.
- a vehicle information processing system according to the present invention is a system that infers a presentation target suitable for a presentee using an inference algorithm and presents the presentation target obtained by the inference. It may include: a calculation resource storage means that stores a plurality of calculation resources that differ according to the presented condition, which is a condition relating to the presented side that receives the presentation target; a calculation resource determining means for determining, from the plurality of calculation resources stored in the calculation resource storage means, a calculation resource corresponding to the presented condition; a presentation target inference means that reads the calculation resource determined by the calculation resource determining means from the calculation resource storage means and obtains a presentation target by inference using the read calculation resource; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- the vehicle information presentation device of the present invention is a device that presents a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model.
- the device includes: a model storage means storing a plurality of Bayesian net models that differ according to the presented condition, which is a condition relating to the presented side; a model determining means that determines, from the plurality of models stored in the model storage means, a model corresponding to the presented condition as an inference application model; a presentation target inference means that reads the inference application model determined by the model determining means from the model storage means and obtains a presentation target by stochastic inference using the read model; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- the vehicle information presentation device of the present invention may be provided in an automobile.
- An information processing method for a vehicle according to the present invention is a method for presenting a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model.
- the method includes: a model determination step of determining, from a plurality of Bayesian net models that differ according to the presented condition, which is a condition relating to the presentee, a model corresponding to the presented condition as an inference application model; a presentation target inference step of obtaining a presentation target by stochastic inference using the inference application model determined in the model determination step; and a presentation step of presenting the presentation target obtained in the presentation target inference step to the presentee.
- the program of the present invention is a program for presenting a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model.
- the present invention has other aspects. Accordingly, the disclosure of the present invention is intended to provide some aspects of the present invention, and is not intended to limit the scope of the claimed invention.
- FIG. 1 is a diagram illustrating a configuration of an information processing system according to a first embodiment.
- FIG. 2 is a diagram showing a configuration of a content providing device according to the first embodiment.
- FIG. 3 is a diagram showing an example of data stored in a content information storage unit in the first embodiment.
- FIG. 4 is a diagram showing an example of a model stored in a model storage unit according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of data stored in a learning target table storage unit according to the first embodiment.
- FIG. 6 is a diagram showing a configuration of a center device according to the first embodiment.
- FIG. 7 is a diagram illustrating an operation of the information processing system according to the first embodiment.
- FIG. 8 is a diagram illustrating an example of data stored in a learning target table storage unit according to the second embodiment.
- FIG. 9 is a diagram illustrating an operation of the information processing system according to the second embodiment.
- FIG. 10 is a diagram showing a configuration of a center device according to a third embodiment.
- FIG. 11 is a diagram showing a configuration of a content providing device according to a third embodiment.
- FIG. 12 is a diagram illustrating an operation of the information processing system according to the third embodiment.
- FIG. 13 is a diagram showing a configuration of a center device according to a modification.

BEST MODE FOR CARRYING OUT THE INVENTION
- the vehicle information processing system is a vehicle information processing system that presents a presentation target that is appropriately suited to an occupant, using a Bayesian net model.
- the system includes: a model storage unit that stores a plurality of Bayesian net models that differ according to the presented condition, which is a condition relating to the presented side that receives the presentation of the target; a model determining means that determines, from the plurality of models stored in the model storage unit, a model corresponding to the presented condition as an inference application model; a presentation target inference means that obtains a presentation target by stochastic inference using the inference application model; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- a plurality of different models are stored according to the presented condition, such as the attribute of the presentee; a model corresponding to the current presented condition is read from among them and applied to stochastic inference, so that an appropriate presentation target can be obtained with high accuracy under that condition.
- the model storage means may store a plurality of Bayesian net models that differ according to the attributes of the presentee, and the model determination means may determine the model corresponding to the attribute of the presentee as the inference application model.
- the model storage means may store a plurality of Bayesian net models that differ depending on the situation in which the presentation target is presented, and the model determination means may determine the model corresponding to that situation as the inference application model.
- the vehicle information processing system may include a model-selection model storage means that stores a model for model selection, applied to stochastic inference for obtaining the inference application model from the plurality of models based on the attributes of the presentee and the situation in which the presentation target is presented. The model determination means may determine the inference application model by stochastic inference using the model for model selection read from the model-selection model storage means, based on the attributes of the presentee and the situation in which the presentation target is presented.
- a response receiving means may be provided that receives the response of the presentee when the presentation means presents the presentation target obtained from the inference application model by the presentation target inference means, together with a model learning means that learns the models stored in the model storage means using the received response and updates each model into a model specialized for its presented condition.
- the models stored in the model storage means are learned according to the responses of the presentee, and are thereby updated into specialized models suited to stochastic inference under their respective presented conditions. The presentation target can therefore be appropriately obtained using the models stored in the model storage means.
- the information presenting system may include a learning target information storage means that stores, in association with the inference application model applied to the stochastic inference by the presentation target inference means, learning models: the inference application model itself and other models, among the plurality stored in the model storage means, that should be affected by the inference result obtained using the inference application model.
- the model learning means may learn, as learning models, the models associated with the inference application model in the learning target information storage means, using the response received by the response receiving means.
- the response of the presentee to the presentation target obtained from the inference application model can thus be used to appropriately learn not only the inference application model itself but also other related models, so that the plurality of models stored in the model storage means can be learned efficiently.
- the learning target information storage means may store reflection parameters indicating the degree to which a response is reflected in the learning of each learning model, with one reflection parameter for each of the plurality of learning models corresponding to one inference application model. The model learning means may read the reflection parameter associated with the learning model to be trained from the learning target information storage means, and perform a learning process that reflects the response in the learning model at a degree corresponding to the read parameter.
- because the response is reflected in each learning model at a degree corresponding to its reflection parameter, the degree of reflection on the model actually applied to inference can, for example, be made larger than on other models. Also, when two inference application models are associated with the same learning model, the degree of reflection can differ between them. The magnitude of the effect of a response on a model can thus be set variably, and each model can be learned appropriately.
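A minimal sketch of reflecting a response into each learning model at a degree given by its reflection parameter. The update rule (a weighted move of an acceptance probability toward the observed response) and the parameter values are illustrative assumptions, not the patent's exact learning algorithm.

```python
# Reflection parameter per (inference application model, learning model) pair:
# responses via model A update A itself fully, and the family model C weakly.
reflection = {("A", "A"): 1.0, ("A", "C"): 0.3}

models = {"A": {"p_accept": 0.50}, "C": {"p_accept": 0.50}}

def learn(inference_model, response):
    """response: 1.0 if the presentee accepted the content, else 0.0."""
    for (applied, target), weight in reflection.items():
        if applied != inference_model:
            continue
        p = models[target]["p_accept"]
        # Move the model's probability toward the response, scaled by the
        # reflection parameter (0.1 is an assumed base learning rate).
        models[target]["p_accept"] = p + weight * 0.1 * (response - p)

learn("A", 1.0)
print(round(models["A"]["p_accept"], 3))  # 0.55
print(round(models["C"]["p_accept"], 3))  # 0.515
```

Because the weight for ("A", "C") is smaller than for ("A", "A"), the same response moves the family model less than the model that was actually applied to inference, as the text describes.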
- This vehicle information processing system may include a general learning data acquisition means that acquires general learning data, used in learning to bring a model that has become specialized for each presented condition through learning by the model learning means back toward a general model. The model learning means may use the general learning data acquired by the general learning data acquisition means to learn the models stored in the model storage means.
- the vehicle information processing system may include a general learning reflection parameter storage means that stores a general learning reflection parameter indicating the degree to which the general learning data is reflected in model learning. The model learning means may read the general learning reflection parameter from this storage means and perform a learning process that reflects the general learning data in model learning at a degree corresponding to the read parameter.
- the vehicle information processing system may include information presenting devices each provided with the presentation means, and a center device connected to the information presenting devices by communication. The center device may collect the responses received from the presentees when presentation targets are presented.
- the center device collects the responses received by the plurality of information presenting devices, so that it is possible to grasp general responses of a large number of presentees to the presentation target.
- a vehicle information processing system according to the present embodiment infers a presentation target suitable for the presentee using an inference algorithm and presents the presentation target obtained by the inference. It includes: a calculation resource storage unit that stores a plurality of calculation resources that differ according to the presented condition, which is a condition regarding the presented side that receives the presentation; a calculation resource determining means for determining, from the plurality of calculation resources stored in the calculation resource storage unit, a calculation resource corresponding to the presented condition; a presentation target inference means that reads the calculation resource determined by the calculation resource determining means from the calculation resource storage means and obtains a presentation target by inference using the read calculation resource; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- a calculation resource is a resource used in the calculation of the inference algorithm: for example, the knowledge base in a knowledge-based system, or, in a neural network, the network in which a large number of neuron models are interconnected.
- the vehicle information presenting apparatus of the present embodiment presents a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model. It includes: a model storage means for storing a plurality of Bayesian network models that differ according to the presented condition, which is a condition on the presented side; a model determination means for determining, from the plurality of models stored in the model storage means, a model corresponding to the presented condition as an inference application model; a presentation target inference means that reads the inference application model determined by the model determination means from the model storage means and obtains a presentation target by stochastic inference using the read model; and a presentation means for presenting the presentation target obtained by the presentation target inference means to the presentee.
- the vehicle information presentation device of the present embodiment may be provided in an automobile.
- the vehicle information processing method of the present embodiment presents a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian net model. It comprises: a model determination step of determining, from a plurality of Bayesian net models that differ according to the presented condition, which is a condition relating to the presented side that receives the presentation target, a model corresponding to the presented condition as an inference application model; a presentation target inference step of obtaining a presentation target by stochastic inference using the determined inference application model; and a presentation step of presenting the presentation target obtained in the presentation target inference step to the presentee.
- the program of the present embodiment is a program for presenting a stochastically suitable presentation target to a presentee who is an occupant, using a Bayesian network model. It causes a computer to execute a presentation target inference step of obtaining a presentation target by stochastic inference using the inference application model, and a presentation step of presenting the presentation target obtained in the presentation target inference step to the presentee.
- a vehicle information processing system according to an embodiment of the present invention will be described with reference to the drawings.
- here, a system that provides music suited to the user from a content providing device mounted on an automobile will be described; however, the presentation target of the vehicle information processing system of the present invention is not limited to music.
- the vehicle information processing system of the present invention can be applied to, for example, a recommendation system that recommends restaurants and events suitable for the user.
- the information processing system for a vehicle according to the present invention is not limited to a system mounted on an automobile.
- the invention is also applicable to systems in which a presentation target suitable for the user is presented on a home computer or on a portable terminal carried by the user.
- FIG. 1 is a diagram showing a configuration of an information processing system 10 according to the embodiment.
- the vehicle information processing system (hereinafter referred to as “information processing system”) 10 includes a plurality of content providing devices 20, and each content providing device 20 is communicably connected to a center device 50.
- the content providing device 20 is a device that selects music suited to a user who is an occupant of the automobile and automatically plays the selected music.
- the center device 50 is a device that aggregates the user response data acquired by each content providing device 20.
- FIG. 2 is a diagram showing a configuration of the content providing device 20.
- the content providing device 20 includes a presentation unit 22 for presenting content to the user, an operation unit 24 for receiving operations from the user, a data transmitting/receiving unit 26 for communicating with the center device 50, and a control unit 28 for controlling the content providing device 20 as a whole. The content providing device 20 further includes a content information storage unit 38 storing content information, a model storage unit 40 storing Bayesian network models, and a learning target table storage unit 42 storing information on the models to be learned in association with each inference application model.
- the presentation unit 22 has a function of presenting content to a user.
- the hardware of the presentation unit 22 comprises speakers that output music and a display that shows music titles, artist names, and the like.
- the operation unit 24 has a function of receiving a user's operation.
- the operation unit 24 receives an operation for selecting an inference application model to be used for probability inference of the content, and a response to the presented content.
- the hardware of the operation unit 24 includes buttons for instructing playback, stop, fast-forward, etc. of the performance, and a volume adjustment knob.
- when selecting an inference application model, the candidate models are displayed on the presentation unit 22, and the inference application model is selected using the fast-forward and rewind buttons.
- when accepting a response, information on the operation of the buttons or the volume adjustment knob is obtained, and the response is determined from it.
- the data transmitting / receiving unit 26 has a function of performing wireless communication with the center device 50.
- the content providing device 20 and the center device 50 can communicate via, for example, a mobile phone network.
- the content information storage section 38 stores a plurality of pieces of content information that are candidates for content to be presented to the user.
- the content providing device 20 selects content suitable for the user based on the content stored in the content information storage unit 38 and provides the user with the content.
- FIG. 3 is a diagram showing an example of content information stored in the content information storage unit 38.
- the content information storage unit 38 stores music titles, genres, rankings, and music data.
- here, a configuration in which the content information storage unit 38 stores the music data is described as an example, but the content providing device 20 need not hold the music data itself; a configuration in which the selected music data is acquired via a network, or from an external disc player, is also possible.
- the model storage unit 40 has a function of storing a model used to obtain content suitable for the user.
- the model storage unit 40 stores three models, Model A, Model B, and Model C, according to the presented condition.
- the presented condition is a condition relating to the presented side to be presented with the presented object.
- Each model stored in the model storage unit 40 is suited to performing stochastic inference under the corresponding presented condition.
- the presented condition is the attribute of the user who gets in the car
- model A is for the father
- model B is for the mother
- model C is for the family. Therefore, for example, Model A is a model suitable for obtaining the content to be presented to the father by stochastic inference.
- Models A to C are general models before learning in the content providing device 20; through learning, each becomes specialized for its presented condition.
- although three models are stored in the model storage unit 40 here, the number of stored models is not limited to three.
- FIG. 4 is a diagram showing an example of a model stored in the model storage unit 40.
- the Bayesian network model includes a node N1 relating to the user, a node N2 relating to the content, a node N3 relating to the situation, and a node N4 relating to the response; the nodes are connected by links indicating the dependencies of the conditional probabilities, in some cases via other nodes.
- the node N4 regarding the response is a node for predicting the response of the user.
- the Bayesian inference unit 30 has a function of using a model stored in the model storage unit 40 to obtain content suitable for a user by probability inference.
- the Bayesian inference unit 30 reads the model determined by the model determination processing unit 34 from the model storage unit 40, and performs inference using the read model.
- the inference method by the Bayesian inference unit 30 will be described using a model shown in FIG.
- the Bayesian inference unit 30 sets values for the nodes N1 to N3. In the node N1 relating to the user, for example, information such as the user's age, gender, and music preference is set.
- One piece of content information is read from the content information storage unit 38 and set in the node N2 relating to the content.
- In the node N3 relating to the situation, for example, time information and traveling location information are set.
- the Bayesian inference unit 30 obtains the score of the node N4 regarding the response by the probability propagation from the node N1 to the node N3.
- the Bayesian inference unit 30 sequentially reads out the content information from the content information storage unit 38, and obtains a score for each content by repeating the above operation. Then, the Bayesian inference unit 30 selects content suitable for the user based on the obtained score.
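The scoring loop described above can be sketched as follows: evidence is set on nodes N1 to N3, a score for the response node N4 is computed for each piece of content, and the highest-scoring content is selected. The `propagate` function here is a hypothetical stand-in for actual probability propagation, and all node values are assumed for illustration.

```python
content_store = [
    {"title": "Song X", "genre": "jazz"},
    {"title": "Song Y", "genre": "rock"},
]

def propagate(n1_user, n2_content, n3_situation):
    """Stand-in for probability propagation from nodes N1-N3 to node N4."""
    score = 0.5
    if n2_content["genre"] in n1_user["preferences"]:
        score += 0.3
    if n3_situation["time"] == "evening" and n2_content["genre"] == "jazz":
        score += 0.1
    return score

user = {"age": 45, "gender": "male", "preferences": ["jazz"]}  # node N1
situation = {"time": "evening", "location": "highway"}         # node N3

# Read each content (node N2) in turn, score node N4, and pick the best.
best = max(content_store, key=lambda c: propagate(user, c, situation))
print(best["title"])  # Song X
```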
- the Bayesian learning unit 32 has a function of learning a model using the response of the user.
- the user response is information on whether or not to accept content input by the user, and is observed information.
- the Bayesian learning unit 32 corrects the conditional-probability dependencies of the model based on the response, updating the model so that it can more accurately obtain content matching the user's preference under the presented condition corresponding to that model. That is, the model stored in the model storage unit 40 is updated into a dedicated model suited to performing probability inference under its corresponding presented condition.
- the model determination processing unit 34 has a function of determining the model used for inference based on the model selection information input from the operation unit 24, and a function of determining the model to be learned by referring to the learning target table storage unit 42.
- FIG. 5 is a diagram showing an example of data stored in the learning target table storage unit 42.
- the learning target table storage unit 42 stores the inference application model and the learning target model in association with each other.
- the inference application model is a model applied to stochastic inference to find content suitable for the user.
- the learning target model is a model that performs learning using a response to the presented content. For example, in the example shown in FIG. 5, when Model A is used as an inference application model, Model A and Model C are learning target models. Therefore, the response to the content obtained using the model A for the father is used for learning the model A for the father and the model C for the family.
- By reflecting the father's response to content obtained with model A back on model A itself, model A becomes a model that can more appropriately obtain content matching the father's preferences.
- Because the father is a member of the family, the father's response also affects the family model C. Therefore, model C for the family is also set as a learning target model.
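The association of FIG. 5 between an inference application model and its learning target models can be encoded as a simple lookup table, as sketched below. The table contents are an assumed layout (model B for the mother is an assumption based on the "for the father / for the mother" examples elsewhere in the description); the source only confirms that model A maps to models A and C.

```python
# Hypothetical encoding of the learning target table (FIG. 5):
# inference application model -> models updated by the user's response.
LEARNING_TARGETS = {
    "model_A_father": ["model_A_father", "model_C_family"],
    "model_B_mother": ["model_B_mother", "model_C_family"],  # assumed row
    "model_C_family": ["model_C_family"],                    # assumed row
}

def models_to_learn(inference_model):
    # One response can drive learning of several models at once.
    return LEARNING_TARGETS[inference_model]
```

A response to content inferred with model A thus triggers learning of both the father's model and the family model in a single step.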
- the response transmission processing unit 36 has a function of transmitting the response received from the operation unit 24 to the center device 50.
- the response transmission processing unit 36 transmits all the received responses.
- the response of the father, the response of the mother, and the response of the family are all transmitted.
- FIG. 6 is a diagram showing a configuration of the center device 50.
- the center device 50 includes a data transmission/reception unit 52 for communicating with the content providing device 20 and a control unit 54 for overall control. The center device 50 further includes a response information storage unit 60 that stores responses transmitted from the content providing device 20, and a general model storage unit 62 that stores a general model created based on the responses stored in the response information storage unit 60.
- the response information storage unit 60 has a function of storing the information of the response transmitted from the content providing device 20. Response information transmitted from the plurality of content providing devices 20 is collected in the response information storage unit 60.
- the model creation unit 56 has a function of reading the response information stored in the response information storage unit 60 and creating a Bayesian network model based on the read response information.
- the model created here is a general model in which the presentation conditions of the content are not defined.
- the model creation unit 56 has a function of storing the created model in the general model storage unit 62.
- the model distribution unit 58 has a function of reading out the general model stored in the general model storage unit 62 and distributing the read model to the content providing device 20.
- the model distribution unit 58 may distribute the general model in response to a request from the content providing device 20, or may distribute the general model periodically.
- FIG. 7 is a diagram illustrating an operation of the information processing system 10 according to the first embodiment.
- the content providing device 20 determines a model for obtaining content suitable for the user (S10).
- the content providing device 20 determines one model from the three models stored in the model storage unit 40.
- the content providing device 20 displays the information on the model stored in the model storage unit 40 on the presentation unit 22 and accepts the selection of the model by the operation unit 24.
- the selection of model A for the father has been received from the operation unit 24.
- the information received by the operation unit 24 is notified to the model determination processing unit 34, and the model determination processing unit 34 determines a model for obtaining content.
- the Bayesian inference unit 30 of the content providing device 20 then performs probability inference using the determined model A to obtain content suitable for the user (S12).
- the content information sequentially read from the content information storage unit 38 is set to the node N2 of the model A, and the score of the response node N4 is calculated for each content by the probability propagation from the nodes N1 to N3. Then, the content having a high response score is selected as the content to be presented.
- the Bayesian inference unit 30 may obtain the song with the highest score as the content to be presented, or may obtain several songs whose scores exceed a certain value. When the automatic playback time is long, it is desirable to obtain enough songs to fill that playback time.
- the content providing device 20 presents the content to the user by playing the obtained music (S14).
- the Bayesian learning unit 32 of the content providing device 20 receives a response from the user (S16).
- the Bayesian learning unit 32 accepts the user's response from the operation unit 24 and determines the response from the content of the operation. For example, if the music being played is stopped, the response is that the presented content is not accepted; if the music is heard to the end or the volume is increased, the response is that the presented content is accepted.
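The mapping from user operation to response (step S16) can be sketched as follows. The operation names are assumptions for illustration; the source only states that stopping playback indicates rejection, while listening to the end or raising the volume indicates acceptance.

```python
# Sketch of deriving a response from the user's operation (step S16).
# Operation identifiers ("stop", "played_to_end", ...) are hypothetical.
def response_from_operation(operation):
    if operation == "stop":
        return "reject"   # playback stopped -> presented content not accepted
    if operation in ("played_to_end", "volume_up"):
        return "accept"   # heard to the end / volume raised -> accepted
    return None           # no clear signal; no learning triggered
```

Returning `None` for ambiguous operations reflects that only observed, interpretable operations should update the model.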
- the model determination processing unit 34 of the content providing device 20 selects a model to be learned (S18).
- the model A for the father is applied to the inference, the model A and the model C are selected as the learning target models from the information of the learning target table storage unit 42 shown in FIG.
- the Bayesian learning unit 32 of the content providing device 20 learns the models A and C using the response of the user received from the operation unit 24 (S20).
- the content providing device 20 determines whether to end the content presentation processing (S22). This determination can be made, for example, based on whether the vehicle has reached its destination. If the processing is not to be ended, the operation returns to the inference processing of step S12, and the content providing device 20 obtains the music to be played next by probability inference.
- the response transmission processing unit 36 transmits the user response received by the operation unit 24 to the center device 50 (S24).
- the response sent at this time includes information on the user's attributes, status, and content attributes in addition to the information on the user's response itself.
- the center device 50 receives the response transmitted from the content providing device 20 (S 26), and stores the received response in the response information storage unit 60.
- the center device 50 accumulates responses transmitted from the plurality of content providing devices 20.
- After a predetermined amount of response information has accumulated in the response information storage unit 60, the model creation unit 56 of the center device 50 reads the response information and uses it to create a Bayesian network model (S30). The model created at this time is a general model for which no presented condition is defined. The model creation unit 56 stores the created model in the general model storage unit 62. The model distribution unit 58 of the center device 50 distributes the general model stored in the general model storage unit 62 to the content providing device 20, either in response to a request from the content providing device 20 or periodically.
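The center-side aggregation (S26 to S30) can be sketched as follows. This is a deliberately simplified stand-in: instead of learning a full Bayesian network, the "general model" here is just acceptance frequencies per content genre, pooled across devices with no presented condition attached. All names and the genre attribute are hypothetical.

```python
from collections import defaultdict

# Sketch: build a condition-free "general model" from responses pooled
# from many content providing devices (steps S26-S30).
def build_general_model(responses):
    counts = defaultdict(lambda: [0, 0])   # genre -> [accepts, total]
    for r in responses:
        c = counts[r["genre"]]
        c[1] += 1
        if r["accept"]:
            c[0] += 1
    # Acceptance rate per genre, regardless of which user or device
    # produced the response.
    return {g: accepts / total for g, (accepts, total) in counts.items()}

general = build_general_model([
    {"genre": "jazz", "accept": True},
    {"genre": "jazz", "accept": False},
    {"genre": "pop",  "accept": True},
])
```

Because responses from many devices are pooled, the resulting model reflects overall trends rather than any one user's presented condition.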
- As described above, the content providing device 20 stores in the model storage unit 40 a plurality of different models according to user attributes, each suited to obtaining, by probability inference, the content to be presented to a user having the corresponding attribute. The content providing device 20 reads, from among the models stored in the model storage unit 40, the model that matches the user's attributes, and obtains the content to be presented by probability inference using the read model, so content giving a high degree of user satisfaction can be obtained. That is, the information processing system 10 can accurately obtain appropriate content. Furthermore, this reduces the number of times the user must have content re-presented, thereby reducing the driver's burden and contributing to safe driving.
- the learning target table storage unit 42 stores each inference application model in association with the models to be learned using responses to content inferred by that model. The Bayesian learning unit 32 of the content providing device 20 determines the learning models by referring to the learning target table storage unit 42, so every model affected by a response can be learned. In addition, since the learning target table storage unit 42 can store a plurality of learning target models for one inference application model, a plurality of models can be learned efficiently from a single response.
- the center device 50 aggregates the response information obtained by the content providing devices 20 in the response information storage unit 60 and creates a general model using the aggregated response information, so a general model reflecting the latest trends is obtained. Since the center device 50 distributes the general model to the content providing devices 20, the content providing devices 20 can perform probability inference using the general model.
- the information processing system 10 according to the second embodiment has the same basic configuration as the information processing system 10 according to the first embodiment, but differs from the first embodiment in the learning processing described below.
- FIG. 8 is a diagram illustrating an example of data stored in the learning target table storage unit 42 according to the second embodiment.
- the learning target table storage unit 42 stores information on reflection parameters in addition to the inference application model and the learning target model.
- the reflection parameter is a parameter indicating the degree to which the user's response is reflected in a learning target model; with it, the degree of reflection of the response can be adjusted per model. For example, when the inference application model is model A for the father, the reflection parameter for learning model C for the family from the response is "0.2": the response is reflected at a ratio of 0.2, relative to the ratio of "1" used when the response was obtained using model C itself.
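The effect of the reflection parameter can be illustrated with a toy update rule. This is an assumption for illustration only: the patent does not specify the update equation, so a simple convex-combination step stands in for the actual Bayesian conditional-probability correction, with the reflection parameter scaling the step size.

```python
# Toy learning step (assumed form, not the patent's actual update):
# move an acceptance probability toward the observed response, with the
# reflection parameter scaling how strongly the response is applied.
def learn(prob, observed_accept, rate=0.5, reflection=1.0):
    target = 1.0 if observed_accept else 0.0
    return prob + rate * reflection * (target - prob)

# The father's acceptance updates model A at ratio 1.0 and the family
# model C at ratio 0.2, per the FIG. 8 example.
p_a = learn(0.6, True, reflection=1.0)   # full-strength update
p_c = learn(0.6, True, reflection=0.2)   # attenuated update
```

Model A moves from 0.6 to 0.8, while model C moves only to 0.64: the same response is reflected strongly in the dedicated model and weakly in the shared family model.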
- FIG. 9 is a diagram illustrating an operation of the information processing system 10 according to the second embodiment.
- the operation of the information processing system 10 according to the second embodiment is basically the same as that of the first embodiment. Unlike the first embodiment, however, the reflection parameters are also read out when the learning target models are selected from the learning target table storage unit 42 (S18). The Bayesian learning unit 32 of the content providing device 20 then learns the models using the read reflection parameters (S20). In the information processing system 10 of the second embodiment, appropriate learning can be performed by learning each model using a parameter indicating the degree to which the response is reflected in that model; for example, learning of the family model C from the father's response can be performed appropriately.
- the information processing system 10 according to the third embodiment has the same basic configuration as the system of the first embodiment, but differs in that the center device 50 creates, based on responses collected from a plurality of content providing devices 20, general learning data for bringing a model closer to the general model, and distributes the created general learning data to the content providing devices 20. Each content providing device 20 then learns the models stored in its model storage unit 40 based on the general learning data transmitted from the center device 50.
- FIG. 10 is a diagram showing a configuration of the center device 50 in the information processing system 10 according to the third embodiment.
- the center device 50 according to the third embodiment includes a general learning data creation unit 64, a general learning data distribution unit 66, and a general learning data storage unit 68 in addition to the configuration of the center device 50 according to the first embodiment.
- the general learning data creation unit 64 has a function of creating general learning data based on the response information stored in the response information storage unit 60 to bring the model to be learned closer to the general model.
- the general learning data storage unit 68 has a function of storing the general learning data created by the general learning data creating unit 64.
- the general learning data distribution unit 66 has a function of reading the general learning data stored in the general learning data storage unit 68 and distributing the general learning data to the content providing device 20.
- FIG. 11 is a diagram illustrating a configuration of the content providing device 20 in the information processing system 10 according to the third embodiment.
- the content providing device 20 according to the third embodiment includes, in addition to the configuration of the content providing device 20 according to the first embodiment, a reflection parameter storage unit 44 that stores a reflection parameter indicating the degree to which the general learning data transmitted from the center device 50 is reflected in a model. The reflection parameter defines the degree to which each model stored in the model storage unit 40 is brought closer to the general model.
- the Bayesian learning unit 32 of the control unit 28 determines the degree to which the general learning data is reflected in a model using the reflection parameter read from the reflection parameter storage unit 44, and learns the model according to the determined degree.
- the Bayesian learning unit 32 performs learning using the general learning data, for example, the number of times set by the reflection parameter. When the value of the reflection parameter is large, the number of times of learning increases, so that the degree of reflection of the general learning data to the model increases.
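Reflecting the general learning data "the number of times set by the reflection parameter" can be sketched as a repeated update. As above, the update rule itself is an assumed stand-in for the actual learning step; only the repetition count being controlled by the reflection parameter comes from the description.

```python
# Sketch: reflect general learning data by repeating a learning step
# `times` times, where `times` comes from the reflection parameter
# (third embodiment). The step form is an assumption for illustration.
def reflect_general_data(prob, general_target, rate=0.3, times=1):
    for _ in range(times):
        prob += rate * (general_target - prob)
    return prob

# Larger reflection parameter -> more repetitions -> the dedicated
# model's probability moves closer to the general model's value.
weak = reflect_general_data(0.2, 0.7, times=1)
strong = reflect_general_data(0.2, 0.7, times=5)
```

After five repetitions the model is much closer to the general value 0.7 than after one, matching the stated behavior that a larger reflection parameter increases the degree of reflection.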
- FIG. 12 is a diagram illustrating an operation of the information processing system 10 according to the third embodiment.
- As in the first or second embodiment, the content providing device 20 presents content to the user and receives a response from the user (S40). The content providing device 20 then transmits the received response to the center device 50 (S42). When the center device 50 receives the response via the data transmission/reception unit 52 (S44), it stores the received response in the response information storage unit 60 (S46).
- the general learning data creation unit 64 of the center device 50 reads the responses stored in the response information storage unit 60, creates general learning data using the read responses, and stores the created general learning data in the general learning data storage unit 68 (S48).
- the general learning data distribution unit 66 of the center device 50 distributes the general learning data stored in the general learning data storage unit 68 to the content providing device 20 (S50).
- the content providing device 20 acquires the general learning data by receiving the general learning data transmitted from the center device 50 (S52). Then, the Bayesian learning unit 32 of the content providing device 20 learns the model stored in the model storage unit 40 using the received general learning data (S54). At this time, the Bayesian learning unit 32 reads the reflection parameters from the reflection parameter storage unit 44 and reflects the general learning data based on the degree set by the read reflection parameters.
- In the information processing system 10 of the third embodiment, general learning data for learning models is created based on the response information aggregated from the content providing devices 20, and the created general learning data is distributed to the content providing devices 20.
- By learning its models using the distributed general learning data, the content providing device 20 can reduce excessive specialization of a model to its presented condition, bringing the dedicated model suited to probability inference under that condition closer to the general model.
- the information processing system of the present invention is not limited to the embodiments described above in detail.
- In the above description, the content providing device 20 includes the Bayesian inference unit 30 and the Bayesian learning unit 32, and content inference and model learning suitable for the user are performed in the content providing device 20. Alternatively, the center device 50 may include the Bayesian inference unit 30 and the Bayesian learning unit 32.
- FIG. 13 is a diagram showing a configuration of a center device 50 including a Bayesian inference unit 30 and a Bayesian learning unit 32.
- the center device 50 includes a model storage unit 78 that stores a model used for selecting content to be presented, and a content information storage unit 80 that stores content information.
- the model storage unit 78 stores a plurality of models used in each of the content providing devices 20 and a learning target table indicating a learning target model.
- the control unit 54 of the center device 50 includes a model determination processing unit 74 for selecting the model to be applied to probability inference, a Bayesian inference unit 70 for selecting the content to be presented, and a unit that distributes the selected content to the content providing device 20.
- the content providing device 20 transmits the model selection information received by the operation unit 24 to the center device 50. The center device 50 obtains content suitable for the user using the model indicated by the selection information, and distributes the obtained content to the content providing device 20.
- the response received by the content providing device 20 is transmitted to the center device 50, and the center device 50 learns the model.
- Although the information processing system 10 has been described with an example including the center device 50, the center device 50 is not indispensable; the content providing device 20 alone can constitute the information processing system of the present invention.
- In the above description, an attribute of the user has been described as an example of the presented condition corresponding to the plurality of models stored in the model storage unit 40; any other user attribute may be used as the presented condition.
- the presented condition is not limited to the attribute of the user, and for example, a situation at the time of presentation can be used as the presented condition.
- the situation at the time of presentation is the day of the week when the presentation is received, the time of day when the presentation is received, the mood when receiving the presentation, and the like.
- The method of determining a model is not limited to selection by the user. For example, the content providing device 20 can identify the user from information carried by the key used when the user drives the car, and thereby determine the model automatically.
- Each model need not be created based on explicit criteria or dedicated to a particular attribute, as in the examples above where a model is selectively used for each attribute such as "for the father" and "for the mother".
- The model to be applied to inference can also be determined by probability inference using a model selection model. That is, a model for model selection, used to obtain the model to be applied to inference, is prepared in the storage unit. A plurality of observed variables, such as user attributes and the current situation, are input to the model selection model read from the storage unit, and the model to be applied to inference is obtained by probability inference.
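The model-selection-by-inference idea above can be sketched as follows. In this illustrative sketch the "model selection model" is a toy probability table over (user attribute, situation) pairs rather than a real Bayesian network; all names and values are hypothetical.

```python
# Hypothetical "model selection model": observed variables (user
# attributes, current situation) map to a distribution over candidate
# models; the most probable model is applied to inference.
SELECTOR = {
    ("adult_male", "commute"): {"model_A_father": 0.7, "model_C_family": 0.3},
    ("mixed",      "weekend"): {"model_C_family": 0.9, "model_A_father": 0.1},
}

def choose_model(user_attr, situation):
    dist = SELECTOR[(user_attr, situation)]
    # Pick the model with the highest posterior probability.
    return max(dist, key=dist.get)
```

A solo commute thus selects the father's dedicated model, while a mixed-occupant weekend trip selects the family model, without any explicit user operation.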
- In the above description, the model storage unit 40 stores three types of models, model A through model C, and which of the three models to use is determined. However, the number of models is not fixed: for example, when presenting content to a new user such as a child, the content providing device 20 may receive the general model from the center device 50 and store the received general model in the model storage unit 40 as a model D.
- the content to be presented is obtained by stochastic inference using model D, and learning of model D is performed using the child's response to the content.
- a model is created that can infer the content that matches the child's preference.
- The information processing system of the present invention can also be realized by having a computer execute a program including modules that realize each component of the information processing system of the above-described embodiments; such a program is included in the scope of the present invention.
- the present invention is useful as a recommendation system or the like that presents a stochastically suitable presentation target using a Bayesian network model.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/593,065 US7797266B2 (en) | 2004-03-18 | 2005-03-07 | Vehicle information processing system for content recommendation using Bayesian network models |
DE112005000613T DE112005000613T5 (de) | 2004-03-18 | 2005-03-07 | Fahrzeuginformationsverarbeitungssystem, Fahrzeuginformationsverarbeitungsverfahren und Programm |
JP2006511153A JP4639296B2 (ja) | 2004-03-18 | 2005-03-07 | 車両用情報処理システム、車両用情報処理方法およびプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004079026 | 2004-03-18 | ||
JP2004-079026 | 2004-03-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005091214A1 true WO2005091214A1 (ja) | 2005-09-29 |
Family
ID=34993917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/003916 WO2005091214A1 (ja) | 2004-03-18 | 2005-03-07 | 車両用情報処理システム、車両用情報処理方法およびプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US7797266B2 (ja) |
JP (1) | JP4639296B2 (ja) |
DE (1) | DE112005000613T5 (ja) |
WO (1) | WO2005091214A1 (ja) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007109110A (ja) * | 2005-10-14 | 2007-04-26 | Uchida Yoko Co Ltd | 個人の行動特性を用いた意図推測システム及び方法並びにプログラム |
JP2007179185A (ja) * | 2005-12-27 | 2007-07-12 | Ntt Docomo Inc | サービス推薦システム、及び、サービス推薦方法 |
JP2007230422A (ja) * | 2006-03-02 | 2007-09-13 | Denso It Laboratory Inc | 車載機器制御装置 |
JP2008100665A (ja) * | 2006-09-22 | 2008-05-01 | Denso Corp | 車両用空調装置およびその制御方法 |
JP2008195221A (ja) * | 2007-02-13 | 2008-08-28 | Denso Corp | 車両用空調装置、車両用空調装置の制御方法および制御装置 |
JP2008273497A (ja) * | 2007-03-30 | 2008-11-13 | Denso It Laboratory Inc | 情報提供支援方法及び情報提供支援装置 |
JP2008285111A (ja) * | 2007-05-21 | 2008-11-27 | Denso Corp | 車両用空調装置およびその制御方法 |
JP2008290494A (ja) * | 2007-05-22 | 2008-12-04 | Denso Corp | 車両用空調装置およびその制御方法 |
JP2009139231A (ja) * | 2007-12-06 | 2009-06-25 | Denso Corp | 位置範囲設定装置、移動物体搭載装置の制御方法および制御装置、ならびに車両用空調装置の制御方法および制御装置 |
US7962441B2 (en) | 2006-09-22 | 2011-06-14 | Denso Corporation | Air conditioner for vehicle and controlling method thereof |
WO2013118225A1 (ja) * | 2012-02-08 | 2013-08-15 | 日本電気株式会社 | 最適クエリ生成装置、最適クエリ抽出方法および判別モデル学習方法 |
WO2016189905A1 (ja) * | 2015-05-27 | 2016-12-01 | ソニー株式会社 | 情報処理装置、情報処理方法及びプログラム |
JP2018526732A (ja) * | 2015-07-31 | 2018-09-13 | ブルヴェクター, インコーポレーテッドBluvector, Inc. | マルウェアの識別とモデルの不均一性のために現場の分類器を再訓練するためのシステム及び方法 |
JP2019124596A (ja) * | 2018-01-17 | 2019-07-25 | 横河電機株式会社 | 測定値予測モジュール、測定値予測プログラム及び測定値予測方法 |
JP2020140673A (ja) * | 2019-03-01 | 2020-09-03 | 富士ゼロックス株式会社 | 学習装置、情報出力装置、及びプログラム |
JP2021039487A (ja) * | 2019-09-02 | 2021-03-11 | 東芝テック株式会社 | マッチング情報出力装置及びマッチング情報出力システム |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050065804A1 (en) * | 2003-09-18 | 2005-03-24 | The Cobalt Group, Inc. | Method and system for furnishing of customized information to venues targeted to selected audiences |
US8055544B2 (en) | 2006-06-02 | 2011-11-08 | Cobalt Group, Inc. | Source- and venue-specific inventory data processing and identification system |
DE102008033439A1 (de) * | 2007-07-20 | 2009-01-29 | Denso Corporation, Kariya | Fahrzeugklimaanlage und Verfahren zur Steuerung der Fahrzeugklimaanlage |
JP2009075010A (ja) * | 2007-09-21 | 2009-04-09 | Denso It Laboratory Inc | 経路長算出装置、経路長算出方法、経路長算出プログラム及び車両用空調装置ならびに移動物体搭載機器の制御装置 |
JP5142736B2 (ja) * | 2008-01-25 | 2013-02-13 | 株式会社デンソーアイティーラボラトリ | 車両搭載機器の制御装置及び制御方法 |
US20090259621A1 (en) * | 2008-04-11 | 2009-10-15 | Concert Technology Corporation | Providing expected desirability information prior to sending a recommendation |
US8438310B2 (en) | 2008-10-01 | 2013-05-07 | Adp Dealer Services, Inc. | Systems and methods for configuring a website having a plurality of operational modes |
US8051159B2 (en) * | 2008-10-01 | 2011-11-01 | The Cobalt Group, Inc. | Systems and methods for configuring a network of affiliated websites |
US8781915B2 (en) * | 2008-10-17 | 2014-07-15 | Microsoft Corporation | Recommending items to users utilizing a bi-linear collaborative filtering model |
US8433660B2 (en) | 2009-12-01 | 2013-04-30 | Microsoft Corporation | Managing a portfolio of experts |
JP5584914B2 (ja) * | 2010-07-15 | 2014-09-10 | 株式会社日立製作所 | 分散計算システム |
US8473437B2 (en) | 2010-12-17 | 2013-06-25 | Microsoft Corporation | Information propagation probability for a social network |
US10482475B2 (en) | 2011-02-10 | 2019-11-19 | Adp Dealer Services, Inc. | Systems and methods for providing targeted advertising |
US11080734B2 (en) | 2013-03-15 | 2021-08-03 | Cdk Global, Llc | Pricing system for identifying prices for vehicles offered by vehicle dealerships and other entities |
WO2016070124A1 (en) | 2014-10-30 | 2016-05-06 | Pearson Education, Inc. | Content database generation |
US9667321B2 (en) * | 2014-10-31 | 2017-05-30 | Pearson Education, Inc. | Predictive recommendation engine |
US10867285B2 (en) | 2016-04-21 | 2020-12-15 | Cdk Global, Llc | Automatic automobile repair service scheduling based on diagnostic trouble codes and service center attributes |
US10332068B2 (en) | 2016-04-21 | 2019-06-25 | Cdk Global, Llc | Systems and methods for stocking an automobile |
US10853769B2 (en) | 2016-04-21 | 2020-12-01 | Cdk Global Llc | Scheduling an automobile service appointment in a dealer service bay based on diagnostic trouble codes and service bay attributes |
CN106897775B (zh) * | 2017-01-25 | 2019-04-02 | 浙江大学 | 基于贝叶斯集成学习的软测量建模方法 |
US10326858B2 (en) | 2017-05-23 | 2019-06-18 | Cdk Global, Llc | System and method for dynamically generating personalized websites |
CN111756754B (zh) * | 2017-07-28 | 2023-04-07 | 创新先进技术有限公司 | 一种训练模型的方法及装置 |
US11501351B2 (en) | 2018-03-21 | 2022-11-15 | Cdk Global, Llc | Servers, systems, and methods for single sign-on of an automotive commerce exchange |
US11190608B2 (en) | 2018-03-21 | 2021-11-30 | Cdk Global Llc | Systems and methods for an automotive commerce exchange |
US11410030B2 (en) | 2018-09-06 | 2022-08-09 | International Business Machines Corporation | Active imitation learning in high dimensional continuous environments |
US12020217B2 (en) | 2020-11-11 | 2024-06-25 | Cdk Global, Llc | Systems and methods for using machine learning for vehicle damage detection and repair cost estimation |
US11080105B1 (en) | 2020-11-18 | 2021-08-03 | Cdk Global, Llc | Systems, methods, and apparatuses for routing API calls |
US11514021B2 (en) | 2021-01-22 | 2022-11-29 | Cdk Global, Llc | Systems, methods, and apparatuses for scanning a legacy database |
US12045212B2 (en) | 2021-04-22 | 2024-07-23 | Cdk Global, Llc | Systems, methods, and apparatuses for verifying entries in disparate databases |
US11803535B2 (en) | 2021-05-24 | 2023-10-31 | Cdk Global, Llc | Systems, methods, and apparatuses for simultaneously running parallel databases |
US11983145B2 (en) | 2022-08-31 | 2024-05-14 | Cdk Global, Llc | Method and system of modifying information on file |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000099441A (ja) * | 1998-09-25 | 2000-04-07 | Fujitsu Ltd | 情報を調整して提示する情報提示装置および方法 |
JP2002244947A (ja) * | 2001-02-16 | 2002-08-30 | Ntt Comware Corp | 車輌向けアプリケーションサービス提供方法ならびにそのポータルサーバ |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6655963B1 (en) * | 2000-07-31 | 2003-12-02 | Microsoft Corporation | Methods and apparatus for predicting and selectively collecting preferences based on personality diagnosis |
JP3815218B2 (ja) | 2000-12-20 | 2006-08-30 | 株式会社日立製作所 | データ分析方法及び装置 |
US6681215B2 (en) * | 2001-03-20 | 2004-01-20 | General Electric Company | Learning method and apparatus for a causal network |
EP1320045A3 (en) | 2001-12-17 | 2005-08-10 | Nissan Motor Company, Limited | Vehicle information providing apparatus and method and on-vehicle information playing apparatus and method |
US7254747B2 (en) * | 2003-03-28 | 2007-08-07 | General Electric Company | Complex system diagnostic service model selection method and apparatus |
2005
- 2005-03-07 JP JP2006511153A patent/JP4639296B2/ja active Active
- 2005-03-07 US US10/593,065 patent/US7797266B2/en active Active
- 2005-03-07 DE DE112005000613T patent/DE112005000613T5/de not_active Ceased
- 2005-03-07 WO PCT/JP2005/003916 patent/WO2005091214A1/ja active Application Filing
Non-Patent Citations (3)
Title |
---|
ICHISE R. ET AL: "Onsei Gengo Interface no Jitsuyoka to Onsei Gengo Taiwa eno Tenkai: Tekioteki User Interface to Onsei Taiwa.", JOURNAL OF JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE., vol. 17, no. 3, 21 May 2002 (2002-05-21), pages 291 - 294, XP002992438 * |
SAMEKIMA K. ET AL: "Module Kyogo ni yoru Undo Patten no Symbol-ka Mimane Gakushu.", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. J85-D-II, no. 1, 1 January 2002 (2002-01-01), pages 90 - 100, XP002992440 * |
SUGIMOTO M. ET AL: "User Modeling to Tekioteki Interaction: Johoshushu System ni okeru User Modeling to Tekioteki Interaction.", JOURNAL OF JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE., vol. 14, no. 1, 1 January 1999 (1999-01-01), pages 25 - 32, XP002992439 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007109110A (ja) * | 2005-10-14 | 2007-04-26 | Uchida Yoko Co Ltd | Intention inference system, method, and program using individual behavioral characteristics |
JP2007179185A (ja) * | 2005-12-27 | 2007-07-12 | Ntt Docomo Inc | Service recommendation system and service recommendation method |
JP2007230422A (ja) * | 2006-03-02 | 2007-09-13 | Denso It Laboratory Inc | In-vehicle device control apparatus |
US7962441B2 (en) | 2006-09-22 | 2011-06-14 | Denso Corporation | Air conditioner for vehicle and controlling method thereof |
JP2008100665A (ja) * | 2006-09-22 | 2008-05-01 | Denso Corp | Vehicle air conditioner and control method thereof |
JP2008195221A (ja) * | 2007-02-13 | 2008-08-28 | Denso Corp | Vehicle air conditioner, and control method and control device for vehicle air conditioner |
US7966280B2 (en) | 2007-02-13 | 2011-06-21 | Denso Corporation | Automotive air conditioner and method and apparatus for controlling automotive air conditioner |
JP2008273497A (ja) * | 2007-03-30 | 2008-11-13 | Denso It Laboratory Inc | Information provision support method and information provision support apparatus |
JP2008285111A (ja) * | 2007-05-21 | 2008-11-27 | Denso Corp | Vehicle air conditioner and control method thereof |
JP2008290494A (ja) * | 2007-05-22 | 2008-12-04 | Denso Corp | Vehicle air conditioner and control method thereof |
JP2009139231A (ja) * | 2007-12-06 | 2009-06-25 | Denso Corp | Location range setting apparatus, control method and control device for mobile-object-mounted apparatus, and control method and control device for vehicle air conditioner |
US8180539B2 (en) | 2007-12-06 | 2012-05-15 | Denso Corporation | Location range setting apparatus, control method and controller for apparatus mounted in mobile object, and automotive air conditioner and control method for the same |
WO2013118225A1 (ja) * | 2012-02-08 | 2013-08-15 | NEC Corporation | Optimal query generation device, optimal query extraction method, and discriminant model learning method |
JPWO2013118225A1 (ja) * | 2012-02-08 | 2015-05-11 | NEC Corporation | Optimal query generation device, optimal query extraction method, and discriminant model learning method |
WO2016189905A1 (ja) * | 2015-05-27 | 2016-12-01 | Sony Corporation | Information processing device, information processing method, and program |
JP2018526732A (ja) * | 2015-07-31 | 2018-09-13 | Bluvector, Inc. | System and method for retraining in-field classifiers for malware identification and model heterogeneity |
US10733539B2 (en) | 2015-07-31 | 2020-08-04 | Bluvector, Inc. | System and method for machine learning model determination and malware identification |
US11481684B2 (en) | 2015-07-31 | 2022-10-25 | Bluvector, Inc. | System and method for machine learning model determination and malware identification |
JP2019124596A (ja) * | 2018-01-17 | 2019-07-25 | Yokogawa Electric Corporation | Measured value prediction module, measured value prediction program, and measured value prediction method |
JP7182059B2 (ja) | 2018-01-17 | 2022-12-02 | Yokogawa Electric Corporation | Measured value prediction module, measured value prediction program, and measured value prediction method |
JP2020140673A (ja) * | 2019-03-01 | 2020-09-03 | Fuji Xerox Co., Ltd. | Learning device, information output device, and program |
JP7293729B2 (ja) | 2019-03-01 | 2023-06-20 | FUJIFILM Business Innovation Corp. | Learning device, information output device, and program |
JP2021039487A (ja) * | 2019-09-02 | 2021-03-11 | Toshiba Tec Corporation | Matching information output device and matching information output system |
Also Published As
Publication number | Publication date |
---|---|
US7797266B2 (en) | 2010-09-14 |
US20070288413A1 (en) | 2007-12-13 |
JPWO2005091214A1 (ja) | 2008-02-07 |
JP4639296B2 (ja) | 2011-02-23 |
DE112005000613T5 (de) | 2007-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4639296B2 (ja) | Vehicle information processing system, vehicle information processing method, and program | |
JP7443486B2 (ja) | Recommendation of content items | |
US20190260729A1 (en) | Methods, systems, and media for presenting information related to an event based on metadata | |
US10318534B2 (en) | Recommendations in a computing advice facility | |
US10223459B2 (en) | Methods, systems, and media for personalizing computerized services based on mood and/or behavior information from multiple data sources | |
US8315931B2 (en) | System for determining high quality musical recordings | |
US8781915B2 (en) | Recommending items to users utilizing a bi-linear collaborative filtering model | |
US7974935B2 (en) | Telephone with intuitive capability | |
CN107533677A (zh) | Method, system, and medium for generating sensor output related to relevant information | |
US20010013009A1 (en) | System and method for computer-based marketing | |
JP2006127452A (ja) | Information presentation device and information presentation method | |
US20020062247A1 (en) | Selecting web site content to be displayed to a web site visitor based upon a probability distribution | |
JP2005520259A (ja) | Processing device with intuitive learning capability | |
CN111242310A (zh) | Feature validity evaluation method and apparatus, electronic device, and storage medium | |
US20200250575A1 (en) | Systems and Methods for Simulating a Complex Reinforcement Learning Environment | |
US20140019221A1 (en) | System for determining high quality musical recordings | |
JP2007058398A (ja) | Content recommendation device, content recommendation method, and computer program | |
US9654830B2 (en) | Audiovisual content recommendation method and device | |
WO2024152686A1 (zh) | Method, apparatus, device, storage medium, and computer program product for determining a recommendation index of resource information | |
WO2024084691A1 (ja) | Interactive suggestion output system | |
JP7529601B2 (ja) | Recommendation device, content provision system, and recommendation method | |
WO2021182581A1 (ja) | Information processing system, information processing device, terminal device, information processing method, and program | |
JP2005292904A (ja) | Information presentation device, information presentation method, and program | |
Willems | Design and development of a voice controlled music | |
IL288917A (en) | Computational learning rating and predictive calibration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 2006511153; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 1120050006131; Country of ref document: DE |
2007-02-22 | RET | De translation (de og part 6b) | Ref document number: 112005000613; Country of ref document: DE; Kind code of ref document: P |
| WWE | Wipo information: entry into national phase | Ref document number: 112005000613; Country of ref document: DE |
| 122 | Ep: pct application non-entry in european phase | |
| WWE | Wipo information: entry into national phase | Ref document number: 10593065; Country of ref document: US |
| WWP | Wipo information: published in national office | Ref document number: 10593065; Country of ref document: US |
| REG | Reference to national code | Ref country code: DE; Ref legal event code: 8607 |