WO2019159234A1 - Information processing apparatuses, method, program and non-transitory computer readable recording medium - Google Patents
- Publication number
- WO2019159234A1 (PCT/JP2018/004903)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- application
- information
- processing apparatus
- connected car
- information processing
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W36/00—Hand-off or reselection arrangements
- H04W36/0005—Control or signalling for completing the hand-off
- H04W36/0083—Determination of parameters used for hand-off, e.g. generation or modification of neighbour cell lists
- H04W36/00837—Determination of triggering parameters for hand-off
- H04W36/008375—Determination of triggering parameters for hand-off based on historical data
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
- H04W16/18—Network planning tools
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
Definitions
- This invention relates to an information processing apparatus, a method, a program and a non-transitory computer readable recording medium.
- Resource management techniques have been studied in order to improve resource utilization and service quality. For example, it has been studied to coordinate and unify different levels of resource controls, including Internet of Things (IoT) platform controls (e.g. application replication, application placement, topology adaptation, etc.), core network controls (e.g. network slicing, service chain design, VNF placement, etc.), and access network controls (e.g. radio resource allocation, radio resource configuration, traffic classification, hardware activation/deactivation, etc.).
- PTL 1-3 disclose techniques to predict future demand (demand for connectivity service, demand for website data, and demand for computing resources respectively).
- Connected car technology is being actively researched these days.
- the idea of V2X (Vehicle-to-everything) or connected car includes V2V (Vehicle-to-Vehicle), V2I (Vehicle-to-Infrastructure), V2N (Vehicle-to-Network) and V2P (Vehicle-to-Pedestrian).
- in V2V communication, a connected car communicates with another connected car.
- in V2I communication, a connected car is connected to a road side unit (RSU) and communicates with the RSU.
- PTL 4-5 disclose techniques related to vehicular communications.
- An example object of the present invention is to provide an information processing apparatus, a method and a program that enable more proper resource management related to connected cars.
- An information processing apparatus includes: an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- a method according to an example aspect of the present invention includes: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- a program causes a processor to execute: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- a non-transitory computer readable recording medium stores a program that causes a processor to execute: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- An information processing apparatus includes: a memory storing instructions; and one or more processors configured to execute the instructions to: acquire application state information indicating a state of an application working for a connected car; and perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- according to the present invention, it is possible to perform resource management related to connected cars more properly.
- the present invention may exert other advantageous effects instead of or together with the above advantageous effects.
- Fig. 1 is an explanatory diagram illustrating the example of the schematic configuration of the system according to the first example embodiment of the present invention.
- Fig. 2 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus according to the first example embodiment.
- Fig. 3 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus according to the first example embodiment.
- Fig. 4 is a block diagram illustrating an example of a schematic logical configuration of the control apparatus according to the first example embodiment.
- Fig. 5 is a block diagram illustrating an example of a schematic hardware configuration of the control apparatus according to the first example embodiment.
- Fig. 6 is an exemplary diagram for describing an example of tasks of an application for connected cars.
- Fig. 7 is an exemplary diagram for describing an example of a common task of two applications for connected cars.
- Fig. 8 is an exemplary diagram for describing an example of application states of an application for connected cars.
- Fig. 9 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the first example embodiment.
- Fig. 10 is a flow chart illustrating an example of a schematic flow of offloading decision according to the first example embodiment.
- Fig. 11 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus according to the second example embodiment.
- Fig. 12 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus according to the second example embodiment.
- Fig. 13 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the second example embodiment.
- "A and/or B" means "both A and B, or either A or B".
- Fig. 1 is an explanatory diagram illustrating the example of the schematic configuration of the system 1 according to the first example embodiment of the present invention.
- the system 1 includes a connected car 10, communication nodes 20 (e.g. a communication node 20A, a communication node 20B and a communication node 20C etc.), an information processing apparatus 100 and a control apparatus 200.
- the communication nodes 20, the information processing apparatus 100 and the control apparatus 200 communicate with each other via network 30.
- the connected car 10 is connected to the communication node 20 and communicates with the communication node 20 when the connected car 10 is located within a coverage area 21 of the communication node 20. For example, when the connected car 10 is located within a coverage area 21A, the connected car 10 is connected to the communication node 20A. Furthermore, when the connected car 10 moves from the coverage area 21A to a coverage area 21B, the connected car 10 is connected to the communication node 20B, e.g. by way of a handover from the communication node 20A to the communication node 20B.
- the connected car 10 executes one or more applications (i.e. one or more application programs).
- the one or more applications are applications for connected cars.
- a first example of the application is a context-aware advertisement delivery application. This application provides advertisements depending on the connected car’s situation and/or the driver’s situation.
- a second example of the application is a tourist recommender service application. This application suggests, for example, nearby gas stations, restaurants and/or hotels in consideration of not only the location of the connected car 10 but also the driver’s choices, age, nationality, etc.
- the one or more applications are not limited to these examples.
- Various applications e.g. applications for safety, driver assistance, mobility management and/or infotainment etc. can be executed by the connected car 10.
- the communication node 20 is (e.g. wirelessly and directly) connected with the connected car 10 and (e.g. wirelessly and directly) communicates with the connected car 10 when the connected car 10 is located within a coverage area 21 of the communication node 20.
- the communication node 20 may be called an edge gateway or an edge cloud.
- the communication node 20 is a road side unit (RSU) which is located on the road side and provides connectivity support to the connected car 10.
- the communication node 20 may be a base station of a mobile communication network such as evolved Node B (eNB) or next Generation Node B (gNB).
- the communication node 20 supports execution of applications working for the connected car 10. For example, the communication node 20 allocates its computing resources to process tasks of applications working for the connected car 10, and processes the tasks using the computing resources. That is, tasks of applications working for the connected car 10 are offloaded from the connected car 10 onto the communication node 20. Thanks to this offloading, an application working for the connected car 10 may be executed at higher speed in spite of the limited computation resources of the connected car 10. In addition, tasks of applications for the connected car 10 may be offloaded from the connected car 10 onto another connected car.
- the communication node 20 supports networking for applications working for the connected car 10. For example, the communication node 20 allocates its radio resources for transmission of data for the applications, and transmits or receives the data using the radio resources. In addition, for example, the communication node 20 forwards data for the applications from the connected car 10 to the network 30 or from the network 30 to the connected car 10.
- the information processing apparatus 100 performs prediction of a resource demand.
- the resource demand is a resource demand for a communication node 20 to be (e.g. wirelessly and directly) connected with one or more connected cars 10.
- the resource prediction performed by the information processing apparatus 100 will be described in detail later.
- Control apparatus 200 performs resource management in order to optimize resource utilization and improve service quality.
- the control apparatus 200 may be called a resource orchestrator.
- control apparatus 200 decides offloading of tasks of one or more applications to be working for the connected car 10.
- the control apparatus 200 decides the offloading based on the resource demand predicted by the information processing apparatus 100.
- the control apparatus 200 instructs the communication node 20 to perform the decided offloading.
- the offloading decision performed by the control apparatus will be described in detail later.
- FIG. 2 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus 100 according to the first example embodiment.
- the information processing apparatus 100 includes a communication unit 110, a storage unit 120, a local storage unit 130, a profile storage unit 140 and a processing unit 150.
- the communication unit 110 receives a signal from a network (e.g. the network 30), and transmits a signal to the network.
- the storage unit 120 temporarily or permanently stores a program, a parameter, and various data for an operation of the information processing apparatus 100.
- the local storage unit 130 stores records of transactions and resource utilization per application and/or communication node 20.
- the records are provided by connected cars 10 and/or communication nodes 20 and stored in the information processing apparatus 100 (the local storage unit 130).
- the profile storage unit 140 stores application behavior information generated based on the records stored in the local storage unit 130.
- the application behavior information will be described in detail later.
- the processing unit 150 provides various functions of the information processing apparatus 100.
- the processing unit 150 includes a communication processing unit 151, an information acquisition unit 153, a generation unit 155, a location estimation unit 157 and a prediction unit 159. It is noted that the processing unit 150 may further include constituent components other than these. That is, the processing unit 150 may also perform operations other than the operations of these constituent components.
- the communication processing unit 151 communicates with other nodes via communication unit 110.
- the information acquisition unit 153 acquires information from the communication unit 110, the storage unit 120, the local storage unit 130 and/or the profile storage unit 140.
- the information acquisition unit 153 may acquire information generated in the processing unit 150 (e.g. information generated by a constituent component of the processing unit 150).
- FIG. 3 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus 100 according to the first example embodiment.
- the information processing apparatus 100 includes a central processing unit (CPU) 181, a random access memory (RAM) 183, an internal memory storage 185, a hard disk drive (HDD) 187, a communication interface 189 and a bus 191.
- the CPU 181, the RAM 183, the internal memory storage 185, the HDD 187 and the communication interface 189 are connected with each other via the bus 191.
- the communication unit 110 may be implemented by the communication interface 189.
- the storage unit 120, the local storage unit 130 and profile storage unit 140 may be implemented by the internal memory storage 185 and/or the HDD 187.
- the processing unit 150 may be implemented by the CPU 181 and the RAM 183.
- the hardware configuration of the information processing apparatus 100 is not limited to the example of FIG. 3.
- the information processing apparatus 100 may be implemented by another hardware configuration.
- the information processing apparatus 100 may be virtualized. That is, the information processing apparatus 100 may be implemented as a virtual machine. In this case, the information processing apparatus 100 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
- the information processing apparatus 100 includes: a memory (e.g. the RAM 183 and/or the internal memory storage 185) storing instructions (a program), and one or more processors (e.g. CPU 181) configured to execute the instructions to execute processing of the processing unit 150 (processing of the communication processing unit 151, the information acquisition unit 153, the generation unit 155, the location estimation unit 157 and/or the prediction unit 159).
- FIG. 4 is a block diagram illustrating an example of a schematic logical configuration of the control apparatus 200 according to the first example embodiment.
- the control apparatus 200 includes a communication unit 210, a storage unit 220 and a processing unit 230.
- the communication unit 210 receives a signal from a network (e.g. the network 30), and transmits a signal to the network.
- the storage unit 220 temporarily or permanently stores a program, a parameter, and various data for an operation of the control apparatus 200.
- the processing unit 230 provides various functions of the control apparatus 200.
- the processing unit 230 includes a communication processing unit 231, an information acquisition unit 233 and a decision unit 235. It is noted that the processing unit 230 may further include constituent components other than these. That is, the processing unit 230 may also perform operations other than the operations of these constituent components.
- the communication processing unit 231 communicates with other nodes via communication unit 210.
- the information acquisition unit 233 acquires information from the communication unit 210 and/or the storage unit 220.
- the information acquisition unit 233 may acquire information generated in the processing unit 230 (e.g. information generated by a constituent component of the processing unit 230).
- FIG. 5 is a block diagram illustrating an example of a schematic hardware configuration of the control apparatus 200 according to the first example embodiment.
- the control apparatus 200 includes a CPU 281, a RAM 283, an internal memory storage 285, a HDD 287, a communication interface 289 and a bus 291.
- the CPU 281, the RAM 283, the internal memory storage 285, the HDD 287 and the communication interface 289 are connected with each other via the bus 291.
- the communication unit 210 may be implemented by the communication interface 289.
- the storage unit 220 may be implemented by the internal memory storage 285 and/or the HDD 287.
- the processing unit 230 may be implemented by the CPU 281 and the RAM 283.
- the hardware configuration of the control apparatus 200 is not limited to the example of FIG. 5.
- the control apparatus 200 may be implemented by another hardware configuration.
- control apparatus 200 may be virtualized. That is, the control apparatus 200 may be implemented as a virtual machine. In this case, the control apparatus 200 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
- control apparatus 200 includes: a memory (e.g. the RAM 283 and/or the internal memory storage 285) storing instructions (a program), and one or more processors (e.g. CPU 281) configured to execute the instructions to execute processing of the processing unit 230 (processing of the communication processing unit 231, the information acquisition unit 233 and/or the decision unit 235).
- Each application for connected cars (i.e. each application which works for connected cars) has one or more tasks. For example, a part of the tasks is processed by a connected car 10 and the remaining part of the tasks is processed by a communication node 20 connected with the connected car 10.
- FIG. 6 is an exemplary diagram for describing an example of tasks of an application for connected cars.
- an application has four tasks, a task 41, a task 43, a task 45 and a task 47 represented by A, B, C and D respectively.
- the task 41 (task A) is processing which uses inputs x, y, z (inputs of the application), and outputs x1, y1, z1 as results.
- the task 43 (task B) is processing which uses inputs x1, y1 and outputs x2 as a result.
- the task 45 (task C) is processing which uses inputs y1, z1, and outputs z2 as a result.
- the task 47 (task D) is processing which uses inputs x2, z2 and outputs r as a result (a result of the application).
- a task group 51 including task 41 (task A) and task 43 (task B) is processed by a connected car 10
- a task group 53 including task 45 (task C) and task 47 (task D) is processed by a communication node 20.
- Each task consumes computation resources to be processed.
- transmission of intermediate outputs consumes network resources (e.g. radio resources). That is, transmission of the outputs y1, z1, x2 from the connected car 10 to the communication node 20 consumes network resources.
- Network resources may be represented by a data rate.
- each application can be represented as a graph including vertices (e.g. the tasks A, B, C and D in FIG. 6) representing tasks and edges (i.e. links connecting vertices) representing relations between tasks.
- the graph includes input-output information of each task.
- the graph may also include resource utilization information indicating computation resources consumed for each task and network resources consumed for transmission of each intermediate output.
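- As an illustration (not part of the embodiment itself), the sketch below encodes the application of FIG. 6 as such a graph: vertices carry hypothetical computation-resource annotations, edges carry hypothetical network-resource (data rate) annotations, and a small helper computes the radio-link traffic for a given offloading split. All names and numbers are assumptions for illustration only.

```python
# Hypothetical encoding of the application graph of FIG. 6.
# Vertices (tasks) are annotated with the computation resources they consume,
# edges with the network resources consumed when an intermediate output has
# to cross the car / communication-node boundary. All numbers are made up.
app_graph = {
    "tasks": {
        "A": {"inputs": ["x", "y", "z"], "outputs": ["x1", "y1", "z1"], "cpu_cycles": 2.0e8},
        "B": {"inputs": ["x1", "y1"],    "outputs": ["x2"],             "cpu_cycles": 1.5e8},
        "C": {"inputs": ["y1", "z1"],    "outputs": ["z2"],             "cpu_cycles": 1.0e8},
        "D": {"inputs": ["x2", "z2"],    "outputs": ["r"],              "cpu_cycles": 0.5e8},
    },
    # Directed edges: (producer task, consumer task, data rate in Mbit/s
    # needed if the edge is cut by the offloading boundary).
    "edges": [
        ("A", "B", 0.8),  # carries x1, y1
        ("A", "C", 0.6),  # carries y1, z1
        ("B", "D", 0.2),  # carries x2
        ("C", "D", 0.2),  # carries z2
    ],
}

def boundary_traffic(graph, offloaded_tasks):
    """Data rate consumed on the radio link for a given offloading split."""
    return sum(rate for src, dst, rate in graph["edges"]
               if (src in offloaded_tasks) != (dst in offloaded_tasks))

# Task group {C, D} processed by the communication node, {A, B} by the car:
print(boundary_traffic(app_graph, offloaded_tasks={"C", "D"}))  # 0.8 (edges A->C and B->D cross)
```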
- a part of the tasks of the application is processed by the connected car 10 and the remaining part of the tasks is processed by the communication node 20 connected with the connected car 10.
- all of the tasks of the application may be processed by a connected car 10 or communication node 20.
- the number of tasks is not fixed for all applications and varies from application to application. Some applications may have many tasks and other applications may have fewer tasks.
- Two or more applications may have a relationship when executed.
- two or more applications have one or more common tasks to be processed.
- FIG. 7 is an exemplary diagram for describing an example of a common task of two applications for connected cars.
- an application 60 has tasks, 61, 63, 65
- an application 70 has tasks 63, 71, 73, 75, 77. That is, the task 63 is a common task of the application 60 and application 70.
- each application state is a level of processed tasks and consumed resources.
- each application state can be also represented as a graph including not only vertices and edges but also information on (a level of) processed tasks (i.e. active tasks) and consumed resources.
- vertices of the graph are associated with computation resources
- edges of the graph are associated with network resources.
- This graph can be expressed in a suitable modelling language (e.g. Graph Modelling Language (GML)) or expressed as a virtual network including virtual vertices and edges (i.e. links).
- FIG. 8 is an exemplary diagram for describing an example of application states of an application for connected cars.
- the application state 1 is an offline state, and the application states 2-4 are online states.
- a task 81 and a task 83 are processed in the application state 1.
- a task 85, a task 87 and a task 89 are processed in the application states 2-4.
- a size of each vertex indicates an amount of computation resources consumed for the task in the state.
- the task 85 is mainly processed in the state 2
- the task 87 is mainly processed in the state 3
- the task 89 is mainly processed in the state 4.
- a current application state transitions from one application state to another application state.
- the number of states is not fixed for all applications and varies from application to application. There may be many states for some applications, and there may be fewer states for other applications.
- the number of tasks running during an application state varies from application state to application state.
- two tasks (tasks 81, 83) are running in the application state 1 (an offline state), while three tasks (tasks 85, 87, 89) are running in the application states 2, 3, 4 (online states).
- each application state is classified as an offline state or an online state in the example of FIG. 8, there may be other kinds of states for the application. For example, there may be an idle state, a sleep state and/or a standby state.
- Application states of an application depend on how the application is designed and the number of modes it supports.
- the application states are defined by an application provider of the application.
- the application states may be defined by analyzing records related to the application (stored in the local storage unit 130).
- the records may be snapshots of processed tasks and consumed resources for processed tasks.
- the information processing apparatus 100 estimates a location of the connected car 10.
- the location of the connected car 10 is a location where the connected car 10 will be in the near future (for example in a predetermined period of time).
- the information processing apparatus 100 estimates the location of the connected car 10 based on information provided by the connected car 10.
- the provided information is navigation information indicating a navigation route of the connected car 10.
- the provided information may be travel information indicating a position of the connected car 10 and a direction and a speed of a travel of the connected car 10. The provided information is not limited to these examples.
- the location is a coverage area 21 of a communication node 20 where the connected car 10 will be in the near future.
- the location is a communication node 20 to be connected with the connected car 10 in the near future.
- suppose, for example, that the connected car 10 is within the coverage area 21A and connected to the communication node 20A at time T-1 and time T0, and is within the coverage area 21B and connected to the communication node 20B at time T1.
- the information processing apparatus 100 may estimate, based on information provided by the connected car 10 at time T-1, the location of the connected car 10 at time T0 to be the coverage area 21A.
- similarly, the information processing apparatus 100 may estimate, based on information provided by the connected car 10 at time T0, the location of the connected car 10 at time T1 to be the coverage area 21B.
- another node other than the information processing apparatus 100 may estimate a location of the connected car 10 and transmit, to the information processing apparatus 100, location information indicating the estimated location.
- the information processing apparatus 100 (the communication processing unit 151 and the information acquisition unit 153) may receive and acquire the location information.
- the information processing apparatus 100 (the prediction unit 159) performs prediction of a resource demand.
- the resource demand is a resource demand for a communication node 20 to be connected with one or more connected cars (in the near future). That is, the information processing apparatus 100 (the prediction unit 159) predicts a resource demand per communication node 20.
- the resource demand per communication node 20 is aggregation of resource demands for one or more connected cars 10 to be connected to the communication node 20.
- the resource demand includes demand of computation resources for the communication node 20.
- the resource demand may also include demand of network resources (e.g. radio resources) for the communication node 20.
- network resources e.g. radio resources
- the prediction of the resource demand includes the following steps.
- the prediction of the resource demand includes prediction of one or more applications to be working for a connected car 10 (to be connected to the communication node 20). This application prediction may be performed per connected car 10.
- the prediction of the resource demand includes prediction of a state of each of the one or more applications. This state prediction may be performed per application.
- the prediction of the resource demand includes estimation of offloading of tasks of the one or more applications from a connected car 10 (to be connected to the communication node 20) onto the communication node 20.
- This offloading prediction may be performed per connected car 10.
- the estimation of the offloading may include estimation of tasks to be offloaded from a connected car 10 to the communication node 20, and estimation of an amount of resources to be used for the offloading (i.e. estimation of a resource demand for the connected car 10).
- the prediction of the resource demand includes aggregation of resource demands for one or more connected cars to be connected to the communication node 20 (in the near future). As a result, a resource demand for the communication node 20 is calculated.
- the information processing apparatus 100 acquires context feature information for each of one or more connected cars 10 to be connected to the communication node 20.
- the one or more connected cars 10 are connected cars 10 estimated to be connected to the communication node 20 by location estimation of the connected cars 10.
- the information processing apparatus 100 (the prediction unit 159) performs the prediction of the resource demand (for the communication node 20) based on the context feature information and application behavior information concerning behavior of applications for connected cars.
- the context feature information and the application behavior information will be described in detail later.
- the context feature information for the connected car 10 is provided by the connected car 10 and includes following information (i.e. basic features).
- the context feature information includes application state information indicating a state of an application working for a connected car 10.
- the state of the application i.e. an application state
- an application state is a level of processed tasks and consumed resources as described above. Examples of the application state are also described above with reference to FIG.8.
- the application state information is an index of the state of the application.
- the application state information may be values representing a snapshot of processed tasks and consumed resources for processed tasks.
- the application state information may indicate states of two or more applications working for the connected car, because two or more applications may be running in the connected car 10.
- the information processing apparatus 100 acquires the application state information for each of one or more connected cars 10 to be connected to the communication node 20. Then, the information processing apparatus 100 (the prediction unit 159) performs the prediction of the resource demand for the communication node 20 based on the application state information for each of the one or more connected cars and the application behavior information.
- the context feature information includes other feature information related to the connected car 10 or the application working for the connected car 10.
- the context feature information includes information indicating one or more of the following basic features:
  - resource utilization in the connected car 10;
  - an application working for the connected car 10 (e.g. an application ID);
  - a location of the connected car 10;
  - a speed and a direction of the connected car 10;
  - time;
  - day of the week;
  - a car type of the connected car 10;
  - a frequency used by the application (per time slot);
  - a sound level around the connected car 10;
  - a fuel level of the connected car 10;
  - a temperature in or around the connected car 10;
  - an age of the driver or a passenger inside the connected car 10;
  - weather around the connected car 10; and
  - a charging status of the car or of an apparatus inside the car related to the connected car 10 or to the application working for the connected car 10.
- the application state information is also information indicating a basic feature (a state of an application working for the connected car 10).
- the application behavior information is generated by information processing apparatus 100 (generation unit 155) based on the records stored in the local storage unit 130, and is stored in the profile storage unit 140.
- the application behavior information may be called an application behavior profile.
- the application behavior information is spatio-temporal information and includes information per location (e.g. coverage area 21 / communication node 20) and/or per time.
- the application behavior information includes following information.
- the application behavior information includes context feature information including the above-described basic features, higher level features and session features.
- the context feature information may include relations among different features.
- the context feature information may be represented as a graph (a context feature graph) indicating relation among different features.
- the context feature information is generated for each communication node 20.
- Higher level features are derived by analyzing one or more basic features. These higher level features provide a more human-understandable context. For example, the basic feature “age” can be analyzed with respect to different thresholds to derive the higher level feature “seniority (child, young, elderly)” of the driver or a passenger inside the car. Another example could be analyzing multiple basic features such as “location”, “time” and “day of the week” collected at different times to derive the higher level feature “going to home” or “going to office”.
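- A minimal sketch of such rule-based derivation, assuming simple thresholds and hypothetical feature names (the exact rules used by the embodiment are not prescribed here):

```python
# Minimal sketch: deriving higher level features from basic features with
# simple threshold/rule logic. Thresholds and feature names are hypothetical.
def derive_seniority(age: int) -> str:
    if age < 18:
        return "child"
    if age < 60:
        return "young"
    return "elderly"

def derive_destination(location: str, hour: int, weekday: bool,
                       home: str, office: str) -> str:
    # Combining several basic features ("location", "time", "day of the week")
    # into a single human-understandable higher level feature.
    if weekday and hour < 10 and location != office:
        return "going to office"
    if hour >= 17 and location != home:
        return "going to home"
    return "other"

print(derive_seniority(72))                                       # "elderly"
print(derive_destination("downtown", 8, True, "home", "office"))  # "going to office"
```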
- higher level features include one or more of the following features (actions or events related to a connected car): at work; at home; parking; driving inside city; on highway; going to office; going to home; with family; at the signal; at the crossroad; facing turn; traffic condition; etc.
- the session features capture the sequential relationship among launch events of different applications (or their states).
- the session features are features correlated to actions or events (higher level features) that took place within a time period, and cover the hidden relationships among application actions. For example, a recurrent preceding or following action to a particular application launch suggests a strong relationship in launch events. However, it is not necessary that the preceding or following action is the immediately preceding or immediately following action; it can happen some time back in the history (preceding) and might happen some time ahead in the future (following), with other events taking place in between. Thus, the duration of the time period may be obtained by sampling from an empirically defined Gaussian distribution. For example, in a connected car, car condition diagnosis, finding the nearest repair shop and payment settlement consist of application actions related to each other, spread over a duration of time. As another example, while using a social networking website such as Facebook, users tend to open Twitter, Instagram and YouTube in the same session.
- the session features include one or more of the following features correlated to a certain action or event (a certain higher level feature): the last application; the last location; the last charge time; the last audio; and the last certain context trigger.
- the session features include these features (i.e. the last application, the last location, the last charge time, the last audio and/or the last certain context trigger) after parking and/or before parking, after the crossroad and/or before the crossroad, after driving inside city and/or before driving inside city, etc.
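- A possible sketch of session-feature extraction using the Gaussian-sampled window described above; the event log, field names and distribution parameters are hypothetical:

```python
import random

# Hypothetical event log for one connected car: (timestamp in seconds, event name).
events = [
    (100, "parking"), (160, "launch:diagnosis_app"),
    (400, "launch:repair_shop_finder"), (900, "launch:payment_app"),
]

def session_features(events, anchor_time, mean_window=600.0, std_window=120.0):
    """Return the events preceding `anchor_time` that fall inside a session
    window whose duration is sampled from an empirically defined Gaussian."""
    window = max(0.0, random.gauss(mean_window, std_window))
    return [name for t, name in events if anchor_time - window <= t < anchor_time]

# Session features preceding the payment application launch at t = 900 s:
print(session_features(events, anchor_time=900))
```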
- the information processing apparatus 100 generates the context feature information by analyzing the basic features. To derive these context features, a predefined set of rules (such as thresholds, mathematical functions, binary logic, existing machine learning feature transformation techniques, etc.) is applied to the basic feature values.
- the context feature information acts as the input data source for training the application prediction model.
- the context feature information can be represented in CSV file format, where each row may represent the feature values of a connected car 10 at a particular time, and the columns in each row may consist of values of different features including basic features, higher level features and session features.
- the context feature information is stored in local storage unit 130.
- the context feature information is used for training the application prediction model.
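- A small sketch of loading such a CSV file into feature rows for training; the file name and column names are hypothetical:

```python
import csv

# Each row holds the feature values of one connected car at one time instant.
# The file name and the column names in the comment below are hypothetical.
def load_context_features(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

rows = load_context_features("context_features.csv")
# e.g. rows[0] == {"car_id": "10", "time": "08:15", "location": "cell_21A",
#                  "app_state": "2", "seniority": "young", "last_application": "nav"}
```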
- the application behavior information includes an application prediction model.
- An application prediction model is a machine learning model trained using the context feature information stored in the local storage unit 130.
- The application prediction model predicts the applications that will be launched next, in near real time, for each connected car 10 (with launch probabilities) by analyzing the present values of the context feature information.
- The application prediction model is generated in two stages: (1) a general prediction model is trained with the context feature information stored in the storage unit 120, which includes historical context feature information of all the connected cars from all the edge gateways. However, this general model does not effectively capture the varying spatio-temporal behavior of the applications running in different edge gateways and in each connected car, so in the next step (2) parameter estimation is done for each edge gateway using the context feature information stored in the local storage unit 130.
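- One conceivable realization of this two-stage procedure, assuming scikit-learn and numeric feature vectors; the model type, placeholder data, and variable names are assumptions, not the method prescribed by the embodiment:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Stage 1: a general prediction model trained on historical context feature
# information of all connected cars from all edge gateways (placeholder data).
X_global = np.random.rand(1000, 8)          # hypothetical feature matrix
y_global = np.random.randint(0, 5, 1000)    # hypothetical "next application" labels
classes = np.unique(y_global)

general_model = SGDClassifier(loss="log_loss")   # logistic loss ("log" in older scikit-learn)
general_model.partial_fit(X_global, y_global, classes=classes)

# Stage 2: per-edge-gateway parameter estimation, continuing from the general
# training with the context feature information stored locally at that gateway.
local_data = {
    "node_20A": (np.random.rand(100, 8), np.random.randint(0, 5, 100)),
    "node_20B": (np.random.rand(100, 8), np.random.randint(0, 5, 100)),
}
per_gateway_models = {}
for gateway_id, (X_local, y_local) in local_data.items():
    model = SGDClassifier(loss="log_loss")
    model.partial_fit(X_global, y_global, classes=classes)  # start from the general data
    model.partial_fit(X_local, y_local)                     # adapt to this gateway
    per_gateway_models[gateway_id] = model
```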
- the application prediction model is used for prediction of one or more applications to be working for a connected car 10 and/or prediction of a state of each of the one or more applications.
- the application behavior information includes state transition information indicating transitions among states of an application for connected cars.
- the state transition information is generated for each application.
- the state transition information may vary by location and/or time (e.g. time of day etc.)
- the state transition information indicates a graph including vertices representing the states and edges representing the transitions among the states with transition probabilities.
- the state transition information is a Markov chain model.
- the state transition information indicates a graph described in FIG. 8.
- the state transition information indicates a graph including four vertices representing states 1-4 and edges among the vertices. Each of the edges represents a transition between two of the four application states with a transition probability. That is, the order of running states is not static, and a transition from one state to another state takes place with a certain transition probability.
- Such a model can be represented by a Markov chain model.
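- A compact sketch of estimating such a Markov chain from recorded state transitions and using it to pick the most likely next state; the transition records are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical records of observed application state transitions
# (current state -> next state), e.g. extracted from the local storage unit.
observed_transitions = [(1, 2), (2, 3), (3, 4), (4, 2), (2, 3), (3, 2), (2, 4)]

def build_markov_chain(transitions):
    counts = defaultdict(Counter)
    for current, nxt in transitions:
        counts[current][nxt] += 1
    # Normalize counts into transition probabilities per current state.
    return {s: {n: c / sum(cnt.values()) for n, c in cnt.items()}
            for s, cnt in counts.items()}

chain = build_markov_chain(observed_transitions)
print(chain[2])                          # e.g. {3: 0.67, 4: 0.33}
print(max(chain[2], key=chain[2].get))   # most likely next state after state 2
```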
- the information processing apparatus 100 (generation unit 155) generates the state transition information (e.g. a Markov chain model) based on the records (e.g. records of application state transitions) stored in the local storage unit 130.
- the state transition information is used for prediction of a state of each of one or more applications to be working for a connected car 10.
- the application behavior information includes network strength information indicating network strength in locations within a coverage area 21 of the communication node 20.
- the network strength may be network availability (e.g. data rate) that can be supported in the locations.
- the information processing apparatus 100 (generation unit 155) generates the network strength information based on the records (e.g. records of network strength in locations within a coverage area 21) stored in the local storage unit 130.
- the network strength information is used for estimation of offloading of tasks of the one or more applications (to be working for a connected car 10) from a connected car 10.
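- One possible way to build such network strength information is to average recorded data rates per location bin; the records and the bin granularity below are assumptions:

```python
from collections import defaultdict

# Hypothetical records: (location bin within coverage area 21, measured data
# rate in Mbit/s) taken from the local storage unit.
records = [("bin_3", 42.0), ("bin_3", 38.0), ("bin_7", 12.5), ("bin_7", 15.0)]

rate_sums = defaultdict(lambda: [0.0, 0])     # location -> [sum of rates, count]
for location, rate in records:
    rate_sums[location][0] += rate
    rate_sums[location][1] += 1

network_strength = {loc: total / n for loc, (total, n) in rate_sums.items()}
print(network_strength)   # e.g. {"bin_3": 40.0, "bin_7": 13.75}
```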
- the application behavior information includes offloading logic to decide tasks to be offloaded from a connected car 10 to a communication node 20.
- the offloading logic includes an offloading gain function and constraints.
- the constraints include e.g. limited computing resources in a connected car, limited computing resources in a communication node, and/or limited network resources, etc.
- the offloading gain may represent improvement of application performance, improvement in response time, reduction of network usage, saving of energy, and/or ensuring of data locality, etc, which may be achieved by offloading tasks of applications. That is, the offloading gain represents the advantages of offloading.
- for example, the application performance is a latency. In this case, suppose that executing a task inside the car results in a latency of 150-200 ms, while offloading that task to the edge gateway gives a latency of 50-100 ms. The reduction in latency is the improvement brought by offloading that task, and the offloading gain here can be regarded as the improvement in latency (the application performance). Needless to say, the application performance and the offloading gain are not limited to the above example.
- the application performance may be throughput, response time, energy saving, reduced resource cost etc.
- the offloading gain function predicts the application performance under the constraints.
- the offloading gain function may predict change of the application performance with a change of conditions (e.g. allocated resources), under the constraints.
- the offloading gain function is predefined.
- the performance model (the offloading gain function) is a mathematical formula or a machine learning model. The exact logic and type of the offloading gain function are implementation specific; this example embodiment is not restricted to any specific technique for the offloading gain function.
- Various techniques for performance modelling, such as queuing theory, fuzzy logic, regression, artificial neural networks etc., may be used for the offloading gain function.
- when values of allocated resources are inserted into the offloading gain function, the offloading gain function outputs the performance to be realized for the application.
- the values of allocated resources include values of computation resources (e.g. values of computing capacity at a communication node 20 and a connected car 10) and values of network resources (e.g. values of an amount of available network resources).
- the values of allocated resources may include a value of memory resources and/or a value of storage resources.
- tasks of an application are offloaded only if the offloading gain for the application is more than the predetermined gain.
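- A simple latency-based sketch of such an offloading gain function and threshold check; the latency model, parameter values and threshold are assumptions:

```python
# Hypothetical latency-based offloading gain function. All parameters
# (CPU cycles, capacities, data sizes, threshold) are illustrative assumptions.
def latency_local(cpu_cycles, car_capacity):
    return cpu_cycles / car_capacity                      # seconds

def latency_offloaded(cpu_cycles, node_capacity, data_bits, link_rate):
    return data_bits / link_rate + cpu_cycles / node_capacity

def offloading_gain(cpu_cycles, data_bits, car_capacity, node_capacity, link_rate):
    return (latency_local(cpu_cycles, car_capacity)
            - latency_offloaded(cpu_cycles, node_capacity, data_bits, link_rate))

GAIN_THRESHOLD_S = 0.05   # offload only if at least 50 ms is saved

gain = offloading_gain(cpu_cycles=2.0e8, data_bits=4.0e6,
                       car_capacity=1.0e9, node_capacity=8.0e9, link_rate=5.0e7)
print(gain, gain > GAIN_THRESHOLD_S)   # 0.095 s saved -> offload
```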
- the offloading logic is used for estimation of offloading of tasks of the one or more applications (to be working for a connected car 10) from a connected car 10.
- the information processing apparatus 100 performs the prediction of the resource demand (for the communication node 20) based on the context feature information and the application behavior information.
- FIG. 9 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the first example embodiment.
- the prediction of the resource demand includes steps of “application and state prediction” (1st step: S301), “offloading estimation” (2nd step: S303), and “aggregation” (3rd step: S305).
- the 1st and 2nd steps are performed for each connected car 10 to be connected to the communication node 20.
- the 3rd step is performed for the communication node 20.
- the information processing apparatus 100 (the prediction unit 159) predicts one or more applications to be working for a connected car 10 (to be connected to the communication node 20) and a state of each of the one or more applications based on real-time (or current) context feature information (e.g. basic features including the application state information etc.) and the application behavior information (i.e. the application prediction model and the state transition information).
- the application prediction model takes the real-time context feature information as an input and generates information on applications that will be running in the next time slot as an output (i.e. predicts the applications). For those predicted applications, the state transition information is fetched and used (with the current application state information of running applications among those predicted applications), and the next states of those applications are predicted.
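- A sketch of this inference step, assuming a trained per-gateway classifier exposing a predict_proba interface (e.g. the one sketched above) and per-application Markov chains; the probability threshold and names are assumptions:

```python
import numpy as np

# Hypothetical inference step for one connected car: predict the applications
# expected to run in the next time slot, then their next states.
def predict_apps_and_states(local_model, chains, context_features,
                            current_states, prob_threshold=0.3):
    """`local_model` is a per-gateway classifier with predict_proba; `chains`
    maps application -> Markov chain dict; `current_states` maps each running
    application -> its current application state."""
    probs = local_model.predict_proba(np.asarray([context_features]))[0]
    predicted_apps = [app for app, p in zip(local_model.classes_, probs)
                      if p >= prob_threshold]
    next_states = {}
    for app in predicted_apps:
        transitions = chains.get(app, {}).get(current_states.get(app), {})
        if transitions:   # most probable next state of this application
            next_states[app] = max(transitions, key=transitions.get)
    return predicted_apps, next_states
```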
- the information processing apparatus 100 (the prediction unit 159) estimates offloading of tasks of the one or more applications from a connected car 10 (to be connected to the communication node 20) onto the communication node 20, based on the result of the application and state prediction and the application behavior information (i.e. the network strength information and the offloading logic).
- the information processing apparatus 100 (the prediction unit 159) discovers resource availability in the estimated location (i.e. the constraints) based on the network strength information and the resource utilization information for the estimated location. Then, the information processing apparatus 100 (the prediction unit 159) uses the offloading gain function and the resource availability to calculate the application performance and the offloading gain (i.e. the improvement of the application performance). For example, application partition configurations (i.e. application task offloading patterns) are predefined.
- task A among 4 tasks (A, B, C, D) is offloaded in the first application partition configuration
- task B among 4 tasks (A, B, C, D) is offloaded in the second application partition configuration
- tasks A, B among 4 tasks (A, B, C, D) are offloaded in the third application partition configuration.
- the offloading gain is calculated for each of the application partition configuration.
- the information processing apparatus 100 selects an application partition configuration with the best offloading gain.
- the information processing apparatus 100 may select one or more application partition configurations with positive gains.
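- A sketch of evaluating predefined application partition configurations with an offloading gain callable and selecting the one with the best (positive) gain; the configurations and the gain callable are assumptions:

```python
# Hypothetical: evaluate predefined application partition configurations
# (task offloading patterns) and select the one with the best offloading gain.
partition_configs = [
    {"offloaded": {"A"}},
    {"offloaded": {"B"}},
    {"offloaded": {"A", "B"}},
]

def best_partition(configs, gain_of):
    """`gain_of` maps a set of offloaded tasks to an offloading gain
    (e.g. computed with the gain function sketched earlier)."""
    scored = [(gain_of(c["offloaded"]), c) for c in configs]
    gain, config = max(scored, key=lambda gc: gc[0])
    return config if gain > 0 else None     # offload only on positive gain

# Toy gain callable for illustration:
print(best_partition(partition_configs, gain_of=lambda tasks: 0.03 * len(tasks)))
```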
- the information processing apparatus 100 (the prediction unit 159) aggregates resource demands for the one or more connected cars 10 to be connected to the communication node 20. That is, the information processing apparatus 100 (the prediction unit 159) calculates a resource demand for the communication node 20 by summing the resource demands for the one or more connected cars 10. As a result, the resource demand for the communication node 20 is predicted.
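- The aggregation step can be sketched as a simple sum of per-car demands per communication node; the demand values and keys below are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-car resource demands (computation in CPU cycles/s,
# network in Mbit/s) for the cars estimated to connect to each node.
per_car_demand = {
    "car_1": {"node": "20B", "cpu": 3.5e8, "net": 1.2},
    "car_2": {"node": "20B", "cpu": 1.0e8, "net": 0.4},
    "car_3": {"node": "20C", "cpu": 2.0e8, "net": 0.8},
}

per_node_demand = defaultdict(lambda: {"cpu": 0.0, "net": 0.0})
for demand in per_car_demand.values():
    per_node_demand[demand["node"]]["cpu"] += demand["cpu"]
    per_node_demand[demand["node"]]["net"] += demand["net"]

print(dict(per_node_demand))   # aggregated resource demand per communication node
```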
- the resource demand includes a computation resource demand and a network resource demand.
- application behavior for connected cars may vary by location and time, and the application behavior information can cover these spatio-temporal patterns. Therefore, the prediction of the resource demand (for the communication node 20) based on the context feature information (including the application state information) and the application behavior information enables more precise prediction of the resource demand, which may result in more proper resource management related to connected cars.
- the predicted resource demand is used to decide offloading of tasks of one or more applications to be working for the connected car 10.
- the information processing apparatus 100 (the communication processing unit 151) transmits information on the predicted resource demand to the control apparatus 200.
- the information processing apparatus 100 (the communication processing unit 151) also transmits information on the estimated offloading on which the predicted resource demand is based.
- the control apparatus 200 (the communication processing unit 231) receives the information from the information processing apparatus 100.
- the control apparatus 200 thus acquires information on the predicted resource demand. Then, the control apparatus 200 (the decision unit 235) decides the offloading of tasks based on the predicted resource demand.
- the offloading includes offloading from the connected car 10 onto the communication node 20.
- the offloading further may include offloading between the connected car 10 and another connected car.
- FIG. 10 is a flow chart illustrating an example of a schematic flow of offloading decision according to the first example embodiment.
- the control apparatus 200 performs checks of a resource bottleneck and/or a performance degradation for applications etc. based on the predicted resource demand and available resources in the connected cars 10 and the communication node 20.
- the control apparatus 200 (the decision unit 235) optimizes and decides the offloading based on the above checks. Specifically, for example, if there is any resource bottleneck and/or performance degradation as a result of the checks, the control apparatus 200 (the decision unit 235) optimizes the offloading, e.g. in the following ways:
  - reserving resources in the communication node 20 for critical applications;
  - using resources of other connected cars in the vicinity with surplus resources to offload tasks (this car-to-car offloading can be executed via the communication node 20 or by directly establishing a connection among the connected cars);
  - offloading for applications recognized with critical contexts;
  - running a low quality version of the application, running limited features of the algorithm, reducing redundant tasks, and/or switching to other modes of the application provided by the developer; and
  - queueing tasks in the buffer for later processing.
- the control apparatus 200 decides the offloading of tasks based on the predicted resource demand.
- the first example embodiment is not limited to this example.
- the information processing apparatus 100 may decide the offloading of tasks based on the predicted resource demand.
- the decision unit 235 may be included in the information processing apparatus 100 (processing unit 150).
- the information processing apparatus 100 (communication processing unit 151) may transmit information on the decision to the control apparatus 200.
- the information processing apparatus 100 (the generation unit 155) generates (at least part of) the application behavior information.
- the first example embodiment is not limited to this example.
- another apparatus may generate (at least part of) the application behavior information.
- the local storage unit 130 and/or the profile storage unit 140 may be included in another apparatus instead of the information processing apparatus 100.
- the second example embodiment is a generalized example embodiment, while the above-described first example embodiment is a more concrete example embodiment.
- FIG. 11 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus 400 according to the second example embodiment.
- the information processing apparatus 400 includes an information acquisition unit 410 and a prediction unit 420.
- FIG. 12 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus 400 according to the second example embodiment.
- the information processing apparatus 400 includes a CPU 481, a RAM 483, an internal memory storage 485, a HDD 487, a communication interface 489 and a bus 491.
- the CPU 481, the RAM 483, the internal memory storage 485, the HDD 487 and the communication interface 489 are connected with each other via the bus 491.
- the information acquisition unit 410 and the prediction unit 420 may be implemented by the CPU 481 and the RAM 483 etc.
- the hardware configuration of the information processing apparatus 400 is not limited to the example of FIG. 12.
- the information processing apparatus 400 may be implemented by another hardware configuration.
- the information processing apparatus 400 may be virtualized. That is, the information processing apparatus 400 may be implemented as a virtual machine. In this case, the information processing apparatus 400 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
- the information processing apparatus 400 includes: a memory (e.g. the RAM 483 and/or the internal memory storage 485) storing instructions (a program), and one or more processors (e.g. CPU 481) configured to execute the instructions to execute processing of the information acquisition unit 410 and the prediction unit 420.
- the information processing apparatus 400 acquires application state information indicating a state of an application working for a connected car.
- the information processing apparatus 400 (the prediction unit 420) performs prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- description of the application state information, the application behavior information and the prediction of the resource demand is the same as in the first example embodiment.
- the application state information, the application behavior information and the prediction of the resource demand are not limited to the examples of the first example embodiment.
- FIG. 13 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the second example embodiment.
- the information processing apparatus 400 acquires application state information indicating a state of an application working for a connected car.
- the information processing apparatus 400 (the prediction unit 420) performs prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- any processing described herein need not be performed chronologically in the order illustrated in the corresponding flow chart.
- the steps of the processing may be performed in a different order from the order illustrated in the corresponding flow chart, or may be performed in parallel.
- a module for an information processing apparatus or a control apparatus including constituent elements of the information processing apparatus or the control apparatus described herein (e.g. the communication processing unit, the information acquisition unit, the generation unit, the location estimation unit, the prediction unit and/or the decision unit) may be provided.
- the module may be an integrated circuit (IC) chip.
- methods including processing of such constituent elements may be provided, and programs for causing processors to execute processing of such constituent elements may be provided.
- a non-transitory computer readable recording medium storing the programs may be provided. It is apparent that such modules, methods, programs, and recording media are also included in the present invention.
- an information processing apparatus or a control apparatus of the present invention is not limited to a complete product, and may be a module of a complete product.
- the module may be an IC chip.
- An information processing apparatus comprising: an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- Supplementary Note 5 The information processing apparatus according to any one of Supplementary Notes 1 to 4, wherein the information acquisition unit is configured to acquire the application state information for each of one or more connected cars to be connected to the communication node, and the prediction unit is configured to perform the prediction of the resource demand for the communication node based on the application state information for each of the one or more connected cars and the application behavior information.
- Supplementary Note 7 The information processing apparatus according to any one of Supplementary Notes 1 to 6, wherein the information acquisition unit is configured to acquire context feature information for the connected car, and the context feature information includes the application state information and other information related to the connected car or the application working for the connected car.
- a method comprising: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- a non-transitory computer readable recording medium storing a program that causes a processor to execute: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- An information processing apparatus comprising: a memory storing instructions; and one or more processors configured to execute the instructions to: acquire application state information indicating a state of an application working for a connected car; and perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
(Problem) To enable more proper resource management related to connected cars. (Solution) An information processing apparatus of the present invention includes: an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
Description
This invention is concerning an information processing apparatus, a method, a program and a non-transitory computer readable recording medium.
In recent years, resource management techniques are studied in order to improve resource utilization and service quality. For example, it is studied to coordinate and unify different levels of resource controls including Internet of Things (IoT) platform controls (e.g. application replication, application placement, topology adaptation, etc.), core network controls (e.g. network slicing, service chain design, VNF placement, etc.), and access network controls (e.g. radio resource allocation, radio resource configuration, traffic classification, hardware activation/deactivation, etc.).
For example, PTL 1-3 disclose techniques to predict future demand (demand for connectivity service, demand for website data, and demand for computing resources respectively).
On the other hand, Vehicle-to-everything (V2X) or Connected car technology is rapidly researched in these days. The idea of V2X or Connected car includes V2V (Vehicle-to-Vehicle), V2I (Vehicle-to-Infrastructure), V2N (Vehicle-to-Network) and V2P (Vehicle-to-Pedestrian). For example, in V2V communication, a connected car will communicate with another connected car. In V2I communication, a connected car will be connected to a road side unit (RSU) and communicate with the RSU.
For example, PTL 4-5 disclose techniques related to vehicular communications.
However, existing technology including the above-described PTL 1-5 does not predict a computing and/or network resource demand related to a connected car (or a vehicle) in consideration of the fact that the behavior of connected car applications varies with time and place. Therefore, the prediction of a resource demand related to a connected car may not be precise. As a result, it may be difficult to perform resource management properly.
An example object of the present invention is to provide an information processing apparatus, a method and a program that enable more proper resource management related to connected cars.
An information processing apparatus according to an example aspect of the present invention includes: an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
A method according to an example aspect of the present invention includes: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
A program according to an example aspect of the present invention causes a processor to execute: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
A non-transitory computer readable recording medium according to an example aspect of the present invention stores a program that causes a processor to execute: acquiring application state information indicating a state of an application working for a connected car; and performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
An information processing apparatus according to an example aspect of the present invention includes: a memory storing instructions; and one or more processors configured to execute the instructions to: acquire application state information indicating a state of an application working for a connected car; and perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
According to the present invention, it is possible to perform resource management related to connected cars more properly. Note that the present invention may exert other advantageous effects instead of or together with the above advantageous effects.
Example embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Note that, in the present description and drawings, elements to which the same or similar descriptions are applicable are denoted by the same reference signs, whereby overlapping descriptions may be omitted.
Descriptions will be given in the following order.
1. First Example Embodiment
1.1. Configuration of System
1.2. Configuration of Information Processing Apparatus
1.3. Configuration of Control Apparatus
1.4. Technical Features
2. Second Example Embodiment
2.1. Configuration of Information Processing Apparatus
2.2. Technical Features
In this disclosure, “A and/or B” means “both A and B, or, either A or B”.
<<1. First Example Embodiment>>
With reference to FIGs.1-10, a first example embodiment of the present invention is described.
<1.1. Configuration of System>
With reference to Fig. 1, an example of a schematic configuration of a system 1 according to the first example embodiment of the present invention will be described. Fig. 1 is an explanatory diagram illustrating the example of the schematic configuration of the system 1 according to the first example embodiment of the present invention. With reference to Fig. 1, the system 1 includes a connected car 10, communication nodes 20 (e.g. a communication node 20A, a communication node 20B and a communication node 20C etc.), an information processing apparatus 100 and a control apparatus 200. For example, the communication nodes 20, the information processing apparatus 100 and the control apparatus 200 communicate with each other via the network 30.
(1) Connected Car 10
The connected car 10 is connected to the communication node 20 and communicates with the communication node 20 when the connected car 10 is located within a coverage area 21 of the communication node 20. For example, when the connected car 10 is located within a coverage area 21A, the connected car is connected to a communication node 20A. Furthermore, when the connected car 10 moves from the coverage area 21A to a coverage area 21B, the connected car is connected to a communication node 20B, e.g. by way of a handover from the communication node 20A to the communication node 20B.
The connected car 10 executes one or more applications (i.e. one or more application programs). The one or more applications are applications for connected cars. A first example of the application is a context-aware advertisement delivery application. This application provides advertisements depending on the connected car’s situation and/or the driver’s situation. A second example of the application is a tourist recommender service application. This application suggests, for example, nearby gas stations, restaurants and/or hotels in consideration of not only location of the connected car 10 but also the driver’s choice, age, nationality etc. The one or more applications are not limited to these examples. Various applications (e.g. applications for safety, driver assistance, mobility management and/or infotainment etc.) can be executed by the connected car 10.
(2) Communication node 20
The communication node 20 is (e.g. wirelessly and directly) connected with the connected car 10 and (e.g. wirelessly and directly) communicates with the connected car 10 when the connected car 10 is located within a coverage area 21 of the communication node 20. The communication node 20 may be called an edge gateway or an edge cloud. For example, the communication node 20 is a road side unit (RSU) which is located on the road side and provides connectivity support to the connected car 10. Alternatively, the communication node 20 may be a base station of a mobile communication network such as evolved Node B (eNB) or next Generation Node B (gNB).
The communication node 20 supports execution of applications working for the connected car 10. For example, the communication node 20 allocates its computing resources to process tasks of applications working for the connected car 10, and processes the tasks using the computing resources. That is, tasks of applications working for the connected car 10 are offloaded from the connected car 10 onto the communication node 20. Thanks to this offloading, the application working for the connected car 10 may be executed at higher speed in spite of the limited computation resources of the connected car 10. In addition, tasks of applications for the connected car 10 may be offloaded from the connected car 10 onto another connected car.
Furthermore, the communication node 20 supports networking for applications working for the connected car 10. For example, the communication node 20 allocates its radio resources for transmission of data for the applications, and transmits or receives the data using the radio resources. In addition, for example, the communication node 20 forwards data for the applications from the connected car 10 to the network 30 or from the network 30 to the connected car 10.
(3) Information processing apparatus 100
The information processing apparatus 100 performs prediction of a resource demand. For example, the resource demand is a resource demand for a communication node 20 to be (e.g. wirelessly and directly) connected with one or more connected cars 10. The resource prediction performed by the information processing apparatus 100 will be described in detail later.
(4) Control apparatus 200
The control apparatus 200 performs resource management in order to optimize resource utilization and improve service quality. The control apparatus 200 may be called a resource orchestrator.
For example, the control apparatus 200 decides offloading of tasks of one or more applications to be working for the connected car 10. The control apparatus 200 decides the offloading based on the resource demand predicted by the information processing apparatus 100. Then, the control apparatus 200 instructs the communication node 20 to perform the decided offloading. The offloading decision performed by the control apparatus 200 will be described in detail later.
<1.2. Configuration of Information Processing Apparatus >
With reference to FIG. 2 and FIG. 3, an example of a configuration of an information processing apparatus 100 according to the first example embodiment is described. FIG. 2 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus 100 according to the first example embodiment. According to FIG. 2, the information processing apparatus 100 includes a communication unit 110, a storage unit 120, a local storage unit 130, a profile storage unit 140 and a processing unit 150.
The communication unit 110 receives a signal from a network (e.g. the network 30), and transmits a signal to the network.
The storage unit 120 temporarily or permanently stores a program, a parameter, and various data for an operation of the information processing apparatus 100.
The local storage unit 130 stores records of transactions and resource utilization per application and/or communication node 20. For example, the records are provided by connected cars 10 and/or communication nodes 20 and stored in the information processing apparatus 100 (the local storage unit 130).
The profile storage unit 140 stores application behavior information generated based on the records stored in the local storage unit 130. The application behavior information will be described in detail later.
The processing unit 150 provides various functions of the information processing apparatus 100. The processing unit 150 includes a communication processing unit 151, an information acquisition unit 153, a generation unit 155, a location estimation unit 157 and a prediction unit 159. It is noted that the processing unit 150 may further include constituent components other than these constituent components. That is, the processing unit 150 may also perform operations other than the operations of these constituent components.
The communication processing unit 151 communicates with other nodes via communication unit 110. The information acquisition unit 153 acquires information from the communication unit 110, the storage unit 120, the local storage unit 130 and/or the profile storage unit 140. The information acquisition unit 153 may acquire information generated in the processing unit 150 (e.g. information generated by a constituent component of the processing unit 150).
Specific operations of the communication processing unit 151, the information acquisition unit 153, the generation unit 155, the location estimation unit 157 and the prediction unit 159 will be described in detail later.
FIG. 3 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus 100 according to the first example embodiment. According to FIG. 3, the information processing apparatus 100 includes a central processing unit (CPU) 181, a random access memory (RAM) 183, an internal memory storage 185, a hard disk drive (HDD) 187, a communication interface 189 and a bus 191. For example, the CPU 181, the RAM 183, the internal memory storage 185, the HDD 187 and the communication interface 189 are connected with each other via the bus 191. The communication unit 110 may be implemented by the communication interface 189. The storage unit 120, the local storage unit 130 and the profile storage unit 140 may be implemented by the internal memory storage 185 and/or the HDD 187. The processing unit 150 may be implemented by the CPU 181 and the RAM 183.
Needless to say, the hardware configuration of the information processing apparatus 100 is not limited to the example of FIG. 3. The information processing apparatus 100 may be implemented by other hardware configuration.
Alternatively, the information processing apparatus 100 may be virtualized. That is, the information processing apparatus 100 may be implemented as a virtual machine. In this case, the information processing apparatus 100 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
For example, more generally, the information processing apparatus 100 includes: a memory (e.g. the RAM 183 and/or the internal memory storage 185) storing instructions (a program), and one or more processors (e.g. CPU 181) configured to execute the instructions to execute processing of the processing unit 150 (processing of the communication processing unit 151, the information acquisition unit 153, the generation unit 155, the location estimation unit 157 and/or the prediction unit 159).
<1.3. Configuration of Control Apparatus >
With reference to FIG. 4 and FIG. 5, an example of a configuration of a control apparatus 200 according to the first example embodiment is described. FIG. 4 is a block diagram illustrating an example of a schematic logical configuration of the control apparatus 200 according to the first example embodiment. According to FIG. 4, the control apparatus 200 includes a communication unit 210, a storage unit 220 and a processing unit 230.
The communication unit 210 receives a signal from a network (e.g. the network 30), and transmits a signal to the network.
The storage unit 220 temporarily or permanently stores a program, a parameter, and various data for an operation of the control apparatus 200.
The processing unit 230 provides various functions of the control apparatus 200. The processing unit 230 includes a communication processing unit 231, an information acquisition unit 233 and a decision unit 235. It is noted that the processing unit 230 may further include constituent components other than these constituent components. That is, the processing unit 230 may also perform operations other than the operations of these constituent components.
The communication processing unit 231 communicates with other nodes via communication unit 210. The information acquisition unit 233 acquires information from the communication unit 210 and/or the storage unit 220. The information acquisition unit 233 may acquire information generated in the processing unit 230 (e.g. information generated by a constituent component of the processing unit 230).
Specific operations of the communication processing unit 231, the information acquisition unit 233 and the decision unit 235 will be described in detail later.
FIG. 5 is a block diagram illustrating an example of a schematic hardware configuration of the control apparatus 200 according to the first example embodiment. According to FIG. 5, the control apparatus 200 includes a CPU 281, a RAM 283, an internal memory storage 285, a HDD 287, a communication interface 289 and a bus 291. For example, the CPU 281, the RAM 283, the internal memory storage 285, the HDD 287 and the communication interface 289 are connected with each other via the bus 291. The communication unit 210 may be implemented by the communication interface 289. The storage unit 220 may be implemented by the internal memory storage 285 and/or the HDD 287. The processing unit 230 may be implemented by the CPU 281 and the RAM 283.
Needless to say, the hardware configuration of the control apparatus 200 is not limited to the example of FIG. 5. The control apparatus 200 may be implemented by other hardware configuration.
Alternatively, the control apparatus 200 may be virtualized. That is, the control apparatus 200 may be implemented as a virtual machine. In this case, the control apparatus 200 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
For example, more generally, the control apparatus 200 includes: a memory (e.g. the RAM 283 and/or the internal memory storage 285) storing instructions (a program), and one or more processors (e.g. CPU 281) configured to execute the instructions to execute processing of the processing unit 230 (processing of the communication processing unit 231, the information acquisition unit 233 and/or the decision unit 235).
<1.4. Technical Features>
Next, with reference to FIGs.6-10, technical features of the first example embodiment are described.
(1) Application for connected cars
- Tasks
Each application for connected cars (i.e. each application which works for connected cars) has one or more tasks. For example, a part of the tasks is processed by a connected car 10 and the remaining part of the tasks is processed by a communication node 20 connected with the connected car 10.
FIG. 6 is an exemplary diagram for describing an example of tasks of an application for connected cars. With reference to FIG. 6, an application has four tasks, a task 41, a task 43, a task 45 and a task 47 represented by A, B, C and D respectively. The task 41 (task A) is processing which uses inputs x, y, z (inputs of the application), and outputs x1, y1, z1 as results. The task 43 (task B) is processing which uses inputs x1, y1 and outputs x2 as a result. The task 45 (task C) is processing which uses inputs y1, z1, and outputs z2 as a result. The task 47 (task D) is processing which uses inputs x2, z2 and outputs r as a result (a result of the application). A task group 51 including task 41 (task A) and task 43 (task B) is processed by a connected car 10, and a task group 53 including task 45 (task C) and task 47 (task D) is processed by a communication node 20. Each task consumes computation resources to be processed. In addition, transmission of intermediate outputs consumes network resources (e.g. radio resources). That is, transmission of the outputs y1, z1, x2 from the connected car 10 to the communication node 20 consumes network resources. Network resources may be represented by a data rate.
As described in FIG. 6, each application can be represented as a graph including vertices (A, B, C, D) representing tasks and edges (i.e. links connecting vertices) representing relations between tasks. In addition, the graph includes input-output information of each task. The outputs can be defined as (x1, y1, z1) = A(x, y, z), x2 = B(x1, y1), z2 = C(y1, z1), r = D(x2, z2), where A, B, C and D are functions of tasks. Furthermore, the graph may also include resource utilization information indicating computation resources consumed for each task and network resources consumed for transmission of each intermediate output.
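As an illustration only, the task graph of FIG. 6 could be encoded as in the following sketch; the dictionary layout, the helper offloaded_link_volume and the resource figures are assumptions made for this sketch, not a prescribed data format.

```python
# Illustrative encoding of the task graph of FIG. 6: vertices are tasks, edges
# carry the intermediate outputs. The resource figures are placeholder values.
app_graph = {
    "tasks": {
        "A": {"inputs": ["x", "y", "z"], "outputs": ["x1", "y1", "z1"], "cpu": 2.0},
        "B": {"inputs": ["x1", "y1"], "outputs": ["x2"], "cpu": 1.0},
        "C": {"inputs": ["y1", "z1"], "outputs": ["z2"], "cpu": 1.5},
        "D": {"inputs": ["x2", "z2"], "outputs": ["r"], "cpu": 0.5},
    },
    # Edges: (producer, consumer, data volume in MB). An edge consumes network
    # resources when its two tasks run on different nodes (car vs. communication node).
    "edges": [("A", "B", 0.2), ("A", "C", 0.3), ("B", "D", 0.1), ("C", "D", 0.1)],
}

def offloaded_link_volume(car_tasks, graph):
    """Data volume (MB) that must cross the radio link for a given task partition."""
    return sum(volume for src, dst, volume in graph["edges"]
               if (src in car_tasks) != (dst in car_tasks))

# Partition of FIG. 6: task group {A, B} on the connected car 10, {C, D} on the
# communication node 20 -> the edges A->C and B->D cross the radio link.
print(offloaded_link_volume({"A", "B"}, app_graph))  # 0.3 + 0.1 = 0.4 MB
```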
As described above, for example, a part of the tasks of the application is processed by the connected car 10 and the remaining part of the tasks is processed by the communication node 20 connected with the connected car 10. Alternatively, all of the tasks of the application may be processed by a connected car 10 or communication node 20.
For example, the number of tasks is not fixed for all applications and varies from application to application. Some applications may have many tasks and other applications may have fewer tasks.
Two or more applications may have a relationship when executed. For example, two or more applications have one or more common tasks to be processed. FIG. 7 is an exemplary diagram for describing an example of a common task of two applications for connected cars. With reference to FIG. 7, an application 60 has tasks 61, 63 and 65, and an application 70 has tasks 63, 71, 73, 75 and 77. That is, the task 63 is a common task of the application 60 and the application 70.
- States
There are application states for each application for connected cars.
For example, when the application is running, one or more tasks of the application are processed consuming computation resources and network resources at any particular instant. Snapshots (at instances) of processed tasks (i.e. active tasks) and consumed resources for processed tasks can be classified into different levels. Each of these levels can be referred to as an application state. That is, each application state is a level of processed tasks and consumed resources.
As described above with reference to FIG. 6, the application can be represented as a graph including vertices representing tasks and edges (i.e. links) representing relation between tasks. Thus, each application state can be also represented as a graph including not only vertices and edges but also information on (a level of) processed tasks (i.e. active tasks) and consumed resources. For example, vertices of the graph are associated with computation resources, and edges of the graph are associated with network resources. This graph can be expressed in suitable modelling language (e.g. Graph Modelling Language (GML)) or expressed as a virtual network including virtual vertices and edges (i.e. links).
FIG. 8 is an exemplary diagram for describing an example of application states of an application for connected cars. With reference to FIG. 8, there are application states 1-4 for this application. The application state 1 is an offline state, and the application states 2-4 are online states. A task 81 and a task 83 are processed in the application state 1. A task 85, a task 87 and a task 89 are processed in the application states 2-4. In FIG. 8, a size of each vertex (each task) indicates an amount of computation resources consumed for the task in the state. In the simple example of FIG. 8, the task 85 is mainly processed in the state 2, the task 87 is mainly processed in the state 3, and the task 89 is mainly processed in the state 4. A current application state transits from one application state to another application state.
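For illustration, one of these application states could be recorded as such a resource-annotated graph snapshot, as in the following sketch; the task identifiers follow FIG. 8, while the resource values and the dictionary layout are assumptions.

```python
# Illustrative snapshot of application state 2 of FIG. 8 as a resource-annotated
# graph (vertex weight = computation consumed, edge weight = network consumed).
# The numbers are placeholders.
state_2 = {
    "state_id": 2,
    "active_tasks": {"task_85": 3.0, "task_87": 0.5, "task_89": 0.5},      # CPU units
    "links": {("task_85", "task_87"): 0.2, ("task_87", "task_89"): 0.1},   # Mbps
}

def state_resource_totals(state):
    """Total computation and network resources consumed in this snapshot."""
    return sum(state["active_tasks"].values()), sum(state["links"].values())

print(state_resource_totals(state_2))  # approximately (4.0, 0.3)
```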
As the application states described in FIG. 8 are merely an example, there may be different application states for an application.
For example, the number of states is not fixed for all applications and varies from application to application. There may be many states for some applications, and there may be fewer states for other applications.
For example, the number of tasks running during an application state varies from application state to application state. In the example of FIG. 8, two tasks (tasks 81, 83) are running in the application state 1 (an offline state), while three tasks (tasks 85, 87, 89) are running in the application states 2, 3, 4 (online states).
Although each application state is classified as an offline state or an online state in the example of FIG. 8, there may be other kinds of states for the application. For example, there may be an idle state, a sleep state and/or a standby state. The application states of an application depend on how the application is designed and the number of modes it supports.
For example, the application states are defined by an application provider of the application. Alternatively, the application states may be defined by analyzing records related to the application (stored in the local storage unit 130). The records may be snapshots of processed tasks and consumed resources for processed tasks.
(2) Estimation of Location
For example, the information processing apparatus 100 (the location estimation unit 157) estimates a location of the connected car 10. The location of the connected car 10 is a location where the connected car 10 will be in the near future (for example in a predetermined period of time).
For example, the information processing apparatus 100 (the location estimation unit 157) estimates the location of the connected car 10 based on information provided by the connected car 10. As one example, the provided information is navigation information indicating a navigation route of the connected car 10. As another example, the provided information may be travel information indicating a position of the connected car 10 and a direction and a speed of a travel of the connected car 10. The provided information is not limited to these examples.
Specifically, for example, the location is a coverage area 21 of a communication node 20 where the connected car 10 will be in the near future. In other words, the location is a communication node 20 to be connected with the connected car 10 in the near future. In an example of FIG. 1, the connected car 10 is within a coverage area 21A and connected to a communication node 20A at time T-1 and time T0, and is within a coverage area 21B and connected to a communication node 20B at time T1. For example, the information processing apparatus 100 (the location estimation unit 157) may estimate, based on information provided by the connected car at time T-1, a location of the connected car 10 at time T0 to be the coverage area 21A. For example, the information processing apparatus 100 (the location estimation unit 157) may estimate, based on information provided by the connected car at time T0, a location of the connected car 10 at time T1 to be the coverage area 21B.
Alternatively, another node other than the information processing apparatus 100 may estimate a location of the connected car 10 and transmit, to the information processing apparatus 100, location information indicating the estimated location. The information processing apparatus 100 (the communication processing unit 151 and the information acquisition unit 153) may receive and acquire the location information.
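A minimal sketch of the travel-information variant of the location estimation is given below; the planar coverage-area model, the dead-reckoning step and all numeric values are assumptions of this sketch and are not the only way the location estimation unit 157 could be implemented.

```python
import math

# Coverage areas modelled as circles on a local planar grid (centre in metres,
# radius in metres). The values are placeholders for illustration.
COVERAGE_AREAS = {
    "node_20A": {"center": (0.0, 0.0), "radius": 500.0},
    "node_20B": {"center": (900.0, 0.0), "radius": 500.0},
}

def estimate_next_coverage(position, heading_deg, speed_mps, horizon_s):
    """Dead-reckon the car's position 'horizon_s' seconds ahead from the travel
    information and return the communication node whose coverage centre is closest."""
    dx = speed_mps * horizon_s * math.sin(math.radians(heading_deg))
    dy = speed_mps * horizon_s * math.cos(math.radians(heading_deg))
    future = (position[0] + dx, position[1] + dy)
    return min(COVERAGE_AREAS,
               key=lambda node: math.dist(future, COVERAGE_AREAS[node]["center"]))

# A car 300 m east of node 20A, heading east at 20 m/s, is expected in the
# coverage area of node 20B about 30 seconds later.
print(estimate_next_coverage((300.0, 0.0), heading_deg=90.0, speed_mps=20.0, horizon_s=30.0))
```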
(3) Prediction of Resource Demand
The information processing apparatus 100 (the prediction unit 159) performs prediction of a resource demand.
(3-1) Resource Demand
For example, the resource demand is a resource demand for a communication node 20 to be connected with one or more connected cars (in the near future). That is, the information processing apparatus 100 (the prediction unit 159) predicts a resource demand per communication node 20. For example, as described below, the resource demand per communication node 20 is aggregation of resource demands for one or more connected cars 10 to be connected to the communication node 20.
For example, the resource demand includes demand of computation resources for the communication node 20. The resource demand may also include demand of network resources (e.g. radio resources) for the communication node 20.
(3-2) Concrete Steps
For example, the prediction of the resource demand includes the following steps.
- 1st step: Application and State Prediction
For example, the prediction of the resource demand includes prediction of one or more applications to be working for a connected car 10 (to be connected to the communication node 20). This application prediction may be performed per connected car 10.
For example, the prediction of the resource demand includes prediction of a state of each of the one or more applications. This state prediction may be performed per application.
- 2nd step: Offloading Estimation
For example, the prediction of the resource demand includes estimation of offloading of tasks of the one or more applications from a connected car 10 (to be connected to the communication node 20) onto the communication node 20. This offloading prediction may be performed per connected car 10. The estimation of the offloading may include estimation of tasks to be offloaded from a connected car 10 to the communication node 20, and estimation of an amount of resources to be used for the offloading (i.e. estimation of a resource demand for the connected car 10).
- 3rd step: Aggregation
For example, the prediction of the resource demand includes aggregation of resource demands for one or more connected cars to be connected to the communication node 20 (in the near future). As a result, a resource demand for the communication node 20 is calculated.
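The three steps could be composed as in the following sketch. The helper functions predict_apps_and_states and estimate_offloading are hypothetical placeholders for the logic described later in this section, and the returned values are synthetic.

```python
def predict_apps_and_states(context_features, profile):
    """Placeholder for the 1st step (application and state prediction)."""
    return [("adv_delivery", "state_2", 0.9)]   # (application, predicted state, probability)

def estimate_offloading(predicted_apps, car, node_id, profile):
    """Placeholder for the 2nd step (offloading estimation): the resources the
    communication node is expected to spend on this car's offloaded tasks."""
    return {"cpu": 1.5 * len(predicted_apps), "network": 0.4 * len(predicted_apps)}

def predict_node_demand(node_id, cars, profile):
    """3rd step: aggregate per-car resource demands into the demand for the node."""
    total = {"cpu": 0.0, "network": 0.0}
    for car in cars:
        apps = predict_apps_and_states(car["context_features"], profile)   # 1st step
        demand = estimate_offloading(apps, car, node_id, profile)          # 2nd step
        total["cpu"] += demand["cpu"]
        total["network"] += demand["network"]
    return total

# Two connected cars expected in the node's coverage area.
print(predict_node_demand("node_20B",
                          cars=[{"context_features": {}}, {"context_features": {}}],
                          profile=None))
```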
(3-3) Input Information
The information processing apparatus 100 (the information acquisition unit 153) acquires context feature information for each of one or more connected cars 10 to be connected to the communication node 20. The one or more connected cars 10 are connected cars 10 estimated to be connected to the communication node 20 by location estimation of the connected cars 10.
Then, the information processing apparatus 100 (the prediction unit 159) performs the prediction of the resource demand (for the communication node 20) based on the context feature information and application behavior information concerning behavior of applications for connected cars. The context feature information and the application behavior information will be described in detail later.
(4) Context Feature Information
For example, the context feature information for the connected car 10 is provided by the connected car 10 and includes the following information (i.e. basic features).
(4-1) Application State Information
The context feature information includes application state information indicating a state of an application working for a connected car 10.
For example, the state of the application (i.e. an application state) is a level of processed tasks and consumed resources as described above. Examples of the application state are also described above with reference to FIG.8.
For example, the application state information is an index of the state of the application. Alternatively, the application state information may be values representing a snapshot of processed tasks and consumed resources for processed tasks.
The application state information may indicate states of two or more applications working for the connected car, because two or more applications may be running in the connected car 10.
For example, the information processing apparatus 100 (the information acquisition unit 153) acquires the application state information for each of one or more connected cars 10 to be connected to the communication node 20. Then, the information processing apparatus 100 (the prediction unit 159) performs the prediction of the resource demand for the communication node 20 based on the application state information for each of the one or more connected cars and the application behavior information.
(4-2) Other Information
The context feature information includes other feature information related to the connected car 10 or the application working for the connected car 10. For example, the context feature information includes information indicating one or more of the following basic features:
- resource utilization in the connected car 10;
- an application working for the connected car 10 (e.g. application ID);
- a location of the connected car 10;
- a speed and a direction of the connected car 10;
- time;
- day of the week;
- a car type of a connected car 10;
- a frequency used by the application (per time slot);
- a sound level around the connected car 10;
- a fuel level of the connected car 10;
- a temperature in or around the connected car 10;
- an age of the driver or a passenger inside the connected car 10;
- weather around the connected car 10; and
- a charging status of the car or the apparatus inside the car related to the connected car 10 or the application working for the connected car 10.
The application state information is also information indicating a basic feature (a state of an application working for the connected car 10).
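For illustration only, one context feature report carrying such basic features might look like the following record; the field names and values are assumptions, not a normative schema.

```python
# One context feature report from a connected car (basic features only).
# Field names and values are illustrative placeholders.
context_features = {
    "car_id": "car_42",
    "application_id": "tourist_recommender",
    "application_state": 2,            # index of the current application state
    "cpu_utilization": 0.65,           # resource utilization in the car
    "location": (35.68, 139.76),       # latitude, longitude
    "speed_kmh": 48.0,
    "direction_deg": 90.0,
    "time": "2018-02-13T08:15:00",
    "day_of_week": "Tuesday",
    "car_type": "sedan",
    "sound_level_db": 55.0,
    "fuel_level": 0.7,
    "temperature_c": 21.5,
    "driver_age": 42,
    "weather": "rain",
    "charging_status": "not_charging",
}
```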
(5) Application Behavior Information
For example, (at least part of) the application behavior information is generated by the information processing apparatus 100 (the generation unit 155) based on the records stored in the local storage unit 130, and is stored in the profile storage unit 140. The application behavior information may be called an application behavior profile.
For example, the application behavior information is spatio-temporal information and includes information per location (e.g. coverage 21/communication node 20) and/or time.
For example, the application behavior information includes the following information.
(5-1) Context Feature Information
For example, the application behavior information includes context feature information including the above-described basic features, higher level features and session features. The context feature information may include relations among different features. The context feature information may be represented as a graph (a context feature graph) indicating relations among different features. The context feature information is generated for each communication node 20.
- Higher level features
For example, higher level features are derived by analyzing one or more basic features. These higher level features provide a more human-understandable context. For example, the basic feature “age” can be analyzed with respect to different thresholds to derive the higher level feature “seniority (child, young, elderly)” of the driver or passenger inside the car. Another example could be analyzing multiple basic features such as “location”, “time” and “day of the week” collected at different times to derive the higher level feature “going to home” or “going to office” (a rule-based sketch of such a derivation is given after the list below). Specifically, for example, higher level features include one or more of the following features (actions or events related to a connected car):
- at work
- at home
- parking
- driving inside city
- on highway
- going to office
- going to home
- with family
- at the signal
- at the crossroad
- facing turn
- traffic condition etc.
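The following is a minimal rule-based sketch of such a derivation; the threshold values, the home/office coordinates and the rules themselves are assumptions made for illustration.

```python
def seniority(age):
    """Threshold rule on the basic feature 'age' (thresholds are assumptions)."""
    if age < 18:
        return "child"
    if age < 60:
        return "young"
    return "elderly"

def trip_purpose(location, time_hour, day_of_week,
                 home=(35.70, 139.80), office=(35.66, 139.74)):
    """Combine 'location', 'time' and 'day of the week' into a higher level
    feature such as 'going to office' / 'going to home' (rules are assumptions)."""
    def closer_to(a, b):
        # True if the current location is closer to point a than to point b.
        return (abs(location[0] - a[0]) + abs(location[1] - a[1])
                < abs(location[0] - b[0]) + abs(location[1] - b[1]))
    if day_of_week in ("Saturday", "Sunday"):
        return "leisure"
    if 6 <= time_hour <= 10 and closer_to(office, home):
        return "going to office"
    if 16 <= time_hour <= 20 and closer_to(home, office):
        return "going to home"
    return "other"

print(seniority(42), "/", trip_purpose((35.665, 139.745), 8, "Tuesday"))
```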
- Session features
The session features capture the sequential relationship among launch events of different applications (or their states). The session features are features correlated to actions or events (higher level features) that took place within a time period, and they cover the hidden relationships among application actions. For example, a recurrent preceding or following action to a particular application launch suggests a strong relationship in launch events. However, the preceding or following action is not necessarily the immediately preceding or immediately following action; it can happen some time back in the history (preceding) and might happen some time ahead in the future (following), with other events taking place in between. Thus, the duration of the time period may be obtained by sampling from an empirically defined Gaussian distribution. For example, in a connected car, car condition diagnosis, finding the nearest repair shop and payment settlement consist of application actions related to each other spread over a duration of time. As another example, while using a social networking website such as Facebook, users tend to open Twitter, Instagram and YouTube in the same session.
Specifically, for example, the session features include one or more of the following features correlated to a certain action or event (a certain higher level feature):
- last application
- last location
- last charge time
- last audio
- last certain context trigger
For example, the session features include these features (i.e. a last application, a last location, a last charge time, last audio and/or last certain context trigger) after parking and/or before parking, these features after the crossroad and/or before the crossroad, and these features after driving inside city and/or before driving inside city etc.
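The sketch below extracts one such session feature (the last application launched before a parking event) from an event log; the log format, the Gaussian window parameters and the function name are assumptions of this sketch.

```python
import random

def last_app_before(event_log, anchor_event, mean_window_s=600.0, std_window_s=120.0):
    """For each occurrence of 'anchor_event' (e.g. 'parking'), look back over a
    session window sampled from a Gaussian distribution and return the last
    application launched inside that window."""
    features = []
    for i, (t, kind, name) in enumerate(event_log):
        if kind == "event" and name == anchor_event:
            window = max(0.0, random.gauss(mean_window_s, std_window_s))
            launches = [n for (t2, k2, n) in event_log[:i]
                        if k2 == "app_launch" and t - t2 <= window]
            features.append(launches[-1] if launches else None)
    return features

# Event log entries: (timestamp in seconds, kind, name). Values are illustrative.
log = [
    (0.0,   "app_launch", "navigation"),
    (300.0, "app_launch", "parking_payment"),
    (450.0, "event",      "parking"),
]
random.seed(0)
print(last_app_before(log, "parking"))  # e.g. ['parking_payment']
```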
- Generation of Context Feature Information
The information processing apparatus 100 generates the context feature information by analyzing the basic features. To derive these context features, a predefined set of rules (such as thresholds, mathematical functions, binary logic, existing machine learning feature transformation techniques etc.) is applied to the basic feature values. The context feature information acts as the input data source for training the application prediction model. The context feature information can be represented in CSV file format, where each row may represent the feature values for each connected car 10 at a particular time, and the columns in each row may consist of values for different features including basic features, higher level features and session features. The context feature information is stored in the local storage unit 130.
- Use of Context Feature Information
The context feature information is used for training the application prediction model.
(5-2) Application Prediction Model
For example, the application behavior information includes an application prediction model. An application prediction model is a machine learning model trained using the context feature information stored in the local storage unit 130. The prediction unit 159 predicts, in near real time and with launch probabilities, the applications that will be launched next for each connected car 10 by analyzing the present values of the context feature information with the application prediction model.
- Generation of Application Prediction Model
The application prediction model is generated in two stages: (1) a general prediction model is trained with the context feature information stored in the storage unit 120, which includes historical context feature information of all the connected cars from all the edge gateways. However, this general model does not effectively capture the varying spatio-temporal behavior of the applications running at different edge gateways and in each connected car; thus, in the next stage, (2) parameter estimation is done for each edge gateway using the context feature information stored in the local storage unit 130.
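A sketch of this two-stage procedure is given below, assuming scikit-learn (1.1 or later) is available; the choice of a linear classifier trained with partial_fit and the synthetic data are assumptions of the sketch, not a prescribed model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Stage 1: train a general application prediction model on context feature
# records collected from all edge gateways (storage unit 120). The feature
# matrix and labels below are synthetic placeholders: columns would be encoded
# basic/higher level/session features, labels the applications launched next.
rng = np.random.default_rng(0)
X_global = rng.normal(size=(1000, 8))
y_global = rng.integers(0, 3, size=1000)          # three application classes

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_global, y_global, classes=np.array([0, 1, 2]))

# Stage 2: per-gateway parameter estimation. Starting from the general model,
# continue training on one edge gateway's records (local storage unit 130) so
# the model reflects that gateway's spatio-temporal application behavior.
X_gateway = rng.normal(loc=0.5, size=(200, 8))
y_gateway = rng.integers(0, 3, size=200)
for _ in range(5):                                # a few local passes
    model.partial_fit(X_gateway, y_gateway)

# The adapted model outputs launch probabilities for a new context feature row.
print(model.predict_proba(X_gateway[:1]))
```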
- Use of Application Prediction Model
The application prediction model is used for prediction of one or more applications to be working for a connected car 10 and/or prediction of a state of each of the one or more applications.
(5-3) State Transition Information
For example, the application behavior information includes state transition information indicating transitions among states of an application for connected cars. For example, the state transition information is generated for each application. The state transition information may vary by location and/or time (e.g. time of day etc.)
Specifically, for example, the state transition information indicates a graph including vertices representing the states and edges representing the transitions among the states with transition probabilities. As one example, the state transition information is a Markov chain model.
For example, the state transition information indicates a graph described in FIG. 8. Specifically, the state transition information indicates a graph including four vertices representing states 1-4 and edges among the vertices. Each of the edges represents a transition between two of the four application states with a transition probability. That is, the order of running states is not static, and a transition from one state to another state takes place with a certain transition probability. Such a model can be represented by a Markov chain model.
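As an illustration, the state transition information of FIG. 8 could be held as a transition probability matrix, as in the following sketch; the probability values are placeholders.

```python
import numpy as np

# Transition probabilities among the application states 1-4 of FIG. 8
# (rows: current state, columns: next state). The values are placeholders.
STATES = [1, 2, 3, 4]
P = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.1, 0.5, 0.3, 0.1],
    [0.0, 0.2, 0.5, 0.3],
    [0.0, 0.1, 0.3, 0.6],
])

def next_state_distribution(current_state, steps=1):
    """Distribution over application states after 'steps' transitions,
    starting from 'current_state' (Markov chain prediction)."""
    dist = np.zeros(len(STATES))
    dist[STATES.index(current_state)] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

# Probability of each application state two transitions after observing state 2.
print(dict(zip(STATES, next_state_distribution(2, steps=2).round(3))))
```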
- Generation of State Transition Information
For example, the information processing apparatus 100 (generation unit 155) generates the state transition information (e.g. a Markov chain model) based on the records (e.g. records of application state transitions) stored in the local storage unit 130.
- Use of State Transition Information
The state transition information is used for prediction of a state of each of one or more applications to be working for a connected car 10.
(5-4) Network Strength Information
For example, the application behavior information includes network strength information indicating network strength in locations within a coverage area 21 of the communication node 20.
As one example, the network strength may be network availability (e.g. data rate) that can be supported in the locations.
- Generation of Network Strength Information
For example, the information processing apparatus 100 (generation unit 155) generates the network strength information based on the records (e.g. records of network strength in locations within a coverage area 21) stored in the local storage unit 130.
- Use of Network Strength Information
The network strength information is used for estimation of offloading of tasks of the one or more applications (to be working for a connected car 10) from a connected car 10.
(5-5) Offloading Logic
For example, the application behavior information includes offloading logic to decide tasks to be offloaded from a connected car 10 to a communication node 20. Specifically, for example, the offloading logic includes an offloading gain function and constraints.
The constraints include, for example, limited computing resources in a connected car, limited computing resources in a communication node, and/or limited network resources.
The offloading gain may represent improvement of application performance, improvement in response time, reduction of network usage, saving of energy, and/or ensuring of data locality, etc., which may be achieved by offloading tasks of applications. That is, the offloading gain represents the advantages of offloading. As one example, the application performance is latency: executing a task inside the car may result in a latency of 150-200 ms, while offloading that task to an edge gateway may give a latency of 50-100 ms. The reduction in latency is the offloading gain for that task; in other words, the offloading gain here is the improvement in latency (the application performance). Needless to say, the application performance and the offloading gain are not limited to the above example. For example, the application performance may be throughput, response time, energy saving, reduced resource cost, etc.
For example, the offloading gain function predicts the application performance under the constraints. Alternatively, the offloading gain function may predict change of the application performance with a change of conditions (e.g. allocated resources), under the constraints.
The offloading gain function is predefined. For example, it is a performance model expressed as a mathematical formula or a machine learning model. The exact logic and type of the offloading gain function are implementation specific; this example embodiment is not restricted to any specific technique for the offloading gain function. Various performance modelling techniques, such as queuing theory, fuzzy logic, regression, and artificial neural networks, may be used for the offloading gain function.
For example, when values of allocated resources are inserted into the offloading gain function, the offloading gain function outputs the performance to be realized for the application. For example, the values of allocated resources include values of computation resources (e.g. values of computing capacity at a communication node 20 and a connected car 10) and values of network resources (e.g. values of an amount of available network resources). In addition, the values of allocated resources may include a value of memory resources and/or a value of storage resources.
For example, tasks of an application are offloaded only if the offloading gain for the application exceeds a predetermined gain.
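A minimal sketch of this decision rule follows, assuming latency as the application performance metric (as in the 150-200 ms versus 50-100 ms example above); the task fields, constraint checks, and the 50 ms threshold are illustrative assumptions, not values from this embodiment:

```python
# Minimal sketch: offloading gain as latency improvement, checked against
# resource constraints and a predetermined gain threshold.

def offloading_gain(latency_in_car_ms, latency_offloaded_ms):
    """Gain expressed as latency improvement (positive means offloading helps)."""
    return latency_in_car_ms - latency_offloaded_ms

def should_offload(task, node_cpu_free, net_capacity_mbps, min_gain_ms=50.0):
    """task: dict with hypothetical keys 'cpu_demand', 'net_demand_mbps',
    'latency_in_car_ms' and 'latency_offloaded_ms'."""
    # Constraints: enough computing resources at the node and enough network capacity.
    if task["cpu_demand"] > node_cpu_free or task["net_demand_mbps"] > net_capacity_mbps:
        return False
    gain = offloading_gain(task["latency_in_car_ms"], task["latency_offloaded_ms"])
    return gain > min_gain_ms

task_a = {"cpu_demand": 2, "net_demand_mbps": 10,
          "latency_in_car_ms": 180.0, "latency_offloaded_ms": 80.0}
print(should_offload(task_a, node_cpu_free=8, net_capacity_mbps=100))  # True
```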
- Use of Offloading Logic
The offloading logic is used for estimation of offloading of tasks of the one or more applications (to be working for a connected car 10) from a connected car 10.
(6) Example of Processing of Resource Demand Prediction
As described above, the information processing apparatus 100 (the prediction unit 159) performs the prediction of the resource demand (for the communication node 20) based on the context feature information and the application behavior information.
FIG. 9 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the first example embodiment. As described above, the prediction of the resource demand includes steps of “application and state prediction” (1st step: S301), “offloading estimation” (2nd step: S303), and “aggregation” (3rd step: S305).
The 1st and 2nd steps are performed for each connected car 10 to be connected to the communication node 20. On the other hand, the 3rd step is performed for the communication node 20.
(6-1) 1st step (S301): Application and State Prediction
In the 1st step (S301), the information processing apparatus 100 (the prediction unit 159) predicts one or more applications to be working for a connected car 10 (to be connected to the communication node 20) and a state of each of the one or more applications based on real-time (or current) context feature information (e.g. basic features including the application state information etc.) and the application behavior information (i.e. the application prediction model and the state transition information).
Specifically, for example, the application prediction model takes the real-time context feature information as an input and generates information on applications that will be running in the next time slot as an output (i.e. predicts the applications). For those predicted applications, the state transition information is fetched and used (with the current application state information of running applications among those predicted applications), and the next states of those applications are predicted.
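The 1st step could be sketched as follows; the application prediction model interface, the "initial" state assumed for newly launched applications, and all names are hypothetical and for illustration only:

```python
# Minimal sketch of the 1st step: predict the applications for the next time
# slot, then predict the next state of each predicted application.

def predict_applications_and_states(app_prediction_model, state_models,
                                    context_features, current_states):
    """app_prediction_model: callable taking real-time context features and
    returning the applications expected to run in the next time slot.
    state_models: {app_name: {state: {next_state: probability}}}.
    current_states: {app_name: current_state} for currently running apps."""
    predicted_apps = app_prediction_model(context_features)
    predicted = {}
    for app in predicted_apps:
        model = state_models.get(app, {})
        current = current_states.get(app)
        if current is not None and current in model:
            # Running application: predict its next state from the Markov chain.
            predicted[app] = max(model[current], key=model[current].get)
        else:
            # Newly launched application: assume a hypothetical initial state.
            predicted[app] = "initial"
    return predicted

dummy_model = lambda features: ["navigation", "ads"]
state_models = {"navigation": {"guiding": {"rerouting": 0.3, "guiding": 0.7}}}
print(predict_applications_and_states(dummy_model, state_models,
                                      context_features={},
                                      current_states={"navigation": "guiding"}))
# {'navigation': 'guiding', 'ads': 'initial'}
```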
(6-2) 2nd step (S303): Offloading Estimation
In the 2nd step (S303), the information processing apparatus 100 (the prediction unit 159) estimates offloading of tasks of the one or more applications from a connected car 10 (to be connected to the communication node 20) onto the communication node 20, based on the result of the application and state prediction and the application behavior information (i.e. the network strength information and Offloading Logic).
Specifically, for example, the information processing apparatus 100 (the prediction unit 159) discovers resource availability in the estimated location (i.e. the constraints) based on the network strength information for the estimated location and resource utilization information for the estimated location. Then, the information processing apparatus 100 (the prediction unit 159) uses the offloading gain function and the resource availability to calculate the application performance and the offloading gain (i.e. improvement of the application performance). For example, application partition configurations (i.e. application task offloading patterns) are predefined. As one example, among 4 tasks (A, B, C, D), task A is offloaded in the first application partition configuration, task B is offloaded in the second application partition configuration, and tasks A and B are offloaded in the third application partition configuration. The offloading gain is calculated for each of the application partition configurations. Finally, the information processing apparatus 100 selects the application partition configuration with the best offloading gain. Alternatively, the information processing apparatus 100 may select one or more application partition configurations with positive gains.
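The selection over predefined partition configurations could look like the following sketch; the configuration format and the toy gain function are assumptions (the patent leaves the exact offloading gain function open):

```python
# Minimal sketch of the 2nd step: evaluate predefined application partition
# configurations and select the one with the best (positive) offloading gain.

def estimate_offloading(configurations, gain_fn, resource_availability):
    """configurations: list of dicts with a hypothetical 'offloaded_tasks' entry.
    gain_fn(config, availability) -> offloading gain (higher is better)."""
    best_config, best_gain = None, float("-inf")
    for config in configurations:
        gain = gain_fn(config, resource_availability)
        if gain > best_gain:
            best_config, best_gain = config, gain
    # Offload only if the best configuration actually yields a positive gain.
    return (best_config, best_gain) if best_gain > 0 else (None, 0.0)

configs = [{"offloaded_tasks": ["A"]}, {"offloaded_tasks": ["B"]},
           {"offloaded_tasks": ["A", "B"]}]
toy_gain = lambda cfg, availability: 40.0 * len(cfg["offloaded_tasks"]) - 30.0
print(estimate_offloading(configs, toy_gain, resource_availability=None))
# ({'offloaded_tasks': ['A', 'B']}, 50.0)
```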
(6-3) 3rd step (S305): Aggregation
As described above, the 1st step (S301) and the 2nd step (S303) are performed for each of one or more connected cars to be connected to the communication node 20. As a result, the tasks to be offloaded from a connected car 10 to the communication node 20 are estimated for each connected car 10. In addition, an amount of resources to be used for the offloading (i.e. a resource demand) is estimated for each connected car 10.
In the 3rd step (S305), the information processing apparatus 100 (the prediction unit 159) aggregates the resource demands for the one or more connected cars 10 to be connected to the communication node 20. That is, the information processing apparatus 100 (the prediction unit 159) calculates a resource demand for the communication node 20 by summing the resource demands for the one or more connected cars 10. As a result, the resource demand for the communication node 20 is predicted. The resource demand includes a computation resource demand and a network resource demand.
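The aggregation itself is a simple sum over the per-car demands; the field names in the following sketch are hypothetical:

```python
# Minimal sketch of the 3rd step: sum per-car computation and network resource
# demands into a single resource demand for the communication node.

def aggregate_resource_demand(per_car_demands):
    """per_car_demands: iterable of dicts like
    {'compute_cores': float, 'network_mbps': float}."""
    total = {"compute_cores": 0.0, "network_mbps": 0.0}
    for demand in per_car_demands:
        total["compute_cores"] += demand.get("compute_cores", 0.0)
        total["network_mbps"] += demand.get("network_mbps", 0.0)
    return total

print(aggregate_resource_demand([
    {"compute_cores": 1.5, "network_mbps": 20.0},
    {"compute_cores": 0.5, "network_mbps": 5.0},
]))  # {'compute_cores': 2.0, 'network_mbps': 25.0}
```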
(7) Effects
For example, behaviors of connected car applications vary with time and place. Specifically, trigger/launch patterns and state transition patterns of connected car applications vary highly with time and place. For example, an application that understands demographics of pedestrians to show customized ads will have different resource demands at crowded places than at empty places, and these demands will also vary with time and day of the week (a first example). Many connected car applications need to communicate with surrounding vehicles and may have different requirements at places such as traffic signals, parking places, gas stations, etc. (a second example). Tourist places can trigger more information consumption inside vehicles (a third example). Hazardous/dangerous places can trigger safety-related applications (a fourth example). Intercity routes can have more entertainment information consumption, such as long movie streaming (a fifth example). Accident-prone areas/difficult routes need support from driver-assistance-related applications (a sixth example). Carpooling applications can consume more data at certain places, such as residential areas on routes toward markets, offices, etc. (a seventh example).
For example, the application behavior information can cover these patterns. Therefore, the prediction of the resource demand (for the communication node 20) based on the context feature information (including the application state information) and the application behavior information enables more precise prediction of the resource demand, which may result in more proper resource management related to connected cars.
(8) Decision of Offloading
For example, the predicted resource demand is used to decide offloading of tasks of one or more applications to be working for the connected car 10.
- Control apparatus 200
For example, after the prediction of the resource demand, the information processing apparatus 100 (the communication processing unit 151) transmits information on the predicted resource demand to the control apparatus 200. In addition, the information processing apparatus 100 (the communication processing unit 151) transmits information on the estimated offloading on which the predicted resource demand is based. The control apparatus 200 (the communication processing unit 231) receives the information from the information processing apparatus 100.
For example, the control apparatus 200 (the information acquisition unit 233) acquires information on the predicted resource demand. Then, the control apparatus 200 (the decision unit 235) decides the offloading of tasks based on the predicted resource demand.
- Offloading
For example, the offloading includes offloading from the connected car 10 onto the communication node 20.
In addition, the offloading may further include offloading between the connected car 10 and another connected car.
- Concrete Processing
FIG. 10 is a flow chart illustrating an example of a schematic flow of offloading decision according to the first example embodiment.
In the 1st step (S311), the control apparatus 200 (the decision unit 235) performs checks for a resource bottleneck and/or performance degradation of applications, etc., based on the predicted resource demand and the available resources in the connected cars 10 and the communication node 20.
In the 2nd step (S313), for example, the control apparatus 200 (the decision unit 235) optimizes and decides the offloading based on the above checks. Specifically, for example, if the checks reveal any resource bottleneck and/or performance degradation, the control apparatus 200 (the decision unit 235) optimizes the offloading, e.g. in one or more of the following ways (a minimal decision sketch follows the list):
- Reserving resources in the communication node 20 for critical applications.
- Using resources of other connected cars in the vicinity with surplus resources to offload tasks. This car to car offloading can be executed via the communication node 20 or by directly establishing connection among the connected cars.
- Offloading for applications recognized with critical contexts.
- Running a low-quality version of the application, running limited features of the algorithm, reducing redundant tasks, and/or switching to other modes of the application provided by the developer.
- Queueing tasks in the buffer for later processing.
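As a minimal decision sketch (thresholds, field names, and the priority order over the mitigation options above are illustrative assumptions, not the embodiment's actual logic):

```python
# Minimal sketch: check the predicted demand against available resources and
# pick one of the mitigation options listed above when a bottleneck is found.

def decide_offloading(predicted_demand, available):
    """predicted_demand / available: dicts with hypothetical keys
    'compute_cores' and 'network_mbps'."""
    bottleneck = (predicted_demand["compute_cores"] > available["compute_cores"]
                  or predicted_demand["network_mbps"] > available["network_mbps"])
    if not bottleneck:
        return "offload_as_estimated"
    # Simple priority order over the mitigation options described above.
    if available["compute_cores"] > 0:
        return "reserve_node_resources_for_critical_apps"
    return "use_nearby_cars_or_queue_tasks"

print(decide_offloading({"compute_cores": 6.0, "network_mbps": 80.0},
                        {"compute_cores": 4.0, "network_mbps": 100.0}))
# 'reserve_node_resources_for_critical_apps'
```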
(9) Alternatives
As described above, for example, the control apparatus 200 (the decision unit 235) decides the offloading of tasks based on the predicted resource demand. However, the first example embodiment is not limited to this example. Alternatively, for example, instead of the control apparatus 200, the information processing apparatus 100 may decide the offloading of tasks based on the predicted resource demand. In this case, the decision unit 235 may be included in the information processing apparatus 100 (processing unit 150). In addition, the information processing apparatus 100 (communication processing unit 151) may transmit information on the decision to the control apparatus 200.
As described above, for example, the information processing apparatus 100 (the generation unit 155) generates (at least part of) the application behavior information. However, the first example embodiment is not limited to this example. Alternatively, for example, instead of the information processing apparatus 100, another apparatus may generate (at least part of) the application behavior information. Furthermore, the local storage unit 130 and/or the profile storage unit 140 may be included in another apparatus instead of the information processing apparatus 100.
<<2. Second Example Embodiment>>
Next, with reference to FIGs. 11 to 13, a second example embodiment of the present invention will be described. The second example embodiment is a generalized example embodiment, while the above-described first example embodiment is a more concrete example embodiment.
<2.1. Configuration of Information Processing Apparatus >
With reference to FIG. 11 and FIG. 12, an example of a configuration of an information processing apparatus 400 according to the second example embodiment is described. FIG. 11 is a block diagram illustrating an example of a schematic logical configuration of the information processing apparatus 400 according to the second example embodiment. According to FIG. 11, the information processing apparatus 400 includes an information acquisition unit 410 and a prediction unit 420.
Specific operations of the information acquisition unit 410 and the prediction unit 420 will be described in detail later.
FIG. 12 is a block diagram illustrating an example of a schematic hardware configuration of the information processing apparatus 400 according to the second example embodiment. According to FIG. 12, the information processing apparatus 400 includes a CPU 481, a RAM 483, an internal memory storage 485, a HDD 487, a communication interface 489 and a bus 491. For example, the CPU 481, the RAM 483, the internal memory storage 485, the HDD 487 and the communication interface 489 are connected with each other via the bus 491. The information acquisition unit 410 and the prediction unit 420 may be implemented by the CPU 481 and the RAM 483, etc.
Needless to say, the hardware configuration of the information processing apparatus 400 is not limited to the example of FIG. 12. The information processing apparatus 400 may be implemented by another hardware configuration.
In addition, the information processing apparatus 400 may be virtualized. That is, the information processing apparatus 400 may be implemented as a virtual machine. In this case, the information processing apparatus 400 may operate as a virtual machine on a hypervisor and a physical machine (hardware) including a processor and a memory etc.
For example, more generally, the information processing apparatus 400 includes: a memory (e.g. the RAM 483 and/or the internal memory storage 485) storing instructions (a program), and one or more processors (e.g. CPU 481) configured to execute the instructions to execute processing of the information acquisition unit 410 and the prediction unit 420.
<2.2. Technical Features>
Next, with reference to FIG. 13, technical features of the second example embodiment are described.
The information processing apparatus 400 (the information acquisition unit 410) acquires application state information indicating a state of an application working for a connected car. The information processing apparatus 400 (the prediction unit 420) performs prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
For example, the description of the application state information, the application behavior information and the prediction of the resource demand is the same as in the first example embodiment. However, needless to say, the application state information, the application behavior information and the prediction of the resource demand are not limited to the example of the first example embodiment.
FIG. 13 is a flow chart illustrating an example of a schematic flow of resource demand prediction according to the second example embodiment. In step 410, the information processing apparatus 400 (the information acquisition unit 410) acquires application state information indicating a state of an application working for a connected car. In step 420, the information processing apparatus 400 (the prediction unit 420) performs prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
The embodiments of the present invention have been described above. The present invention is not limited to the above-described embodiments and is possible to be implemented by making various changes within the scope of the gist of the present invention. The above-described embodiments are examples, and it should be understood by those skilled in the art that various modified examples can be made to combinations of the embodiments and combinations of constituent components and processing processes of the embodiments and that such modified examples are also within the scope of the present invention.
For example, the steps in any processing described herein need not be performed chronologically in the order illustrated in the corresponding flow chart. For example, the steps of the processing may be performed in a different order from the order illustrated as the corresponding flow chart or may be performed in parallel.
Moreover, a module (for an information processing apparatus or a control apparatus) including constituent elements of the information processing apparatus or the control apparatus described herein (e.g. the communication processing unit, the information acquisition unit, the generation unit, the location estimation unit, the prediction unit and/or the decision unit) may be provided. The module may be an integrated circuit (IC) chip. Moreover, methods including processing of such constituent elements may be provided, and programs for causing processors to execute processing of such constituent elements may be provided. Furthermore, a non-transitory computer readable recording medium storing the programs may be provided. It is apparent that such modules, methods, programs, and recording media are also included in the present invention.
Furthermore, an information processing apparatus or a control apparatus of the present invention is not limited to a complete product, and may be a module of a complete product. The module may be an IC chip.
Some of or all the above-described embodiments can be described as in the following Supplementary Notes, but are not limited to the following.
(Supplementary Note 1)
An information processing apparatus comprising:
an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and
a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
(Supplementary Note 2)
The information processing apparatus according to Supplementary Note 1, wherein
the resource demand is a resource demand for a communication node, and
the connected car is a connected car to be connected to the communication node.
(Supplementary Note 3)
The information processing apparatus according to Supplementary Note 2, wherein the communication node is a communication node to be wirelessly and directly connected with the connected car.
(Supplementary Note 4)
The information processing apparatus according to Supplementary Note 2 or 3, wherein the connected car is a connected car estimated to be connected to the communication node by location estimation of the connected car.
(Supplementary Note 5)
The information processing apparatus according to any one of Supplementary Notes 1 to 4, wherein
the information acquisition unit is configured to acquire the application state information for each of one or more connected cars to be connected to the communication node, and
the prediction unit is configured to perform the prediction of the resource demand for communication node based on the application state information for each of the one or more connected cars and the application behavior information.
(Supplementary Note 6)
The information processing apparatus according to any one of Supplementary Notes 1 to 5, wherein the application state information indicates states of two or more applications working for the connected car.
(Supplementary Note 7)
The information processing apparatus according to any one of Supplementary Notes 1 to 6, wherein
the information acquisition unit is configured to acquire context feature information for the connected car,
the context feature information includes the application state information and other information related to the connected car or the application working for the connected car.
(Supplementary Note 8)
The information processing apparatus according to any one of Supplementary Notes 1 to 7, wherein the prediction of the resource demand includes prediction of one or more applications to be working for the connected car.
(Supplementary Note 9)
The information processing apparatus according to any one of Supplementary Notes 1 to 8, wherein the prediction of the resource demand includes prediction of a state of each of one or more applications to be working for the connected car.
(Supplementary Note 10)
The information processing apparatus according to Supplementary Note 8 or 9, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the prediction of the resource demand includes estimation of offloading of tasks of the one or more applications from the connected car to the communication node.
(Supplementary Note 11)
The information processing apparatus according to any one of Supplementary Notes 1 to 10, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the predicted resource demand is information used to decide offloading of tasks of one or more applications to be working for the connected car, the offloading including offloading from the connected car onto the communication node.
(Supplementary Note 12)
The information processing apparatus according to any one of Supplementary Notes 1 to 10, wherein the offloading further includes offloading between the connected car and another connected car.
(Supplementary Note 13)
The information processing apparatus according to any one of Supplementary Notes 1 to 12, wherein the application behavior information includes information per location.
(Supplementary Note 14)
The information processing apparatus according to any one of Supplementary Notes 1 to 13, wherein the application behavior information includes state transition information indicating transitions among states of an application for connected cars.
(Supplementary Note 15)
The information processing apparatus according to Supplementary Note 14, wherein the state transition information indicates a graph including vertices representing the states and edges representing the transitions among the states with transition probabilities.
(Supplementary Note 16)
The information processing apparatus according to any one of Supplementary Notes 1 to 15, wherein the application behavior information includes a performance model which calculates performance to be realized for an application based on resources for the application.
(Supplementary Note 17)
The information processing apparatus according to Supplementary Note 16, wherein the performance model is a mathematical formula or a machine learning model.
(Supplementary Note 18)
The information processing apparatus according to any one of Supplementary Notes 1 to 17, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the application behavior information includes network strength information indicating network strength in locations within a coverage area of the communication node.
(Supplementary Note 19)
A method comprising:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
(Supplementary Note 20)
A program that causes a processor to execute:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
(Supplementary Note 21)
A non-transitory computer readable recording medium storing a program that causes a processor to execute:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
(Supplementary Note 22)
An information processing apparatus comprising:
a memory storing instructions; and
one or more processors configured to execute the instructions to:
acquire application state information indicating a state of an application working for a connected car; and
perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
In a connected car system, it is possible to perform resource management related to connected cars more properly.
1 System
10 Connected Car
20 Communication Node
21 Coverage Area
30 Network
100, 400 Information Processing Apparatus
153, 410 Information Acquisition Unit
159, 420 Prediction Unit
200 Control Apparatus
231 Information Acquisition Unit
235 Decision Unit
Claims (22)
- An information processing apparatus comprising:
an information acquisition unit configured to acquire application state information indicating a state of an application working for a connected car; and
a prediction unit configured to perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- The information processing apparatus according to claim 1, wherein
the resource demand is a resource demand for a communication node, and
the connected car is a connected car to be connected to the communication node.
- The information processing apparatus according to claim 2, wherein the communication node is a communication node to be wirelessly and directly connected with the connected car.
- The information processing apparatus according to claim 2 or 3, wherein the connected car is a connected car estimated to be connected to the communication node by location estimation of the connected car.
- The information processing apparatus according to any one of claims 1 to 4, wherein
the information acquisition unit is configured to acquire the application state information for each of one or more connected cars to be connected to the communication node, and
the prediction unit is configured to perform the prediction of the resource demand for communication node based on the application state information for each of the one or more connected cars and the application behavior information.
- The information processing apparatus according to any one of claims 1 to 5, wherein the application state information indicates states of two or more applications working for the connected car.
- The information processing apparatus according to any one of claims 1 to 6, wherein
the information acquisition unit is configured to acquire context feature information for the connected car,
the context feature information includes the application state information and other information related to the connected car or the application working for the connected car.
- The information processing apparatus according to any one of claims 1 to 7, wherein the prediction of the resource demand includes prediction of one or more applications to be working for the connected car.
- The information processing apparatus according to any one of claims 1 to 8, wherein the prediction of the resource demand includes prediction of a state of each of one or more applications to be working for the connected car.
- The information processing apparatus according to claim 8 or 9, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the prediction of the resource demand includes estimation of offloading of tasks of the one or more applications from the connected car to the communication node.
- The information processing apparatus according to any one of claims 1 to 10, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the predicted resource demand is information used to decide offloading of tasks of one or more applications to be working for the connected car, the offloading including offloading from the connected car onto the communication node.
- The information processing apparatus according to any one of claims 1 to 10, wherein the offloading further includes offloading between the connected car and another connected car.
- The information processing apparatus according to any one of claims 1 to 12, wherein the application behavior information includes information per location.
- The information processing apparatus according to any one of claims 1 to 13, wherein the application behavior information includes state transition information indicating transitions among states of an application for connected cars.
- The information processing apparatus according to claim 14, wherein the state transition information indicates a graph including vertices representing the states and edges representing the transitions among the states with transition probabilities.
- The information processing apparatus according to any one of claims 1 to 15, wherein the application behavior information includes performance model which calculates performance to be realized for an application based on resources for the application.
- The information processing apparatus according to claim 16, wherein the performance model is a mathematical formula or a machine learning model.
- The information processing apparatus according to any one of claims 1 to 17, wherein
the resource demand is a resource demand for a communication node to be connected with the connected car, and
the application behavior information includes network strength information indicating network strength in locations within a coverage area of the communication node.
- A method comprising:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- A program that causes a processor to execute:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- A non-transitory computer readable recording medium storing a program that causes a processor to execute:
acquiring application state information indicating a state of an application working for a connected car; and
performing prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
- An information processing apparatus comprising:
a memory storing instructions; and
one or more processors configured to execute the instructions to:
acquire application state information indicating a state of an application working for a connected car; and
perform prediction of a resource demand based on the application state information and application behavior information concerning behavior of applications for connected cars.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/004903 WO2019159234A1 (en) | 2018-02-13 | 2018-02-13 | Information processing apparatuses, method, program and non-transitory computer readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/004903 WO2019159234A1 (en) | 2018-02-13 | 2018-02-13 | Information processing apparatuses, method, program and non-transitory computer readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019159234A1 true WO2019159234A1 (en) | 2019-08-22 |
Family
ID=61569310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/004903 WO2019159234A1 (en) | 2018-02-13 | 2018-02-13 | Information processing apparatuses, method, program and non-transitory computer readable recording medium |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019159234A1 (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2187567A1 (en) | 2008-11-18 | 2010-05-19 | Alcatel, Lucent | Predictive method and system for optimizing demands of connectivity services |
EP2320387A1 (en) * | 2009-10-29 | 2011-05-11 | Greenroad Driving Technologies Ltd. | A method and device for evaluating vehicle's fuel consumption efficiency |
US20110164562A1 (en) * | 2010-01-04 | 2011-07-07 | Lili Qiu | Vehicular Content Distribution |
US20130226443A1 (en) * | 2012-02-29 | 2013-08-29 | Inrix, Inc. | Fuel consumption calculations and warnings |
US20130308470A1 (en) * | 2012-05-18 | 2013-11-21 | Comcast Cable Communications, LLC. | Wireless Network Supporting Extended Coverage of Service |
EP2806413A1 (en) * | 2012-01-20 | 2014-11-26 | Toyota Jidosha Kabushiki Kaisha | Vehicle behavior prediction device and vehicle behavior prediction method, and driving assistance device |
US20150036663A1 (en) * | 2013-07-31 | 2015-02-05 | Qualcomm Incorporated | Handover and reselection searching using predictive mobility |
US20150271727A1 (en) * | 2014-03-19 | 2015-09-24 | Eden Rock Communications, Llc | Method & system for path predictive congestion avoidance |
US20150287057A1 (en) | 2014-04-04 | 2015-10-08 | International Business Machines Corporation | Network demand forecasting |
US20150295768A1 (en) * | 2014-04-11 | 2015-10-15 | International Business Machines Corporation | Role and proximity-based management of networks |
US20160301624A1 (en) | 2015-04-10 | 2016-10-13 | International Business Machines Corporation | Predictive computing resource allocation for distributed environments |
WO2017080596A1 (en) | 2015-11-11 | 2017-05-18 | Nokia Solutions And Networks Oy | Mechanism for optimizing communication network setting for moving communication elements |
WO2017200522A1 (en) * | 2016-05-16 | 2017-11-23 | Ford Global Technologies, Llc | Methods and apparatus for on-demand fuel delivery |
EP3267635A1 (en) * | 2015-03-06 | 2018-01-10 | Nec Corporation | Network control device, network control method, and recording medium for program |
-
2018
- 2018-02-13 WO PCT/JP2018/004903 patent/WO2019159234A1/en active Application Filing
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2187567A1 (en) | 2008-11-18 | 2010-05-19 | Alcatel, Lucent | Predictive method and system for optimizing demands of connectivity services |
EP2320387A1 (en) * | 2009-10-29 | 2011-05-11 | Greenroad Driving Technologies Ltd. | A method and device for evaluating vehicle's fuel consumption efficiency |
US20110164562A1 (en) * | 2010-01-04 | 2011-07-07 | Lili Qiu | Vehicular Content Distribution |
US8542636B2 (en) | 2010-01-04 | 2013-09-24 | Lili Qiu | Vehicular content distribution |
EP2806413A1 (en) * | 2012-01-20 | 2014-11-26 | Toyota Jidosha Kabushiki Kaisha | Vehicle behavior prediction device and vehicle behavior prediction method, and driving assistance device |
US20130226443A1 (en) * | 2012-02-29 | 2013-08-29 | Inrix, Inc. | Fuel consumption calculations and warnings |
US20130308470A1 (en) * | 2012-05-18 | 2013-11-21 | Comcast Cable Communications, LLC. | Wireless Network Supporting Extended Coverage of Service |
US20150036663A1 (en) * | 2013-07-31 | 2015-02-05 | Qualcomm Incorporated | Handover and reselection searching using predictive mobility |
US20150271727A1 (en) * | 2014-03-19 | 2015-09-24 | Eden Rock Communications, Llc | Method & system for path predictive congestion avoidance |
US20150287057A1 (en) | 2014-04-04 | 2015-10-08 | International Business Machines Corporation | Network demand forecasting |
US20150295768A1 (en) * | 2014-04-11 | 2015-10-15 | International Business Machines Corporation | Role and proximity-based management of networks |
EP3267635A1 (en) * | 2015-03-06 | 2018-01-10 | Nec Corporation | Network control device, network control method, and recording medium for program |
US20160301624A1 (en) | 2015-04-10 | 2016-10-13 | International Business Machines Corporation | Predictive computing resource allocation for distributed environments |
WO2017080596A1 (en) | 2015-11-11 | 2017-05-18 | Nokia Solutions And Networks Oy | Mechanism for optimizing communication network setting for moving communication elements |
WO2017200522A1 (en) * | 2016-05-16 | 2017-11-23 | Ford Global Technologies, Llc | Methods and apparatus for on-demand fuel delivery |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Waheed et al. | A comprehensive review of computing paradigms, enabling computation offloading and task execution in vehicular networks | |
EP4020935B1 (en) | Transportation operator collaboration system | |
JP6737706B2 (en) | Context-adaptive vehicle control system | |
Wang et al. | Online offloading scheduling and resource allocation algorithms for vehicular edge computing system | |
Sorkhoh et al. | Optimizing information freshness for MEC-enabled cooperative autonomous driving | |
Zhou et al. | Edge-facilitated augmented vision in vehicle-to-everything networks | |
Hazarika et al. | Hybrid machine learning approach for resource allocation of digital twin in UAV-aided internet-of-vehicles networks | |
Zheng et al. | Learning based task offloading in digital twin empowered internet of vehicles | |
Shinde et al. | A markov decision process solution for energy-saving network selection and computation offloading in vehicular networks | |
Bures et al. | Performance modelling of smart cyber-physical systems | |
US20210083937A1 (en) | Distributed systems and extracting configurations for edge servers using driving scenario awareness | |
Yuan et al. | A survey on computation offloading for vehicular edge computing | |
Puligheddu et al. | SEM-O-RAN: Semantic O-RAN slicing for mobile edge offloading of computer vision tasks | |
Chebaane et al. | Time‐Critical Fog Computing for Vehicular Networks | |
Yu et al. | A situation enabled framework for energy-efficient workload offloading in 5G vehicular edge computing | |
Cao et al. | An edge-fog-cloud platform for anticipatory learning process designed for internet of mobile things | |
Wen | Distributed reinforcement learning-based optimization of resource scheduling for telematics | |
CN113807913A (en) | A method, apparatus, device and storage medium for updating an order processing model | |
WO2019159234A1 (en) | Information processing apparatuses, method, program and non-transitory computer readable recording medium | |
Tariq et al. | AI‐Enabled Energy‐Efficient Fog Computing for Internet of Vehicles | |
CN108770014B (en) | Calculation evaluation method, system and device of network server and readable storage medium | |
US20230413155A1 (en) | Systems and methods for dynamic rate and hotspot identification for a user device | |
Zhao et al. | A research of task-offloading algorithm for distributed vehicles | |
Li et al. | DNN Inference Acceleration Based on Adaptive Task Partitioning and Offloading in Embedded VEC | |
Khattak et al. | PAM: Predictive analytics and modules‐based computation offloading framework using greedy heuristics and 5G NR‐V2X |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18709088 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18709088 Country of ref document: EP Kind code of ref document: A1 |