WO2022113169A1 - Communication quality prediction apparatus, communication quality prediction system, communication quality prediction method, and communication quality prediction program - Google Patents


Info

Publication number
WO2022113169A1
WO2022113169A1
Authority
WO
WIPO (PCT)
Prior art keywords
communication
communication terminal
information
communication quality
surrounding
Prior art date
Application number
PCT/JP2020/043666
Other languages
French (fr)
Japanese (ja)
Inventor
Keiko Takahashi
Riichi Kudo
Tomoaki Ogawa
Original Assignee
Nippon Telegraph and Telephone Corporation
Priority date
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to PCT/JP2020/043666
Priority to JP2022564854A (JP7453587B2)
Publication of WO2022113169A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements

Definitions

  • The present invention relates to a communication quality prediction device, a communication quality prediction system, a communication quality prediction method, and a communication quality prediction program.
  • The communication quality may change due to movement of the communication terminal or movement of objects around the communication terminal.
  • In particular, at higher frequencies radio waves propagate more rectilinearly, so the influence of surrounding objects becomes more pronounced.
  • When an autonomous mobile robot equipped with a communication terminal is moved from a predetermined starting point to a target point under wireless communication control, the communication quality is affected by the orientation of the robot and by fixed objects, moving objects, pedestrians, and the like existing around the robot.
  • Communication quality of the communication terminal, such as received signal power, signal-to-noise power ratio, RSSI (Received Signal Strength Indication), and RSRQ (Reference Signal Received Quality), changes depending on these surrounding objects.
  • If the communication quality deteriorates, it may affect control of the robot and make it impossible to move the robot from the starting point to the target point. For this reason, changes in communication quality during wireless communication must be predicted in advance, and, when deterioration is predicted, countermeasures must be taken, such as switching to a communication channel with higher quality or selecting a travel route along which higher quality is expected. It is therefore desirable to predict the communication quality of wireless communication control with high accuracy.
  • Non-Patent Document 1 proposes photographing the surroundings of a communication terminal with a camera and using the images to predict, by machine learning, the deterioration in communication quality that occurs when a pedestrian blocks the wireless communication path.
  • Non-Patent Document 2 proposes recognizing objects in images of the surroundings of a communication terminal and outputting bounding-box information for the recognized objects at high speed. By combining this with the technique of Non-Patent Document 1, it is considered possible to improve the prediction accuracy of communication quality.
  • With Non-Patent Document 1, objects existing around the moving device can be detected, but it is difficult to acquire information on their direction, posture, and position. Likewise, Non-Patent Document 2 can detect surrounding objects with bounding boxes, yet it remains difficult to acquire their direction, posture, and position. Consequently, the influence of objects existing around the communication terminal on communication cannot be measured with high accuracy, and it is difficult to predict the communication quality with high accuracy.
  • The present invention has been made in view of the above circumstances, and its object is to provide a communication quality prediction device, a communication quality prediction system, a communication quality prediction method, and a communication quality prediction program capable of predicting the communication quality of a communication terminal with high accuracy.
  • A communication quality prediction device according to one aspect of the present invention includes: an object information acquisition unit that acquires information on surrounding objects existing around a communication terminal; an accessory classification unit that extracts, from the surrounding objects acquired by the object information acquisition unit, the accessories attached to those objects, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the surrounding object; a database that stores the information of the plurality of classes and the information of the accessories of each class; a communication state measurement unit that acquires communication information of the communication terminal; an estimation model generation unit that generates an estimation model by machine-learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, based on the accessory information associated with the surrounding objects and the communication information acquired by the communication state measurement unit; and a communication state estimation unit that, when the communication terminal performs wireless communication, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
  • A communication quality prediction system according to one aspect of the present invention includes: a marker attached to at least one of a communication terminal and surrounding objects existing around the communication terminal; an object detection sensor, installed in a predetermined area set around the communication terminal, that detects the communication terminal and the surrounding objects; and a communication quality prediction device that estimates the communication quality of the communication terminal based on the positional relationship between the communication terminal and the surrounding objects.
  • The communication quality prediction device includes: an object information acquisition unit that acquires information on the communication terminal and the surrounding objects detected by the object detection sensor; an accessory classification unit that extracts, from the communication terminal and surrounding objects acquired by the object information acquisition unit, the accessories attached to them, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the communication terminal and the surrounding objects; a database that stores the information of the plurality of classes and the information of the accessories of each class; a communication state measurement unit that acquires communication information of the communication terminal; an estimation model generation unit that generates an estimation model by machine-learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, based on the accessory information associated with the communication terminal and the surrounding objects and the communication information acquired by the communication state measurement unit; and a communication state estimation unit that, when the communication terminal performs wireless communication, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
  • A communication quality prediction method according to one aspect of the present invention includes a step of acquiring information on a communication terminal and surrounding objects existing around the communication terminal, and a step of extracting, from the acquired surrounding objects, the accessories attached to those objects.
  • One aspect of the present invention is a communication quality prediction program for causing a computer to operate as the above communication quality prediction device.
  • FIG. 1 is a block diagram showing a configuration of a communication quality prediction device according to the first embodiment.
  • FIG. 2 is an explanatory diagram showing an area where a mobile robot equipped with a communication terminal moves and a position of a camera installed in this area.
  • FIG. 3 is an explanatory diagram showing a bounding box of a mobile robot detected by the communication quality prediction device according to the first embodiment.
  • FIG. 4A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, and shows an example of classifying the surrounding object and its accessories into different classes.
  • FIG. 4B shows an example of classifying a pedestrian, which is a surrounding object, into the pedestrian and its accessory "head" and left and right "hands".
  • FIG. 5A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, and shows an example of classifying the accessories included in the surrounding object into the same class.
  • FIG. 5B shows an example of classifying a truck, which is a surrounding object, into its accessories, namely the driver's seat and the loading platform.
  • FIG. 6A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, and shows an example of classifying an accessory into the same class level as the surrounding object.
  • FIG. 6B shows an example of classifying a pedestrian, which is a surrounding object, into the pedestrian and the mobile phone carried by the pedestrian.
  • FIG. 7 is a flowchart showing a processing procedure of the communication quality prediction device according to the first embodiment.
  • FIG. 8 is a block diagram showing a configuration of the communication quality prediction device according to the second embodiment.
  • FIG. 9A is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D1.
  • FIG. 9B is an explanatory diagram showing an example in which the marker 21b is attached to the surrounding object D2.
  • FIG. 9C is an explanatory diagram showing an example in which the marker 21c is attached to the surrounding object E1.
  • FIG. 10A is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D1.
  • FIG. 10B is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D2.
  • FIG. 10C is an explanatory diagram showing an example in which the marker 21c is attached to the surrounding object E1.
  • FIG. 11A is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D1.
  • FIG. 11B is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D2.
  • FIG. 11C is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object E1.
  • FIG. 12 is an explanatory diagram showing a bounding box of a mobile robot and a marker detected by the communication quality prediction device according to the second embodiment.
  • FIG. 13 is a flowchart showing a processing procedure of the communication quality prediction device according to the second embodiment.
  • FIG. 14 is a graph showing the change in the R² value with respect to the prediction lead time, with and without markers attached.
  • FIG. 15 is a block diagram showing a hardware configuration of the communication quality prediction device.
  • In the first embodiment, an example will be described in which, when a mobile robot (hereinafter abbreviated as "robot") equipped with a communication terminal is moved within a predetermined area under wireless communication control using a communication standard such as 5G, the communication quality prediction device estimates the quality of the wireless communication (hereinafter, "communication quality") between the communication terminal and the base station that transmits the control commands.
  • Wireless communication information refers to: radio wave propagation information of the wireless communication signal; channel information related to the radio wave propagation information; feedback information of the channel information; received signal power; signal-to-noise power ratio; signal-to-interference-plus-noise power ratio; indicators related to QoE such as RSSI (Received Signal Strength Indication), RSRQ (Reference Signal Received Quality), packet error rate, number of received bits, bit error rate, and number of received bits per unit time; differential information of these values; indicators calculated from these values by calculation formulas; and settings of the communication system that affect these indicators.
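Two of the indicators listed above, the signal-to-noise and signal-to-interference-plus-noise power ratios, are plain power ratios expressed in decibels. A minimal sketch follows; the function names and the milliwatt unit are our own, not from the patent:

```python
import math

def snr_db(signal_mw: float, noise_mw: float) -> float:
    """Signal-to-noise power ratio in decibels."""
    return 10.0 * math.log10(signal_mw / noise_mw)

def sinr_db(signal_mw: float, interference_mw: float, noise_mw: float) -> float:
    """Signal-to-interference-plus-noise power ratio in decibels.

    Interference power adds to the noise floor, so SINR is never
    higher than SNR for the same signal and noise powers.
    """
    return 10.0 * math.log10(signal_mw / (interference_mw + noise_mw))
```

For example, a 100 mW signal over 1 mW of noise gives an SNR of 20 dB, and adding 9 mW of interference reduces the SINR to 10 dB.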
  • The communication quality of the robot equipped with the communication terminal changes depending on its position in the area and on the objects existing around it. If the communication quality deteriorates, it may become impossible to move the robot to its destination by wireless communication control.
  • Therefore, the communication quality prediction device estimates, using machine learning, the communication quality when the communication terminal communicates. Based on the estimated communication quality, it selects the channel for wireless communication between the base station and the communication terminal and the travel route of the robot within the area, thereby suppressing deterioration of the communication quality between the base station and the communication terminal.
  • FIG. 1 is a block diagram showing a configuration of a communication quality prediction device 1 and its peripheral devices according to the first embodiment of the present invention.
  • As shown in FIG. 1, the communication quality prediction device 1 includes an object information acquisition unit 11, an accessory classification unit 12, an accessory data creation unit 13, a communication state measurement unit 14, an estimation model generation unit 15, a database 16, and a communication state estimation unit 17.
  • The object information acquisition unit 11 acquires images taken by at least one camera 4 (object detection sensor) installed in a predetermined area.
  • The predetermined area is an area in which the robot 3 equipped with the communication terminal TA can move.
  • The object information acquisition unit 11 acquires images taken by the cameras 4 (4a, 4b) installed at two locations in the area RE.
  • As the camera 4, for example, a depth camera or an RGB camera can be used.
  • Here, the case where the camera 4 is used as the object detection sensor will be described, but another object detection sensor such as a LiDAR or a laser radar may also be used.
  • The object information acquisition unit 11 also acquires, from the image taken by the camera 4, at least one of the position, orientation, size, movement speed, and orientation change speed of each object included in the image.
  • The objects included in the image comprise fixed objects installed in the area RE, moving objects moving within the area RE, and pedestrians and the like (hereinafter, "surrounding objects 2"), as well as the robot 3 equipped with the communication terminal TA.
  • The object information acquisition unit 11 also extracts the image of the robot 3 from the image taken by the camera 4 and sets a bounding box for the robot 3. Specifically, when the robot 3 is detected in the image, a rectangular bounding box B1 is set around the robot 3 as shown in FIG. 3, and the width W and height H of the bounding box B1 are measured. The center coordinates (X, Y) of the bounding box B1 are then set based on the width W and height H. By setting the bounding box B1 and its center coordinates, the accuracy of recognizing the position of the robot 3 can be improved.
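The bounding-box computation described above, measuring the width W and height H of B1 and deriving the center coordinates (X, Y) from them, can be sketched as follows (the type and field names are illustrative assumptions, not the patent's notation):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned bounding box in image coordinates."""
    x_min: float
    y_min: float
    width: float   # W in the text
    height: float  # H in the text

    def center(self) -> tuple:
        """Center coordinates (X, Y) derived from the box origin, W, and H."""
        return (self.x_min + self.width / 2.0,
                self.y_min + self.height / 2.0)
```

For a box at (10, 20) with W = 4 and H = 6, the center is (12, 23).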
  • The accessory classification unit 12 classifies the accessories of the surrounding object 2 detected by the object information acquisition unit 11 into a plurality of classes (classification items).
  • Here, a "class" is an index for classifying the accessories of the surrounding object 2.
  • The classified accessories are associated with one another and defined as one surrounding object 2.
  • The accessory classification unit 12 also classifies the accessories of the robot 3 into a plurality of classes as needed.
  • FIG. 4A shows an example in which the surrounding object 2 is defined as the first classification "class C1", and the accessories of the surrounding object 2 are defined as the second classification "classes C1a and C1b", which are subdivisions of class C1.
  • In FIG. 4B, when a pedestrian is detected as the surrounding object 2, the pedestrian is defined as "class C1".
  • The "head" that is an accessory of the pedestrian is defined as "class C1a", and the left and right "hands" are defined as "class C1b".
  • FIGS. 4A and 4B show an example with a first classification and a subdivided second classification, but it is also possible to further subdivide the second classification and define third and subsequent classifications.
  • FIG. 5A shows an example in which the surrounding object 2 is classified into a plurality of accessories, and each accessory is defined as the second classification "class C1a" or "class C1b".
  • In FIG. 5B, when a truck is detected as the surrounding object 2, the truck itself is not defined as a class; instead, the driver's seat, an accessory of the truck, is defined as "class C1a", and the loading platform as "class C1b".
  • FIG. 6A shows an example in which the surrounding object 2 and the accessory 22 attached to it are each defined as a first classification, "class C1" and "class C2" respectively.
  • In FIG. 6B, when a pedestrian is detected as the surrounding object 2 and the pedestrian carries a mobile phone (accessory 22), the pedestrian and the mobile phone are defined as the first classifications "class C1" and "class C2", respectively.
  • In this way, the accessory classification unit 12 classifies surrounding objects into a plurality of classes using at least one of the above three classification methods. As described later, the class information within one surrounding object 2 is associated together and set as one class group.
  • The accessory data creation unit 13 creates accessory data based on the accessories of each class classified by the accessory classification unit 12. Specifically, according to a setting input from the outside, it creates the classified data using one of the three methods described above and stores it in the database 16.
  • For example, as shown in FIG. 4B, the pedestrian is classified into class C1, the pedestrian's "head" into class C1a, and the left and right "hands" into class C1b, and these are stored in the database 16 as one associated class group.
  • The database 16 stores the data on the surrounding object 2 and the accessories attached to it created by the accessory data creation unit 13, as well as the class group data created by the accessory data creation unit 13.
  • The communication state measurement unit 14 measures the communication quality of wireless communication when a control radio signal is transmitted from a base station (not shown) to the communication terminal TA mounted on the robot 3. Specifically, when the robot 3 equipped with the communication terminal TA is moved under control within the area RE shown in FIG. 2, the unit measures the quality of the wireless communication between the base station that transmits the control radio signal and the communication terminal TA. As described above, the wireless communication information includes RSSI (Received Signal Strength Indication) and the like.
  • The estimation model generation unit 15 acquires from the database 16 the class-group information of the surrounding object 2 and information such as the position, orientation, movement speed, and rotation speed of each accessory classified by the accessory classification unit 12.
  • The estimation model generation unit 15 also acquires the communication quality measured by the communication state measurement unit 14 and, based on the relationship between the position information of the robot 3, the position information of the surrounding object 2 and its accessories, and the communication quality, performs machine learning, for example using a neural network, to generate a communication quality estimation model. The details of the machine learning are omitted.
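Since the patent leaves the model form open (a neural network is given only as an example), the stand-in below fits a simple least-squares line from a robot-to-object distance to a measured RSSI value, just to illustrate the learn-then-predict flow. The function names and the choice of a single distance feature are assumptions:

```python
def fit_rssi_model(distances, rssi_values):
    """Least-squares fit of rssi ~ a * distance + b.

    Illustrative stand-in for the estimation model: in the patent the
    relationship between positions and communication quality is learned,
    e.g. by a neural network; here a one-feature linear fit plays that role.
    """
    n = len(distances)
    mean_x = sum(distances) / n
    mean_y = sum(rssi_values) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(distances, rssi_values))
    den = sum((x - mean_x) ** 2 for x in distances)
    a = num / den
    b = mean_y - a * mean_x
    return a, b

def predict_rssi(model, distance):
    """Predict RSSI (dBm) at a given distance using the fitted model."""
    a, b = model
    return a * distance + b
```

Training corresponds to steps S16 and S17 (measure quality, then learn), and `predict_rssi` corresponds to the later estimation step.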
  • When the base station (not shown) performs movement control of the robot 3 by transmitting a control radio signal to the communication terminal TA, the communication state estimation unit 17 estimates the communication quality of the robot 3 moving in the area RE, based on the position information of the robot 3, the position information of the surrounding object 2 and its accessories, and the communication quality estimation model generated by the estimation model generation unit 15. For example, as shown in FIG. 2, when the robot 3 moves from the starting point P1 to the target point P2, the communication quality of the communication terminal TA at each position of the robot 3 is estimated.
  • The communication state estimation unit 17 also outputs the estimated communication quality information to an external device.
  • First, the object information acquisition unit 11 detects, based on the image taken by the camera 4, the robot 3 moving in the area RE shown in FIG. 2 and the surrounding objects 2 existing around the robot 3.
  • Here, the surrounding objects 2 are fixed objects such as supports, moving bodies such as dollies, pedestrians, and the like existing in the area RE.
  • The object information acquisition unit 11 also sets the bounding box B1 containing the image of the robot 3 and calculates the center coordinates (X, Y) of the bounding box B1.
  • In step S12, the accessory classification unit 12 detects the accessories contained in the surrounding object 2 acquired by the object information acquisition unit 11. For example, as shown in FIG. 4B, when a pedestrian is detected as the surrounding object 2, the pedestrian's "head" and left and right "hands" are detected as accessories.
  • In step S13, the accessory data creation unit 13 accepts the class group setting input by the user. For example, the user sets the method of classifying into "class C1" for the first classification and "classes C1a and C1b" for the subdivided second classification, as shown in FIGS. 4A and 4B.
  • In step S14, the accessory data creation unit 13 classifies the accessories of the surrounding object 2 acquired by the object information acquisition unit 11 into the classes set in step S13. For example, when the method of FIGS. 4A and 4B is set, the pedestrian detected as the surrounding object 2 is classified into class C1, the pedestrian's "head" into class C1a, and the left and right "hands" into class C1b.
  • In step S15, the accessory data creation unit 13 associates the accessories with the classes assigned in step S14, sets them as one class group, and stores the information in the database 16.
  • In step S16, the communication state measurement unit 14 measures the communication quality of wireless communication between the communication terminal TA mounted on the robot 3 and the base station (not shown).
  • In step S17, the estimation model generation unit 15 performs machine learning based on the communication quality measured by the communication state measurement unit 14 and the information on the surrounding object 2 stored in the database 16, and generates an estimation model of the communication state.
  • Since the information on the surrounding object 2 stored in the database 16 decomposes the surrounding object 2 into a plurality of accessories classified into a plurality of classes, the situation of the surrounding object 2 can be recognized in greater detail.
  • In step S18, the communication state estimation unit 17 acquires the information on the surrounding object 2 detected by the object information acquisition unit 11 and the information on each accessory classified by the accessory classification unit 12, and estimates the communication state of the communication terminal TA mounted on the robot 3 based on the estimation model generated by the estimation model generation unit 15.
  • In step S19, the communication state estimation unit 17 outputs the estimated communication quality information to the external device. In this way, the estimation result of the communication quality of the communication terminal TA mounted on the robot 3 can be obtained.
  • When deterioration of the communication quality is predicted, countermeasures can be taken, such as switching the channel used for transmitting and receiving the control signal or changing the movement route of the robot 3. As a result, movement control of the robot can be carried out stably.
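One plausible way to realize the route-change countermeasure is to score each candidate route by its worst predicted quality along the route and pick the best-scoring route. This max-min rule is an assumption of ours; the text says only that a route with higher expected quality is selected:

```python
def select_route(candidate_routes, predict_quality):
    """Choose the route whose worst predicted communication quality is highest.

    candidate_routes: list of routes, each a list of waypoints.
    predict_quality: callable mapping a waypoint to predicted quality
    (e.g. RSSI in dBm from the learned estimation model).
    """
    return max(candidate_routes,
               key=lambda route: min(predict_quality(p) for p in route))
```

With a quality map where one route passes through a deep fade, the rule avoids it: for routes `[(0,0), (1,0)]` (worst point -70 dBm) and `[(0,0), (0,1), (1,1)]` (worst point -50 dBm), the second route is selected.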
  • As described above, the communication quality prediction device 1 of the present embodiment includes: the object information acquisition unit 11, which acquires information on the communication terminal and the surrounding objects existing around it; the accessory classification unit 12, which extracts from the acquired surrounding objects the accessories attached to them, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the surrounding object; the database 16, which stores the information of the plurality of classes and of the accessories of each class; the communication state measurement unit 14, which measures the communication state of the communication terminal; and the estimation model generation unit 15, which machine-learns the relationship between the positional relationship of the communication terminal and the accessories and the communication quality, based on the accessory information associated with the surrounding objects and the measured communication information.
  • With this configuration, a plurality of accessories are extracted from the surrounding objects existing around the communication terminal, and each accessory is further classified into a plurality of classes, so more detailed information on the surrounding object 2 can be obtained.
  • For example, when only a pedestrian's "hand" appears at the edge of the image, a method that detects the surrounding object 2 as a single object cannot recognize the pedestrian.
  • In the present embodiment, the pedestrian's "hand" is set as an accessory, with the pedestrian defined as class C1 and the "hand" as class C1b, so the presence of the pedestrian can be estimated even when the whole pedestrian is not shown in the image.
  • Therefore, the information on the surrounding objects 2 existing around the communication terminal TA can be recognized in more detail, and the communication quality of the communication terminal can be estimated with high accuracy.
  • Further, since the information on the plurality of accessories and classes contained in the same surrounding object is associated and stored in the database 16 as one class group, when one accessory is detected, the surrounding object 2 to which it belongs can be identified easily and accurately, and the estimation accuracy of the estimation model generation unit 15 can be improved.
  • FIG. 8 is a block diagram showing a configuration of a communication quality estimation system 100 including a communication quality prediction device 1a according to a second embodiment.
  • The communication quality prediction device 1a differs from the communication quality prediction device 1 shown in FIG. 1 in that it includes a marker definition unit 18, and in that the surrounding object 2 is provided with a marker 21 and the robot 3 with a marker 31.
  • Components overlapping with FIG. 1 are designated by the same reference numerals, and their description is omitted.
  • The marker 21 (object marker) is attached to the surrounding object 2.
  • The marker 21 is an object having a predetermined shape or a predetermined color, and is classified as an accessory of the surrounding object 2.
  • The marker 21 is attached in advance to a surrounding object 2, such as a fixed object or a moving body existing in the area RE shown in FIG. 2.
  • A marker 31 (communication terminal marker) is attached to the robot 3. Like the marker 21, the marker 31 is an object having a predetermined shape or a predetermined color, and is classified as an accessory of the robot 3.
  • In the second embodiment, recognizing the markers 21 and 31 as accessories further improves the detection accuracy of the surrounding objects 2 and the robot 3.
  • FIGS. 9A to 11C are explanatory views showing examples of markers attached to objects D1 and D2, which are classified into the same class, and to an object E1, which is classified into a subdivided class of the objects D1 and D2.
  • FIGS. 9A to 9C show an example in which different markers 21a, 21b, and 21c are attached to the objects D1, D2, and E1, respectively: the marker 21a is attached to the object D1 (FIG. 9A), the marker 21b to the object D2 (FIG. 9B), and the marker 21c to the object E1 (FIG. 9C).
  • FIGS. 10A to 10C show an example in which the same marker is attached to objects of the same class among the objects D1, D2, and E1, and a different marker to an object of a different class: the marker 21a is attached to the object D1 (FIG. 10A) and likewise to the object D2 (FIG. 10B), while the marker 21c is attached to the object E1 (FIG. 10C).
  • FIGS. 11A to 11C show an example in which the same marker is attached to each of the objects D1, D2, and E1: the marker 21a is attached to the object D1 (FIG. 11A), to the object D2 (FIG. 11B), and to the object E1 (FIG. 11C). Attaching the same marker to all the objects in one class group improves the recognition accuracy of each object; in addition, since the number of marker types is reduced, the load of arithmetic processing can be reduced.
  • Each of the markers 21a, 21b, and 21c shown in FIGS. 9A to 11C is, for example, a pole of the same shape installed on the surrounding object 2, with a different color used for each marker: for example, the marker 21a can be set to red, the marker 21b to blue, and the marker 21c to yellow.
  • A marker 31 (communication terminal marker) is attached to the robot 3.
  • Like the marker 21 described above, the marker 31 is an object having a preset shape or a preset color, and it is attached in advance to the robot 3 moving in the area RE; for example, a columnar pole is installed as the marker 31 on the robot 3 (see FIG. 12).
  • The object information acquisition unit 11 extracts the image of the robot 3 from the image taken by the camera 4 and sets, in addition to the bounding box B1 of the robot 3, the bounding box B2 of the marker 31 attached to it. Specifically, when the robot 3 is detected in the image, the object information acquisition unit 11 sets a rectangular bounding box B1 around the robot 3 as shown in FIG. 12 and measures the width W and height H of the bounding box B1. The object information acquisition unit 11 then sets the center coordinates (X, Y) of the bounding box B1 based on the width W and the height H, and likewise sets the bounding box B2 of the marker 31.
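The center-coordinate computation described above can be sketched as follows, assuming the bounding box is reported by its top-left corner together with the width W and height H (the function and parameter names are illustrative, not part of the embodiment):

```python
def bbox_center(x, y, width, height):
    """Center (X, Y) of an axis-aligned bounding box whose top-left
    corner is (x, y), per the width-W / height-H computation above."""
    return (x + width / 2.0, y + height / 2.0)
```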
  • The accessory classification unit 12 extracts the markers 21 and 31 from the surrounding objects 2 and the robot 3 acquired by the object information acquisition unit 11 and classifies them as accessories.
  • The accessory classification unit 12 also classifies the accessories of the surrounding object 2 (including the marker 21) and the accessories of the robot 3 (including the marker 31) into a plurality of classes.
  • The marker definition unit 18 defines at least one of the shape and the color of the object marker 21 and of the communication terminal marker 31, and outputs this information to the object information acquisition unit 11. Specifically, a marker definition scheme is set as shown in FIGS. 9A to 9C, FIGS. 10A to 10C, and FIGS. 11A to 11C.
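As an illustration only (not part of the embodiment), the marker definitions of FIGS. 9A to 9C and FIGS. 11A to 11C could be represented as a mapping from object class to marker shape and color; every name and value below is an assumption. The per-class scheme of FIGS. 10A to 10C would additionally need a class-group mapping, which is omitted here:

```python
def define_markers(classes, scheme):
    """Assign a (shape, color) marker to each class under a given scheme.
    'per_object' gives every class its own color (FIGS. 9A-9C);
    'single' gives all classes the same marker (FIGS. 11A-11C)."""
    colors = ["red", "blue", "yellow", "green"]
    if scheme == "per_object":
        return {c: ("pole", colors[i % len(colors)])
                for i, c in enumerate(classes)}
    if scheme == "single":
        return {c: ("pole", colors[0]) for c in classes}
    raise ValueError("unknown scheme: " + scheme)
```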
  • In step S31, the object information acquisition unit 11 detects, based on the image taken by the camera 4, the robot 3 moving in the area RE shown in FIG. 2 and the surrounding objects 2 existing around the robot 3.
  • The object information acquisition unit 11 also sets a bounding box B1 containing the image of the robot 3 and a bounding box B2 containing the image of the marker 31, and further calculates the center coordinates (X, Y) of the bounding box B1.
  • In step S32, the accessory classification unit 12 detects the accessories and the marker 21 included in the surrounding objects 2 detected by the object information acquisition unit 11, as well as the accessories and the marker 31 included in the robot 3.
  • In step S33, the accessory data creation unit 13 accepts the input of the class group setting by the user.
  • In step S34, the accessory data creation unit 13 classifies into accessories the surrounding objects 2 and the robot 3 detected by the object information acquisition unit 11; these accessories include the markers 21 and 31.
  • In step S35, the accessory data creation unit 13 saves the information about the created accessories in the database 16.
  • In step S36, the communication state measurement unit 14 measures the communication quality of the wireless communication between the communication terminal TA mounted on the robot 3 and a base station (not shown).
  • In step S37, the estimation model generation unit 15 performs machine learning based on the communication quality information measured by the communication state measurement unit 14 and the accessory information stored in the database 16, and generates a model for estimating the communication state.
  • In step S38, the communication state estimation unit 17 acquires the information on the surrounding objects 2 detected by the object information acquisition unit 11 and the information on the accessories classified by the accessory classification unit 12, and estimates the communication state of the communication terminal TA mounted on the robot 3 based on the estimation model generated by the estimation model generation unit 15.
  • In step S39, the communication state estimation unit 17 outputs the estimated communication quality information to an external device.
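The flow of steps S31 to S39 can be sketched as one processing cycle. Everything below is an illustrative stub: the detector, classifier, and model stand in for the units 11 to 17 and are assumptions, not the actual implementation:

```python
# Stub stand-ins for the object information acquisition unit 11 (S31)
# and the accessory classification unit 12 (S32-S34).
def detect_objects(frame):
    return frame.get("objects", [])

def classify_accessories(objects):
    return [a for o in objects for a in o.get("accessories", [])]

def prediction_cycle(frame, database, model, measure_quality):
    """One illustrative pass through steps S31-S39."""
    objects = detect_objects(frame)                  # S31: detect robot and surroundings
    accessories = classify_accessories(objects)      # S32-S34: extract/classify accessories
    database.append(accessories)                     # S35: save accessory info (database 16)
    quality = measure_quality()                      # S36: measure communication quality
    model.fit(database, quality)                     # S37: (re)train the estimation model
    estimate = model.predict(objects, accessories)   # S38: estimate communication state
    return estimate                                  # S39: output to an external device
```

A caller would supply a real detector, a persistent database, and a trained model; here the structure of the loop is the point, not the components.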
  • The inventors conducted an experiment to measure the estimation accuracy of the communication quality with and without the marker 31 installed on the robot 3. Specifically, in the area RE shown in FIG. 2, the mobile robot 3 was controlled by wireless communication to move from the starting point P1 to the target point P2.
  • The communication terminal TA mounted on the robot 3 used a wireless LAN (IEEE 802.11ac) as its communication standard, and the median RSSI over 0.1-second intervals was measured.
  • A random forest was used as the communication state estimation model, and the relationship between the bounding box information for the past 1 second obtained from the video and the RSSI information t seconds later was learned.
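As a concrete reading of this training setup, the feature/label pairs might be assembled as below. This is a sketch under the assumption that the video is sampled at a fixed frame rate, so "the past 1 second" corresponds to `window` frames of bounding-box values (X, Y, W, H) and "t seconds later" to a label `horizon` samples ahead; all names are illustrative, not the inventors' implementation:

```python
def make_training_pairs(boxes, rssi, window, horizon):
    """Build (features, label) pairs: each feature vector flattens the
    bounding boxes of the last `window` frames; the label is the RSSI
    value `horizon` samples later."""
    X, y = [], []
    for i in range(window - 1, len(boxes) - horizon):
        feats = [v for b in boxes[i - window + 1:i + 1] for v in b]
        X.append(feats)
        y.append(rssi[i + horizon])
    return X, y
```

The resulting (X, y) pairs could then be fed to any regressor, e.g. a random-forest implementation.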
  • The estimation accuracy was evaluated by the coefficient of determination of Equation (1): R² = 1 − Σᵢ(Yᵢ − Ŷᵢ)² / Σᵢ(Yᵢ − Yave)², where Yᵢ is the measured value of RSSI for the i-th sample, Ŷᵢ is the predicted value of RSSI output by the random forest, and Yave is the average of the measured RSSI values.
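The coefficient of determination can be computed directly from its definition; this short helper is a sketch for illustration, not part of the patent:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination per Equation (1):
    R^2 = 1 - sum((Yi - Yi_hat)^2) / sum((Yi - Yave)^2)."""
    y_ave = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - y_ave) ** 2 for yt in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives R² = 1, while always predicting the mean gives R² = 0.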
  • FIG. 14 is a graph showing the relationship between the prediction horizon and R².
  • The curve s1 shows the case without the marker 31, and the curve s2 shows the case with the marker 31. As shown in FIG. 14, the prediction accuracy of RSSI is higher with the marker 31 at every prediction horizon.
  • By attaching the marker 21 to the surrounding objects 2 existing in the region RE and the marker 31 to the robot 3, the accuracy of detecting the surrounding objects 2 and the robot 3 is improved.
  • The position information of the surrounding objects 2 and the robot 3 existing around the communication terminal TA can therefore be acquired with higher accuracy, and the estimation accuracy of the communication quality can be improved.
  • Although in this embodiment the accessories of both the surrounding object 2 and the robot 3 are classified into classes, the present invention may also adopt a configuration in which only one of them is so classified.
  • For the communication quality prediction device 1 of the present embodiment described above, a general-purpose computer system including, for example, a CPU (Central Processing Unit) 901, a memory 902, a storage 903 (HDD: Hard Disk Drive or SSD: Solid State Drive), a communication device 904, an input device 905, and an output device 906 can be used.
  • The memory 902 and the storage 903 are storage devices.
  • Each function of the communication quality prediction device 1 is realized by the CPU 901 executing a predetermined program loaded into the memory 902.
  • Although FIG. 15 shows an example in which the CPU 901 is used as the processor, a GPU (Graphics Processing Unit) can also be used. By using a GPU, the processing time can be shortened and the real-time performance improved.
  • The communication quality prediction device 1 may be mounted on one computer or on a plurality of computers, and may also be a virtual machine running on a computer.
  • The program for the communication quality prediction device 1 can be stored in a computer-readable recording medium such as an HDD, SSD, USB (Universal Serial Bus) memory, CD (Compact Disc), or DVD (Digital Versatile Disc), and can also be delivered over a network.
  • The present invention is not limited to the above embodiments, and many modifications are possible within the scope of its gist.
  • 1 (1a) Communication quality prediction device, 2 Surrounding object, 3 Robot (mobile robot), 4 (4a, 4b) Camera (object detection sensor), 11 Object information acquisition unit, 12 Accessory classification unit, 13 Accessory data creation unit, 14 Communication state measurement unit, 15 Estimation model generation unit, 16 Database, 17 Communication state estimation unit, 18 Marker definition unit, 21 (21a, 21b, 21c) Marker (object marker), 31 Marker (communication terminal marker), 100 Communication quality estimation system, RE Area, TA Communication terminal


Abstract

This communication quality prediction apparatus is provided with: an object information acquisition unit (11) that acquires information on the communication terminal (TA) and on surrounding objects (2) present around it; an accessory classification unit (12) that extracts the accessories attached to the surrounding objects (2), classifies the extracted accessories into a plurality of classes, and associates the accessories of each class with the surrounding objects; and a database (16) that stores information on the plurality of classes and on the accessories of each class. The apparatus is further provided with: a communication state measurement unit (14) that acquires communication information of the communication terminal; an estimation model generation unit (15) that, on the basis of the information on the accessories associated with the surrounding objects (2) and the communication information acquired by the communication state measurement unit, generates by machine learning an estimation model of how the positional relationship between the communication terminal and the accessories relates to the communication quality of the communication terminal; and a communication state estimation unit (17) that, when wireless communication is performed with the communication terminal, refers to the estimation model and estimates the communication quality of the communication terminal on the basis of the positional relationship between the communication terminal, as acquired by the object information acquisition unit (11), and the accessories associated with the surrounding objects.

Description

Communication quality prediction device, communication quality prediction system, communication quality prediction method, and communication quality prediction program

The present invention relates to a communication quality prediction device, a communication quality prediction system, a communication quality prediction method, and a communication quality prediction program.
In a wireless communication system adopting a communication standard such as 5G, the communication quality may change as the communication terminal moves or as objects around the communication terminal move. In particular, in communication using high-frequency radio waves, the radio waves travel in straighter lines, so the influence of surrounding objects becomes more pronounced.
For example, when an autonomous traveling robot equipped with a communication terminal is moved from a predetermined starting point to a target point under wireless communication control, communication quality metrics of the communication terminal, such as received signal power, signal-to-noise power ratio, RSSI (Received Signal Strength Indication), and RSRQ (Reference Signal Received Quality), change under the influence of the orientation of the robot and of surrounding objects such as fixed objects, moving bodies, and pedestrians around the robot.
If the communication quality deteriorates, the control of the robot may be affected, and it may become impossible to move the robot from the starting point to the target point. It is therefore necessary to predict changes in communication quality in advance when performing wireless communication and, when a deterioration is predicted, to take countermeasures such as switching to a communication channel with higher communication quality or selecting a travel route with higher communication quality for the robot. Accordingly, it is desirable to predict the communication quality of wireless communication control with high accuracy.
Non-Patent Document 1 proposes photographing the surrounding environment of a communication terminal with a camera and using machine learning on the video of the surrounding environment to predict the deterioration of communication quality that occurs when a pedestrian blocks the wireless communication path.
Non-Patent Document 2 proposes recognizing objects in video of the surrounding environment of a communication terminal and outputting bounding box information of the recognized objects at high speed; combining this with the technique of Non-Patent Document 1 is considered to make it possible to improve the prediction accuracy of the communication quality.
Non-Patent Document 1 described above can detect objects existing around the moving device, but it is difficult to acquire information on the orientation, posture, and position of the detected objects. Non-Patent Document 2 can detect surrounding objects with bounding boxes, but it is likewise difficult to acquire information on the orientation, posture, and position of the detected objects. Consequently, the influence that objects around the communication terminal exert on communication cannot be measured with high accuracy, and it is therefore difficult to predict the communication quality with high accuracy.
The present invention has been made in view of the above circumstances, and its object is to provide a communication quality prediction device, a communication quality prediction system, a communication quality prediction method, and a communication quality prediction program capable of predicting the communication quality of a communication terminal with high accuracy.
A communication quality prediction device according to one aspect of the present invention includes: an object information acquisition unit that acquires information on a communication terminal and on surrounding objects existing around the communication terminal; an accessory classification unit that extracts, from the surrounding objects acquired by the object information acquisition unit, the accessories attached to the surrounding objects, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the surrounding objects; a database that stores information on the plurality of classes and information on the accessories of each class; a communication state measurement unit that acquires communication information of the communication terminal; an estimation model generation unit that, based on the information on the accessories associated with the surrounding objects and the communication information acquired by the communication state measurement unit, generates an estimation model by machine learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal; and a communication state estimation unit that, when wireless communication is performed with the communication terminal, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
A communication quality prediction system according to one aspect of the present invention includes: a marker attached to at least one of a communication terminal and surrounding objects existing around the communication terminal; an object detection sensor that is provided in a predetermined area set around the communication terminal and detects the communication terminal and the surrounding objects; and a communication quality prediction device that estimates the communication quality of the communication terminal based on the positional relationship between the communication terminal and the surrounding objects. The communication quality prediction device includes: an object information acquisition unit that acquires information on the communication terminal and the surrounding objects detected by the object detection sensor; an accessory classification unit that extracts, from the communication terminal and surrounding objects acquired by the object information acquisition unit, the accessories attached to the communication terminal and the surrounding objects, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the communication terminal and the surrounding objects; a database that stores information on the plurality of classes and information on the accessories of each class; a communication state measurement unit that acquires communication information of the communication terminal; an estimation model generation unit that, based on the information on the accessories associated with the communication terminal and surrounding objects and the communication information acquired by the communication state measurement unit, generates an estimation model by machine learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal; and a communication state estimation unit that, when wireless communication is performed with the communication terminal, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
A communication quality prediction method according to one aspect of the present invention includes the steps of: acquiring information on a communication terminal and on surrounding objects existing around the communication terminal; extracting, from the acquired surrounding objects, the accessories attached to the surrounding objects, classifying the extracted accessories into a plurality of predefined classes, and associating the accessories of each class with the surrounding objects; storing information on the plurality of classes and information on the accessories of each class; acquiring communication information of the communication terminal; generating an estimation model, based on the information on the accessories associated with the surrounding objects and the acquired communication information, by machine learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal; and, when wireless communication is performed with the communication terminal, estimating the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal and the accessories associated with the surrounding objects.
One aspect of the present invention is a communication quality prediction program for causing a computer to function as the above communication quality prediction device.
According to the present invention, it is possible to predict the communication quality of a communication terminal with high accuracy.
FIG. 1 is a block diagram showing the configuration of the communication quality prediction device according to the first embodiment.
FIG. 2 is an explanatory diagram showing the area in which the mobile robot equipped with the communication terminal moves and the positions of the cameras installed in this area.
FIG. 3 is an explanatory diagram showing the bounding box of the mobile robot detected by the communication quality prediction device according to the first embodiment.
FIG. 4A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, showing an example of classifying a surrounding object and its accessories into different classes.
FIG. 4B shows an example of classifying a pedestrian, which is a surrounding object, into the pedestrian and its accessories, the "head" and the left and right "hands".
FIG. 5A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, showing an example of classifying the accessories included in a surrounding object into the same class.
FIG. 5B shows an example of classifying a truck, which is a surrounding object, into its accessories, the driver's cab and the cargo bed.
FIG. 6A is an explanatory diagram showing a method of classifying a surrounding object into a plurality of accessories, showing an example of classifying an accessory into the same class as the surrounding object.
FIG. 6B shows an example of classifying a pedestrian, which is a surrounding object, into the pedestrian and the mobile phone the pedestrian carries.
FIG. 7 is a flowchart showing the processing procedure of the communication quality prediction device according to the first embodiment.
FIG. 8 is a block diagram showing the configuration of the communication quality prediction device according to the second embodiment.
FIG. 9A is an explanatory diagram showing an example in which a marker 21a is attached to a surrounding object D1.
FIG. 9B is an explanatory diagram showing an example in which a marker 21b is attached to a surrounding object D2.
FIG. 9C is an explanatory diagram showing an example in which a marker 21c is attached to a surrounding object E1.
FIG. 10A is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D1.
FIG. 10B is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D2.
FIG. 10C is an explanatory diagram showing an example in which the marker 21c is attached to the surrounding object E1.
FIG. 11A is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D1.
FIG. 11B is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object D2.
FIG. 11C is an explanatory diagram showing an example in which the marker 21a is attached to the surrounding object E1.
FIG. 12 is an explanatory diagram showing the bounding boxes of the mobile robot and the marker detected by the communication quality prediction device according to the second embodiment.
FIG. 13 is a flowchart showing the processing procedure of the communication quality prediction device according to the second embodiment.
FIG. 14 is a graph showing the change of the R² value with respect to the prediction horizon with and without a marker.
FIG. 15 is a block diagram showing the hardware configuration of the communication quality prediction device.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[Configuration of the first embodiment]
The communication quality prediction device according to the present embodiment is described through an example of estimating the quality of the wireless communication (hereinafter, "communication quality") between a communication terminal and the base station that transmits control commands when a mobile robot (hereinafter abbreviated as "robot") equipped with a communication terminal using a communication standard such as 5G is controlled to move within a predetermined area.
Wireless communication information refers to radio wave propagation information of a wireless communication signal, channel information related to the radio wave propagation information, feedback information of the channel information, received signal power, signal-to-noise power ratio, signal-to-interference-plus-noise power ratio, RSSI (Received Signal Strength Indication), RSRQ (Reference Signal Received Quality), packet error rate, number of delivered bits, bit error rate, number of delivered bits per unit time, and other indicators related to QoE (Quality of Experience), as well as differential information of these values, indicators calculated from these values using calculation formulas, and the setting items of the communication system that affect these indicators.
The communication quality of a robot equipped with a communication terminal changes depending on its position in the area and on the surrounding objects. If the communication quality deteriorates, it may become impossible to move the robot to the destination by wireless communication control. In the present embodiment, the communication quality prediction device uses machine learning to estimate the communication quality of the communication terminal during communication. Based on the estimated communication quality, the communication quality prediction device suppresses deterioration of the communication quality between the base station and the communication terminal by selecting a channel for wireless communication between the base station and the communication terminal or by selecting a travel route for the robot within the area.
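The selection step described above — switching to the channel (or travel route) with the best predicted quality — can be sketched minimally as follows; the function name and the use of predicted RSSI as the score are assumptions for illustration:

```python
def select_best(candidates, predict_quality):
    """Return the candidate (channel or route) with the highest predicted
    communication quality, e.g. predicted RSSI in dBm; predict_quality is
    any callable mapping a candidate to its predicted quality."""
    return max(candidates, key=predict_quality)
```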
FIG. 1 is a block diagram showing the configuration of the communication quality prediction device 1 and its peripheral devices according to the first embodiment of the present invention. As shown in FIG. 1, the communication quality prediction device 1 according to the present embodiment includes an object information acquisition unit 11, an accessory classification unit 12, an accessory data creation unit 13, a communication state measurement unit 14, an estimation model generation unit 15, a database 16, and a communication state estimation unit 17.
 物体情報取得部11は、所定の領域内に設置された少なくとも一つのカメラ4(物体検出センサ)で撮影された画像を取得する。所定の領域とは、通信端末TAを搭載したロボット3が移動可能な領域である。物体情報取得部11は、例えば図2に示すように、領域RE内の2箇所に設置されたカメラ4(4a、4b)で撮影された画像を取得する。カメラ4として、例えば深度カメラ、RGBカメラを用いることができる。本実施形態では、物体検出センサの一例としてカメラ4を用いる例について説明するが、他の物体検出センサとして、ライダーやレーザレーダなどを用いることも可能である。 The object information acquisition unit 11 acquires an image taken by at least one camera 4 (object detection sensor) installed in a predetermined area. The predetermined area is an area in which the robot 3 equipped with the communication terminal TA can move. As shown in FIG. 2, for example, the object information acquisition unit 11 acquires images taken by cameras 4 (4a, 4b) installed at two locations in the area RE. As the camera 4, for example, a depth camera or an RGB camera can be used. In this embodiment, an example in which the camera 4 is used as an example of the object detection sensor will be described, but it is also possible to use a rider, a laser radar, or the like as another object detection sensor.
 物体情報取得部11はまた、カメラ4で撮影された画像から、画像に含まれる物体の位置、向き、大きさ、移動速度、向きの変化速度の各情報のうちの少なくとも一つの情報を取得する。画像に含まれる物体とは、領域RE内に設置された固定物、領域RE内を移動する移動体、歩行者などの物体2(以下、「周囲物体」という)、及び、通信端末TAを搭載したロボット3を含む。 The object information acquisition unit 11 also acquires at least one of the information of the position, orientation, size, movement speed, and orientation change speed of the object included in the image from the image taken by the camera 4. .. The objects included in the image include a fixed object installed in the area RE, a moving object moving in the area RE, an object 2 such as a pedestrian (hereinafter referred to as "surrounding object"), and a communication terminal TA. Includes the robot 3
 物体情報取得部11はまた、カメラ4で撮影された画像から、ロボット3の画像を抽出し、ロボット3についてのバウンディングボックスを設定する。具体的に、画像からロボット3が検出された際に、図3に示すように、この周囲に矩形状のバウンディングボックスB1を設定し、更に、このバウンディングボックスB1の幅W、高さHを測定する。更に、幅W、高さHに基づいて、バウンディングボックスB1の中心座標(X、Y)を設定する。バウンディングボックスB1、及びその中心座標を設定することによりロボット3の位置の認識精度を向上させることができる。 The object information acquisition unit 11 also extracts an image of the robot 3 from the image taken by the camera 4 and sets a bounding box for the robot 3. Specifically, when the robot 3 is detected from the image, a rectangular bounding box B1 is set around the robot 3 as shown in FIG. 3, and the width W and height H of the bounding box B1 are measured. do. Further, the center coordinates (X, Y) of the bounding box B1 are set based on the width W and the height H. By setting the bounding box B1 and its center coordinates, the accuracy of recognizing the position of the robot 3 can be improved.
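The center coordinates (X, Y) can be derived from the width W and height H together with the box's corner position. A minimal sketch follows; the top-left-corner representation is an assumption of this illustration, since the text only states that the center is set based on W and H.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    left: float    # x of the top-left corner in image coordinates
    top: float     # y of the top-left corner
    width: float   # W
    height: float  # H

    def center(self):
        """Center coordinates (X, Y) used to localize the robot."""
        return (self.left + self.width / 2.0, self.top + self.height / 2.0)

b1 = BoundingBox(left=100.0, top=40.0, width=60.0, height=80.0)
center_xy = b1.center()  # -> (130.0, 80.0)
```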
The accessory classification unit 12 classifies the accessories of the surrounding object 2 detected by the object information acquisition unit 11 into a plurality of classes (classification categories). A "class" is an index for classifying the accessories of a surrounding object 2. The classified accessories are associated with one another and defined as one surrounding object 2. The accessory classification unit 12 also classifies the accessories of the robot 3 into a plurality of classes as needed.
Hereinafter, accessories and classes will be described concretely with reference to FIGS. 4A to 6B. FIG. 4A shows an example in which the surrounding object 2 is defined as a first-level classification "class C1", and the accessories of the surrounding object 2 are defined as second-level classifications "classes C1a and C1b" that subdivide class C1. Specifically, as shown in FIG. 4B, when a pedestrian is detected as the surrounding object 2, the pedestrian is defined as "class C1". The "head", an accessory of the pedestrian, is defined as "class C1a", and the left and right "hands" are defined as "class C1b".
Although FIGS. 4A and 4B show an example of dividing into the first-level classification and its subdivided second-level classification, it is also possible to further subdivide the second-level classification and define a third and subsequent levels of classification.
FIG. 5A shows an example in which the surrounding object 2 is divided into a plurality of accessories, each of which is defined as a second-level classification "class C1a" or "class C1b". Specifically, as shown in FIG. 5B, when a truck is detected as the surrounding object 2, the truck itself is not defined; instead, the driver's cab, an accessory of the truck, is defined as "class C1a", and the cargo bed as "class C1b".
FIG. 6A shows an example in which the surrounding object 2 and an accessory 22 attached to the surrounding object 2 are each defined as first-level classifications "class C1" and "class C2". Specifically, as shown in FIG. 6B, when a pedestrian is detected as the surrounding object 2 and the pedestrian is carrying a mobile phone (accessory 22), the pedestrian and the mobile phone are defined as the first-level classifications "class C1" and "class C2", respectively.
The accessory classification unit 12 classifies surrounding objects into a plurality of classes using at least one of the three classification methods described above. As described later, the class information belonging to one surrounding object 2 is associated together and set as one class group.
Returning to FIG. 1, the accessory data creation unit 13 creates accessory data based on the accessories of each class classified by the accessory classification unit 12. Specifically, in accordance with an externally input setting, it creates data classified by one of the three methods described above and stores the data in the database 16.
For example, when a pedestrian walking in the area RE shown in FIG. 2 is captured by the camera 4a and detected as a surrounding object, the pedestrian is classified into class C1, the pedestrian's "head" into class C1a, and the left and right "hands" into class C1b, as shown for example in FIG. 4B, and these are stored in the database 16 as one associated class group.
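The class-group association described above can be represented, for example, as a simple mapping from a surrounding object to its classified accessories. The dictionary layout and field names below are illustrative assumptions, not part of the specification.

```python
# One class group: a pedestrian (class C1) with its accessories, each
# accessory carrying per-frame observations such as an image position.
class_group = {
    "object_class": "C1",  # pedestrian
    "accessories": [
        {"class": "C1a", "label": "head",       "position": (1.2, 0.5)},
        {"class": "C1b", "label": "left hand",  "position": (1.0, 0.9)},
        {"class": "C1b", "label": "right hand", "position": (1.4, 0.9)},
    ],
}

# Detecting any single accessory (e.g. a hand) lets the system look up
# the group and infer the presence of the whole pedestrian.
hands = [a for a in class_group["accessories"] if a["class"] == "C1b"]
```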
The database 16 stores the data created by the accessory data creation unit 13 concerning the surrounding object 2 and the accessories attached to the surrounding object 2. The database 16 also stores the class group data created by the accessory data creation unit 13.
The communication state measurement unit 14 measures the communication quality of the wireless communication when a control radio signal is transmitted from a base station (not shown) to the communication terminal TA mounted on the robot 3. Specifically, when the robot 3 equipped with the communication terminal TA is controlled to move within the area RE shown in FIG. 2, the unit measures the quality of the wireless communication between the base station transmitting the control radio signal and the communication terminal TA. As described above, the wireless communication information includes, for example, RSSI (Received Signal Strength Indication).
The estimation model generation unit 15 acquires, from the database 16, the class group information of the surrounding object 2 and information such as the position, orientation, moving speed, and rotation speed of each accessory classified by the accessory classification unit 12. The estimation model generation unit 15 also acquires the communication quality measured by the communication state measurement unit 14 and, based on the relationship between the position information of the robot 3, the position information of the surrounding object 2 and its accessories, and the communication quality, performs machine learning using, for example, a neural network to generate an estimation model of the communication quality. A detailed description of the machine learning is omitted.
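The supervised learning performed by the estimation model generation unit 15 can be sketched in simplified form. Below, a linear least-squares fit stands in for the neural network, and the feature layout (robot position concatenated with an accessory position) and the synthetic data are assumptions of this illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per sample: [robot_x, robot_y, accessory_x, accessory_y]
X = rng.uniform(0.0, 10.0, size=(200, 4))
# Synthetic label: RSSI (dBm) constructed as a linear function of the positions
true_w = np.array([-1.0, -0.5, 0.3, 0.2])
y = -55.0 + X @ true_w

# "Training": fit weights and bias by least squares (stand-in for the NN)
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimated quality for a new scene (robot at (2, 3), accessory at (5, 5))
x_new = np.array([2.0, 3.0, 5.0, 5.0])
rssi_hat = float(np.append(x_new, 1.0) @ w)
```

In practice the model would be trained on measured RSSI values rather than a synthetic target, and richer features (orientation, speeds, class labels) could be added.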
The communication state estimation unit 17 estimates the communication quality while the robot 3 moves within the area RE, based on the position information of the robot 3, the position information of the surrounding object 2 and its accessories, and the communication quality estimation model generated by the estimation model generation unit 15, when the base station (not shown) transmits control radio signals to the communication terminal TA and the robot 3 is thereby controlled to move. For example, as shown in FIG. 2, the unit estimates the communication quality of the communication terminal TA at each position of the robot 3 as the robot 3 moves from a starting point P1 to a target point P2. The communication state estimation unit 17 also outputs the estimated communication quality information to an external device.
[Operation of the first embodiment]
Next, the operation of the communication quality prediction device 1 according to the first embodiment described above will be described with reference to the flowchart shown in FIG. 7.
First, in step S11, the object information acquisition unit 11 detects, based on the images captured by the camera 4, the robot 3 moving within the area RE shown in FIG. 2 and the surrounding objects 2 present around the robot 3. As described above, the surrounding objects 2 refer to fixed objects such as pillars, moving bodies such as carts, pedestrians, and the like present in the area RE.
As shown in FIG. 3, the object information acquisition unit 11 also sets the bounding box B1 containing the image of the robot 3 and calculates the center coordinates (X, Y) of the bounding box B1.
In step S12, the accessory classification unit 12 detects the accessories included in the surrounding object 2 acquired by the object information acquisition unit 11. For example, as shown in FIG. 4B, when a pedestrian is detected as the surrounding object 2, the pedestrian's "head" and left and right "hands" are detected as accessories.
In step S13, the accessory data creation unit 13 accepts a class group setting input by the user. For example, as shown in FIGS. 4A and 4B described above, the user sets the method of classifying into "class C1" indicating the first-level classification and "classes C1a and C1b" indicating the subdivided second-level classification.
In step S14, the accessory data creation unit 13 classifies the accessories of the surrounding object 2 acquired by the object information acquisition unit 11 into the classes set in step S13. For example, when the method shown in FIGS. 4A and 4B is set, the pedestrian detected as the surrounding object 2 is classified into class C1, the pedestrian's "head" into class C1a, and the pedestrian's left and right "hands" into class C1b.
In step S15, the accessory data creation unit 13 associates the information on the accessories and classes classified in step S14 with one another, sets them as one class group, and stores them in the database 16.
In step S16, the communication state measurement unit 14 measures the communication quality of the wireless communication between the communication terminal TA mounted on the robot 3 and the base station (not shown).
In step S17, the estimation model generation unit 15 performs machine learning based on the information on the communication quality measured by the communication state measurement unit 14 and the information on the surrounding object 2 stored in the database 16, and generates an estimation model of the communication state. Since the information on the surrounding object 2 stored in the database 16 divides the surrounding object 2 into a plurality of accessories and classifies them into a plurality of classes, the situation of the surrounding object 2 can be recognized in more detail.
In step S18, the communication state estimation unit 17 acquires the information on the surrounding object 2 detected by the object information acquisition unit 11 and the information on each accessory classified by the accessory classification unit 12, and estimates the communication state of the communication terminal TA mounted on the robot 3 based on the estimation model generated by the estimation model generation unit 15.
In step S19, the communication state estimation unit 17 outputs the estimated communication quality information to the external device. In this way, an estimate of the communication quality of the communication terminal TA mounted on the robot 3 can be obtained.
When it is predicted that the communication quality between the base station and the communication terminal TA mounted on the robot moving within the area RE will deteriorate, measures can be taken such as switching the channel used for transmitting and receiving control signals or changing the travel route of the robot 3. As a result, movement control of the robot can be carried out stably.
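The countermeasures described here can be sketched as a simple decision rule over the predicted quality values. The threshold, function name, and channel numbers below are hypothetical and serve only to illustrate one possible policy.

```python
RSSI_THRESHOLD_DBM = -70.0  # hypothetical acceptable-quality floor

def choose_action(predicted_rssi_per_channel, alternative_route_rssi):
    """Pick a countermeasure from predicted RSSI values (dBm).

    predicted_rssi_per_channel: {channel: predicted RSSI on the current route}
    alternative_route_rssi: best predicted RSSI on an alternative route
    """
    best_channel, best_rssi = max(predicted_rssi_per_channel.items(),
                                  key=lambda kv: kv[1])
    if best_rssi >= RSSI_THRESHOLD_DBM:
        return ("switch_channel", best_channel)
    if alternative_route_rssi >= RSSI_THRESHOLD_DBM:
        return ("change_route", None)
    return ("keep_current", None)

action = choose_action({36: -75.0, 40: -68.0}, alternative_route_rssi=-66.0)
# -> ("switch_channel", 40)
```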
[Effects of the first embodiment]
As described above, the communication quality prediction device 1 according to the first embodiment includes: the object information acquisition unit 11 that acquires information on a communication terminal and on surrounding objects present around the communication terminal; the accessory classification unit 12 that extracts, from a surrounding object acquired by the object information acquisition unit 11, accessories attached to the surrounding object, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the surrounding object; the database 16 that stores the information on the plurality of classes and on the accessories of each class; the communication state measurement unit 14 that measures the communication state of the communication terminal; the estimation model generation unit 15 that, based on the accessory information associated with the surrounding object and the communication information acquired by the communication state measurement unit 14, machine-learns the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, thereby generating an estimation model; and the communication state estimation unit 17 that, when wireless communication is performed with the communication terminal, estimates the communication quality of the communication terminal by referring to the estimation model based on the positional relationship, acquired by the object information acquisition unit 11, between the communication terminal and the accessories associated with the surrounding objects.
In the present embodiment, a plurality of accessories are extracted from the surrounding objects present around the communication terminal, and each accessory is further classified into a plurality of classes. More detailed information on the surrounding object 2 can therefore be obtained.
For example, when an image captured by the camera 4 shows a pedestrian's "hand" but not the entire pedestrian, a method that detects the surrounding object 2 as a single object cannot recognize the pedestrian. However, as shown in FIGS. 4A and 4B, by setting the pedestrian's "hand" as an accessory and defining the pedestrian as class C1 and the "hand" as class C1b, the presence of the pedestrian can be inferred even when the entire pedestrian does not appear in the image.
Furthermore, based on information such as the position, orientation, moving speed, and rotation speed of the "hand", the position, orientation, moving speed, and rotation speed of the pedestrian can be predicted with high accuracy, which in turn enables highly accurate machine learning.
As a result, the information on the surrounding objects 2 present around the communication terminal TA can be recognized in more detail, and the communication quality of the communication terminal can be estimated with high accuracy.
Although the present embodiment describes an example of classifying the surrounding object 2 into a plurality of classes, the robot 3 carrying the communication terminal TA can also be classified into a plurality of classes. Classifying the robot 3 into a plurality of classes makes it possible to acquire more detailed information on the robot and, in turn, to improve the estimation accuracy of the communication quality.
In addition, since the information on the plurality of accessories and classes included in the same surrounding object is associated together and stored in the database 16 as one class group, when one accessory is detected, the surrounding object 2 related to that accessory can be identified easily and accurately, and the estimation accuracy of the estimation model generation unit 15 can be improved.
[Configuration of the second embodiment]
Next, a second embodiment of the present invention will be described. FIG. 8 is a block diagram showing the configuration of a communication quality estimation system 100 including a communication quality prediction device 1a according to the second embodiment.
As shown in FIG. 8, the communication quality prediction device 1a according to the second embodiment differs from the communication quality prediction device 1 shown in FIG. 1 in that it includes a marker definition unit 18. It also differs in that the surrounding object 2 carries a marker 21 and the robot 3 carries a marker 31. In the following, components identical to those in FIG. 1 are given the same reference numerals, and their description is omitted.
As shown in FIG. 8, in the second embodiment a marker 21 (object marker) is attached to the surrounding object 2. The marker 21 is an object having a predetermined shape or a predetermined color, and is classified as an accessory of the surrounding object 2. Markers 21 are attached in advance to surrounding objects 2 such as fixed objects and moving bodies present in the area RE shown in FIG. 2.
Similarly, a marker (communication terminal marker 31) is attached to the robot 3. Like the marker 21 described above, the marker 31 is an object having a predetermined shape or a predetermined color, and is classified as an accessory of the robot 3.
In the second embodiment, having the markers 21 and 31 recognized as accessories further improves the detection accuracy of the surrounding objects 2 and the robot 3.
Hereinafter, concrete examples of markers will be described with reference to FIGS. 9A to 11C. FIGS. 9A to 11C are explanatory diagrams showing examples of markers attached to objects D1 and D2, which are classified into the same class, and to an object E1, which is classified into a class subdividing that of the objects D1 and D2.
FIGS. 9A to 9C are explanatory diagrams showing an example in which different markers 21a, 21b, and 21c are attached to the objects D1, D2, and E1, respectively. Specifically, the marker 21a is attached to the object D1 as shown in FIG. 9A, the marker 21b to the object D2 as shown in FIG. 9B, and the marker 21c to the object E1 as shown in FIG. 9C. Since a different marker is thus attached to each of the objects D1, D2, and E1, each object can be recognized with high accuracy.
FIGS. 10A to 10C are explanatory diagrams showing an example in which, among the objects D1, D2, and E1, objects of the same class are given the same marker and objects of different classes are given different markers.
Specifically, the marker 21a is attached to the object D1 as shown in FIG. 10A, and likewise to the object D2 as shown in FIG. 10B. As shown in FIG. 10C, the marker 21c is attached to the object E1. By attaching a different marker to each class of objects in this way, objects of different classes can be recognized with high accuracy.
FIGS. 11A to 11C are explanatory diagrams showing an example in which the same marker is attached to each of the objects D1, D2, and E1.
Specifically, the marker 21a is attached to the object D1 as shown in FIG. 11A, and likewise to the object D2 as shown in FIG. 11B. Furthermore, as shown in FIG. 11C, the marker 21a is also attached to the object E1. By attaching the same marker to all objects included in one class group in this way, the recognition accuracy of each object can be increased. In addition, since the number of marker types can be reduced, the load of the arithmetic processing can be reduced.
Each of the markers 21a, 21b, and 21c shown in FIGS. 9A to 11C is, for example, a pole of identical shape installed on the surrounding object 2, and a different color can be used for each of the markers 21a to 21c. For example, the marker 21a can be set to red, the marker 21b to blue, and the marker 21c to yellow.
Furthermore, a marker 31 (communication terminal marker) is attached to the robot 3. As with the marker 21 described above, an object having a preset predetermined shape or a predetermined color is attached in advance to the robot 3 moving within the area RE. For example, a cylindrical pole is installed as the marker 31 on the robot 3 moving within the area RE (see FIG. 12).
Returning to FIG. 8, the object information acquisition unit 11 extracts the image of the robot 3 from the images captured by the camera 4, and sets, in addition to the bounding box B1 of the robot 3, a bounding box B2 for the marker 31 attached to the robot. Specifically, when the robot 3 is detected in an image, the object information acquisition unit 11 sets a rectangular bounding box B1 around it as shown in FIG. 12 and measures the width W and height H of the bounding box B1. The object information acquisition unit 11 sets the center coordinates (X, Y) of the bounding box B1 based on the width W and height H. The object information acquisition unit 11 also sets the bounding box B2 of the marker 31.
The accessory classification unit 12 extracts the markers 21 and 31 from the surrounding object 2 and the robot 3 acquired by the object information acquisition unit 11, and classifies them as accessories. The accessory classification unit 12 also classifies the accessories of the surrounding object 2 (including the marker 21) and the accessories of the robot 3 (including the marker 31) into a plurality of classes.
The marker definition unit 18 defines at least one of the shape and color of the object marker 21 and the communication terminal marker 31, and outputs the information to the object information acquisition unit 11. Specifically, the marker definition method is set as shown in FIGS. 9A to 9C, 10A to 10C, and 11A to 11C.
[Operation of the second embodiment]
Next, the operation of the communication quality prediction device 1a according to the second embodiment described above will be described with reference to the flowchart shown in FIG. 13.
 初めに、ステップS31において、物体情報取得部11は、カメラ4で撮影された画像に基づいて、図2に示した領域RE内を移動するロボット3、及びロボット3の周囲に存在する周囲物体2を検出する。 First, in step S31, the object information acquisition unit 11 moves in the area RE shown in FIG. 2 based on the image taken by the camera 4, and the surrounding object 2 existing around the robot 3. Is detected.
 物体情報取得部11はまた、図12に示したように、ロボット3の画像を含むバウンディングボックスB1、マーカ31の画像を含むバウンディングボックスB2を設定し、更に、バウンディングボックスB1の中心座標(X、Y)を算出する。 As shown in FIG. 12, the object information acquisition unit 11 also sets a bounding box B1 including an image of the robot 3, a bounding box B2 including an image of the marker 31, and further, center coordinates (X,) of the bounding box B1. Y) is calculated.
 ステップS32において、付帯物分類部12は、物体情報取得部11で検出された周囲物体2に含まれる付帯物及びマーカ21、ロボットに含まれる付帯物及びマーカ31を検出する。 In step S32, the accessory classification unit 12 detects the accessory and the marker 21 included in the surrounding object 2 detected by the object information acquisition unit 11, and the accessory and the marker 31 included in the robot.
 ステップS33において、付帯物データ作成部13は、ユーザによるクラスグループの設定入力を受け付ける。 In step S33, the accessory data creation unit 13 accepts the input of the class group setting by the user.
 ステップS34において、付帯物データ作成部13は、物体情報取得部11で検出された周囲物体2、及びロボット3を、付帯物に分類する。付帯物には、マーカ21、31が含まれる。 In step S34, the accessory data creation unit 13 classifies the surrounding objects 2 and the robot 3 detected by the object information acquisition unit 11 into the accessory. Ancillary items include markers 21 and 31.
 ステップS35において、付帯物データ作成部13は、作成した付帯物に関する情報をデータベース16に保存する。 In step S35, the accessory data creation unit 13 saves the information about the created accessory in the database 16.
 In step S36, the communication state measurement unit 14 measures the communication quality of the wireless communication between the communication terminal TA mounted on the robot 3 and a base station (not shown).
 In step S37, the estimation model generation unit 15 performs machine learning based on the information on the communication quality measured by the communication state measurement unit 14 and the information on the accessories stored in the database 16, and generates an estimation model of the communication state.
 In step S38, the communication state estimation unit 17 acquires the information on the surrounding objects 2 detected by the object information acquisition unit 11 and the information on the accessories classified by the accessory classification unit 12, and estimates the communication state of the communication terminal TA mounted on the robot 3 based on the estimation model generated by the estimation model generation unit 15.
 In step S39, the communication state estimation unit 17 outputs the estimated communication quality information to an external device.
 The inventors conducted an experiment to measure the estimation accuracy of the communication quality with and without the marker 31 installed on the robot 3. Specifically, in the area RE shown in FIG. 2, the mobile robot 3 was controlled via wireless communication to move from the starting point P1 to the target point P2.
 The communication terminal TA mounted on the robot 3 uses a wireless LAN (IEEE 802.11ac) as its communication standard, and the median RSSI over 0.1-second intervals was measured. In this experiment, a random forest is used as the communication state estimation model, and it learns the relationship between the bounding box information for the past one second obtained from the video and the RSSI value t seconds later.
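 The pairing of past bounding-box observations with a future RSSI value described above might be sketched as follows; the 0.1-second sampling, the flattened feature layout, and all names are assumptions for illustration, not details taken from the patent. A random forest regressor would then be fitted to the resulting (X, y) pairs.

```python
WINDOW = 10  # past 1 second of bounding-box samples at 0.1 s intervals
STEP = 0.1   # sampling interval in seconds

def make_dataset(bbox_series, rssi_series, t):
    """Pair each 1-second window of bounding-box features with the RSSI t seconds later."""
    horizon = int(round(t / STEP))  # how many samples ahead the target lies
    X, y = [], []
    for i in range(len(bbox_series) - WINDOW - horizon + 1):
        window = bbox_series[i:i + WINDOW]                   # past 1 s of bbox info
        X.append([v for sample in window for v in sample])   # flatten into one feature vector
        y.append(rssi_series[i + WINDOW + horizon - 1])      # RSSI t seconds after the window
    return X, y

# Toy example: each bbox sample is (center_x, center_y, width, height)
bboxes = [(j, j, 10, 10) for j in range(30)]
rssi = [-40.0 - 0.5 * j for j in range(30)]
X, y = make_dataset(bboxes, rssi, t=1.0)
print(len(X), len(X[0]))  # → 11 40
```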
 In this experiment, prediction horizons t of 0.1, 0.2, 1.0, 2.0, 5.0, and 10.0 seconds were evaluated. The number of decision trees in the random forest was set to 500.
 The performance of a model trained using only the information of the bounding box B1 of the robot 3 shown in FIG. 3 was compared with that of a model trained using the information of both the bounding box B1 of the robot 3 and the bounding box B2 of the marker 31 shown in FIG. 12. The results are shown in FIG. 14. For the evaluation, the R² score calculated by the following equation (1) was used.
 \[ R^2 = 1 - \frac{\sum_{i}(Y_i - \hat{Y}_i)^2}{\sum_{i}(Y_i - Y_{\mathrm{ave}})^2} \tag{1} \]
 In equation (1), Y_i is the measured RSSI value for the i-th sample, Ŷ_i is the RSSI value predicted by the random forest, and Y_ave is the average of the measured RSSI values.
 In equation (1), the closer the R² score is to 1, the higher the prediction accuracy. FIG. 14 is a graph showing the relationship between the prediction horizon and the R² score; the curve s1 shows the case where the marker 31 is not used, and the curve s2 shows the case where it is used. As shown in FIG. 14, the RSSI prediction accuracy is higher with the marker 31 at every prediction horizon.
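 For reference, the R² score of equation (1) can be computed directly from measured and predicted values. This is a minimal sketch using the standard coefficient-of-determination definition, with illustrative sample values rather than data from the experiment.

```python
def r2_score(y_true, y_pred):
    """R² score per equation (1): 1 minus residual sum of squares over total sum of squares."""
    y_ave = sum(y_true) / len(y_true)
    ss_res = sum((yi - yp) ** 2 for yi, yp in zip(y_true, y_pred))
    ss_tot = sum((yi - y_ave) ** 2 for yi in y_true)
    return 1.0 - ss_res / ss_tot

measured = [-50.0, -52.0, -48.0, -54.0]   # example RSSI measurements (dBm)
predicted = [-51.0, -51.0, -49.0, -53.0]  # example model predictions
print(r2_score(measured, predicted))  # → 0.8
```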
 [Effects of the Second Embodiment]
 As described above, in the communication quality prediction device 1a according to the second embodiment, the marker 21 is attached to the surrounding objects 2 existing in the area RE, and the marker 31 is attached to the robot 3. Attaching the markers 21 and 31 improves the accuracy of detecting the surrounding objects 2 and the robot 3. Therefore, the position information of the surrounding objects 2 and the robot 3 existing around the communication terminal TA can be acquired with higher accuracy, and the estimation accuracy of the communication quality can be improved.
 In the second embodiment, an example in which both the surrounding objects 2 and the robot 3 equipped with the communication terminal TA are classified into classes has been described; however, the present invention may be configured to classify only one of them into classes.
 In the first and second embodiments described above, an example of predicting the communication quality of the communication terminal TA mounted on the mobile robot 3 has been described; however, the present invention is not limited to this, and can also be applied, for example, to predicting the communication quality of a mobile phone carried by a user.
 As shown in FIG. 15, the communication quality prediction device 1 of the present embodiment described above can be implemented with a general-purpose computer system including, for example, a CPU (Central Processing Unit) 901, a memory 902, storage 903 (an HDD: Hard Disk Drive, or an SSD: Solid State Drive), a communication device 904, an input device 905, and an output device 906. The memory 902 and the storage 903 are storage devices. In this computer system, each function of the communication quality prediction device 1 is realized by the CPU 901 executing a predetermined program loaded into the memory 902.
 Although FIG. 15 shows an example in which the CPU 901 is used as the processor, a GPU (Graphics Processing Unit) may be used instead. Using a GPU shortens the processing time and improves real-time performance.
 The communication quality prediction device 1 may be implemented on one computer or on a plurality of computers. The communication quality prediction device 1 may also be a virtual machine implemented on a computer.
 The program for the communication quality prediction device 1 can be stored on a computer-readable recording medium such as an HDD, an SSD, a USB (Universal Serial Bus) memory, a CD (Compact Disc), or a DVD (Digital Versatile Disc), or can be distributed via a network.
 The present invention is not limited to the above embodiments, and various modifications are possible within the scope of the gist thereof.
 1, 1a Communication quality prediction device
 2 Surrounding object
 3 Robot (mobile robot)
 4 (4a, 4b) Camera (object detection sensor)
 11 Object information acquisition unit
 12 Accessory classification unit
 13 Accessory data creation unit
 14 Communication state measurement unit
 15 Estimation model generation unit
 16 Database
 17 Communication state estimation unit
 18 Marker definition unit
 21 (21a, 21b, 21c) Marker (object marker)
 31 Marker (communication terminal marker)
 100 Communication quality estimation system
 RE Area
 TA Communication terminal

Claims (7)

  1.  A communication quality prediction device comprising:
     an object information acquisition unit that acquires information on a communication terminal and on surrounding objects existing around the communication terminal;
     an accessory classification unit that extracts, from the surrounding objects acquired by the object information acquisition unit, accessories attached to the surrounding objects, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the surrounding objects;
     a database that stores information on the plurality of classes and information on the accessories of each class;
     a communication state measurement unit that acquires communication information of the communication terminal;
     an estimation model generation unit that generates an estimation model by machine-learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, based on the information on the accessories associated with the surrounding objects and on the communication information acquired by the communication state measurement unit; and
     a communication state estimation unit that, when wireless communication is performed with the communication terminal, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
  2.  The communication quality prediction device according to claim 1, wherein
     the accessory classification unit extracts accessories attached to the communication terminal, classifies the extracted accessories into the plurality of classes, and associates the accessories of each class with the communication terminal, and
     the estimation model generation unit generates the estimation model based on the information on the communication terminal and on the accessories associated with the communication terminal, and on the communication information acquired by the communication state measurement unit.
  3.  The communication quality prediction device according to claim 2, wherein
     the object information acquisition unit acquires information on a marker attached to at least one of the communication terminal and the surrounding objects, and
     the accessory classification unit extracts the marker as an accessory of the communication terminal or the surrounding objects.
  4.  The communication quality prediction device according to claim 3, wherein the marker has a preset shape or a preset color.
  5.  A communication quality prediction system comprising:
     a marker attached to at least one of a communication terminal and surrounding objects existing around the communication terminal;
     an object detection sensor provided within a predetermined area set around the communication terminal, the object detection sensor detecting the communication terminal and the surrounding objects; and
     a communication quality prediction device that estimates the communication quality of the communication terminal based on the positional relationship between the communication terminal and the surrounding objects,
     wherein the communication quality prediction device comprises:
     an object information acquisition unit that acquires information on the communication terminal and the surrounding objects detected by the object detection sensor;
     an accessory classification unit that extracts, from the communication terminal and the surrounding objects acquired by the object information acquisition unit, accessories attached to the communication terminal and the surrounding objects, classifies the extracted accessories into a plurality of predefined classes, and associates the accessories of each class with the communication terminal and the surrounding objects;
     a database that stores information on the plurality of classes and information on the accessories of each class;
     a communication state measurement unit that acquires communication information of the communication terminal;
     an estimation model generation unit that generates an estimation model by machine-learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, based on the information on the accessories associated with the communication terminal and the surrounding objects and on the communication information acquired by the communication state measurement unit; and
     a communication state estimation unit that, when wireless communication is performed with the communication terminal, estimates the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal acquired by the object information acquisition unit and the accessories associated with the surrounding objects.
  6.  A communication quality prediction method comprising the steps of:
     acquiring information on a communication terminal and on surrounding objects existing around the communication terminal;
     extracting, from the acquired surrounding objects, accessories attached to the surrounding objects, classifying the extracted accessories into a plurality of predefined classes, and associating the accessories of each class with the surrounding objects;
     storing information on the plurality of classes and information on the accessories of each class;
     acquiring communication information of the communication terminal;
     generating an estimation model by machine-learning the relationship between the positional relationship of the communication terminal and the accessories and the communication quality of the communication terminal, based on the information on the accessories associated with the surrounding objects and on the acquired communication information; and
     when wireless communication is performed with the communication terminal, estimating the communication quality of the communication terminal with reference to the estimation model, based on the positional relationship between the communication terminal and the accessories associated with the surrounding objects.
  7.  A communication quality prediction program for causing a computer to function as the communication quality prediction device according to any one of claims 1 to 4.
PCT/JP2020/043666 2020-11-24 2020-11-24 Communication quality prediction apparatus, communication quality prediction system, communication quality prediction method, and communication quality prediction program WO2022113169A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/043666 WO2022113169A1 (en) 2020-11-24 2020-11-24 Communication quality prediction apparatus, communication quality prediction system, communication quality prediction method, and communication quality prediction program
JP2022564854A JP7453587B2 (en) 2020-11-24 2020-11-24 Communication quality prediction device, communication quality prediction system, communication quality prediction method, and communication quality prediction program


Publications (1)

Publication Number Publication Date
WO2022113169A1 true WO2022113169A1 (en) 2022-06-02

Family

ID=81754105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/043666 WO2022113169A1 (en) 2020-11-24 2020-11-24 Communication quality prediction apparatus, communication quality prediction system, communication quality prediction method, and communication quality prediction program

Country Status (2)

Country Link
JP (1) JP7453587B2 (en)
WO (1) WO2022113169A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020177557A (en) * 2019-04-22 2020-10-29 キヤノン株式会社 Information processor, information processing method, and program
WO2020217458A1 (en) * 2019-04-26 2020-10-29 日本電信電話株式会社 Communication system and terminal


Also Published As

Publication number Publication date
JP7453587B2 (en) 2024-03-21
JPWO2022113169A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
CN109784424B (en) Image classification model training method, image processing method and device
US11216694B2 (en) Method and apparatus for recognizing object
CN108304758B (en) Face characteristic point tracking method and device
CN109785368B (en) Target tracking method and device
US9524562B2 (en) Object tracking method and device
KR20210078539A (en) Target detection method and apparatus, model training method and apparatus, apparatus and storage medium
US9852511B2 (en) Systems and methods for tracking and detecting a target object
KR102546491B1 (en) Method and apparatus for estimating location using access point in wireless communication system
WO2018145611A1 (en) Effective indoor localization using geo-magnetic field
JP6462528B2 (en) MOBILE BODY TRACKING DEVICE, MOBILE BODY TRACKING METHOD, AND MOBILE BODY TRACKING PROGRAM
KR20210081618A (en) Apparatus for real-time monitoring for construction object and monitoring method and and computer program for the same
CN110728650A (en) Well lid depression detection method based on intelligent terminal and related equipment
US20210174116A1 (en) Neural network system and operating method thereof
Atashi et al. Orientation-matched multiple modeling for RSSI-based indoor localization via BLE sensors
CN112926461A (en) Neural network training and driving control method and device
WO2020221121A1 (en) Video query method, device, apparatus, and storage medium
US9984296B2 (en) Misaligned tire detection method and apparatus
JP2024508359A (en) Cross-spectral feature mapping for camera calibration
WO2022113169A1 (en) Communication quality prediction apparatus, communication quality prediction system, communication quality prediction method, and communication quality prediction program
JP5258651B2 (en) Object detection apparatus, object detection method, and program
JP7299560B2 (en) Learning data generation method, training method, prediction model, computer program
US20210174079A1 (en) Method and apparatus for object recognition
Jin et al. Performance comparison of moving target recognition between Faster R-CNN and SSD
KR102293570B1 (en) Image Analysis Apparatus for Providing Search Service by Using Location Information of Object and Driving Method Thereof
JP2022148383A (en) Learning method, learning device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963434

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022564854

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963434

Country of ref document: EP

Kind code of ref document: A1