WO2022264493A1 - Information generation method, information generation device, and program - Google Patents
Information generation method, information generation device, and program
- Publication number
- WO2022264493A1 (application PCT/JP2022/005115)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- moving body
- area
- move
- movement
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/22—Command input arrangements
- G05D1/229—Command input data, e.g. waypoints
- G05D1/2297—Command input data, e.g. waypoints positional data taught by the user, e.g. paths
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/22—Command input arrangements
- G05D1/221—Remote-control arrangements
- G05D1/222—Remote-control arrangements operated by humans
- G05D1/224—Output arrangements on the remote controller, e.g. displays, haptics or speakers
- G05D1/2244—Optic
- G05D1/2247—Optic providing the operator with simple or augmented images from one or more cameras
- G05D1/2249—Optic providing the operator with simple or augmented images from one or more cameras using augmented reality
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/20—Control system inputs
- G05D1/22—Command input arrangements
- G05D1/221—Remote-control arrangements
- G05D1/222—Remote-control arrangements operated by humans
- G05D1/224—Output arrangements on the remote controller, e.g. displays, haptics or speakers
- G05D1/2244—Optic
- G05D1/2247—Optic providing the operator with simple or augmented images from one or more cameras
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/60—Intended control result
- G05D1/617—Safety or protection, e.g. defining protection zones around obstacles or avoiding hazards
- G05D1/622—Obstacle avoidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0133—Traffic data processing for classifying traffic situation
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D2101/00—Details of software or hardware architectures used for the control of position
- G05D2101/10—Details of software or hardware architectures used for the control of position using artificial intelligence [AI] techniques
- G05D2101/15—Details of software or hardware architectures used for the control of position using artificial intelligence [AI] techniques using machine learning, e.g. neural networks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D2109/00—Types of controlled vehicles
- G05D2109/10—Land vehicles
Definitions
- the present invention relates to an information generation method, an information generation device, and a program.
- the present disclosure provides an information generation method and the like that can appropriately generate learning data.
- An information generation method according to one aspect of the present disclosure is a method in an information generation device for generating machine-learning information used to estimate whether a moving body can move in a predetermined area. When the moving body moves in a first area, (1) first information obtained from at least a sensor installed on the moving body and (2) second information about the movement of the moving body are acquired; whether the moving body can move in the first area is estimated according to the second information; and fourth information for the learning model is generated in which the first information and the second information are associated with third information indicating the result of that estimation.
- the information generation method of the present disclosure can appropriately generate learning data.
- FIG. 1 is a diagram schematically showing the configuration of a learning system according to an embodiment.
- FIG. 2 is a diagram schematically showing the functional configuration of the vehicle according to the embodiment.
- FIG. 3 is a diagram schematically showing the configuration of the remote control device according to the embodiment.
- FIG. 4 is a diagram showing an example of semi-automatic remote control according to the embodiment.
- FIG. 5 is a first diagram showing an example of teacher data according to the embodiment.
- FIG. 6 is a second diagram showing an example of teacher data according to the embodiment.
- FIG. 7 is a flow chart showing processing of the learning system in the embodiment.
- A movable region (also referred to as a travelable region) and immovable regions must be added as annotation information. Attaching such annotation information requires a large human cost, because movable and immovable regions must be designated pixel by pixel while viewing the image. This human cost makes it difficult to generate the huge amount of data needed for learning. The present disclosure therefore provides an information generation device capable of generating information for machine learning to which annotation information is added automatically for a self-propelled (autonomously movable) mobile body, and an information generation method for use in that information generation device.
- An information generation method according to one aspect of the present disclosure is a method for generating information in an information generation device for generating machine-learning information used to estimate whether a moving body can move in a predetermined area, the method comprising: acquiring, when the moving body moves in a first area, (1) first information obtained from at least a sensor installed on the moving body and (2) second information about the movement of the moving body; estimating, according to the second information, whether the moving body can move in the first area; and generating fourth information for the learning model in which the first information and the second information are associated with third information indicating the result of the estimation of whether movement is possible.
- With such an information generation method, fourth information is generated in which the first information obtained from the sensor while the moving body moves within the first area and the second information about that movement are associated with third information indicating the result of estimating whether the moving body can move in the first area.
- the result of estimating whether or not the moving object can move in the first area is added as annotation information to the fourth information.
- Using the fourth information, it is possible to build a learning model that can output, from the first information and the second information obtained in a predetermined area, whether the moving body can move in that area, corresponding to the third information. Because the fourth information to which the annotation information is automatically added is generated in this way, there is no need to perform the conventional annotation work, which requires a large human cost. Learning data can therefore be generated appropriately.
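As a purely illustrative sketch (not part of the disclosure), the association of the first, second, and third information into the fourth information can be pictured as a simple record type; the field names below (sensor_data, control_data, movable) are hypothetical and chosen only for readability.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TrainingRecord:
    """Fourth information: one annotated training example (hypothetical layout).

    Associates the first information (sensor data), the second information
    (movement/control data) and the third information (the estimated
    movability result used as the annotation label).
    """
    sensor_data: Any   # first information, e.g. a camera frame or LiDAR scan
    control_data: Any  # second information, e.g. steering/accelerator commands
    movable: bool      # third information: estimation result (annotation label)

# Minimal usage: one record per traversal of the first area.
record = TrainingRecord(sensor_data={"image_id": 42},
                        control_data={"steering_deg": 3.5},
                        movable=True)
print(record)
```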
- For example, the estimation of whether the moving body can move in the first area may be based on whether the difference between a threshold and the movement data, included in the first information obtained when the moving body moves according to the second information, while moving in the first area is within a predetermined range.
- the third information can be generated based on the first information obtained when the moving body moves according to the second information.
- In this way, the fourth information in which the first information, the second information, and the third information are associated with one another can be generated, so there is no need to perform the conventional annotation work, which requires a large human cost. Learning data can therefore be generated appropriately.
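A minimal sketch of the threshold comparison described in this aspect, assuming the movement data has already been reduced to a single numeric value extracted from the first information; the function name and the specific threshold and range values are illustrative assumptions, not taken from the disclosure.

```python
def estimate_movable(movement_data: float, threshold: float, allowed_range: float) -> bool:
    """Third information: movable if the difference between the movement data
    (taken from the first information while moving per the second information)
    and the threshold stays within the predetermined range."""
    return abs(movement_data - threshold) <= allowed_range

# Example: vibration magnitude observed while traversing the first area.
print(estimate_movable(movement_data=0.8, threshold=1.0, allowed_range=0.5))  # True
print(estimate_movable(movement_data=2.4, threshold=1.0, allowed_range=0.5))  # False
```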
- the second information may be input by an operator who remotely operates the mobile object.
- In this way, the fourth information, that is, the learning data, can be appropriately generated from the first information and the second information obtained when the moving body moves according to the second information input by the operator who remotely operates it.
- For example, the difficulty level of movement of the moving body may further be estimated, and in the generation of the fourth information, the fourth information may be generated in which the first information, the second information, fifth information indicating the result of the difficulty estimation, and the third information are associated with one another.
- the mode of machine learning can be changed according to the estimation result of the movement difficulty of the moving object.
- the generation of the fourth information may be performed only when the third information and the fifth information satisfy predetermined conditions.
- In this way, the fourth information is generated only when the predetermined condition indicating that it would be appropriate learning data is satisfied, and it is not generated when that condition is not satisfied. As a result, fourth information that is unsuitable as learning data and meaningless to generate is not produced.
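For illustration, the gating of teacher-data generation on the third and fifth information might look like the following sketch; the concrete conditions (requiring a definite movability result and a difficulty below some cap) and all names are assumptions, not part of the disclosure.

```python
from typing import Optional

def maybe_generate_fourth_info(first_info: dict, second_info: dict,
                               movable: Optional[bool], difficulty: float,
                               max_difficulty: float = 0.9) -> Optional[dict]:
    """Generate the fourth information only when the third information (movable)
    and the fifth information (difficulty) satisfy predetermined conditions."""
    if movable is None or difficulty > max_difficulty:
        return None  # not suitable as learning data; skip generation
    return {"first": first_info, "second": second_info,
            "third": movable, "fifth": difficulty}

print(maybe_generate_fourth_info({"img": 1}, {"cmd": 2}, movable=True, difficulty=0.4))
print(maybe_generate_fourth_info({"img": 1}, {"cmd": 2}, movable=None, difficulty=0.4))  # None
```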
- the fourth information may include the reliability estimated based on the first information.
- In this way, the fourth information including the reliability estimated based on the first information is generated, and this reliability information can be used in machine learning.
- For example, sixth information that divides the first area into a plurality of sections by type may be acquired, and the fourth information may include a composite image in which, for each of the sections obtained by dividing the image of the first area included in the first information according to the sixth information, information indicating whether the moving body can move based on the third information is superimposed on the image of the first area.
- For example, another moving body present in the first area may be identified based on the first information, and the estimation of whether movement in the first area is possible may include at least one of (a) estimating that the first area in which the other moving body is present is movable when the other moving body satisfies a first condition, and (b) estimating that the first area in which the other moving body is present is not movable when the other moving body satisfies a second condition.
- In this way, when the other moving body moving in the first area satisfies the first condition, third information estimating that the first area in which it is present is movable can be generated, and when the other moving body satisfies the second condition, third information estimating that the first area in which it is present is not movable can be generated. Since this third information can be associated with the first information and the second information to generate the fourth information, the conventional annotation work, which requires a large human cost, is unnecessary. Learning data can therefore be generated appropriately.
- a program according to one aspect of the present disclosure is a program for causing a computer to execute the information generation method described above.
- a computer can be used to achieve the same effect as the information generation method described above.
- Further, an information generation device according to one aspect of the present disclosure is an information generation device for generating information for a learning model that estimates whether a moving body can move in a predetermined area, the device including: an acquisition unit that acquires, when the moving body moves in a first area, (1) first information obtained from at least a sensor installed on the moving body and (2) second information about the first area; an estimation unit that estimates, based on the first information, whether the moving body can move in the first area; and a generation unit that generates fourth information for the learning model in which the first information and the second information are associated with third information indicating the result of the estimation.
- These general or specific aspects may be realized by a system, a device, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or by any combination of systems, devices, integrated circuits, computer programs, and recording media.
- FIG. 1 is a diagram schematically showing the configuration of the learning system according to the embodiment.
- FIG. 2 is a diagram schematically showing the functional configuration of the vehicle according to the embodiment.
- a learning system 500 of the present embodiment is implemented by a mobile object 100, a server device 200 connected via a network 150, and a remote control device 300.
- the moving body 100 is a device capable of autonomous movement, such as an automobile, robot, drone, bicycle, or wheelchair, and is used, for example, for the purpose of delivering a package while it is placed thereon.
- the moving body 100 includes a motor or the like that serves as power, a drive unit such as wheels that are driven by the motor, and a function unit that stores a power source (for example, electric power) for operating the power.
- learning system 500 only needs to be able to generate teacher data with annotations, and a dedicated moving object for this learning may be used.
- Such a learning-only mobile body need not include a function such as carrying a load, and need not include a function for autonomous movement.
- the network 150 is a communication network for communicably connecting the mobile unit 100, the server device 200, and the remote control device 300.
- a communication network such as the Internet is used as the network 150 here, it is not limited to this.
- the connection between the mobile object 100 and the network 150, the connection between the server device 200 and the network 150, and the connection between the remote control device 300 and the network 150 may be performed by wireless communication or by wired communication.
- mobile unit 100 is preferably connected to network 150 by wireless communication.
- To connect the mobile body 100 and the network 150 by wired communication, the mobile body 100 only needs to be equipped with a storage device for accumulating various data; then, at the timing when the power source of the mobile body 100 is replenished, a wired connection with the network 150 is formed and the various data accumulated in the storage device are transmitted over the network 150.
- the moving object 100 includes a control unit 101, a control information reception unit 102, a sensor information transmission unit 103, a sensing unit 104, and a recognition unit 105, as shown in FIG.
- Each functional block that constitutes the mobile object 100 is realized using, for example, a processor and a memory. Details of each functional block will be described later.
- the server device 200 is a device for performing information processing and the like, and is implemented using, for example, a processor and memory.
- the server device 200 may be implemented by an edge computer or by a cloud computer.
- one server device 200 may be provided for one mobile object 100 , or one server device 200 may be provided for a plurality of mobile objects 100 .
- The server device 200 includes a remote control unit 201, a control information transmission unit 202, a sensor information reception unit 203, a learning data generation unit 204, a learning unit 205, and a storage device 206. Details of each functional block constituting the server device 200 will be described later.
- FIG. 3 is a diagram schematically showing the configuration of the remote control device according to the embodiment.
- Remote control device 300 is implemented by, for example, a computer.
- the remote control device 300 includes, for example, a display device 301 and a steering section 302 which is a user interface for input.
- As the user interface for input, a keyboard, a touch panel, a mouse, a dedicated controller, a foot panel, a combination of VR glasses and a controller, a sound pickup device for voice input, or the like may be used, or a combination of several of these may be used.
- a remote control device 300 is used to remotely control the mobile object 100 .
- the sensing unit 104 is connected to sensors (not shown), and acquires the result of sensing the environment of the moving object 100 (sensor information: first information) from these sensors.
- This sensor includes, for example, a camera, LiDAR, radar, sonar, microphone, GPS, vibration sensor, acceleration sensor, gyro sensor, temperature sensor, and the like.
- the sensing unit 104 transmits the acquired sensor information to the sensor information transmission unit 103 , the recognition unit 105 and the control unit 101 .
- the recognition unit 105 acquires the sensor information transmitted from the sensing unit 104, recognizes the environment of the mobile object 100, and transmits the recognition result to the sensor information transmission unit 103 and the control unit 101.
- the environment recognition performed by the recognition unit 105 includes generating information (sixth information) that divides the environment of the mobile object 100 into a plurality of categories for each type.
- the recognition unit 105 recognizes the environment necessary for the movement of the mobile body 100 . For example, the recognition unit 105 recognizes obstacles of the moving object 100, other moving objects, and movable areas.
- the recognition unit 105 also performs a process of recognizing the difficulty level by estimating the difficulty level of movement of the mobile body 100 .
- The difficulty level of movement of the moving body 100 is estimated based on, for example, the number of braking operations and direction changes required for the moving body 100 to move within the area, the magnitude and number of detections of vibrations that adversely affect the movement of the moving body, the number of other moving bodies, and the like; the greater these numbers (or magnitudes), the higher the difficulty level of movement of the moving body 100.
- Examples of areas where movement of the moving body 100 is difficult include narrow roads, such as roads where it is difficult for the moving body 100 to pass other moving bodies, roads where on-street parking of cars occurs frequently, roads scattered with objects large enough to hinder movement, and roads with a poor radio environment, such as low-quality (frequently interrupted) communication with the network 150.
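For illustration only, the difficulty level might be scored by counting the factors listed above; the weights and the per-distance normalization below are assumptions, not values from the disclosure.

```python
def estimate_difficulty(brake_count: int, turn_count: int,
                        strong_vibration_count: int, other_body_count: int,
                        distance_m: float) -> float:
    """Fifth information: a per-distance score that grows with the number of
    braking operations, direction changes, harmful vibrations and other
    moving bodies encountered (larger value = harder to move)."""
    if distance_m <= 0:
        raise ValueError("distance_m must be positive")
    raw = brake_count + turn_count + 2 * strong_vibration_count + other_body_count
    return raw / distance_m  # normalize by travelled distance

# A narrow road with frequent braking scores higher than an open road.
print(estimate_difficulty(8, 5, 3, 4, distance_m=50.0))
print(estimate_difficulty(1, 1, 0, 0, distance_m=50.0))
```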
- At least part of the environment recognition processing may be executed sequentially as soon as the sensor information is input from the sensing unit 104, or the sensor information may be buffered (accumulated) in a data storage area (not shown) and processed later (for example, after the delivery service using the mobile body 100 ends).
- The environment recognition may use a learning model trained in advance by machine learning, or may be performed based on predetermined rules. Note that this learning model may be trained using teacher data to which the annotation information generated in the present embodiment is attached.
- the sensor information transmission unit 103 transmits the sensor information transmitted from the sensing unit 104 and the recognition result transmitted from the recognition unit 105 to the sensor information reception unit 203 of the server device 200 .
- Information transmitted and received between the mobile body 100 and the server device 200 may undergo compression for the purpose of data reduction, or alteration such as encryption for the purpose of maintaining confidentiality. In this case, decompression of compressed information, decryption of encrypted information, and restoration of altered information may be performed in the functional block that received the information.
- various types of information may be transmitted and received only when the network bandwidth (communication capacity) between the mobile unit 100 and the server device 200 is equal to or greater than a predetermined value. Additional information such as an ID and time stamp may also be attached to match the information sent and received. In this manner, any form of transmission and reception of information between mobile unit 100 and server device 200 may be employed.
- the sensor information reception unit 203 transmits the sensor information and recognition results transmitted from the sensor information transmission unit 103 to the remote control unit 201 and the learning data generation unit 204.
- The remote control unit 201 determines the movement direction of the moving body 100 as necessary based on the sensor information and the recognition result transmitted from the sensor information reception unit 203, and performs remote control so that the moving body 100 moves in the determined direction. Specifically, it generates control information (second information) relating to the movement of the moving body 100 for remote control and transmits it to the control information transmission unit 202. The control information is also transmitted to the learning data generation unit 204.
- In the present embodiment, the determination of the movement direction of the moving body 100, that is, the remote control of the moving body 100, is performed by an operator. The operator therefore uses the remote control device 300 shown in FIG. 3.
- an image of the environment of the mobile object included in the sensor information is transmitted from the remote control unit 201 to the remote control device 300 and displayed on the display device 301 .
- a display device such as a smartphone or VR/AR glasses may be used instead of the display device 301 .
- The operator checks the sensor information (here, the image of the environment) displayed on the display device 301 and, upon determining that remote control is necessary, operates the movement direction of the moving body 100.
- the direction of movement is determined by the operator by judging the area in which the moving body 100 can move.
- The operator makes an input through the user interface, such as the steering unit 302, so that the moving body 100 moves in the determined movement direction, and an input signal based on that input is sent from the remote control device 300 to the remote control unit 201.
- the remote control unit 201 generates control information according to the input signal.
- the control information generated by the remote control unit 201 includes information such as the angle of the tires as the driving unit of the moving body 100, information regarding accelerator opening and opening/closing timing, and information regarding brake strength and braking timing.
- a dedicated controller, smartphone, keyboard, mouse, or the like may be used instead of the steering unit 302 .
- the determination of the moving direction of the moving body 100 and the remote control of the moving body 100 may be performed automatically or semi-automatically.
- the movement direction may be automatically determined from within the movable area of the mobile body 100 based on the recognition result, and control information may be generated so that the mobile body 100 moves in the determined movement direction.
- operator intervention is not essential, and remote control device 300 may not be provided.
- the learning system 500 can be configured without providing the remote control unit 201 in the server device 200.
- the moving direction of the moving body 100 can also be determined manually.
- For example, an administrator or a maintenance company of the mobile body 100 may move the mobile body 100 by pushing it directly by hand.
- the administrator or maintenance company of the mobile body 100 may remotely operate the mobile body 100 from the vicinity of the mobile body 100 using radio control or infrared control to move the mobile body 100 .
- a manager or a maintenance company of the mobile body 100 may operate a device such as a steering unit provided in the mobile body 100 while riding on the mobile body 100 to move the mobile body 100 .
- FIG. 4 is a diagram showing an example of semi-automatic remote control according to the embodiment.
- As shown in FIG. 4, candidates for the movement direction may be automatically generated from the sensor information and the recognition results and displayed, and the movement direction may be determined by having the operator select an appropriate candidate from among them. In this case, there is an advantage that the configuration of the user interface can easily be simplified.
- the control information transmission unit 202 transmits the control information transmitted from the remote control unit 201 to the control information reception unit 102 within the mobile object 100 .
- Control information receiving section 102 transmits the control information transmitted from control information transmitting section 202 to control section 101 .
- the control unit 101 drives the driving unit and the like to move the moving object 100 according to the control information transmitted from the control information receiving unit 102 .
- the control information generated by the intervention of a person or the functions of the relatively high-performance server device 200 as described above is information that allows the mobile object 100 to move appropriately.
- The sensor information transmitted to the remote control unit 201 includes information that the mobile body obtained directly from the surrounding environment in order to determine the moving direction of the mobile body 100. That is, by using sensor information as the input data of the teacher data and adding control information to that input data as annotation information, it is possible to automatically generate teacher data to which annotation information is added. In other words, the generation of information for machine learning, described below, is performed using at least sensor information and control information.
- the learning data generation unit 204 acquires sensor information transmitted from the sensor information reception unit 203 and control information input from the remote control unit 201 .
- the function of the learning data generation unit 204 relating to acquisition of information is an example of what implements the function of the acquisition unit.
- the learning data generating unit 204 also acquires the recognition result transmitted from the sensor information receiving unit 203.
- Then, by attaching, as annotation information for the sensor information, area information on the sensor information corresponding to the area where the moving body 100 actually moved (an area where the moving body 100 can move) and information indicating the difficulty of movement in the surrounding environment, teacher data (fourth information) for machine learning is generated.
- the function of the learning data generation unit 204 relating to the generation of teacher data is an example of what realizes the function of the generation unit.
- The learning data generation unit 204 further generates an estimation result (third information) by estimating, based on the sensor information acquired by the sensing unit 104 while the moving body 100 moved based on the control information, whether the movement based on the control information was appropriate, that is, whether it was actually possible to move in this area.
- the function of the learning data generation unit 204 related to the generation of the estimation result is an example of what realizes the function of the estimation unit.
- the learning data generation unit 204 is an example of an information generation device having functions of an acquisition unit, a generation unit, and an estimation unit.
- This estimation is performed based on, for example, the difficulty level of movement of the moving body 100, which is itself estimated from the number of braking operations and direction changes of the moving body 100 while it moves according to the control information, the magnitude and number of detections of vibrations that adversely affect the movement of the moving body, the number of other moving bodies, and the like.
- For example, a threshold for the number of braking operations, such as 1, 2, 3, 5, or 10, is set based on the moving distance of the moving body 100; if the number of braking operations actually performed is greater than this threshold, an estimation result indicating that movement in this area was not possible is generated, and if it is smaller, an estimation result indicating that movement in this area was possible is generated.
- Similarly, for the number of direction changes, the magnitude and number of detections of vibrations that adversely affect the movement of the moving body, and the number of other moving bodies, thresholds are set based on the moving distance of the moving body; if the difference from the threshold is within a predetermined range, an estimation result indicating that movement in this area was possible is generated, and otherwise an estimation result indicating that it was not possible is generated.
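One way to read the threshold logic of this embodiment is sketched below, assuming the thresholds are derived from per-metre rates scaled by the moving distance; those rates and the function name are hypothetical.

```python
def estimate_area_movable(brake_count: int, turn_count: int,
                          strong_vibration_count: int,
                          distance_m: float,
                          brakes_per_m: float = 0.05,
                          turns_per_m: float = 0.05,
                          vibrations_per_m: float = 0.02) -> bool:
    """Third information for the traversed area: movable only if every observed
    count stays at or below a threshold scaled by the moving distance."""
    thresholds = (brakes_per_m * distance_m,
                  turns_per_m * distance_m,
                  vibrations_per_m * distance_m)
    observed = (brake_count, turn_count, strong_vibration_count)
    return all(obs <= thr for obs, thr in zip(observed, thresholds))

print(estimate_area_movable(2, 1, 0, distance_m=100.0))   # True: few events over 100 m
print(estimate_area_movable(12, 9, 5, distance_m=100.0))  # False: too many events
```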
- The estimation result generated in this way is used to verify whether the moving body 100 could actually move in this area. For example, if the estimation result indicates that the moving body 100 could not move according to the control information, information indicating that movement is impossible is attached to that control information as annotation information. If only annotation information indicating that movement is possible is to be used for machine learning, teacher data carrying annotation information attached to such control information should be excluded. Also, if the learning model is a model for distinguishing (clustering) whether an area corresponds to a movable area or an immovable area, both annotation information indicating that movement is possible and annotation information indicating that movement is impossible are used. In this way, the estimation result generated above is used to determine the type of annotation information.
- the information (fifth information) regarding the difficulty level of movement described above may also be used.
- Even if a combination of sensor information, control information, and estimation result satisfies the threshold criteria and indicates that movement is possible, when movement of the moving body 100 is relatively difficult, movement may become impossible depending on other factors. In other words, combinations of sensor information, control information, and estimation results for which movement is highly difficult have low reliability. Such combinations of sensor information, control information, and estimation results therefore may not be used for machine learning.
- Alternatively, such combinations may be used for machine learning after being given a reliability that distinguishes them from other combinations of sensor information, control information, and estimation results.
- the difficulty of movement may be used for clustering of movable regions.
- an area that is movable and has a high difficulty of movement is defined as an "area that requires careful movement”
- an area that is movable and has a low difficulty of movement is defined as an "area that can be easily moved”.
- By clustering areas in this way, the area in which the moving body 100 can move can be expanded.
- Alternatively, a combination of sensor information, control information, and estimation results for which movement is highly difficult may be preferentially used for machine learning.
- This is because a sufficient amount of training data is needed to accurately determine whether movement is possible in areas corresponding to combinations of sensor information, control information, and estimation results for which movement is difficult, and for such areas the effect of the information generation method of this embodiment can be significant.
- Here, “preferentially used for machine learning” means that machine learning using teacher data of high-difficulty combinations of sensor information, control information, and estimation results is completed first, after which machine learning is performed with teacher data of the other combinations. At this time, machine learning with the other combinations of teacher data may be omitted.
- The other combinations of teacher data are teacher data for areas where the difficulty of movement is relatively low. In such areas, rule-based autonomous movement based on sensor information may be sufficient without relying on machine learning, so this omission is permissible.
- further learning may be performed using teacher data with a high degree of difficulty.
- Alternatively, a bias may be applied so that teacher data with a high difficulty level has a greater influence on the learning result. For example, when the number of teacher data items with a low difficulty level is larger than the number of items with a high difficulty level, the low-difficulty teacher data may be thinned out.
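A simple sketch of the thinning idea, assuming each teacher-data item carries its difficulty score as the fifth information; the keep ratio and field names are illustrative assumptions.

```python
import random

def thin_low_difficulty(records: list[dict], difficulty_threshold: float,
                        keep_ratio: float = 0.3, seed: int = 0) -> list[dict]:
    """Keep all high-difficulty records and only a fraction of low-difficulty
    ones, so that hard areas have a larger influence on learning."""
    rng = random.Random(seed)
    kept = []
    for rec in records:
        if rec["difficulty"] >= difficulty_threshold or rng.random() < keep_ratio:
            kept.append(rec)
    return kept

data = [{"id": i, "difficulty": d} for i, d in enumerate([0.1, 0.2, 0.9, 0.05, 0.8])]
print(thin_low_difficulty(data, difficulty_threshold=0.5))
```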
- In this way, teacher data with annotation information for machine learning is generated from combinations of sensor information and control information for which both the estimation result of whether the moving body 100 can move and the difficulty of movement of the moving body 100 satisfy predetermined conditions.
- the learning data generating unit 204 transmits the teacher data generated in this way to the storage device 206 and stores it.
- FIG. 5 is the first diagram showing an example of teacher data according to the embodiment.
- FIG. 6 is a second diagram showing an example of teacher data according to the embodiment.
- As shown in FIG. 5, the teacher data generated as described above includes annotation information marking, as a movable area (coarse dot hatching in the figure), the area in which the moving body 100 was able to move.
- The teacher data output in the present embodiment may include a composite image in which the image of the area in which the moving body 100 moves, included in the sensor information, is divided into a plurality of segments and, for each of the segments, information indicating whether the moving body 100 can move is superimposed on that image. For this purpose, information (sixth information) that divides the area in which the moving body 100 moves into a plurality of sections by type may be acquired; this information may be generated separately using, for example, the recognition results.
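The composite image described here could be produced roughly as follows, assuming NumPy is available, the sixth information is given as a per-pixel segment-ID mask, and the third information as a per-segment movability flag; the array shapes and colour convention are assumptions for illustration.

```python
import numpy as np

def make_composite(image: np.ndarray, segment_ids: np.ndarray,
                   movable_by_segment: dict[int, bool],
                   alpha: float = 0.4) -> np.ndarray:
    """Overlay green on movable segments and red on immovable ones.

    image:              H x W x 3 uint8 image (first information)
    segment_ids:        H x W int mask (sixth information)
    movable_by_segment: segment id -> movability (third information)
    """
    overlay = image.astype(np.float32).copy()
    green, red = np.array([0, 255, 0]), np.array([255, 0, 0])
    for seg_id, movable in movable_by_segment.items():
        mask = segment_ids == seg_id
        colour = green if movable else red
        overlay[mask] = (1 - alpha) * overlay[mask] + alpha * colour
    return overlay.astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
segs = np.array([[0, 0, 1, 1]] * 4)
print(make_composite(img, segs, {0: True, 1: False}).shape)  # (4, 4, 3)
```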
- As the movable area, an area having a predetermined width along the direction of movement may be used, obtained by superimposing the history of passing through the same area several times; alternatively, as shown in FIG. 6, an area obtained by applying the vehicle width of the moving body 100 to a single passage may be used as the movement area.
- The association between the area where the moving body 100 has moved and the pixels on the image may be determined from a combination of the control information, the sensor information, and additional information such as IDs and time stamps for matching the information.
- a region in which a moving body (another moving body) other than the moving body 100 moving in the surrounding environment moves may be used for adding annotation information.
- For example, a moving body that is sufficiently larger and faster than the moving body 100 is identified as a dangerous moving body, and the area in which such a dangerous moving body moves may be set as an immovable area.
- An example of such a dangerous moving body, relative to a moving body 100 that transports cargo, is an automobile.
- a vehicle or the like may be directly identified by a technique such as pattern matching.
- Conversely, a moving body that has a size comparable to that of the moving body 100 and does not move very fast is identified as a trackable moving body, and the area in which such a trackable moving body moves may be set as a movable area.
- Such trackable moving bodies include bicycles and wheelchairs for the moving body 100 for transporting goods.
- bicycles, wheelchairs, and the like may be directly identified by techniques such as pattern matching.
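A sketch of the size-and-speed heuristic described in these paragraphs; the ratios used as the first and second conditions are illustrative assumptions and not values stated in the disclosure.

```python
def classify_other_body(own_size_m: float, own_speed_mps: float,
                        other_size_m: float, other_speed_mps: float) -> str:
    """Label another moving body observed in the first information.

    'dangerous'  -> its area is treated as immovable (second condition)
    'trackable'  -> its area is treated as movable (first condition)
    'unknown'    -> no annotation derived from this body
    """
    if other_size_m > 2.0 * own_size_m and other_speed_mps > 2.0 * own_speed_mps:
        return "dangerous"   # e.g. a car relative to a small delivery robot
    if other_size_m <= 1.5 * own_size_m and other_speed_mps <= 1.5 * own_speed_mps:
        return "trackable"   # e.g. a bicycle or wheelchair
    return "unknown"

print(classify_other_body(0.8, 1.5, other_size_m=4.5, other_speed_mps=12.0))  # dangerous
print(classify_other_body(0.8, 1.5, other_size_m=1.0, other_speed_mps=1.2))   # trackable
```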
- Annotation information may also be attached to the entire sensor information (for example, a whole one-frame image), not only to a part of it (a certain area, that is, some pixels, in a one-frame image).
- In this way, it is possible to obtain a learning model that outputs a determination result such as whether movement is possible or impossible, or whether movement is easy or difficult.
- The learning unit 205 uses the teacher data to which the annotation information is added and which is stored in the storage device 206 to train the learning model by machine learning. After completing this training, the learning unit 205 outputs the trained learning model to the recognition unit.
- FIG. 7 is a flowchart showing processing of the learning system according to the embodiment.
- The learning data generation unit 204 acquires sensor information and control information (acquisition step S101). The sensor information is acquired from the moving body via the sensor information reception unit 203, and the control information is acquired from the remote control unit 201.
- the learning data generation unit 204 acquires sensor information and recognition results of the moving body 100 when moving according to the control information, and based on the sensor information and recognition results, the moving body 100 moves according to the control information. It is estimated whether or not the area is movable when it is set (estimation step S102).
- the learning data generation unit 204 generates teacher data with annotation information based on the sensor information, the control information, and the estimation result in the estimation step S102 (generation step S103).
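The three steps of the flowchart (S101 acquisition, S102 estimation, S103 generation) can be tied together as in the following sketch; all function and field names are hypothetical, and the simple brake-count check stands in for whatever estimation the embodiment actually uses.

```python
def generate_teacher_data(sensor_info: dict, control_info: dict) -> dict:
    """End-to-end sketch of steps S101-S103 for one traversal of an area."""
    # S101: acquisition step - sensor information (first) and control
    # information (second) are assumed to have been received already.
    # S102: estimation step - decide whether the traversed area was movable.
    movable = sensor_info.get("brake_count", 0) <= sensor_info.get("brake_threshold", 3)
    # S103: generation step - associate first, second and third information.
    return {"sensor": sensor_info, "control": control_info, "movable": movable}

example = generate_teacher_data(
    sensor_info={"brake_count": 1, "brake_threshold": 3, "image_id": 7},
    control_info={"steering_deg": -2.0, "accelerator": 0.3},
)
print(example["movable"])  # True
```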
- the teacher data generated in this way is given appropriate annotation information, so it can be used as it is as teacher data for machine learning.
- Since this training data is generated automatically, the human cost is suppressed, and a large amount of training data can be generated at once by using a plurality of moving bodies 100 or the like.
- each component may be configured with dedicated hardware or realized by executing a software program suitable for each component.
- Each component may be realized by reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory by a program execution unit such as a CPU or processor.
- the software that realizes the information generation device and the like of the above embodiment is the following program.
- this program is a program that causes a computer to execute an information generation method.
- the present disclosure can be used, for example, to learn a learning model that is used for determining the movement area of a self-propelled mobile body.
- REFERENCE SIGNS LIST 100 moving object 101 control unit 102 control information reception unit 103 sensor information transmission unit 104 sensing unit 105 recognition unit 150 network 200 server device 201 remote control unit 202 control information transmission unit 203 sensor information reception unit 204 learning data generation unit 205 learning unit 206 Storage Device 300 Remote Controller 301 Display Device 302 Steering Unit 500 Learning System
Abstract
Description
The inventors of the present disclosure found that the following problems arise with the technology, described in the "Background Art" section, for automatically controlling (autonomously operating) robots and the like using machine learning.
First, the configuration of the learning system in the present embodiment will be described.
101 control unit
102 control information reception unit
103 sensor information transmission unit
104 sensing unit
105 recognition unit
150 network
200 server device
201 remote control unit
202 control information transmission unit
203 sensor information reception unit
204 learning data generation unit
205 learning unit
206 storage device
300 remote control device
301 display device
302 steering unit
500 learning system
Claims (10)
- 1. An information generation method in an information generation device for generating information for machine learning that estimates whether a moving body can move in a predetermined area, the method comprising: acquiring, when the moving body moves in a first area, (1) first information obtained from at least a sensor installed on the moving body and (2) second information about the movement of the moving body; estimating, according to the second information, whether the moving body can move in the first area; and generating fourth information for the learning model in which the first information and the second information are associated with third information indicating a result of the estimation of whether movement is possible.
- 2. The information generation method according to claim 1, wherein the estimation of whether the moving body can move in the first area is based on whether a difference between a threshold and movement data, included in the first information obtained when the moving body moves according to the second information, while moving in the first area is within a predetermined range.
- 3. The information generation method according to claim 2, wherein the second information is input by an operator who remotely operates the moving body.
- 4. The information generation method according to any one of claims 1 to 3, further comprising estimating a difficulty level of movement of the moving body, wherein, in the generation of the fourth information, the fourth information is generated in which the first information, the second information, fifth information indicating a result of the estimation of the difficulty level, and the third information are associated with one another.
- 5. The information generation method according to claim 4, wherein the generation of the fourth information is executed only when the third information and the fifth information satisfy predetermined conditions.
- 6. The information generation method according to any one of claims 1 to 5, wherein the fourth information includes a reliability estimated based on the first information.
- 7. The information generation method according to any one of claims 1 to 6, further comprising acquiring sixth information that divides the first area into a plurality of sections by type, wherein the fourth information includes a composite image in which, for each of the plurality of sections obtained by dividing an image of the first area included in the first information according to the sixth information, information indicating whether the moving body can move based on the third information is superimposed on the image of the first area.
- 8. The information generation method according to any one of claims 1 to 7, further comprising identifying, based on the first information, another moving body present in the first area, wherein the estimation of whether movement in the first area is possible includes at least one of: (a) estimating that the first area in which the other moving body is present is movable when the other moving body satisfies a first condition; and (b) estimating that the first area in which the other moving body is present is not movable when the other moving body satisfies a second condition.
- 9. A program for causing a computer to execute the information generation method according to any one of claims 1 to 8.
- 10. An information generation device for generating information for a learning model that estimates whether a moving body can move in a predetermined area, the device comprising: an acquisition unit that acquires, when the moving body moves in a first area, (1) first information obtained from at least a sensor installed on the moving body and (2) second information about the first area; an estimation unit that estimates, based on the first information, whether the moving body can move in the first area; and a generation unit that generates fourth information for the learning model in which the first information and the second information are associated with third information indicating a result of the estimation of whether movement is possible.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023529482A JPWO2022264493A1 (ja) | 2021-06-15 | 2022-02-09 | |
EP22824494.3A EP4357872A1 (en) | 2021-06-15 | 2022-02-09 | Information-generating method, information-generating device, and program |
CN202280040217.8A CN117529693A (zh) | 2021-06-15 | 2022-02-09 | 信息生成方法、信息生成装置以及程序 |
US18/530,534 US20240103541A1 (en) | 2021-06-15 | 2023-12-06 | Information generation method, information generation device, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-099525 | 2021-06-15 | ||
JP2021099525 | 2021-06-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/530,534 Continuation US20240103541A1 (en) | 2021-06-15 | 2023-12-06 | Information generation method, information generation device, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022264493A1 (ja) | 2022-12-22 |
Family
ID=84526979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/005115 WO2022264493A1 (ja) | 2021-06-15 | 2022-02-09 | 情報生成方法、情報生成装置及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240103541A1 (ja) |
EP (1) | EP4357872A1 (ja) |
JP (1) | JPWO2022264493A1 (ja) |
CN (1) | CN117529693A (ja) |
WO (1) | WO2022264493A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001067125A (ja) * | 1999-08-27 | 2001-03-16 | Fujitsu Ltd | Method and device for constructing a real-world information database, and learning method for an autonomous mobile traveling body |
JP2006185438A (ja) * | 2004-12-03 | 2006-07-13 | Matsushita Electric Ind Co Ltd | Robot control device |
JP2018149669A (ja) | 2017-03-14 | 2018-09-27 | Omron Corporation | Learning device and learning method |
JP6815571B1 (ja) * | 2020-02-27 | 2021-01-20 | Mitsubishi Electric Corporation | Robot control device, robot control method, and learning model generation device |
-
2022
- 2022-02-09 JP JP2023529482A patent/JPWO2022264493A1/ja active Pending
- 2022-02-09 WO PCT/JP2022/005115 patent/WO2022264493A1/ja active Application Filing
- 2022-02-09 EP EP22824494.3A patent/EP4357872A1/en active Pending
- 2022-02-09 CN CN202280040217.8A patent/CN117529693A/zh active Pending
-
2023
- 2023-12-06 US US18/530,534 patent/US20240103541A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001067125A (ja) * | 1999-08-27 | 2001-03-16 | Fujitsu Ltd | Method and device for constructing a real-world information database, and learning method for an autonomous mobile traveling body |
JP2006185438A (ja) * | 2004-12-03 | 2006-07-13 | Matsushita Electric Ind Co Ltd | Robot control device |
JP2018149669A (ja) | 2017-03-14 | 2018-09-27 | Omron Corporation | Learning device and learning method |
JP6815571B1 (ja) * | 2020-02-27 | 2021-01-20 | Mitsubishi Electric Corporation | Robot control device, robot control method, and learning model generation device |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022264493A1 (ja) | 2022-12-22 |
EP4357872A1 (en) | 2024-04-24 |
US20240103541A1 (en) | 2024-03-28 |
CN117529693A (zh) | 2024-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110214107B (zh) | Autonomous vehicle that provides driver education | |
US11061406B2 (en) | Object action classification for autonomous vehicles | |
US10915101B2 (en) | Context-dependent alertness monitor in an autonomous vehicle | |
JP6962926B2 (ja) | Remote operation system and method for trajectory correction of autonomous vehicles | |
JP6605642B2 (ja) | Vehicle and system for managing and controlling the vehicle | |
US11157001B2 (en) | Device and method for assisting with driving of vehicle | |
KR102106875B1 (ko) | System and method for accident avoidance during autonomous driving based on vehicle learning | |
CN112166304A (zh) | Error detection of sensor data | |
CN110796692A (zh) | End-to-end deep generative model for simultaneous localization and mapping | |
WO2019047596A1 (zh) | Method and device for driving mode switching | |
JPWO2017168883A1 (ja) | Information processing device, information processing method, program, and system | |
US11508158B2 (en) | Electronic device and method for vehicle driving assistance | |
US20220261590A1 (en) | Apparatus, system and method for fusing sensor data to do sensor translation | |
US11270689B2 (en) | Detection of anomalies in the interior of an autonomous vehicle | |
US11847562B2 (en) | Obstacle recognition assistance device, obstacle recognition assistance method, and storage medium | |
KR102659059B1 (ko) | In-vehicle avatar processing device and method for controlling the same | |
US11904853B2 (en) | Apparatus for preventing vehicle collision and method thereof | |
US20230046289A1 (en) | Automatic labeling of objects in sensor data | |
US20230343108A1 (en) | Systems and methods for detecting projection attacks on object identification systems | |
WO2022264493A1 (ja) | Information generation method, information generation device, and program | |
US11704827B2 (en) | Electronic apparatus and method for assisting with driving of vehicle | |
EP3786854A1 (en) | Methods and systems for determining driving behavior | |
US20220284746A1 (en) | Collecting sensor data of vehicles | |
US11881065B2 (en) | Information recording device, information recording method, and program for recording information | |
US20240062550A1 (en) | Method for Providing a Neural Network for Directly Validating an Environment Map in a Vehicle by Means of Sensor Data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22824494 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280040217.8 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023529482 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022824494 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022824494 Country of ref document: EP Effective date: 20240115 |