WO2021075112A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2021075112A1
Authority
WO
WIPO (PCT)
Prior art keywords
animal body
unit
information
map information
information processing
Prior art date
Application number
PCT/JP2020/027813
Other languages
French (fr)
Japanese (ja)
Inventor
康平 小島
佐藤 直之
邦在 鳥居
祐介 工藤
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2021075112A1 publication Critical patent/WO2021075112A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00Maps; Plans; Charts; Diagrams, e.g. route diagram

Definitions

  • This technology relates to an information processing device, an information processing method, and a program, and makes it possible to quickly generate map information from which animal bodies (moving objects) are excluded.
  • Conventionally, self-position estimation and map information generation have been performed using distance measurement sensors and image sensors, and the generated map information has been updated.
  • For example, in Patent Document 1, an object is extracted and tracked from a time-series image, an optical flow is calculated, an animal body is discriminated based on the calculated optical flow, and the discriminated animal body is removed to generate the map information.
  • In Patent Document 2, the number of measurements is recorded for each point in the map information indicating the driving environment, and the position data of points whose number of measurements is equal to or less than a threshold value is deleted, since such points are highly likely to be due to a measurement error or to an animal body, and the map information is thereby updated.
  • However, in Patent Document 1 and Patent Document 2, a time-series image or a plurality of measurement results are required to update the map information. Therefore, if images or measurement results covering the required period cannot be obtained, the animal body cannot be deleted from the map information, and map information from which the animal body has been deleted cannot be generated promptly.
  • Moreover, these methods, which deal with changes in the environment, update the map information based on the self-position estimation result and the measurement result obtained at each measurement; when the spatial structure changes significantly, self-position estimation becomes difficult and the animal body cannot be deleted.
  • the purpose of this technology is to provide an information processing device, an information processing method, and a program that can quickly generate map information excluding animal bodies.
  • The first aspect of this technology is an information processing device including: an extraction unit that extracts the characteristic shape of an object present in sensing data, or area information indicating the spatial area in which the object is located in map information;
  • a determination unit that determines whether the object is an animal body based on the extraction result of the extraction unit; and
  • a map information processing unit that deletes the object from the map information when the determination unit determines that the object is an animal body.
  • the extraction unit extracts the characteristic shape of the object existing in the sensing data acquired by using the external sensor, or the area information indicating the spatial area in which the object is located in the map information.
  • the extraction unit extracts the feature shape of the object using, for example, a feature shape recognition model generated in advance, and sets an animal body score indicating the animal body likeness to the object based on the extraction result.
  • the extraction unit extracts animal body candidates based on the animal body score.
  • Further, the extraction unit recognizes the spatial region indicated by the sensing data using the region recognition model generated in advance, and sets, for each recognized region, an animal body existence score indicating how easily an animal body can exist there.
  • the determination unit determines, for example, whether the animal body candidate is an animal body based on the extraction result by the extraction unit.
  • the determination unit determines whether or not the animal body candidate is an animal body, for example, based on at least one of the animal body score of the animal body candidate and the animal body presence score of the region where the animal body candidate is located.
  • the map information processing unit deletes the object from the map information when the determination unit determines that the object is an animal body. Further, the map information processing unit adds the area label information regarding the recognized area to the map information based on the area recognition result of the extraction unit.
  • the map information is generated by the map information generation unit based on the sensing data acquired by the external sensor.
  • The star reckoning unit detects the self-position using the sensing data and the map information from which the animal body has been deleted by the map information processing unit. Further, a weighting processing unit for weighting the sensing data is provided, and the star reckoning unit detects the self-position using the sensing data weighted by the weighting processing unit. The weighting processing unit weights the sensing data indicating the animal body candidates extracted by the extraction unit. Further, the map information may be provided with area label information indicating, for each area, an animal body existence score representing how easily an animal body can exist there, and the weighting processing unit may weight the sensing data based on the area label information.
  • The weighting processing unit weights the sensing data according to at least one of the animal body presence score and the animal body score, set for the animal body candidate based on the feature shape extraction result of the extraction unit, indicating how animal-body-like the candidate is; the weight is reduced as an animal body becomes more likely to exist there or as the candidate becomes more animal-body-like.
  • Further, a dead reckoning unit that detects the self-position based on sensing data acquired by the internal sensor, and a self-position integration unit that integrates the self-position detected by the dead reckoning unit and the self-position detected by the star reckoning unit may be provided.
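  • As a rough structural sketch of the relationship among the extraction unit, the determination unit, and the map information processing unit described above, the following Python fragment shows one possible arrangement; all class names, fields, and parameter values are hypothetical and are not taken from this publication.

```python
# Hypothetical structural sketch of the first aspect (not the publication's code).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ObjectCandidate:
    object_id: int
    animal_body_score: float        # SL, set from extracted feature shapes
    presence_score: float           # SR, taken from the region the object lies in
    cells: List[Tuple[int, int]]    # grid cells the object occupies in the map


class ExtractionUnit:
    def extract(self, sensing_data, map_info) -> List[ObjectCandidate]:
        # Extract feature shapes / area information and score each object.
        raise NotImplementedError


class DeterminationUnit:
    def __init__(self, alpha: float, beta: float, threshold: float):
        self.alpha, self.beta, self.threshold = alpha, beta, threshold

    def is_animal_body(self, cand: ObjectCandidate) -> bool:
        # Weighted combination of the two scores (see equation (1) later in the text).
        ve = self.alpha * cand.presence_score + self.beta * cand.animal_body_score
        return ve > self.threshold


class MapInformationProcessingUnit:
    def delete_object(self, occupancy_grid, cand: ObjectCandidate) -> None:
        # Clear the grid cells occupied by an object judged to be an animal body.
        for iy, ix in cand.cells:
            occupancy_grid[iy, ix] = 0
```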
  • The second aspect of this technology is an information processing method in which the extraction unit extracts the characteristic shape of an object present in sensing data, or area information indicating the area where the object is located in map information; the determination unit determines whether the object is an animal body based on the extraction result of the extraction unit; and the map information processing unit deletes the object from the map information when the determination unit determines that the object is an animal body.
  • The third aspect of this technology is a program that causes a computer to generate map information, the program causing the computer to execute: a procedure for extracting the characteristic shape of an object present in sensing data, or area information indicating the area where the object is located in map information; a procedure for determining whether the object is an animal body based on the extraction result of the feature shape or the area information; and a procedure for deleting the object determined to be an animal body from the map information.
  • The program of the present technology is, for example, a program that can be provided in a computer-readable format to a general-purpose computer capable of executing various program codes, via a storage medium such as an optical disk, a magnetic disk, or a semiconductor memory, or via a communication medium such as a network. By providing such a program in a computer-readable format, processing according to the program can be realized on the computer.
  • FIG. 1 illustrates the configuration of an information processing system using the information processing device of the present technology.
  • the information processing system 10 has a learning block 20, a map generation block 30, and a map utilization block 40.
  • The learning block 20 performs learning using a learning image data group and a feature shape table showing which region in each image corresponds to what kind of feature shape, and generates a feature shape recognition model. Further, the learning block 20 performs learning using a learning map data group and a region label table showing spatial regions in the map that differ in the likelihood that an animal body exists there, and generates a region recognition model.
  • Using the models generated by the learning block 20, the map generation block 30 extracts the characteristic shape of an object present in the sensing data acquired by the internal sensor or the external sensor, or the area information relating to the spatial region in which the object is located in the map information, and determines whether the object is an animal body based on the extraction result. Further, when the object is determined to be an animal body, the map generation block 30 deletes the animal body from the map information generated based on the sensing data or from map information generated in advance.
  • the map utilization block 40 estimates its own position based on the map generated by the map generation block 30 and the sensing data acquired by the outside world sensor or the inside world sensor and the outside world sensor.
  • the learning block 20, the map generation block 30, and the map utilization block 40 may be provided independently, or a plurality of blocks may be provided integrally.
  • For example, a map generation block 30 and a map utilization block 40 are provided on a moving body such as a robot or a vehicle, and the moving body generates map information from which animal bodies have been deleted based on the sensing data, the feature shape recognition model, and the region recognition model, or updates existing map information to map information from which animal bodies have been deleted.
  • the moving body performs self-position estimation based on the sensing data and map information in which the animal body is deleted, and moves according to the action plan based on the self-position estimation result.
  • map information generated by the map generation block 30 of the moving body may be subsequently used in the map utilization block 40 of the same moving body or another moving body to perform self-position estimation.
  • Alternatively, a learning block 20 and a map generation block 30 may be provided on a server or the like to generate map information, and the generated map information may be used by the map utilization block 40 of a moving body acting as a client to perform self-position estimation and the like.
  • FIG. 2 illustrates the configuration of the learning block.
  • the learning block 20 has a data storage unit 21, a feature shape learner 22, and a region learner 23.
  • The data storage unit 21 stores, for example, a learning image data group such as color images, or color images and depth images, and a feature shape table indicating which region in each image corresponds to what kind of feature shape. Further, the data storage unit 21 stores, for example, a learning map data group of movement routes of a robot or the like, and an area label table indicating which spatial areas in the map, for example a passage, a room, a doorway, and the like, differ in the likelihood that an animal body exists there.
  • The feature shape learner 22 performs learning using the learning image data group and the feature shape table stored in the data storage unit 21, and generates a feature shape recognition model for recognizing the feature shape of an object present in sensing data. Further, the area learner 23 performs learning using the learning map data group and the area label table stored in the data storage unit 21, and generates a region recognition model for recognizing regions of the spatial area indicated by the map information (map data) that differ in the likelihood that an animal body exists there.
  • FIG. 3 is a flowchart illustrating the operation of the learning block.
  • In step ST1, the learning block reads out the data groups and the tables.
  • the learning block 20 reads out the learning image data group and the feature shape table, and the learning map data group and the area label table stored in the data storage unit 21, and proceeds to step ST2.
  • In step ST2, the learning block generates a feature shape recognition model.
  • the learning block 20 performs learning with the feature shape learner 22 using the learning image data group and the feature shape table, and generates a feature shape recognition model.
  • FIG. 4 illustrates the feature shape to be recognized.
  • wheels are used in chairs, desks, luggage carriers, movable whiteboards, etc. in offices. Wheels are used for chairs, desks, TV stands, side tables, etc. in the home environment. Further, the wheels are used outdoors for cars, bicycles, strollers and the like.
  • the handle is used for drawers and sliding doors in the office.
  • handles are used for drawers, sliding doors, buckets, etc. in the home environment.
  • the handle is used outdoors for a push cart, a front door, and the like.
  • Rails are used for sliding doors, movable bookshelves, etc. in the office.
  • rails are used for sliding doors and the like in a home environment. Further, rails are used outdoors for gates, sliding doors, and the like.
  • the learning block 20 generates a feature shape recognition model for recognizing feature shapes related to the animal body, for example, wheels, handles, rails, etc. shown in FIG. 4, and proceeds to step ST3.
  • In step ST3, the learning block generates an area recognition model.
  • The learning block 20 performs learning with the area learner 23 using the learning map data group and the area label table, and generates a region recognition model for recognizing areas such as a passage, a room, a doorway, and the like, which differ in the likelihood that an animal body exists there.
  • Either step ST2 or step ST3 may be performed first, or they may be processed in parallel.
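  • The publication does not specify how the feature shape learner 22 is implemented; the following sketch merely illustrates one way the learning image data group and the feature shape table could be turned into a feature shape recognition model, here using a generic scikit-learn classifier over cropped image patches. The function names, the patch size, and the label set are assumptions.

```python
# Hypothetical sketch of the feature shape learner (step ST2).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURE_SHAPE_CLASSES = ["wheel", "handle", "rail", "none"]  # assumed label set


def build_training_set(images, feature_shape_table):
    """feature_shape_table: iterable of (image_index, region_slice, class_name)."""
    patches, labels = [], []
    for image_index, region, class_name in feature_shape_table:
        patch = images[image_index][region]                   # crop the labelled region
        patches.append(np.resize(patch, (32, 32)).ravel())    # crude fixed-size descriptor
        labels.append(FEATURE_SHAPE_CLASSES.index(class_name))
    return np.array(patches), np.array(labels)


def train_feature_shape_model(images, feature_shape_table):
    X, y = build_training_set(images, feature_shape_table)
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X, y)
    return model  # later used by the feature shape extraction units 36 and 43
```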
  • FIG. 5 illustrates the configuration of the map generation block.
  • The map generation block 30 includes a sensor unit 31, a self-position estimation unit 32, a map generation unit 34, a region extraction unit 35, a feature shape extraction unit 36, an animal body candidate extraction unit 37, a determination unit 38, and a map information processing unit 39. Further, the map generation block 30 may have an animal body deletion filter 33, as described later.
  • the region extraction unit 35, the feature shape extraction unit 36, and the animal body candidate extraction unit 37 correspond to the extraction unit in the claims.
  • the sensor unit 31 has an inner world sensor 311 and an outer world sensor 312.
  • the internal sensor 311 acquires information about the mobile body itself on which the information processing device is provided (for example, information indicating the position and posture of the moving body and its change).
  • the internal world sensor 311 outputs the generated sensing data (also referred to as “internal world sensing data”) to the self-position estimation unit 32.
  • the external sensor 312 acquires information on the surrounding environment of the moving body in which the information processing device is provided (for example, information on surrounding objects and the like).
  • As the external sensor 312, for example, a distance measuring sensor 312a (LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), TOF (Time Of Flight), stereo camera, etc.) and an image sensor 312b for acquiring a captured image are used.
  • The distance measuring sensor 312a of the external sensor 312 generates sensing data (also referred to as "distance measurement data") indicating the distance to surrounding objects, and outputs it to the self-position estimation unit 32, the map generation unit 34, and the feature shape extraction unit 36.
  • The image sensor 312b of the external sensor 312 generates sensing data (also referred to as "captured image data") indicating a captured image of the peripheral region and outputs it to the feature shape extraction unit 36.
  • The self-position estimation unit 32 estimates the self-position based on the internal sensing data generated by the internal sensor 311 of the sensor unit 31, or based on the internal sensing data and the distance measurement data generated by the external sensor 312, and outputs self-position information indicating the estimated self-position to the determination unit 38 and the map information processing unit 39.
  • the map generation unit 34 generates map information based on the distance measurement data generated by the outside world sensor 312 of the sensor unit 31.
  • The map generation unit 34 generates an occupied grid map by, for example, dividing a two-dimensional plane into a grid and assigning the ranging points indicated by the distance measurement data to the corresponding grid cells.
  • the map generation unit 34 outputs the generated map information to the map information processing unit 39.
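  • A minimal sketch of the occupied grid map generation performed by the map generation unit 34 is shown below: ranging points are assigned to the corresponding grid cells of a two-dimensional plane divided into a grid. The cell size, map extent, and function name are assumptions.

```python
import numpy as np


def build_occupancy_grid(ranging_points, cell_size=0.05, extent=20.0):
    """Assign 2-D ranging points (metres, map coordinates) to grid cells.

    A cell that receives at least one ranging point is marked occupied (1);
    all other cells remain free/unknown (0).
    """
    n = int(2 * extent / cell_size)
    grid = np.zeros((n, n), dtype=np.uint8)
    for x, y in ranging_points:
        ix = int((x + extent) / cell_size)
        iy = int((y + extent) / cell_size)
        if 0 <= ix < n and 0 <= iy < n:
            grid[iy, ix] = 1
    return grid
```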
  • The region extraction unit 35 uses the region recognition model generated by the learning block 20 to perform region recognition on the map information generated by the map generation unit 34, and extracts regions that differ in the likelihood that an animal body exists there. Further, the region extraction unit 35 assigns to each such region area label information including area identification information and an animal body existence score corresponding to that likelihood. For example, the region extraction unit 35 performs region recognition using the region recognition model, determines which regions in the map information generated by the map generation unit 34 indicate a passage, a room, a doorway, and the like, which differ in the likelihood that an animal body exists there, and sets area identification information indicating which kind of area each region is.
  • the region extraction unit 35 raises the animal presence score in the region corresponding to the passage and lowers the animal presence score in the region corresponding to the wall in the room.
  • the animal presence score in the area corresponding to the doorway or the like is lower than the animal presence score in the area indicating the passage and higher than the animal presence score in the area indicating the wall of the room.
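  • The area label information can be pictured as a per-region record combining the area identification information and an animal body presence score. In the sketch below the only constraint taken from the text is the ordering passage > doorway > room (wall); the concrete score values are assumptions.

```python
# Assumed animal body presence scores per recognised region type.
ANIMAL_BODY_PRESENCE_SCORE = {
    "passage": 0.9,   # animal bodies pass through frequently
    "doorway": 0.6,   # intermediate likelihood
    "room":    0.2,   # walls and fixed furniture, low likelihood
}


def make_region_label(region_id, region_type, cells):
    """Region label information: identification info + presence score + member cells."""
    return {
        "region_id": region_id,                                     # e.g. "LB1"
        "region_type": region_type,                                 # "passage" / "doorway" / "room"
        "presence_score": ANIMAL_BODY_PRESENCE_SCORE[region_type],  # SR
        "cells": cells,                                             # grid cells belonging to the region
    }
```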
  • The feature shape extraction unit 36 performs feature shape extraction using the feature shape recognition model generated by the learning block 20, based on the captured image indicated by the captured image data output from the external sensor 312, or on a depth image obtained from the captured image and the distance measurement data. For example, the feature shape extraction unit 36 extracts feature shapes such as wheels, handles, and rails from the captured image or the depth image using the feature shape recognition model. The feature shape extraction unit 36 generates a feature shape list showing the extracted feature shapes and outputs it to the animal body candidate extraction unit 37.
  • the animal body candidate extraction unit 37 extracts an object having the feature shape shown in the feature shape list as an animal body candidate based on the feature shape list output from the feature shape extraction unit 36.
  • The animal body candidate extraction unit 37 may, for example, segment objects based on the sensing data and extract an object including a feature shape as an animal body candidate, or may extract animal body candidates based on the distance to the feature shape and the continuity of distance or shape from the feature shape.
  • the animal body candidate extraction unit 37 generates the animal body candidate identification information indicating the extracted animal body candidate and the feature shape label information including the animal body score set based on the feature shape of the animal body candidate.
  • the animal body candidate identification information includes label information and object position information for individually identifying animal body candidates.
  • The animal body score is calculated using a score set in advance for each feature shape. For example, the score for a wheel is set high because an object provided with wheels is often an animal body. Since a handle may be provided not only on animal bodies but also on other objects, the score for a handle is made smaller than the score for a wheel. Likewise, the score for any other feature shape is increased when that shape is often used on animal bodies.
  • the animal body candidate extraction unit 37 sets the cumulative value of the score of the feature shape in the animal body candidate as the animal body score indicating the animal body-likeness. In addition, the animal body candidate extraction unit 37 may extract animal body candidates using the animal body score. For example, the animal body candidate extraction unit 37 extracts an object whose animal body score is higher than a preset threshold value as an animal body candidate. The animal body candidate extraction unit 37 outputs the generated feature shape label information to the determination unit 38.
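  • The per-feature-shape scores and their accumulation into an animal body score SL could look like the following sketch. Only the relative ordering (a wheel scores higher than a handle) is stated in the text; the numeric values and the extraction threshold are assumptions.

```python
# Illustrative per-feature-shape scores and candidate extraction threshold.
FEATURE_SHAPE_SCORE = {"wheel": 0.6, "handle": 0.3, "rail": 0.4}
ANIMAL_BODY_CANDIDATE_THRESHOLD = 0.5


def animal_body_score(feature_shapes):
    """Cumulative score SL of the feature shapes found on one object."""
    return sum(FEATURE_SHAPE_SCORE.get(shape, 0.0) for shape in feature_shapes)


def extract_animal_body_candidates(objects):
    """objects: dict of object_id -> list of feature shape names found on the object."""
    candidates = {}
    for object_id, shapes in objects.items():
        score = animal_body_score(shapes)
        if score > ANIMAL_BODY_CANDIDATE_THRESHOLD:
            candidates[object_id] = score  # animal body score SL of the candidate
    return candidates
```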
  • the determination unit 38 determines whether the object is an animal body based on the extraction results of the region extraction unit 35, the feature shape extraction unit 36, and the animal body candidate extraction unit 37.
  • The determination unit 38 associates, using the self-position estimated by the self-position estimation unit 32 as a reference, the regions indicated by the area label information generated by the region extraction unit 35 with the positions of the animal body candidates indicated by the feature shape label information generated by the animal body candidate extraction unit 37. Further, the determination unit 38 determines whether each animal body candidate is an animal body based on the animal body score of the candidate and the animal body existence score of the region where the candidate is located, and outputs the determination result to the map information processing unit 39.
  • The map information processing unit 39 adds the area label information generated by the region extraction unit 35 to the map information generated by the map generation unit 34, and stores it as map information referenced to the self-position estimated by the self-position estimation unit 32. Further, the map information processing unit 39 updates the stored map information using newly generated map information with area label information. Therefore, by repeating the movement of the sensor unit 31 and the generation of map information, the spatial area covered by the map information stored in the map information processing unit 39 can be expanded. Further, the map information processing unit 39 deletes from the map information the animal body candidates determined to be deletion targets by the determination unit 38. The map information to which the area label information has been added is used in the map utilization block 40 as prior map information.
  • FIG. 6 is a flowchart illustrating the operation of the map generation block.
  • In step ST11, the map generation block acquires sensing data.
  • the map generation block 30 acquires the sensing data generated by the inner world sensor 311 and the outer world sensor 312 of the sensor unit 31 and proceeds to step ST12 and step ST14.
  • In step ST12, the map generation block performs feature shape extraction processing.
  • The map generation block 30 extracts feature shapes using the feature shape recognition model generated in the learning block 20, based on the sensing data (for example, the distance measurement data and the captured image data) acquired in step ST11, generates a feature shape list indicating, for example, the type and detection position of each extracted feature shape, and proceeds to step ST13.
  • In step ST13, the map generation block performs animal body candidate extraction processing.
  • the map generation block 30 detects an object that moves based on the feature shape extracted in step ST12, and sets the detected object as an animal body candidate. Further, the map generation block 30 generates the feature shape label information including the animal body candidate identification information and the animal body score, and proceeds to step ST16.
  • FIG. 7 is a diagram for explaining the extraction of animal body candidates.
  • FIG. 7A exemplifies an object included in the sensing range of the external sensor, and it is assumed that, for example, a luggage carrier OB1, a chair OB2, and a box OB3 are included.
  • In FIG. 7B, the wheel FS1 and the handles FS2a and FS2b are extracted by the feature shape extraction. Therefore, as shown in FIG. 7C, the luggage carrier OB1, indicated by the object region that is at substantially the same distance as the wheel FS1 and the handle FS2a and is continuous with them, is set as an animal body candidate.
  • the box OB3 indicated by the object region continuous with the handle FS2b at a distance substantially equal to that of the handle FS2b is set as an animal body candidate.
  • the scores for each of the extracted feature shapes are accumulated and the cumulative value is set as the animal body score SL.
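  • One way to associate each extracted feature shape with the object region that is at substantially the same distance, as in FIG. 7, is sketched below; the depth tolerance and the representation of objects and shapes are assumptions.

```python
def assign_feature_shapes_to_objects(feature_shapes, object_regions, depth_tolerance=0.15):
    """Associate each feature shape with the object region continuous with it in depth.

    feature_shapes: list of (shape_name, depth_metres), e.g. [("wheel", 2.1), ("handle", 2.2)]
    object_regions: list of (object_id, mean_depth_metres), e.g. [("OB1", 2.15), ("OB3", 4.0)]
    Returns a dict object_id -> list of shape names assigned to that object.
    """
    assignment = {object_id: [] for object_id, _ in object_regions}
    for shape_name, shape_depth in feature_shapes:
        # Pick the object region closest in depth, within the tolerance.
        object_id, object_depth = min(object_regions, key=lambda r: abs(r[1] - shape_depth))
        if abs(object_depth - shape_depth) <= depth_tolerance:
            assignment[object_id].append(shape_name)
    return assignment
```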
  • In step ST14, the map generation block performs map generation processing.
  • the map generation block 30 generates map information such as an occupied grid map based on the sensing data (for example, depth image) acquired in step ST11, and proceeds to step ST15.
  • In step ST15, the map generation block performs area recognition processing.
  • The map generation block 30 performs area recognition on the map information generated in step ST14 using the area recognition model generated in the learning block 20, generates, based on the area recognition result, area label information including the area identification information and the animal body presence score, and proceeds to step ST16.
  • FIG. 8 illustrates map information.
  • FIG. 8A exemplifies the map information generated by the map generation process
  • FIG. 8B exemplifies the map information to which the area label information is added.
  • FIG. 8B illustrates a case where the area identification information LB1 is assigned to the area indicating the passage, the area identification information LB2 is provided to the area indicating the doorway, and the area identification information LB3 is provided to the area indicating the room.
  • the animal presence score SR1 in the area showing the passage, the animal presence score SR2 in the area showing the doorway, and the animal presence score SR3 in the area showing the room are set to "SR1> SR2> SR3".
  • In step ST16, the map generation block performs the animal body deletion process.
  • Based on the feature shape label information generated in step ST13 and the area label information generated in step ST15, the map generation block 30 determines, for each animal body candidate, whether it is to be deleted, using the animal body score SL of the candidate and the animal body existence score SR of the area where the candidate is located. For example, the evaluation value VE is calculated by equation (1) using the animal body score SL and the animal body existence score SR.
  • VE = α · SR + β · SL ... (1)
  • The coefficients α and β are preset weights for the animal body existence score SR and the animal body score SL, respectively. One of the coefficients α and β may be set to "0".
  • The map generation block 30 determines that an animal body candidate whose evaluation value VE is larger than the preset threshold Th is a deletion target, and deletes that candidate from the map information. For example, in the case of FIG. 8B, the luggage carrier OB1 located in the passage is deleted from the map information, while the box OB3 placed in the room remains included in the map information.
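  • The animal body deletion step of step ST16 can be sketched as follows, directly applying equation (1); the weights α and β and the threshold Th are assumed values.

```python
ALPHA, BETA, TH = 0.5, 0.5, 0.6  # assumed weights and deletion threshold


def delete_animal_bodies(occupancy_grid, candidates):
    """candidates: list of dicts with keys 'cells', 'SL' and 'SR'.

    The evaluation value VE = ALPHA * SR + BETA * SL (equation (1)) is computed
    for each animal body candidate; candidates whose VE exceeds TH are cleared
    from the occupied grid map.
    """
    for cand in candidates:
        ve = ALPHA * cand["SR"] + BETA * cand["SL"]
        if ve > TH:
            for iy, ix in cand["cells"]:
                occupancy_grid[iy, ix] = 0  # delete the animal body from the map
    return occupancy_grid
```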
  • In FIG. 6, a case where the process related to the detection of animal body candidates and the process related to area recognition are performed in parallel is illustrated, but one of the processes may be performed first and the other process performed later.
  • In this way, the map generation block deletes the animal body from the map information based on the animal body score of the animal body candidate, derived from the feature shapes extracted by the feature shape extraction process, and on the animal body existence score of the area recognized by the area recognition process. Therefore, map information from which the animal body has been deleted can be obtained quickly.
  • the animal body can be deleted even if the spatial structure changes significantly with the passage of time.
  • Furthermore, since an animal body is determined based on the characteristic shape of the object or on the area information indicating the spatial area where the object is located in the map information, the animal body can be deleted from the map information even if the object cannot be recognized by, for example, semantic segmentation.
  • When the map generation block 30 has the animal body deletion filter 33, the animal body deletion filter 33 removes, based on the animal body determination result, the distance measurement data of the animal body candidates to be deleted from the distance measurement data. In this case, the map generation unit 34 can generate map information that does not include the animal body.
  • FIG. 9 illustrates the configuration of the map utilization block.
  • the map utilization block 40 includes a sensor unit 41, a dead reckoning unit 42, a feature shape extraction unit 43, an animal body candidate extraction unit 44, a map information storage unit 45, a weighting processing unit 46, a star reckoning unit 47, and a self-position integration unit 48. Have.
  • the sensor unit 41 is configured in the same manner as the map generation block 30, and has an inner world sensor 411 and an outer world sensor 412.
  • the internal world sensor 411 is configured by using a position sensor, an angle sensor, an acceleration sensor, a gyro sensor, and the like, and acquires sensing data related to the moving body itself and outputs the sensing data to the dead reckoning unit 42.
  • the external world sensor 412 is configured by using the distance measuring sensor 412a, the image sensor 412b, and the like, and acquires sensing data regarding the surrounding environment of the moving body.
  • the distance measuring sensor 412a of the outside world sensor 412 outputs sensing data (distance measuring data) indicating the distance to a surrounding object to the feature shape extraction unit 43 and the weighting processing unit 46.
  • the image sensor 412b of the external world sensor 412 outputs sensing data (imaging image data) to the feature shape extraction unit 43.
  • The dead reckoning unit 42 estimates the self-position by determining in which direction and by how much the moving body has moved based on the sensing data output from the internal sensor 411, and outputs position information indicating the estimated current self-position to the self-position integration unit 48.
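  • As a simplified illustration of the dead reckoning unit 42, the sketch below integrates a planar pose from internal-sensor readings (linear velocity and yaw rate); the actual sensors and integration scheme are not specified in the publication.

```python
import math


def dead_reckoning_step(pose, linear_velocity, yaw_rate, dt):
    """Integrate one step of internal-sensor data into a planar pose estimate.

    pose: (x, y, theta) in map coordinates; returns the updated pose.
    """
    x, y, theta = pose
    theta += yaw_rate * dt
    x += linear_velocity * math.cos(theta) * dt
    y += linear_velocity * math.sin(theta) * dt
    return (x, y, theta)
```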
  • The feature shape extraction unit 43 performs feature shape extraction using the feature shape recognition model generated by the learning block 20, based on the captured image indicated by the captured image data output from the external sensor 412, or on a depth image obtained from the captured image and the distance measurement data. The feature shape extraction unit 43 generates a feature shape list showing the extracted feature shapes and outputs it to the animal body candidate extraction unit 44.
  • the animal body candidate extraction unit 44 extracts an object including the feature shape shown in the feature shape list as an animal body candidate based on the feature shape list output from the feature shape extraction unit 43.
  • the animal body candidate extraction unit 44 outputs information indicating the extracted animal body candidate to the weighting processing unit 46.
  • the map information storage unit 45 stores the map information (preliminary map information) generated by the map generation block 30.
  • the map information storage unit 45 outputs the prior map information to which the area label information is added to the weighting processing unit 46. Further, the map information storage unit 45 outputs the advance map information to the star reckoning unit 47.
  • The weighting processing unit 46 weights the sensing data supplied from the external sensor 412. For example, the weighting processing unit 46 weights the depth image indicated by the distance measurement data based on at least one of the information indicating the animal body candidates and the area label information of the prior map information, reducing the weight of animal body candidate areas and of areas where an animal body is likely to exist. The weighting processing unit 46 weights the depth image according to at least one of the animal body score of the animal body candidate and the animal body presence score of the area where the candidate is located, reducing the weight as an animal body becomes more likely to exist there or as the candidate becomes more animal-body-like. The weighting processing unit 46 outputs the weighted depth image to the star reckoning unit 47.
  • Note that the weighting processing unit 46 may determine animal bodies in the same manner as the determination unit 38 of the map generation block 30, based on the information indicating the animal body candidates and the area label information of the prior map information, and may reduce the weight of the areas of the depth image determined to be animal bodies.
  • The star reckoning unit 47 estimates the self-position by matching the depth image output from the weighting processing unit 46 against the prior map information stored in the map information storage unit 45, and generates position information indicating the estimated current self-position.
  • Since the weights of the animal body regions and of the regions where an animal body is likely to exist are reduced, the position information generated by the star reckoning unit 47 is less affected by animal bodies.
  • the star reckoning unit 47 outputs the generated position information to the self-position integration unit 48.
  • the self-position integration unit 48 integrates the position information output from the dead reckoning unit 42 and the position information output from the star reckoning unit 47.
  • The self-position integration unit 48 integrates the two pieces of position information obtained by the dead reckoning unit 42 and the star reckoning unit 47 using, for example, an extended Kalman filter or a grid point observer, and generates and outputs self-position information with less error than the position information obtained by the dead reckoning unit 42 and higher accuracy than the position information obtained by the star reckoning unit 47.
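  • The text names an extended Kalman filter or a grid point observer for this integration; as a much simpler stand-in, the sketch below blends the two pose estimates by their assumed variances, giving more weight to the estimate with the smaller variance. It is purely illustrative and not the publication's method.

```python
def integrate_self_position(pose_dead, var_dead, pose_star, var_star):
    """Variance-weighted fusion of the dead reckoning and star reckoning poses.

    pose_dead, pose_star: (x, y, theta) estimates; var_dead, var_star: scalar variances.
    The estimate with the smaller variance receives the larger weight.
    """
    w = var_star / (var_dead + var_star)  # weight on the dead reckoning estimate
    return tuple(w * pd + (1.0 - w) * ps for pd, ps in zip(pose_dead, pose_star))
```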
  • FIG. 10 is a flowchart illustrating a part of the operation of the map utilization block.
  • In step ST21, the map utilization block acquires sensing data.
  • the map utilization block 40 acquires the sensing data generated by the inner world sensor 411 and the outer world sensor 412 of the sensor unit 41, and proceeds to step ST22.
  • In step ST22, the map utilization block performs feature shape extraction processing.
  • The map utilization block 40 extracts feature shapes using the feature shape recognition model generated in the learning block 20, based on the sensing data (for example, the distance measurement data and the captured image data) acquired in step ST21, generates a feature shape list indicating the extracted feature shapes, and proceeds to step ST23.
  • In step ST23, the map utilization block performs animal body candidate extraction processing.
  • The map utilization block 40 detects objects that can be animal bodies based on the feature shapes extracted in step ST22, and sets the detected objects as animal body candidates. Further, the map utilization block 40 assigns to each animal body candidate an animal body score indicating how animal-body-like it is, and proceeds to step ST24.
  • In step ST24, the map utilization block performs weighting processing.
  • the map utilization block 40 calculates the weight for each animal body candidate extracted in step ST23 based on the animal body score of the animal body candidate and the animal body presence score of the area on the map where the animal body candidate is located. For example, the calculation of the equation (2) is performed using the animal body score SL and the animal body existence score SR to calculate the weight WE of the distance measurement data indicating the animal body candidate.
  • FIG. 11 illustrates the weighting processing result.
  • For example, as described above, the animal body presence score SR1 of the area with area identification information LB1 and the animal body score SL1 of the luggage carrier both have high values, so the weight WEob1 of the grid Gob1 on which the luggage carrier is located becomes small.
  • In this way, each grid is weighted based on the animal body score SL of the animal body candidate and the animal body existence score SR of the area where the candidate is located, and the weight is reduced as the likelihood that the grid corresponds to an animal body area increases. Therefore, for example, when performing star reckoning, the self-position can be estimated while reducing the influence of the animal body.
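  • Equation (2) itself is not reproduced in this text, so the sketch below simply uses an illustrative weight that decreases as the animal body score SL and the animal body presence score SR increase, which matches the behaviour described above; the functional form, coefficients, and array layout are assumptions.

```python
def ranging_weight(sl, sr, alpha=0.5, beta=0.5):
    """Illustrative weight WE for ranging data that falls on an animal body candidate.

    The weight decreases as the animal body score SL and the animal body
    presence score SR increase (stand-in for equation (2), which is not given here).
    """
    return max(0.0, 1.0 - (alpha * sr + beta * sl))


def weight_depth_image(depth_weights, candidate_cells, sl, sr):
    """Reduce the weight of grid cells occupied by an animal body candidate.

    depth_weights: 2-D array of per-cell weights used in the matching step.
    """
    w = ranging_weight(sl, sr)
    for iy, ix in candidate_cells:
        depth_weights[iy, ix] = min(depth_weights[iy, ix], w)
    return depth_weights
```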
  • In this way, animal bodies are quickly deleted from the prior map information used in the map utilization block 40, so that the star reckoning unit 47 can estimate the self-position with little influence from animal bodies. Further, since the weight of grids indicating animal body candidates is reduced, the star reckoning unit 47 can estimate the self-position with little influence from animal bodies. Further, since the self-position is estimated by integrating the position information output from the dead reckoning unit 42 and the position information output from the star reckoning unit 47, the self-position can be estimated accurately even if, for example, the accuracy of the position information based on the sensing data from the internal sensor decreases. Furthermore, the self-position can be estimated even when the spatial structure changes significantly with the passage of time.
  • In addition, since animal body candidates can be extracted based on the characteristic shape of the object, the influence of animal bodies that cannot be recognized by, for example, semantic segmentation can be reduced. Alternatively, animal bodies may first be detected by semantic segmentation, and objects that are not detected as animal bodies by the semantic segmentation may then be determined to be animal bodies using the present technique.
  • the technology according to the present disclosure can be applied to various products.
  • The technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, robot, construction machine, or agricultural machine (tractor).
  • FIG. 12 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a mobile control system to which the technique according to the present disclosure can be applied.
  • the vehicle control system 7000 includes a plurality of electronic control units connected via the communication network 7010.
  • The vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, an external information detection unit 7400, an in-vehicle information detection unit 7500, and an integrated control unit 7600.
  • The communication network 7010 connecting these plurality of control units may be an in-vehicle communication network conforming to any standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
  • Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer or the parameters used for various calculations, and a drive circuit that drives the various devices to be controlled.
  • Each control unit includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating by wired or wireless communication with devices or sensors inside or outside the vehicle.
  • FIG. 12 shows, as the functional configuration of the integrated control unit 7600, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning unit 7640, a beacon receiving unit 7650, an in-vehicle device I/F 7660, an audio image output unit 7670, an in-vehicle network I/F 7680, and a storage unit 7690.
  • Other control units also include a microcomputer, a communication I / F, a storage unit, and the like.
  • the drive system control unit 7100 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • The drive system control unit 7100 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the drive system control unit 7100 may have a function as a control device such as ABS (Antilock Brake System) or ESC (Electronic Stability Control).
  • the vehicle condition detection unit 7110 is connected to the drive system control unit 7100.
  • The vehicle state detection unit 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the accelerator pedal operation amount, the brake pedal operation amount, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like.
  • the drive system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detection unit 7110 to control an internal combustion engine, a drive motor, an electric power steering device, a braking device, and the like.
  • the body system control unit 7200 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 7200 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • the body system control unit 7200 may be input with radio waves transmitted from a portable device that substitutes for the key or signals of various switches.
  • the body system control unit 7200 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
  • the battery control unit 7300 controls the secondary battery 7310, which is the power supply source of the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, or the remaining capacity of the battery is input to the battery control unit 7300 from the battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and controls the temperature control of the secondary battery 7310 or the cooling device provided in the battery device.
  • the vehicle outside information detection unit 7400 detects information outside the vehicle equipped with the vehicle control system 7000.
  • the image pickup unit 7410 and the vehicle exterior information detection unit 7420 is connected to the vehicle exterior information detection unit 7400.
  • the imaging unit 7410 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
  • The vehicle exterior information detection unit 7420 includes, for example, at least one of an environmental sensor for detecting the current weather or meteorological conditions, and an ambient information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle equipped with the vehicle control system 7000.
  • the environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunshine sensor that detects the degree of sunshine, and a snow sensor that detects snowfall.
  • the ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device.
  • the image pickup unit 7410 and the vehicle exterior information detection unit 7420 may be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices are integrated.
  • FIG. 13 shows an example of the installation positions of the image pickup unit 7410 and the vehicle exterior information detection unit 7420.
  • the imaging units 7910, 7912, 7914, 7916, 7918 are provided, for example, at at least one of the front nose, side mirrors, rear bumpers, back door, and upper part of the windshield of the vehicle interior of the vehicle 7900.
  • the image pickup unit 7910 provided on the front nose and the image pickup section 7918 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 7900.
  • the imaging units 7912 and 7914 provided in the side mirrors mainly acquire images of the side of the vehicle 7900.
  • the image pickup unit 7916 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 7900.
  • the imaging unit 7918 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 13 shows an example of the shooting range of each of the imaging units 7910, 7912, 7914, 7916.
  • the imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose
  • the imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively
  • the imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or the back door.
  • For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, a bird's-eye view image of the vehicle 7900 as viewed from above can be obtained.
  • the vehicle exterior information detection units 7920, 7922, 7924, 7926, 7928, 7930 provided on the front, rear, side, corners and the upper part of the windshield in the vehicle interior of the vehicle 7900 may be, for example, an ultrasonic sensor or a radar device.
  • the vehicle exterior information detection units 7920, 7926, 7930 provided on the front nose, rear bumper, back door, and upper part of the windshield in the vehicle interior of the vehicle 7900 may be, for example, a lidar device.
  • These out-of-vehicle information detection units 7920 to 7930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, or the like.
  • The vehicle outside information detection unit 7400 causes the image pickup unit 7410 to capture an image of the outside of the vehicle and receives the captured image data. Further, the vehicle exterior information detection unit 7400 receives detection information from the connected vehicle exterior information detection unit 7420. When the vehicle exterior information detection unit 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 transmits ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves.
  • the vehicle exterior information detection unit 7400 may perform object detection processing or distance detection processing such as a person, a vehicle, an obstacle, a sign, or a character on a road surface based on the received information.
  • the vehicle exterior information detection unit 7400 may perform an environment recognition process for recognizing rainfall, fog, road surface conditions, etc., based on the received information.
  • the vehicle outside information detection unit 7400 may calculate the distance to an object outside the vehicle based on the received information.
  • the vehicle exterior information detection unit 7400 may perform image recognition processing or distance detection processing for recognizing a person, a vehicle, an obstacle, a sign, a character on the road surface, or the like based on the received image data.
  • The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may synthesize image data captured by different imaging units 7410 to generate a bird's-eye view image or a panoramic image.
  • the vehicle exterior information detection unit 7400 may perform the viewpoint conversion process using the image data captured by different imaging units 7410.
  • the in-vehicle information detection unit 7500 detects the in-vehicle information.
  • a driver state detection unit 7510 that detects the driver's state is connected to the in-vehicle information detection unit 7500.
  • the driver state detection unit 7510 may include a camera that captures the driver, a biosensor that detects the driver's biological information, a microphone that collects sound in the vehicle interior, and the like.
  • the biosensor is provided on, for example, the seat surface or the steering wheel, and detects the biometric information of the passenger sitting on the seat or the driver holding the steering wheel.
  • The in-vehicle information detection unit 7500 may calculate the degree of fatigue or the degree of concentration of the driver based on the detection information input from the driver state detection unit 7510, or may determine whether the driver is dozing off.
  • the in-vehicle information detection unit 7500 may perform processing such as noise canceling processing on the collected audio signal.
  • the integrated control unit 7600 controls the overall operation in the vehicle control system 7000 according to various programs.
  • An input unit 7800 is connected to the integrated control unit 7600.
  • the input unit 7800 is realized by a device such as a touch panel, a button, a microphone, a switch or a lever, which can be input-operated by a passenger. Data obtained by recognizing the voice input by the microphone may be input to the integrated control unit 7600.
  • The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) that supports operation of the vehicle control system 7000.
  • the input unit 7800 may be, for example, a camera, in which case the passenger can input information by gesture. Alternatively, data obtained by detecting the movement of the wearable device worn by the passenger may be input. Further, the input unit 7800 may include, for example, an input control circuit that generates an input signal based on the information input by the passenger or the like using the input unit 7800 and outputs the input signal to the integrated control unit 7600. By operating the input unit 7800, the passenger or the like inputs various data to the vehicle control system 7000 and instructs the processing operation.
  • the storage unit 7690 may include a ROM (Read Only Memory) for storing various programs executed by the microcomputer, and a RAM (Random Access Memory) for storing various parameters, calculation results, sensor values, and the like. Further, the storage unit 7690 may be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, an optical magnetic storage device, or the like.
  • the general-purpose communication I / F 7620 is a general-purpose communication I / F that mediates communication with various devices existing in the external environment 7750.
  • The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System for Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as Bluetooth (registered trademark).
  • The general-purpose communication I/F 7620 may connect to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or a business-specific network) via, for example, a base station or an access point. Further, the general-purpose communication I/F 7620 may connect, using, for example, P2P (Peer To Peer) technology, to a terminal existing in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal).
  • the dedicated communication I / F 7630 is a communication I / F that supports a communication protocol formulated for use in a vehicle.
  • The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol.
  • The dedicated communication I/F 7630 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle (Vehicle to Vehicle) communication, road-to-vehicle (Vehicle to Infrastructure) communication, vehicle-to-home (Vehicle to Home) communication, and vehicle-to-pedestrian (Vehicle to Pedestrian) communication.
  • the positioning unit 7640 receives, for example, a GNSS signal from a GNSS (Global Navigation Satellite System) satellite (for example, a GPS signal from a GPS (Global Positioning System) satellite), executes positioning, and generates location information including the latitude, longitude, and altitude of the vehicle.
  • the positioning unit 7640 may specify the current position by exchanging signals with the wireless access point, or may acquire position information from a terminal such as a mobile phone, PHS, or smartphone having a positioning function.
  • the beacon receiving unit 7650 receives radio waves or electromagnetic waves transmitted from a radio station or the like installed on the road, and acquires information such as the current position, traffic jam, road closure, or required time.
  • the function of the beacon receiving unit 7650 may be included in the above-mentioned dedicated communication I / F 7630.
  • the in-vehicle device I/F 7660 is a communication interface that mediates the connection between the microcomputer 7610 and various in-vehicle devices 7760 existing in the vehicle.
  • the in-vehicle device I / F7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication) or WUSB (Wireless USB).
  • the in-vehicle device I/F 7660 may also establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and, if necessary, a cable) (not shown).
  • the in-vehicle device 7760 includes, for example, at least one of a mobile device or a wearable device owned by a passenger, or an information device carried in or attached to a vehicle.
  • the in-vehicle device 7760 may include a navigation device that searches for a route to an arbitrary destination.
  • the in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
  • the in-vehicle network I / F7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010.
  • the vehicle-mounted network I / F7680 transmits and receives signals and the like according to a predetermined protocol supported by the communication network 7010.
  • the microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate a control target value of the driving force generator, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and output a control command to the drive system control unit 7100.
  • the microcomputer 7610 may perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, follow-up driving based on inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane deviation warning, and the like.
  • the microcomputer 7610 may perform coordinated control for the purpose of automatic driving or the like, in which the vehicle travels autonomously without relying on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the acquired information on the surroundings of the vehicle.
  • the microcomputer 7610 may generate three-dimensional distance information between the vehicle and objects such as surrounding structures or people based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may generate local map information including peripheral information on the current position of the vehicle. Further, the microcomputer 7610 may predict a danger such as a vehicle collision or a pedestrian or the like approaching or entering a closed road based on the acquired information, and may generate a warning signal.
  • the warning signal may be, for example, a signal for generating a warning sound or turning on a warning lamp.
  • the audio image output unit 7670 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger or the outside of the vehicle of the information.
  • an audio speaker 7710, a display unit 7720, and an instrument panel 7730 are exemplified as output devices.
  • the display unit 7720 may include, for example, at least one of an onboard display and a heads-up display.
  • the display unit 7720 may have an AR (Augmented Reality) display function.
  • the output device may be a wearable device such as a headphone or a spectacle-type display worn by a passenger, or another device such as a projector or a lamp other than these devices.
  • the display device visually displays the results obtained by the various processes performed by the microcomputer 7610, or the information received from other control units, in various formats such as text, images, tables, and graphs.
  • the audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs the audio signal audibly.
  • At least two control units connected via the communication network 7010 may be integrated as one control unit.
  • each control unit may be composed of a plurality of control units.
  • the vehicle control system 7000 may include another control unit (not shown).
  • the other control unit may have a part or all of the functions carried out by any of the control units. That is, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed by any control unit.
  • a sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may send and receive detection information to and from each other via the communication network 7010.
  • a computer program for realizing the function of the information processing system 10 according to the present embodiment shown in FIG. 1 can be implemented in any control unit or the like. It is also possible to provide a computer-readable recording medium in which such a computer program is stored.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Further, the above computer program may be distributed via, for example, a network without using a recording medium.
  • the animal body can be quickly deleted from the map information, and it becomes possible to detect the self-position accurately by reducing the influence of the animal body. Therefore, it is possible to accurately perform coordinated control for the purpose of automatic driving and the like.
  • the series of processes described in the specification can be executed by hardware, software, or a composite configuration of both.
  • the program that records the processing sequence is installed in the memory in the computer embedded in the dedicated hardware and executed.
  • the program can be installed and executed on a general-purpose computer capable of executing various processes.
  • the program can be recorded in advance on a hard disk as a recording medium, an SSD (Solid State Drive), or a ROM (Read Only Memory).
  • alternatively, the program can be temporarily or permanently stored (recorded) on a removable recording medium such as a flexible disc, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto optical) disc, a DVD (Digital Versatile Disc), a BD (Blu-Ray Disc (registered trademark)), a magnetic disc, or a semiconductor memory card.
  • a removable recording medium can be provided as so-called package software.
  • the program may be transferred from the download site to the computer wirelessly or by wire via a network such as LAN (Local Area Network) or the Internet.
  • the computer can receive the program transferred in this way and install it on a recording medium such as a built-in hard disk.
  • the information processing device of the present technology can have the following configurations.
  • An extraction unit that extracts the characteristic shape of an object existing in sensing data, or area information indicating a spatial area in which the object is located in map information.
  • a determination unit that determines whether the object is an animal body based on the extraction result by the extraction unit, and
  • An information processing device including a map information processing unit that deletes the object from the map information when the determination unit determines that the object is an animal body.
  • the extraction unit extracts animal body candidates based on the characteristic shape of the extracted object.
  • the information processing device according to (1), wherein the determination unit determines whether the animal body candidate extracted by the extraction unit is an animal body.
  • the information processing apparatus according to (5), wherein the determination unit determines whether or not the animal body candidate is an animal body based on at least one of the animal body score of the animal body candidate and the animal body presence score of the region where the animal body candidate is located.
  • the extraction unit performs region recognition on the spatial region indicated by the sensing data using a region recognition model generated in advance, and
  • the information processing apparatus according to any one of (1) to (6), wherein the map information processing unit adds area label information relating to the recognized area to the map information based on the area recognition result of the extraction unit.
  • the information processing apparatus according to any one of (1) to (7), wherein the sensing data is data acquired by an external sensor, the apparatus comprising a map information generation unit that generates the map information based on the sensing data.
  • a weighting processing unit for weighting the sensing data is further provided.
  • the extraction unit extracts animal body candidates based on the characteristic shape of the extracted object.
  • the information processing apparatus wherein the weighting processing unit weights the sensing data indicating the animal body candidate extracted by the extraction unit.
  • the map information is provided with area label information indicating the animal body existence score indicating the ease of existence of the animal body for each area.
  • the information processing device according to (10) or (11), wherein the weighting processing unit weights the sensing data based on the area label information.
  • the information processing apparatus according to any one of (9) to (13), further comprising a dead reckoning unit that detects its own position based on the sensing data acquired by the internal sensor, and a self-position integrating unit that integrates the self-position detected by the dead reckoning unit and the self-position detected by the star reckoning unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A region extraction unit 35 performs region recognition on map information and extracts region information indicating regions having different possibilities of presence of a moving object. In the region information, a moving object presence score indicating the ease of presence of a moving object in each recognized region is set. A characteristic shape extraction unit 36 extracts characteristic shapes of objects present in sensing data acquired by an external sensor 312. A moving object candidate extraction unit 37 sets, for the objects, moving object scores indicating moving object-likenesses on the basis of the extraction results of the characteristic shapes, and extracts a moving object candidate on the basis of the moving object scores. A determination unit 38 determines whether the moving object candidate is a moving object on the basis of at least either of the moving object score indicated by the extraction result of the moving object candidate extraction unit 37 and the moving object presence score of a region where the moving object candidate is located in the region information extracted by the region extraction unit 35. A map information processing unit 39 deletes the moving object from map information. Thus, the map information from which the moving object is removed can be rapidly generated.

Description

情報処理装置と情報処理方法およびプログラム / Information processing device, information processing method, and program
This technology relates to an information processing device, an information processing method, and a program, and makes it possible to quickly generate map information from which animal bodies (moving objects) have been removed.
Conventionally, distance measurement sensors, image sensors, and the like have been used not only for self-position estimation and map information generation but also for updating map information. For example, in Patent Document 1, objects are extracted and tracked from time-series images, optical flow is calculated, animal bodies are discriminated based on the calculated optical flow, and the discriminated animal bodies are removed to generate map information. In Patent Document 2, the number of measurements is recorded for each point in map information representing the driving environment, and the position data of points whose number of measurements is equal to or less than a threshold value is deleted, since such points are highly likely to result from measurement errors or to belong to an animal body, and the map information is updated.
Japanese Unexamined Patent Publication No. 2016-126662; Japanese Unexamined Patent Publication No. 2004-326264
However, in Patent Document 1 and Patent Document 2, time-series images or a plurality of measurement results are required to update the map information. Therefore, if images or measurement results covering a predetermined time cannot be obtained, animal bodies cannot be deleted from the map information, and map information from which animal bodies have been deleted cannot be generated promptly. In addition, these methods, which respond only after the environment has changed, update the map information based on the self-position estimation result and the measurement result of each measurement; when the spatial structure changes significantly, self-position estimation becomes difficult and animal bodies cannot be deleted.
The purpose of this technology is therefore to provide an information processing device, an information processing method, and a program that can quickly generate map information from which animal bodies have been removed.
The first aspect of this technology is an information processing device including:
an extraction unit that extracts the characteristic shape of an object existing in sensing data, or area information indicating a spatial area in which the object is located in map information;
a determination unit that determines whether the object is an animal body based on the extraction result of the extraction unit; and
a map information processing unit that deletes the object from the map information when the determination unit determines that the object is an animal body.
In this technology, the extraction unit extracts the characteristic shape of an object existing in sensing data acquired using an external sensor, or area information indicating the spatial area in which the object is located in map information. The extraction unit extracts the feature shape of the object using, for example, a feature shape recognition model generated in advance, and sets an animal body score indicating how animal-body-like the object is based on the extraction result. The extraction unit extracts animal body candidates based on the animal body score. In addition, the extraction unit performs region recognition on the spatial region indicated by the sensing data using a region recognition model generated in advance, and sets, for each recognized area, an animal body presence score indicating how likely an animal body is to be present there.
The determination unit determines, based on the extraction result of the extraction unit, whether, for example, an animal body candidate is an animal body. The determination unit determines whether or not the animal body candidate is an animal body based, for example, on at least one of the animal body score of the candidate and the animal body presence score of the area in which the candidate is located.
The map information processing unit deletes the object from the map information when the determination unit determines that the object is an animal body. The map information processing unit also adds area label information about the recognized areas to the map information based on the region recognition result of the extraction unit. The map information is generated by a map information generation unit based on the sensing data acquired by the external sensor.
The star reckoning unit detects the self-position using the sensing data and the map information from which animal bodies have been deleted by the map information processing unit. Further, a weighting processing unit that weights the sensing data may be provided, and the star reckoning unit detects the self-position using the sensing data weighted by the weighting processing unit. The weighting processing unit weights the sensing data indicating the animal body candidates extracted by the extraction unit. The map information may also be provided with area label information indicating, for each area, an animal body presence score expressing how likely an animal body is to be present, and the weighting processing unit may weight the sensing data based on the area label information. For example, the weighting processing unit weights the sensing data according to at least one of the animal body presence score and the animal body score that was set for the candidate, based on the feature shape extraction result of the extraction unit, to indicate how animal-body-like it is; the weight is reduced as an animal body becomes more likely to be present or as the candidate becomes more animal-body-like.
Furthermore, a dead reckoning unit that detects the self-position based on sensing data acquired by an internal sensor, and a self-position integration unit that integrates the self-position detected by the dead reckoning unit with the self-position detected by the star reckoning unit, may be provided.
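As a rough illustration of the weighting described above, the sketch below gives sensing data a smaller weight the more likely it is to belong to an animal body; the function name, the assumption that both scores are normalized to [0, 1], and the simple linear rule are choices made for the example, not the claimed implementation.

```python
def measurement_weight(presence_score, animal_score, alpha=0.5, beta=0.5):
    """Weight for one piece of sensing data: the more animal-body-like the
    candidate it belongs to (animal_score) and the more likely an animal body
    is to be present in its area (presence_score), the smaller the weight."""
    likelihood = alpha * presence_score + beta * animal_score
    return max(0.0, 1.0 - likelihood)

# A point on a wheeled cart standing in a corridor contributes little to
# star reckoning, while a point on a room wall keeps nearly full weight.
low = measurement_weight(presence_score=0.9, animal_score=0.8)   # close to 0
high = measurement_weight(presence_score=0.1, animal_score=0.0)  # close to 1
```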
The second aspect of this technology is an information processing method including:
extracting, with an extraction unit, the characteristic shape of an object existing in sensing data, or area information indicating an area in which the object is located in map information;
determining, with a determination unit, whether the object is an animal body based on the extraction result of the extraction unit; and
deleting, with a map information processing unit, the object from the map information when the determination unit determines that the object is an animal body.
The third aspect of this technology is a program that causes a computer to execute generation of map information, the program causing the computer to execute:
a procedure for extracting the characteristic shape of an object existing in sensing data, or area information indicating an area in which the object is located in map information;
a procedure for determining whether the object is an animal body based on the extraction result of the characteristic shape or the area information; and
a procedure for deleting an object determined to be an animal body from the map information.
The program of the present technology is, for example, a program that can be provided to a general-purpose computer capable of executing various program codes by a storage medium provided in a computer-readable format, such as an optical disc, a magnetic disk, or a semiconductor memory, or by a communication medium such as a network. By providing such a program in a computer-readable format, processing according to the program is realized on the computer.
本技術の情報処理装置を用いた情報処理システムの構成を例示した図である。It is a figure which illustrated the structure of the information processing system using the information processing apparatus of this technology. 学習ブロックの構成を例示した図である。It is a figure which illustrated the structure of the learning block. 学習ブロックの動作を例示したフローチャートである。It is a flowchart which exemplifies the operation of the learning block. 認識する特徴形状を例示した図である。It is a figure which illustrated the feature shape to recognize. 地図生成ブロックの構成を例示した図である。It is a figure which illustrated the structure of the map generation block. 地図生成ブロックの動作を例示したフローチャートである。It is a flowchart exemplifying the operation of the map generation block. 動物体候補の抽出を説明するための図である。It is a figure for demonstrating the extraction of an animal body candidate. 地図情報を例示した図である。It is a figure which illustrated the map information. 地図活用ブロックの構成を例示した図である。It is a figure which illustrated the structure of the map utilization block. 地図活用ブロックの動作の一部を例示したフローチャートである。It is a flowchart which exemplifies a part of the operation of a map utilization block. 重み付け処理結果を例示した図である。It is a figure which illustrated the weighting processing result. 車両制御システムの概略的な構成の一例を示すブロック図である。It is a block diagram which shows an example of the schematic structure of a vehicle control system. 車外情報検出部及び撮像部の設置位置の一例を示す説明図である。It is explanatory drawing which shows an example of the installation position of the vehicle exterior information detection unit and the image pickup unit.
Hereinafter, modes for implementing the present technology will be described. The explanation is given in the following order.
1. Information processing system configuration
2. Learning block structure and operation
3. Configuration and operation of map generation block
4. Configuration and operation of map utilization block
5. Application example
<1. Information processing system configuration>
FIG. 1 illustrates the configuration of an information processing system using the information processing device of the present technology. The information processing system 10 has a learning block 20, a map generation block 30, and a map utilization block 40.
The learning block 20 performs learning using a learning image data group and a feature shape table showing which region in an image corresponds to which kind of feature shape, and generates a feature shape recognition model. The learning block 20 also performs learning using a learning map data group and an area label table showing areas in the map where the possibility that an animal body is present differs, and generates a region recognition model.
The map generation block 30 uses the models generated by the learning block 20 to extract the characteristic shape of an object existing in the sensing data acquired by the internal and external sensors, or area information about the spatial area in which the object is located in the map information, and determines whether the object is an animal body based on the extraction result. Further, when the object is determined to be an animal body, the map generation block 30 deletes the animal body from the map information generated based on the sensing data or from map information generated in advance.
The map utilization block 40 estimates the self-position based on the map generated by the map generation block 30 and the sensing data acquired by the external sensor, or by the internal sensor and the external sensor.
The learning block 20, the map generation block 30, and the map utilization block 40 may be provided independently, or a plurality of blocks may be provided integrally. For example, the map generation block 30 and the map utilization block 40 may be provided on a moving body such as a robot or a vehicle; the moving body generates map information from which animal bodies have been deleted, or updates existing map information to such map information, based on the sensing data, the feature shape recognition model, and the region recognition model. The moving body also performs self-position estimation based on the sensing data and the map information from which animal bodies have been deleted, and moves according to an action plan based on the self-position estimation result. The map information generated by the map generation block 30 of a moving body may subsequently be used by the map utilization block 40 of the same moving body or of another moving body for self-position estimation. Alternatively, the learning block 20 and the map generation block 30 may be provided on a server or the like to generate the map information, and the generated map information may be used by the map utilization block 40 of a moving body serving as a client for self-position estimation and the like.
<2. Learning block structure and operation>
FIG. 2 illustrates the configuration of the learning block. The learning block 20 has a data storage unit 21, a feature shape learner 22, and a region learner 23.
The data storage unit 21 stores a learning image data group, for example color images or color images and depth images, and a feature shape table indicating which region in an image corresponds to which kind of feature shape. The data storage unit 21 also stores a learning map data group, for example for the movement route of a robot or the like, and an area label table indicating which spatial area in the map corresponds to which of the areas in which the possibility that an animal body is present differs, such as a passage, a room, or a doorway.
The feature shape learner 22 performs learning using the learning image data group and the feature shape table stored in the data storage unit 21, and generates a feature shape recognition model for recognizing the feature shape of an object existing in sensing data. The region learner 23 performs learning using the learning map data group and the area label table stored in the data storage unit 21, and generates a region recognition model for recognizing areas of the spatial region indicated by the map information (map data) in which the possibility that an animal body is present differs.
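Purely as an illustration of how a learner such as the feature shape learner 22 could be realized, the sketch below trains a tiny image classifier on labeled crops of wheels, handles, and rails. The network, the dummy tensors standing in for the learning image data group, the class set, and the output file name are assumptions made for this sketch, not the implementation of the present technology.

```python
import torch
import torch.nn as nn

# Hypothetical classes taken from a feature shape table: wheel, handle, rail, other.
NUM_CLASSES = 4

# A tiny CNN standing in for the feature shape recognition model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for image crops and their feature shape labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (8,))

for _ in range(5):                       # learning loop of the feature shape learner
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "feature_shape_recognition_model.pt")
```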
FIG. 3 is a flowchart illustrating the operation of the learning block. In step ST1, the learning block 20 reads out the data groups and the tables. The learning block 20 reads out the learning image data group and the feature shape table, and the learning map data group and the area label table, stored in the data storage unit 21, and proceeds to step ST2.
In step ST2, the learning block generates the feature shape recognition model. The learning block 20 performs learning with the feature shape learner 22 using the learning image data group and the feature shape table, and generates the feature shape recognition model.
FIG. 4 illustrates the feature shapes to be recognized. For example, wheels are used in offices on chairs, desks, luggage carriers, movable whiteboards, and the like; in the home environment on chairs, desks, TV stands, side tables, and the like; and outdoors on cars, bicycles, strollers, and the like.
Handles are used in offices on drawers and sliding doors; in the home environment on drawers, sliding doors, buckets, and the like; and outdoors on push carts, front doors, and the like.
Rails are used in offices for sliding doors, movable bookshelves, and the like; in the home environment for sliding doors and the like; and outdoors for gates, sliding doors, and the like.
By using such feature shapes, a large number of animal bodies can be handled with a small number of classes (number of feature shapes), and the extraction of animal body candidates described later can be performed easily and efficiently.
The learning block 20 generates a feature shape recognition model for recognizing feature shapes related to animal bodies, for example the wheels, handles, and rails shown in FIG. 4, and proceeds to step ST3.
In step ST3, the learning block generates the region recognition model. The learning block 20 performs learning with the region learner 23 using the learning map data group and the area label table, and generates a region recognition model that recognizes areas in which the possibility that an animal body is present differs, such as passages, rooms, and doorways.
Note that either of the processes of step ST2 and step ST3 may be performed first, or they may be performed in parallel.
<3. Configuration and operation of map generation block>
FIG. 5 illustrates the configuration of the map generation block. The map generation block 30 includes a sensor unit 31, a self-position estimation unit 32, a map generation unit 34, a region extraction unit 35, a feature shape extraction unit 36, an animal body candidate extraction unit 37, a determination unit 38, and a map information processing unit 39. The map generation block 30 may also have an animal body deletion filter 33, as described later. The region extraction unit 35, the feature shape extraction unit 36, and the animal body candidate extraction unit 37 correspond to the extraction unit in the claims.
The sensor unit 31 has an internal sensor 311 and an external sensor 312. The internal sensor 311 acquires information about the moving body itself on which the information processing device is provided (for example, information indicating the position and posture of the moving body and their changes). As the internal sensor 311, for example, a position sensor, an angle sensor, an acceleration sensor, a gyro sensor, or the like is used. The internal sensor 311 outputs the generated sensing data (also referred to as "internal sensing data") to the self-position estimation unit 32. The external sensor 312 acquires information on the surrounding environment of the moving body on which the information processing device is provided (for example, information on surrounding objects and the like). As the external sensor 312, for example, a distance measuring sensor 312a (LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), TOF (Time Of Flight), a stereo camera, or the like) and an image sensor 312b for acquiring a captured image are used. The distance measuring sensor 312a of the external sensor 312 generates sensing data (also referred to as "distance measurement data") indicating the distance to surrounding objects and outputs it to the self-position estimation unit 32, the map generation unit 34, and the feature shape extraction unit 36. The image sensor 312b of the external sensor 312 generates sensing data (also referred to as "captured image data") representing a captured image of the peripheral region and outputs it to the feature shape extraction unit 36.
The self-position estimation unit 32 estimates the self-position based on the internal sensing data generated by the internal sensor 311 of the sensor unit 31, or on the internal sensing data and the distance measurement data generated by the external sensor 312, and outputs self-position information indicating the estimated self-position to the determination unit 38 and the map information processing unit 39.
The map generation unit 34 generates map information based on the distance measurement data generated by the external sensor 312 of the sensor unit 31. The map generation unit 34 generates an occupancy grid map by, for example, dividing a two-dimensional plane into grid cells and assigning the measurement points indicated by the distance measurement data to the corresponding cells. The map generation unit 34 outputs the generated map information to the map information processing unit 39.
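A minimal sketch of this kind of occupancy grid construction is given below; the grid resolution, grid size, and the rule that a single hit marks a cell as occupied are simplifying assumptions for illustration rather than the behaviour of the map generation unit 34.

```python
def build_occupancy_grid(points_xy, resolution=0.05, size=200):
    """Assign each ranging point (x, y), given in metres relative to the sensor,
    to a grid cell; any cell hit by a point is marked occupied (1)."""
    grid = [[0] * size for _ in range(size)]
    origin = size // 2                      # sensor at the centre of the grid
    for x, y in points_xy:
        col = origin + int(round(x / resolution))
        row = origin + int(round(y / resolution))
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid

# Example: three ranging points roughly one metre ahead of the sensor.
grid = build_occupancy_grid([(1.0, 0.0), (1.0, 0.05), (1.02, -0.05)])
```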
The region extraction unit 35 uses the region recognition model generated by the learning block 20 to perform region recognition on the map information generated by the map generation unit 34, and extracts the areas in which the possibility that an animal body is present differs. For each of these areas, the region extraction unit 35 assigns area label information including area identification information and an animal body presence score corresponding to the possibility that an animal body is present. For example, the region extraction unit 35 performs region recognition using the region recognition model, determines the areas indicating passages, rooms, doorways, and the like in the map information generated by the map generation unit 34, and sets area identification information indicating which kind of area each is. In general, animal bodies are often located in passages, fixed objects are common along the walls of rooms, and animal bodies are located in doorways less often than in passages. Therefore, the region extraction unit 35 sets a high animal body presence score for areas corresponding to passages, a low animal body presence score for areas corresponding to the walls of rooms, and an animal body presence score for areas corresponding to doorways and the like that is lower than that of passages and higher than that of room walls.
The feature shape extraction unit 36 uses the feature shape recognition model generated by the learning block 20 to perform feature shape extraction based on the captured image indicated by the captured image data output from the external sensor 312, or on the captured image and the depth image indicated by the distance measurement data. For example, the feature shape extraction unit 36 extracts feature shapes such as wheels, handles, and rails from the captured image or the depth image. The feature shape extraction unit 36 generates a feature shape list showing the extracted feature shapes and outputs it to the animal body candidate extraction unit 37.
The animal body candidate extraction unit 37 extracts, based on the feature shape list output from the feature shape extraction unit 36, objects having the feature shapes shown in the feature shape list as animal body candidates. The animal body candidate extraction unit 37 may, for example, segment objects based on the sensing data and extract an object containing a feature shape as an animal body candidate, or may extract animal body candidates based on the distance to a feature shape and the continuity of distance and shape with the feature shape. The animal body candidate extraction unit 37 generates feature shape label information including animal body candidate identification information indicating each extracted candidate and an animal body score set based on the feature shapes of the candidate. The animal body candidate identification information includes label information for individually identifying the candidate and position information of the object. The animal body score is calculated using scores set in advance for each feature shape. For example, an object with wheels is very often an animal body, so the score of a wheel is set high, whereas an object with a handle is not necessarily an animal body, so the score of a handle is set lower than that of a wheel; likewise, the more often a feature shape is used on animal bodies, the higher its score. The animal body candidate extraction unit 37 sets the cumulative value of the scores of the feature shapes found on a candidate as the animal body score indicating how animal-body-like the candidate is. The animal body candidate extraction unit 37 may also use the animal body score to extract candidates, for example extracting objects whose animal body score is higher than a preset threshold value. The animal body candidate extraction unit 37 outputs the generated feature shape label information to the determination unit 38.
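The accumulation of per-feature-shape scores into an animal body score can be pictured with the short sketch below; the score values, the threshold, and the function names are assumptions chosen for illustration only.

```python
# Assumed per-feature-shape scores: wheels are strong evidence of an animal body,
# handles weaker evidence, rails weaker still.
FEATURE_SCORES = {"wheel": 0.6, "handle": 0.3, "rail": 0.2}

def animal_body_score(feature_shapes):
    """Accumulate the scores of the feature shapes found on one object (score SL)."""
    return sum(FEATURE_SCORES.get(shape, 0.0) for shape in feature_shapes)

def extract_candidates(objects, threshold=0.5):
    """objects maps an object id to the feature shapes attached to it; objects
    whose accumulated score exceeds the threshold become animal body candidates."""
    candidates = {}
    for obj_id, shapes in objects.items():
        score = animal_body_score(shapes)
        if score > threshold:
            candidates[obj_id] = score
    return candidates
```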
The determination unit 38 determines whether an object is an animal body based on the extraction results of the region extraction unit 35, the feature shape extraction unit 36, and the animal body candidate extraction unit 37. Using the self-position estimated by the self-position estimation unit 32 as a reference, the determination unit 38 associates the areas indicated by the area label information generated by the region extraction unit 35 with the positions of the animal body candidates indicated by the feature shape label information generated by the animal body candidate extraction unit 37. The determination unit 38 then determines whether each animal body candidate is an animal body based on the animal body score of the candidate and the animal body presence score of the area in which the candidate is located, and outputs the determination result to the map information processing unit 39.
The map information processing unit 39 adds the area label information generated by the region extraction unit 35 to the map information generated by the map generation unit 34, and stores it as map information referenced to the self-position estimated by the self-position estimation unit 32. The map information processing unit 39 also updates the stored map information using newly generated map information with area label information, so that by repeating the movement of the sensor unit 31 and the generation of map information, the spatial area represented by the map information stored in the map information processing unit 39 can be expanded. Further, the map information processing unit 39 deletes, from the map information, the animal body candidates determined to be deletion targets by the determination unit 38. The map information to which the area label information has been added is used by the map utilization block 40 as advance map information.
FIG. 6 is a flowchart illustrating the operation of the map generation block. In step ST11, the map generation block acquires sensing data. The map generation block 30 acquires the sensing data generated by the internal sensor 311 and the external sensor 312 of the sensor unit 31, and proceeds to step ST12 and step ST14.
In step ST12, the map generation block performs feature shape extraction processing. Based on the sensing data acquired in step ST11 (for example, the distance measurement data and the captured image data), the map generation block 30 extracts feature shapes using the feature shape recognition model generated by the learning block 20, generates a feature shape list indicating, for example, the type and detected position of each extracted feature shape, and proceeds to step ST13.
In step ST13, the map generation block performs animal body candidate extraction processing. The map generation block 30 detects objects that can move based on the feature shapes extracted in step ST12, and sets the detected objects as animal body candidates. The map generation block 30 then generates feature shape label information including the animal body candidate identification information and the animal body score, and proceeds to step ST16.
FIG. 7 is a diagram for explaining the extraction of animal body candidates. FIG. 7(a) illustrates objects included in the sensing range of the external sensor, for example a luggage carrier OB1, a chair OB2, and a box OB3. In FIG. 7(b), a wheel FS1 and handles FS2a and FS2b have been extracted by the feature shape extraction. Therefore, as shown in FIG. 7(c), the luggage carrier OB1, indicated by the object region that is at substantially the same distance as the wheel FS1 and the handle FS2a and is continuous with them, is set as an animal body candidate. Likewise, the box OB3, indicated by the object region that is at substantially the same distance as the handle FS2b and is continuous with it, is set as an animal body candidate. The scores of the extracted feature shapes are accumulated, and the cumulative value is set as the animal body score SL.
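Continuing the hypothetical scoring sketch above, the FIG. 7 scene could be evaluated as follows (with the same assumed score values and a lower threshold so that a handle alone is enough):

```python
# OB1 carries the wheel FS1 and the handle FS2a, OB2 has no extracted feature
# shape in this scene, and OB3 carries the handle FS2b.
objects = {"OB1": ["wheel", "handle"], "OB2": [], "OB3": ["handle"]}
candidates = extract_candidates(objects, threshold=0.25)
# OB1 (0.6 + 0.3 = 0.9) and OB3 (0.3) exceed the threshold and become animal
# body candidates; OB2 accumulates no score and is left out.
```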
In step ST14, the map generation block performs map generation processing. The map generation block 30 generates map information, for example an occupancy grid map, based on the sensing data acquired in step ST11 (for example, a depth image), and proceeds to step ST15.
In step ST15, the map generation block performs area recognition processing. The map generation block 30 performs area recognition on the map information generated in step ST14 using the region recognition model generated by the learning block 20, generates area label information including the area identification information and the animal body presence score based on the area recognition result, and proceeds to step ST16.
FIG. 8 illustrates the map information. FIG. 8(a) illustrates the map information generated by the map generation processing, and FIG. 8(b) illustrates the map information to which the area label information has been added. For example, in FIG. 8(b), area identification information LB1 is assigned to the area indicating a passage, area identification information LB2 to the area indicating a doorway, and area identification information LB3 to the area indicating a room. The animal body presence score SR1 of the area indicating the passage, the animal body presence score SR2 of the area indicating the doorway, and the animal body presence score SR3 of the area indicating the room satisfy "SR1 > SR2 > SR3".
In step ST16, the map generation block performs animal body deletion processing. Based on the feature shape label information generated in step ST13 and the area label information generated in step ST15, the map generation block 30 determines, for each animal body candidate, whether it is to be deleted, using the animal body score SL of the candidate and the animal body presence score SR of the area in which the candidate is located. For example, the evaluation value VE is calculated by performing the calculation of equation (1) using the animal body score SL and the animal body presence score SR. The coefficients α and β are weights for the animal body score SL and the animal body presence score SR and are set in advance; one of the coefficients α and β may be "0".
VE = α × SR + β × SL   (1)
The map generation block 30 determines that animal body candidates whose evaluation value VE is larger than a preset threshold value Th are deletion targets, and deletes those candidates from the map information. For example, in the case of FIG. 8(b), the luggage carrier OB1 located in the passage is deleted from the map information, while the box OB3 placed in the room remains included in the map information.
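Equation (1) and the threshold comparison can be sketched as follows; the values of the coefficients α and β and of the threshold Th are assumptions for illustration, since the text only states that they are set in advance.

```python
ALPHA, BETA = 0.5, 0.5      # assumed preset weights for SR and SL
THRESHOLD_TH = 0.6          # assumed preset threshold Th

def should_delete(presence_score_sr, animal_score_sl,
                  alpha=ALPHA, beta=BETA, threshold=THRESHOLD_TH):
    """Equation (1): VE = alpha * SR + beta * SL; a candidate whose evaluation
    value VE exceeds the threshold is deleted from the map information."""
    ve = alpha * presence_score_sr + beta * animal_score_sl
    return ve > threshold

# FIG. 8 example: the luggage carrier OB1 stands in a passage (high SR) and has
# a high animal body score, so it is deleted; the box OB3 sits in a room (low SR)
# with a lower score, so it stays in the map.
print(should_delete(presence_score_sr=0.9, animal_score_sl=0.9))  # True
print(should_delete(presence_score_sr=0.2, animal_score_sl=0.3))  # False
```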
Note that FIG. 6 illustrates the case where the processing related to the detection of animal body candidates and the processing related to area recognition are performed in parallel, but either one of the processes may be performed first and the other afterwards.
In this way, the map generation block deletes animal bodies from the map information based on the animal body score of each candidate, derived from the feature shapes extracted by the feature shape extraction processing, and the animal body presence score of the area recognized by the area recognition processing. Therefore, map information from which animal bodies have been deleted can be obtained quickly even when sensing data covering a predetermined time has not been obtained, and animal bodies can be deleted even when the spatial structure changes significantly over time. Furthermore, since an animal body can be determined based on the characteristic shape of the object, or on area information indicating the spatial area in which the object is located in the map information, even an object that cannot be recognized by, for example, semantic segmentation can be deleted from the map information.
When the animal body deletion filter 33 is provided in the map generation block 30, the animal body deletion filter 33 deletes, from the distance measurement data, the data of the animal body candidates to be deleted, based on the animal body determination result. By deleting the distance measurement data related to such candidates, the map generation unit 34 can generate map information that does not include animal bodies.
<4. Configuration and operation of map utilization block>
FIG. 9 illustrates the configuration of the map utilization block. The map utilization block 40 includes a sensor unit 41, a dead reckoning unit 42, a feature shape extraction unit 43, an animal body candidate extraction unit 44, a map information storage unit 45, a weighting processing unit 46, a star reckoning unit 47, and a self-position integration unit 48.
The sensor unit 41 is configured in the same manner as in the map generation block 30 and has an internal sensor 411 and an external sensor 412. The internal sensor 411 is configured using a position sensor, an angle sensor, an acceleration sensor, a gyro sensor, and the like, acquires sensing data related to the moving body itself, and outputs it to the dead reckoning unit 42. The external sensor 412 is configured using a distance measuring sensor 412a, an image sensor 412b, and the like, and acquires sensing data regarding the surrounding environment of the moving body. The distance measuring sensor 412a of the external sensor 412 outputs sensing data (distance measurement data) indicating the distances to surrounding objects to the feature shape extraction unit 43 and the weighting processing unit 46. The image sensor 412b of the external sensor 412 outputs sensing data (captured image data) to the feature shape extraction unit 43.
The dead reckoning unit 42 determines, based on the sensing data output from the internal sensor 411, in which direction and by how much the moving body has moved, estimates the self-position, and outputs position information indicating the estimated current self-position to the self-position integration unit 48.
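For illustration, this kind of dead reckoning can be pictured as a simple planar pose integration; the forward-speed and yaw-rate inputs and the fixed time step below are assumptions for the example, not the actual interface of the internal sensor 411.

```python
import math

def dead_reckoning_step(pose, v, omega, dt):
    """pose = (x, y, theta); v: forward speed derived from the internal sensors;
    omega: yaw rate from the gyro sensor; dt: elapsed time since the last update.
    Returns the pose advanced by one integration step."""
    x, y, theta = pose
    x += v * dt * math.cos(theta)
    y += v * dt * math.sin(theta)
    theta += omega * dt
    return (x, y, theta)
```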
The feature shape extraction unit 43 performs feature shape extraction using the feature shape recognition model generated by the learning block 20, based on the captured image indicated by the captured image data output from the external sensor 412, or on the captured image and the depth image indicated by the distance measurement data. The feature shape extraction unit 43 generates a feature shape list indicating the extracted feature shapes and outputs it to the animal body candidate extraction unit 44.
Based on the feature shape list output from the feature shape extraction unit 43, the animal body candidate extraction unit 44 extracts objects including the feature shapes shown in the list as animal body candidates. The animal body candidate extraction unit 44 outputs information indicating the extracted animal body candidates to the weighting processing unit 46.
The map information storage unit 45 stores the map information (prior map information) generated by the map generation block 30. The map information storage unit 45 outputs the prior map information, to which the area label information has been added, to the weighting processing unit 46, and also outputs the prior map information to the star reckoning unit 47.
The weighting processing unit 46 weights the sensing data supplied from the external sensor 412. For example, the weighting processing unit 46 weights the depth image indicated by the distance measurement data based on at least one of the information indicating the animal body candidates and the area label information of the prior map information, and reduces the weight of the regions occupied by animal body candidates and of the regions where animal bodies are likely to exist. The weighting processing unit 46 weights the depth image according to at least one of the animal body score of a candidate and the animal body presence score of the area where the candidate is located, reducing the weight as the likelihood that an animal body is present, or the animal-body-likeness, increases. The weighting processing unit 46 outputs the weighted depth image to the star reckoning unit 47. The weighting processing unit 46 may also determine animal bodies in the same manner as the determination unit 38 of the map generation block 30, based on the information indicating the animal body candidates and the area label information of the prior map information, and reduce the weight of the regions of the depth image determined to be animal bodies.
The star reckoning unit 47 estimates the self-position by matching the depth image output from the weighting processing unit 46 with the prior map information stored in the map information storage unit 45, and outputs position information indicating the estimated current self-position to the self-position integration unit 48. Since the depth image used by the star reckoning unit 47 has reduced weights in the regions of animal bodies and in the regions where animal bodies are likely to exist, the position information generated by the star reckoning unit 47 is little affected by animal bodies.
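One simple way to picture how the down-weighted depth data enters the matching is the toy scoring function below: points whose weight WE is small contribute little to the score of a candidate pose. The 2-D grid layout, resolution, and data structures are assumptions, and this is not the matching algorithm used by the embodiment.

```python
import math

def match_score(pose, points, weights, prior_map, resolution=0.05):
    """pose = (x, y, theta); points: list of (px, py) in the sensor frame;
    weights: per-point weights WE from the weighting processing unit;
    prior_map[iy][ix] is 1 for occupied cells of the prior map, 0 otherwise."""
    x, y, theta = pose
    score = 0.0
    for (px, py), w in zip(points, weights):
        gx = x + px * math.cos(theta) - py * math.sin(theta)
        gy = y + px * math.sin(theta) + py * math.cos(theta)
        ix, iy = int(gx / resolution), int(gy / resolution)
        if 0 <= iy < len(prior_map) and 0 <= ix < len(prior_map[0]):
            score += w * prior_map[iy][ix]  # low-weight (likely moving) points barely count
    return score
```

The pose with the highest score over the searched candidate poses would then be output as the star reckoning estimate.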
The self-position integration unit 48 integrates the position information output from the dead reckoning unit 42 and the position information output from the star reckoning unit 47. For example, using an extended Kalman filter or a grid point observer, the self-position integration unit 48 integrates the two pieces of position information obtained by the dead reckoning unit 42 and the star reckoning unit 47, and generates and outputs self-position information that has less error than the position information obtained by the dead reckoning unit 42 and is more accurate than the position information obtained by the star reckoning unit 47.
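As a simplified stand-in for the extended Kalman filter or grid point observer, the sketch below fuses the two position estimates with a variance-weighted gain; the variance values are assumptions supplied by the caller, and a real implementation would propagate a full state and covariance.

```python
def fuse_positions(dead_reckoning_xy, dr_variance, star_reckoning_xy, sr_variance):
    """Variance-weighted fusion of the dead reckoning and star reckoning positions.
    The lower-variance estimate dominates the fused result."""
    fused = []
    for dr, sr in zip(dead_reckoning_xy, star_reckoning_xy):
        k = dr_variance / (dr_variance + sr_variance)  # Kalman-style gain
        fused.append(dr + k * (sr - dr))
    return tuple(fused)
```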
FIG. 10 is a flowchart illustrating a part of the operation of the map utilization block. In step ST21, the map utilization block 40 acquires sensing data. The map utilization block 40 acquires the sensing data generated by the internal sensor 411 and the external sensor 412 of the sensor unit 41, and proceeds to step ST22.
In step ST22, the map utilization block performs the feature shape extraction process. Based on the sensing data (for example, distance measurement data and captured image data) acquired in step ST21, the map utilization block 40 performs feature shape extraction using the feature shape recognition model generated by the learning block 20, generates a feature shape list indicating the extracted feature shapes, and proceeds to step ST23.
In step ST23, the map utilization block performs the animal body candidate extraction process. The map utilization block 40 determines animal bodies based on the feature shapes extracted in step ST22 and sets the determined objects as animal body candidates. Further, the map utilization block 40 assigns each animal body candidate an animal body score indicating its animal-body-likeness, and proceeds to step ST24.
In step ST24, the map utilization block performs the weighting process. For each animal body candidate extracted in step ST23, the map utilization block 40 calculates a weight based on the animal body score of the candidate and the animal body presence score of the area on the map where the candidate is located. For example, the weight WE of the distance measurement data indicating an animal body candidate is calculated with equation (2) from the animal body score SL and the animal body presence score SR. The coefficients α and β are weights for the animal body score SL and the animal body presence score SR and are set in advance.
WE = 1 / (α × SR + β × SL) ... (2)
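A minimal sketch of equation (2) is given below; it assumes the two scores are already available for the grid cell, and the small epsilon term is added only to avoid division by zero (it is not part of the equation).

```python
def grid_weight(score_sr: float, score_sl: float,
                alpha: float = 0.5, beta: float = 0.5,
                eps: float = 1e-6) -> float:
    """Weight WE of a grid cell covered by an animal body candidate, following
    equation (2): the more animal-body-like the cell, the smaller its weight in
    the subsequent matching. Cells not covered by any candidate keep weight 1.0."""
    return 1.0 / (alpha * score_sr + beta * score_sl + eps)
```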
FIG. 11 illustrates the result of the weighting process. For example, when a luggage carrying cart is placed in the passage indicated by the area identification information LB1, the animal body presence score SR1 of the area LB1 and the animal body score SL1 of the cart both take high values as described above, so the weight WEob1 of the grid Gob1 on which the cart is located becomes small.
In this way, in the map utilization block 40 the grid cells are weighted based on the animal body score SL of each candidate and the animal body presence score SR of the area where the candidate is located, and the weight is reduced as the likelihood that the cell corresponds to an animal body increases. Therefore, the self-position can be estimated with less influence from animal bodies, for example when performing star reckoning.
According to the present technology, animal bodies are quickly deleted from the prior map information used by the map utilization block 40, so the star reckoning unit 47 can estimate the self-position with little influence from animal bodies. Furthermore, since the weight of the grid cells indicating animal body candidates is reduced, the star reckoning unit 47 can estimate the self-position with even less influence from animal bodies. In addition, since the self-position is estimated by integrating the position information output from the dead reckoning unit 42 and the position information output from the star reckoning unit 47, the self-position can be estimated accurately even if, for example, the accuracy of the position information based on the sensing data from the internal sensor decreases. Moreover, the self-position can be estimated even when the spatial structure changes significantly over time.
In addition, since animal body candidates can be extracted based on the characteristic shape of an object, the influence of animal bodies that cannot be recognized by, for example, semantic segmentation can also be reduced. Alternatively, animal bodies may first be detected by semantic segmentation, and the present technology may then be used to determine whether objects not detected as animal bodies by the semantic segmentation are animal bodies.
<5. Application example>
The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
FIG. 12 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a moving body control system to which the technology according to the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected via a communication network 7010. In the example shown in FIG. 12, the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, a vehicle exterior information detection unit 7400, an in-vehicle information detection unit 7500, and an integrated control unit 7600. The communication network 7010 connecting these control units may be an in-vehicle communication network conforming to any standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer or parameters used for various calculations, and a drive circuit that drives the devices to be controlled. Each control unit includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication. In FIG. 12, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning unit 7640, a beacon receiving unit 7650, an in-vehicle device I/F 7660, an audio/image output unit 7670, an in-vehicle network I/F 7680, and a storage unit 7690 are shown as the functional configuration of the integrated control unit 7600. The other control units similarly include a microcomputer, a communication I/F, a storage unit, and the like.
The drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 7100 functions as a control device for a driving force generating device such as an internal combustion engine or a drive motor that generates the driving force of the vehicle, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like. The drive system control unit 7100 may also function as a control device such as an ABS (Antilock Brake System) or an ESC (Electronic Stability Control).
A vehicle state detection unit 7110 is connected to the drive system control unit 7100. The vehicle state detection unit 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like. The drive system control unit 7100 performs arithmetic processing using the signals input from the vehicle state detection unit 7110 and controls the internal combustion engine, the drive motor, an electric power steering device, a braking device, and the like.
The body system control unit 7200 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, may be input to the body system control unit 7200. The body system control unit 7200 accepts the input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310, which is the power supply source of the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, or the remaining capacity of the battery is input to the battery control unit 7300 from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals and controls the temperature regulation of the secondary battery 7310 or a cooling device or the like provided in the battery device.
The vehicle exterior information detection unit 7400 detects information outside the vehicle equipped with the vehicle control system 7000. For example, at least one of an imaging unit 7410 and a vehicle exterior information detection section 7420 is connected to the vehicle exterior information detection unit 7400. The imaging unit 7410 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The vehicle exterior information detection section 7420 includes, for example, at least one of an environment sensor for detecting the current weather or meteorological conditions and an ambient information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle equipped with the vehicle control system 7000.
The environment sensor may be, for example, at least one of a raindrop sensor that detects rain, a fog sensor that detects fog, a sunshine sensor that detects the degree of sunshine, and a snow sensor that detects snowfall. The ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device. The imaging unit 7410 and the vehicle exterior information detection section 7420 may each be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated.
Here, FIG. 13 shows an example of the installation positions of the imaging unit 7410 and the vehicle exterior information detection section 7420. Imaging units 7910, 7912, 7914, 7916, and 7918 are provided, for example, at at least one of the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of a vehicle 7900. The imaging unit 7910 provided on the front nose and the imaging unit 7918 provided on the upper part of the windshield in the vehicle interior mainly acquire images in front of the vehicle 7900. The imaging units 7912 and 7914 provided on the side mirrors mainly acquire images of the sides of the vehicle 7900. The imaging unit 7916 provided on the rear bumper or the back door mainly acquires images behind the vehicle 7900. The imaging unit 7918 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
FIG. 13 also shows an example of the imaging ranges of the respective imaging units 7910, 7912, 7914, and 7916. The imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose, the imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively, and the imaging range d indicates the imaging range of the imaging unit 7916 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, a bird's-eye view image of the vehicle 7900 viewed from above can be obtained.
The vehicle exterior information detection sections 7920, 7922, 7924, 7926, 7928, and 7930 provided at the front, rear, sides, and corners of the vehicle 7900 and at the upper part of the windshield in the vehicle interior may be, for example, ultrasonic sensors or radar devices. The vehicle exterior information detection sections 7920, 7926, and 7930 provided on the front nose, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 7900 may be, for example, LIDAR devices. These vehicle exterior information detection sections 7920 to 7930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, or the like.
Returning to FIG. 12, the description is continued. The vehicle exterior information detection unit 7400 causes the imaging unit 7410 to capture images outside the vehicle and receives the captured image data. The vehicle exterior information detection unit 7400 also receives detection information from the connected vehicle exterior information detection section 7420. When the vehicle exterior information detection section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 causes ultrasonic waves, electromagnetic waves, or the like to be transmitted and receives information on the reflected waves. Based on the received information, the vehicle exterior information detection unit 7400 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like. The vehicle exterior information detection unit 7400 may also perform environment recognition processing for recognizing rainfall, fog, road surface conditions, and the like based on the received information, and may calculate the distance to objects outside the vehicle based on the received information.
The vehicle exterior information detection unit 7400 may also perform image recognition processing or distance detection processing for recognizing persons, vehicles, obstacles, signs, characters on the road surface, and the like based on the received image data. The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data and combine the image data captured by different imaging units 7410 to generate a bird's-eye view image or a panoramic image. The vehicle exterior information detection unit 7400 may also perform viewpoint conversion processing using the image data captured by different imaging units 7410.
The in-vehicle information detection unit 7500 detects information inside the vehicle. For example, a driver state detection unit 7510 that detects the state of the driver is connected to the in-vehicle information detection unit 7500. The driver state detection unit 7510 may include a camera that images the driver, a biosensor that detects biometric information of the driver, a microphone that collects sound in the vehicle interior, and the like. The biosensor is provided, for example, on the seat surface or the steering wheel and detects biometric information of a passenger sitting on the seat or of the driver holding the steering wheel. The in-vehicle information detection unit 7500 may calculate the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection unit 7510, or may determine whether the driver is dozing off. The in-vehicle information detection unit 7500 may also perform processing such as noise canceling on the collected audio signal.
The integrated control unit 7600 controls the overall operation of the vehicle control system 7000 according to various programs. An input unit 7800 is connected to the integrated control unit 7600. The input unit 7800 is realized by a device that can be operated by a passenger, such as a touch panel, a button, a microphone, a switch, or a lever. Data obtained by voice recognition of speech input through the microphone may be input to the integrated control unit 7600. The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) that supports the operation of the vehicle control system 7000. The input unit 7800 may also be, for example, a camera, in which case the passenger can input information by gesture. Alternatively, data obtained by detecting the movement of a wearable device worn by the passenger may be input. Further, the input unit 7800 may include, for example, an input control circuit that generates an input signal based on the information input by the passenger or the like using the input unit 7800 and outputs it to the integrated control unit 7600. By operating the input unit 7800, the passenger or the like inputs various data to the vehicle control system 7000 and instructs it to perform processing operations.
The storage unit 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, and the like. The storage unit 7690 may be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark). The general-purpose communication I/F 7620 may connect, for example via a base station or an access point, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network). The general-purpose communication I/F 7620 may also connect to a terminal existing in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol formulated for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
The positioning unit 7640 receives, for example, GNSS signals from GNSS (Global Navigation Satellite System) satellites (for example, GPS signals from GPS (Global Positioning System) satellites), executes positioning, and generates position information including the latitude, longitude, and altitude of the vehicle. The positioning unit 7640 may specify the current position by exchanging signals with a wireless access point, or may acquire position information from a terminal having a positioning function, such as a mobile phone, a PHS, or a smartphone.
The beacon receiving unit 7650 receives, for example, radio waves or electromagnetic waves transmitted from wireless stations or the like installed on the road and acquires information such as the current position, traffic congestion, road closures, or required travel time. The function of the beacon receiving unit 7650 may be included in the dedicated communication I/F 7630 described above.
The in-vehicle device I/F 7660 is a communication interface that mediates connections between the microcomputer 7610 and various in-vehicle devices 7760 existing in the vehicle. The in-vehicle device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB). The in-vehicle device I/F 7660 may also establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and a cable if necessary), not shown. The in-vehicle devices 7760 may include, for example, at least one of a mobile device or wearable device owned by a passenger and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a route to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
The in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs, based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and output control commands to the drive system control unit 7100. For example, the microcomputer 7610 may perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including collision avoidance or impact mitigation of the vehicle, following travel based on the inter-vehicle distance, vehicle speed maintenance travel, vehicle collision warning, lane departure warning, and the like. The microcomputer 7610 may also perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without relying on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the acquired information on the surroundings of the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and objects such as surrounding structures and persons based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may generate local map information including information on the surroundings of the current position of the vehicle. The microcomputer 7610 may also predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or turning on a warning lamp.
The audio/image output unit 7670 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying passengers of the vehicle or the outside of the vehicle of information. In the example of FIG. 12, an audio speaker 7710, a display unit 7720, and an instrument panel 7730 are illustrated as output devices. The display unit 7720 may include, for example, at least one of an on-board display and a head-up display. The display unit 7720 may have an AR (Augmented Reality) display function. The output device may also be a device other than these, such as headphones, a wearable device such as a glasses-type display worn by a passenger, a projector, or a lamp. When the output device is a display device, the display device visually displays the results obtained by the various processes performed by the microcomputer 7610 or the information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
In the example shown in FIG. 12, at least two control units connected via the communication network 7010 may be integrated into one control unit. Alternatively, each control unit may be composed of a plurality of control units. Further, the vehicle control system 7000 may include another control unit not shown. In the above description, some or all of the functions performed by any one of the control units may be given to another control unit. That is, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed by any of the control units. Similarly, a sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
A computer program for realizing the functions of the information processing system 10 according to the present embodiment shown in FIG. 1 can be implemented in any of the control units or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disc, a magneto-optical disc, a flash memory, or the like. The above computer program may also be distributed, for example, via a network without using a recording medium.
In the vehicle control system 7000 described above, if the map generation block 30 and the map utilization block 40 according to the present embodiment are applied to the integrated control unit 7600 shown in FIG. 12, animal bodies can be quickly deleted from the map information, so the self-position can be detected accurately with little influence from animal bodies. Therefore, cooperative control for the purpose of automated driving or the like can be performed with high accuracy.
The series of processes described in this specification can be executed by hardware, software, or a combined configuration of both. When processing is executed by software, a program recording the processing sequence is installed in a memory in a computer incorporated in dedicated hardware and executed. Alternatively, the program can be installed and executed on a general-purpose computer capable of executing various processes.
For example, the program can be recorded in advance on a hard disk, an SSD (Solid State Drive), or a ROM (Read Only Memory) as a recording medium. Alternatively, the program can be temporarily or permanently stored (recorded) on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto-optical) disc, a DVD (Digital Versatile Disc), a BD (Blu-Ray Disc (registered trademark)), a magnetic disk, or a semiconductor memory card. Such a removable recording medium can be provided as so-called package software.
In addition to being installed on a computer from a removable recording medium, the program may be transferred from a download site to the computer wirelessly or by wire via a network such as a LAN (Local Area Network) or the Internet. The computer can receive the program transferred in this way and install it on a recording medium such as a built-in hard disk.
The effects described in this specification are merely examples and are not limiting, and there may be additional effects not described. The present technology should not be construed as being limited to the embodiments of the technology described above. The embodiments disclose the present technology in the form of examples, and it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present technology. That is, the claims should be taken into consideration in order to determine the gist of the present technology.
In addition, the information processing device of the present technology can have the following configurations.
(1) An extraction unit that extracts the characteristic shape of an object existing in sensing data, or area information indicating a spatial area in which the object is located in map information.
A determination unit that determines whether the object is an animal body based on the extraction result by the extraction unit, and
An information processing device including a map information processing unit that deletes the object from the map information when the determination unit determines that the object is an animal body.
(2) The extraction unit extracts animal body candidates based on the characteristic shape of the extracted object.
The information processing device according to (1), wherein the determination unit determines whether the animal body candidate extracted by the extraction unit is an animal body.
(3) The information processing apparatus according to (2), wherein the extraction unit extracts a feature shape of the object using a feature shape recognition model generated in advance.
(4) The information processing apparatus according to (2) or (3), wherein the extraction unit sets an animal body score indicating animal-likeness on the object based on the extraction result of the characteristic shape.
(5) The information processing apparatus according to (4), wherein the extraction unit extracts the animal body candidate based on the animal body score.
(6) The information processing device according to (5), wherein the extraction unit performs area recognition on the spatial area indicated by the sensing data using an area recognition model generated in advance and sets, for each recognized area, an animal body presence score indicating how likely an animal body is to exist there, and
the determination unit determines whether the animal body candidate is an animal body based on at least one of the animal body score of the animal body candidate and the animal body presence score of the area where the animal body candidate is located.
(7) The information processing device according to any one of (1) to (6), wherein the extraction unit performs area recognition on the spatial area indicated by the sensing data using an area recognition model generated in advance, and
the map information processing unit adds area label information relating to the recognized areas to the map information based on the area recognition result of the extraction unit.
(8) The information processing device according to any one of (1) to (7), wherein the sensing data is data acquired by an external sensor, the information processing device further comprising a map information generation unit that generates the map information based on the sensing data.
(9) The information processing device according to any one of (1) to (8), further comprising a star reckoning unit that detects a self-position using the sensing data and the map information from which the animal body has been deleted by the map information processing unit.
(10) The information processing device according to (9), further comprising a weighting processing unit that weights the sensing data, wherein the star reckoning unit detects the self-position using the sensing data weighted by the weighting processing unit.
(11) The extraction unit extracts animal body candidates based on the characteristic shape of the extracted object.
The information processing apparatus according to (10), wherein the weighting processing unit weights the sensing data indicating the animal body candidate extracted by the extraction unit.
(12) The map information is provided with area label information indicating the animal body existence score indicating the ease of existence of the animal body for each area.
The information processing device according to (10) or (11), wherein the weighting processing unit weights the sensing data based on the area label information.
(13) The information processing device according to (12), wherein the weighting processing unit weights the sensing data according to at least one of the animal body presence score and the animal body score, indicating animal-body-likeness, set for the animal body candidate based on the extraction result of the characteristic shape by the extraction unit, and
reduces the weight as an animal body becomes more likely to be present or the animal-body-likeness becomes higher.
(14) A dead reckoning unit that detects its own position based on the sensing data acquired by the internal sensor, and
The information processing apparatus according to any one of (9) to (13), further comprising a self-position integrating unit that integrates the self-position detected by the dead reckoning unit and the self-position detected by the star reckoning unit.
10 ... Information processing system
20 ... Learning block
21 ... Data storage unit
22 ... Feature shape learner
23 ... Area learner
30 ... Map generation block
31 ... Sensor unit
32 ... Self-position estimation unit
33 ... Animal body deletion filter
34 ... Map generation unit
35 ... Area extraction unit
36 ... Feature shape extraction unit
37 ... Animal body candidate extraction unit
38 ... Determination unit
39 ... Map information processing unit
40 ... Map utilization block
41 ... Sensor unit
42 ... Dead reckoning unit
43 ... Feature shape extraction unit
44 ... Animal body candidate extraction unit
45 ... Map information storage unit
46 ... Weighting processing unit
47 ... Star reckoning unit
48 ... Self-position integration unit
311, 411 ... Internal sensor
312, 412 ... External sensor
312a, 412a ... Distance measuring sensor
312b, 412b ... Image sensor

Claims (16)

1. An information processing device comprising:
   an extraction unit that extracts a characteristic shape of an object present in sensing data, or area information indicating a spatial area in map information in which the object is located;
   a determination unit that determines, on the basis of an extraction result from the extraction unit, whether the object is an animal body; and
   a map information processing unit that deletes the object from the map information when the determination unit determines that the object is an animal body.

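To make the division of roles in claim 1 concrete, the sketch below strings the three units together as plain Python functions operating on an occupancy grid. The attribute names (sensing_data.objects, obj.cells), the grid representation, and the "set cells back to free space" deletion are assumptions for illustration only, not the claimed implementation.

import numpy as np

def delete_animal_bodies(sensing_data, occupancy_map: np.ndarray,
                         extract, determine) -> np.ndarray:
    """Claim 1 as a pipeline sketch.

    extract(obj):      extraction unit; returns characteristic-shape features
                       or area information for the object.
    determine(result): determination unit; True if the object is judged to be
                       an animal body.
    occupancy_map:     2D grid whose cells the detected objects index into
                       (assumed representation of the map information).
    """
    for obj in sensing_data.objects:
        result = extract(obj)
        if determine(result):
            for (r, c) in obj.cells:       # map information processing unit:
                occupancy_map[r, c] = 0.0  # delete the object from the map
    return occupancy_map
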
2. The information processing device according to claim 1, wherein
   the extraction unit extracts an animal body candidate on the basis of the extracted characteristic shape of the object, and
   the determination unit determines whether the animal body candidate extracted by the extraction unit is an animal body.

3. The information processing device according to claim 2, wherein the extraction unit extracts the characteristic shape of the object using a feature shape recognition model generated in advance.

4. The information processing device according to claim 2, wherein the extraction unit sets, for the object, an animal body score indicating animal-body likeness on the basis of the extraction result of the characteristic shape.

5. The information processing device according to claim 4, wherein the extraction unit extracts the animal body candidate on the basis of the animal body score.

6. The information processing device according to claim 5, wherein
   the extraction unit performs area recognition on the spatial area indicated by the sensing data using an area recognition model generated in advance, and sets, for each recognized area, an animal body presence score indicating how likely an animal body is to be present, and
   the determination unit determines whether the animal body candidate is an animal body on the basis of at least one of the animal body score of the animal body candidate and the animal body presence score of the area in which the animal body candidate is located.

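One way the determination unit of claim 6 could combine the two scores; the threshold values and the "either score suffices" rule are assumptions made for the example, since the claim only requires that at least one of the scores be used.

ANIMAL_BODY_SCORE_THRESHOLD = 0.7   # assumed value
PRESENCE_SCORE_THRESHOLD = 0.6      # assumed value

def is_animal_body(animal_body_score: float, presence_score: float) -> bool:
    """Judge an animal body candidate from the shape-based score, the
    area-based presence score, or both."""
    return (animal_body_score >= ANIMAL_BODY_SCORE_THRESHOLD
            or presence_score >= PRESENCE_SCORE_THRESHOLD)
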
7. The information processing device according to claim 1, wherein
   the extraction unit performs area recognition on the spatial area indicated by the sensing data using an area recognition model generated in advance, and
   the map information processing unit adds area label information relating to the recognized area to the map information on the basis of the area recognition result from the extraction unit.

8. The information processing device according to claim 1, wherein the sensing data is data acquired by an external sensor, and the device further comprises a map information generation unit that generates the map information on the basis of the sensing data.

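Claim 8 only states that the map information is generated from external-sensor data; the occupancy-grid construction below is a generic illustration of such a map information generation unit. The grid size, cell resolution, endpoint-only update, and NumPy representation are assumptions.

import numpy as np

def build_occupancy_grid(scan_points_xy: np.ndarray, size: int = 200,
                         resolution: float = 0.05) -> np.ndarray:
    """Mark cells hit by distance-measurement returns as occupied.

    scan_points_xy: (N, 2) array of points in metres, sensor at the grid centre.
    resolution:     cell edge length in metres.
    """
    grid = np.zeros((size, size), dtype=np.float32)
    idx = np.floor(scan_points_xy / resolution).astype(int) + size // 2
    inside = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[inside, 1], idx[inside, 0]] = 1.0  # row index = y, column index = x
    return grid
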
9. The information processing device according to claim 1, further comprising a star reckoning unit that detects a self-position using the sensing data and the map information from which the animal body has been deleted by the map information processing unit.

10. The information processing device according to claim 9, further comprising a weighting processing unit that weights the sensing data, wherein the star reckoning unit detects the self-position using the sensing data weighted by the weighting processing unit.

11. The information processing device according to claim 10, wherein
   the extraction unit extracts an animal body candidate on the basis of the extracted characteristic shape of the object, and
   the weighting processing unit weights the sensing data indicating the animal body candidate extracted by the extraction unit.

12. The information processing device according to claim 10, wherein
   the map information is provided with area label information indicating, for each area, an animal body presence score representing how likely an animal body is to be present, and
   the weighting processing unit weights the sensing data on the basis of the area label information.

13. The information processing device according to claim 12, wherein the weighting processing unit weights the sensing data according to at least one of the animal body presence score and an animal body score indicating animal-body likeness set for the animal body candidate on the basis of the extraction result of the characteristic shape by the extraction unit, and reduces the weight as an animal body becomes more likely to be present or the animal-body likeness becomes higher.

14. The information processing device according to claim 9, further comprising:
   a dead reckoning unit that detects a self-position on the basis of sensing data acquired by an internal sensor; and
   a self-position integration unit that integrates the self-position detected by the dead reckoning unit and the self-position detected by the star reckoning unit.

15. An information processing method comprising:
   extracting, by an extraction unit, a characteristic shape of an object present in sensing data, or area information indicating an area in map information in which the object is located;
   determining, by a determination unit, whether the object is an animal body on the basis of an extraction result from the extraction unit; and
   deleting, by a map information processing unit, the object from the map information when the determination unit determines that the object is an animal body.

16. A program for causing a computer to execute generation of map information, the program causing the computer to execute:
   a procedure of extracting a characteristic shape of an object present in sensing data, or area information indicating an area in the map information in which the object is located;
   a procedure of determining whether the object is an animal body on the basis of the extraction result of the characteristic shape or the area information; and
   a procedure of deleting the object determined to be an animal body from the map information.

PCT/JP2020/027813 2019-10-16 2020-07-17 Information processing device, information processing method, and program WO2021075112A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-189217 2019-10-16
JP2019189217A JP2021064237A (en) 2019-10-16 2019-10-16 Information processing device, information processing method and program

Publications (1)

Publication Number Publication Date
WO2021075112A1

Family

ID=75486330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/027813 WO2021075112A1 (en) 2019-10-16 2020-07-17 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2021064237A (en)
WO (1) WO2021075112A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004326264A (en) * 2003-04-22 2004-11-18 Matsushita Electric Works Ltd Obstacle detecting device and autonomous mobile robot using the same and obstacle detecting method and obstacle detecting program
JP2006079325A (en) * 2004-09-09 2006-03-23 Matsushita Electric Works Ltd Autonomous mobile device
JP2014203429A (en) * 2013-04-10 2014-10-27 トヨタ自動車株式会社 Map generation apparatus, map generation method, and control program
JP2016126662A (en) * 2015-01-07 2016-07-11 株式会社リコー Map creation device, map creation method, and program
JP2017181870A (en) * 2016-03-31 2017-10-05 ソニー株式会社 Information processing device and information processing server
US20170299714A1 (en) * 2016-04-15 2017-10-19 Mohsen Rohani Systems and methods for side-directed radar from a vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tomoaki Yoshida, Kiyoshi Irie, Eiji Koyanagi, Masahiro Tomono: "An outdoor navigation platform with a 3D scanner and gyro-assisted odometry", Transactions of the Society of Instrument and Control Engineers, vol. 47, no. 10, pages 493-500, XP055819184 *

Also Published As

Publication number Publication date
JP2021064237A (en) 2021-04-22

Similar Documents

Publication Publication Date Title
US11531354B2 (en) Image processing apparatus and image processing method
WO2017057055A1 (en) Information processing device, information terminal and information processing method
WO2017057044A1 (en) Information processing device and information processing method
JP2019045892A (en) Information processing apparatus, information processing method, program and movable body
JP7294148B2 (en) CALIBRATION DEVICE, CALIBRATION METHOD AND PROGRAM
US20200263994A1 (en) Information processing apparatus, information processing method, program, and moving body
JP6764573B2 (en) Image processing equipment, image processing methods, and programs
JP7180670B2 (en) Control device, control method and program
KR20220020804A (en) Information processing devices and information processing methods, and programs
US11533420B2 (en) Server, method, non-transitory computer-readable medium, and system
GB2611589A (en) Techniques for finding and accessing vehicles
US20220277556A1 (en) Information processing device, information processing method, and program
JP7409309B2 (en) Information processing device, information processing method, and program
WO2021075112A1 (en) Information processing device, information processing method, and program
WO2022044830A1 (en) Information processing device and information processing method
JP7363890B2 (en) Information processing device, information processing method and program
WO2022024602A1 (en) Information processing device, information processing method and program
JP2020056757A (en) Information processor, method, program, and movable body control system
WO2020203240A1 (en) Information processing device and information processing method
CN115128566A (en) Radar data determination circuit and radar data determination method
WO2022059489A1 (en) Information processing device, information processing method, and program
WO2020195969A1 (en) Information processing device, information processing method, and program
WO2021065510A1 (en) Information processing device, information processing method, information processing system, and program
WO2023117398A1 (en) Circuitry, system, and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20877014

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20877014

Country of ref document: EP

Kind code of ref document: A1