WO2020045345A1 - Sign recognition system and sign recognition method - Google Patents


Info

Publication number
WO2020045345A1
Authority
WO
WIPO (PCT)
Prior art keywords
sign
vehicle
data
attribute information
information
Prior art date
Application number
PCT/JP2019/033315
Other languages
French (fr)
Japanese (ja)
Inventor
臼井 美雅
朋夫 野村
Original Assignee
DENSO CORPORATION
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019136947A (JP7088136B2)
Application filed by DENSO CORPORATION
Priority to CN201980056008.0A (CN112639905B)
Priority to DE112019004319.6T (DE112019004319T5)
Publication of WO2020045345A1
Priority to US17/186,948 (US11830255B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions

Definitions

  • the present disclosure relates to a sign object recognition system and a sign object recognition method that photograph a predetermined sign object, including a guide sign and a signboard for road guidance, with an on-vehicle camera mounted on a vehicle and recognize the sign object based on the photographed image data.
  • Patent Document 1 proposes the following. The presence of a guide sign ahead on the road is detected from the image of the vehicle-mounted camera, and the image data of the guide sign portion is processed. Characters such as place names, directions, distances, and intersection names on the guide sign board are then recognized, and a simplified display is shown on a monitor in the vehicle.
  • the present disclosure aims to provide a sign object recognition system and a sign object recognition method that can easily execute the process of specifying which sign object in the database corresponds to the sign object photographed by the vehicle-mounted camera. Note that the sign object here includes, for example, guide signs and signboards for road guidance.
  • a sign object recognition system according to the present disclosure photographs a sign object with an on-board camera mounted on a vehicle and recognizes the sign object based on the captured image data. The system includes an extraction device that recognizes and extracts only a specific type of character in the sign object from the captured image data, and the information of the character string extracted by the extraction device is used as attribute information for identifying the sign object.
  • the system further includes a sign object database for storing sign object data including the installation position information and the attribute information.
  • That is, the sign object data stored in the sign object database includes the installation position information of the sign object and its attribute information, where the attribute information is the information of the character string extracted by the extraction device.
  • the attribute information, that is, the information of a character string extracted by recognizing only a specific type of character in the sign object, can be used to specify or identify each sign object. Therefore, the amount of data handled in order to specify a sign object is significantly reduced, and the communication time and the time required for data processing such as image recognition can be shortened.
  • Moreover, since the recognition target is limited to a specific type of character, the dictionary used for recognition can be small and the processing can be sped up. As a result, there is an excellent effect that the process of specifying which sign object in the sign object database corresponds to the sign object photographed by the vehicle-mounted camera can be executed easily.
  • FIG. 1 is a block diagram schematically showing an overall configuration of a system according to the first embodiment
  • FIG. 2 is a flowchart schematically illustrating a procedure of a process of registering guide sign data performed by the control unit and the processing control device according to the first embodiment
  • FIG. 3A is a diagram (part 1) illustrating a specific example of a guide sign according to the first embodiment
  • FIG. 3B is a diagram (part 2) illustrating a specific example of the guide sign according to the first embodiment
  • FIG. 3C is a diagram (part 3) illustrating a specific example of the guide sign according to the first embodiment
  • FIG. 3A is a diagram (part 1) illustrating a specific example of a guide sign according to the first embodiment
  • FIG. 3B is a diagram (part 2) illustrating a specific example of the guide sign according to the first embodiment
  • FIG. 3C is a diagram (part 3) illustrating a specific example of the guide sign according to the first embodiment
  • FIG. 3A is a diagram (part 1) illustrating
  • FIG. 3D is a diagram (part 4) illustrating a specific example of the guidance sign according to the first embodiment
  • FIG. 3E is a diagram (part 5) illustrating a specific example of the guidance sign according to the first embodiment
  • FIG. 4 is a diagram for explaining a method of processing for extracting only numbers from captured image data according to the first embodiment
  • FIG. 5 is a diagram for explaining a case where the installation position information of the guide sign data according to the first embodiment is updated and registered
  • FIG. 6 is a block diagram schematically illustrating an entire configuration of a system according to the second embodiment.
  • FIG. 7 is a flowchart schematically illustrating a procedure of a process of collating guide sign data executed by the control unit and the processing control device according to the second embodiment;
  • FIG. 8 is a diagram for explaining a method of performing collation according to the second embodiment.
  • FIG. 9 is a block diagram schematically illustrating a configuration of an in-vehicle device according to the third embodiment.
  • FIG. 10 is a flowchart schematically illustrating a procedure of a process of collating guide sign data executed by the control unit according to the third embodiment;
  • FIG. 11 is a flowchart schematically illustrating a procedure of a process of collating guide sign data executed by the control unit according to the fourth embodiment;
  • FIG. 12 is a diagram illustrating a specific example of the guide sign according to the fifth embodiment,
  • FIG. 13 is a diagram for explaining a method of processing for extracting a number together with position information from captured image data according to the sixth embodiment.
  • the "guide sign" is a sign that a road administrator installs at a required position on a road to provide route guidance, point guidance, guidance to attached facilities, and the like; it is installed according to a prescribed format governing the installation position, shape, color, character size, and so on.
  • the "signboards" are installed at stores or along roads and are made mainly for commercial or advertising purposes, to be shown to passersby; they indicate, for example, a store name and its position, including direction and distance.
  • the "sign objects" include guide signs and signboards for road guidance.
  • FIG. 1 schematically shows an entire configuration of a guide sign recognition system 1 as a sign recognition system according to the present embodiment.
  • the guide sign recognition system 1 includes a data center 2 and a vehicle-mounted device 3.
  • the data center 2 collects and analyzes data, and generates a guide sign database as a highly accurate sign object database.
  • the in-vehicle device 3 is provided in each of a plurality of vehicles such as a passenger car and a truck traveling on a road, and only one is shown for convenience.
  • the on-vehicle device 3 mounted on each vehicle includes an on-vehicle camera 4, a position detection unit 5, various on-vehicle sensors 6, a map data storage unit 7, a communication unit 8, a detection data storage unit 9, an operation display unit 10, and a control unit 11.
  • the in-vehicle camera 4 is provided, for example, at the front of a vehicle and configured to photograph at least a road condition ahead in the traveling direction.
  • the position detector 5 has a well-known configuration for detecting the position of the vehicle based on data received by a GPS receiver and the like.
  • the various in-vehicle sensors 6 include sensors for detecting speed information and traveling direction of the own vehicle, that is, information on the direction of the vehicle body.
  • the map data storage unit 7 stores, for example, road map information for the whole country.
  • the communication unit 8 performs communication with the data center 2 via a mobile communication network or using road-to-vehicle communication or the like.
  • the communication unit 8 functions as a transmission unit 8a as a transmission device and a reception unit 8b as a reception device.
  • the detection data storage unit 9 stores detection data including the estimated shooting position information of a guide sign, the obtained attribute information, and the like.
  • the operation display unit 10 has switches (not shown), for example a touch panel and a display; it is operated by a user of the vehicle, for example the driver, and presents necessary displays such as a navigation screen to the user.
  • the control unit 11 includes a computer and controls the entire on-vehicle device 3. The control unit 11 captures images of the road condition ahead with the on-vehicle camera 4 while the vehicle is running. When a guide sign as a sign object is detected in the photographed image data, characters belonging to a specific type are recognized and extracted one by one from the photographed image data of the guide sign using, for example, a well-known OCR technique. The extracted character string is used as attribute information for specifying the guide sign. Therefore, the control unit 11 has a function as an extraction device. Details of the attribute information in the present embodiment will be described later.
  • the control unit 11 estimates the position where the guide sign is installed from the own vehicle position, the traveling speed, the traveling direction, and the like detected by the position detection unit 5 at the time of photographing the guide sign as the sign object. The position is taken as the shooting position information. Then, the control unit 11 causes the detection data storage unit 9 to store the detection data including the photographing position information of the guide sign and the obtained attribute information together with data such as the photographing date and time. After that, the communication unit 8 transmits the detection data stored in the detection data storage unit 9 to the data center 2.
  • the data center 2 includes a communication unit 12, an input operation unit 13, a processing control unit 14, a detection data storage unit 15, a road map database 16, and a guide sign database 17 as a sign object database.
  • the communication unit 12 receives the detection data by communication with the communication unit 8 of each vehicle, and functions as a receiving unit 12a as a receiving device and a transmitting unit 12b as a transmitting device.
  • the input operation unit 13 is for an operator to perform necessary input operations.
  • the processing control device 14 is mainly composed of a computer and controls the entire data center 2. As described later in detail, the processing control device 14 performs a process of generating road map data and the like, and also performs a process of generating and updating guide sign data (see FIG. 5) as the sign object data. The detection data transmitted from each vehicle are collected and temporarily stored in the detection data storage unit 15; a huge amount of detection data is collected from a large number of ordinary vehicles traveling all over Japan.
  • the road map database 16 stores high-precision road map data generated by the processing control device 14.
  • the guide sign database 17 stores guide sign data as sign object data used for landmark information and the like. As shown in FIG. 5, the guide sign data includes the installation position information, that is, the longitude and latitude, of sign objects including guide signs installed on major roads nationwide and commercial signboards installed near roads, together with attribute information for specifying each guide sign.
  • the road map database 16 may include a guide sign database 17 as a sign object database. It is also possible to include sign object data as landmarks in the road map data and to include attribute information in each sign object data.
  • As will be described later in the description of the operation, that is, the description of the flowchart, the control unit 11 of the in-vehicle apparatus 3 executes an extraction step of recognizing and extracting, as the specific type of character in a guide sign, only the numerals 0 to 9 from the captured image data of the in-vehicle camera 4. The information of the extracted character string is used as attribute information.
  • a specific example is described in which a guide sign is photographed as a sign.
  • In the present embodiment, the control unit 11 searches the photographed image data of the guide sign for numerals from left to right, repeating the search row by row from top to bottom, and a character string in which the numerals are arranged in the order extracted is used as the attribute information.
  • the processing control device 14 of the data center 2 receives and collects the detection data from the vehicle-mounted device 3 of each vehicle, and stores the data in the detection data storage unit 15.
  • a sign object data storing step of registering and updating guide sign data in the guide sign database 17 based on the collected detection data is executed. Therefore, the processing control device 14 also has functions as a collection unit 14a as a collection device and a registration unit 14b as a registration device.
  • when registering the guide sign data, the processing control device 14 collects, from the plurality of pieces of detection data received and stored in the detection data storage unit 15, the detection data having the same attribute information. The installation position information is then determined by statistical processing of the shooting position information of the collected detection data and used as the guide sign data.
  • FIGS. 3A to 3E show examples of images of guide signs A1 to A5 installed on a highway, for example, as specific examples of guide signs as sign objects.
  • These guide signs A1 to A5 are for guiding a direction and an exit on an expressway, etc., and are formed by writing white letters mainly on a green background on a square signboard.
  • the guide sign A1 shown in FIG. 3A indicates that the distance to the "Yatomi" exit, interchange number "26", is 2 km.
  • the guide sign A2 shown in FIG. 3B indicates that the exit to the “Yatomi, Tsushima” area is 1 km, and the exit is connected to National Route 155.
  • the guide sign A3 shown in FIG. 3C indicates that the distance to the exit in the direction of "Yatomi, Tsushima” is 550 m.
  • the guidance sign A4 shown in FIG. 3D indicates that the exit is in the direction of "Yatomi, Tsushima”.
  • the guide sign A5 shown in FIG. 3E indicates that the highway radio can be heard from this point at a frequency of 1620 kHz.
  • FIG. 2 shows the procedure of processing up to the registration of guide sign data, executed by the control unit 11 of the in-vehicle apparatus 3 and the processing control device 14 on the data center 2 side, that is, the sign object recognition method in the present embodiment.
  • steps S1 to S3 are processes executed by the control unit 11 of the vehicle-mounted device 3 while the vehicle is running.
  • In step S1, the front of the vehicle is photographed by the in-vehicle camera 4, and the photographed image is constantly monitored for the presence or absence of a guide sign as a sign object.
  • In step S2, a process of recognizing and extracting a specific type of character, in this case numerals, from the photographed image data, in this case a still image, is performed (extraction step).
  • FIG. 4 shows a processing method when the control unit 11 extracts only numbers from the captured image data of the guide sign. That is, taking the guide sign A2 shown in FIG. 3B as a specific example, first, when tracing from left to right in the upper stage, the number “155” is recognized and extracted. No numbers are recognized for the second and third rows from the top. In the lower part, the number “26” is recognized on the left side, and the number “1” is recognized and extracted on the right side.
  • the attribute information of the guide sign A2 is a character string composed of six numbers “155261”.
  • the attribute information of the guide sign A1 shown in FIG. 3A is a character string of “262”.
  • the attribute information of the guide sign A3 illustrated in FIG. 3C is a character string of “15552650”.
  • the attribute information of the guide sign A4 illustrated in FIG. 3D is a character string of “15526”.
  • the attribute information of the guide sign A5 illustrated in FIG. 3E is a character string of “1620”.
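The row-wise, left-to-right scan illustrated in FIG. 4 can be sketched as follows. The detection format (character, x, y) is a hypothetical stand-in for the output of an OCR step, and the row-grouping tolerance is an illustrative assumption; neither is specified in the original text.

```python
# Sketch of the attribute-string rule: digits recognized by OCR are grouped
# into rows (top to bottom) and read left to right within each row.

def build_attribute_string(detections, row_tolerance=20):
    """detections: list of (char, x, y) for recognized digits only."""
    rows = []  # each row: list of (char, x, y) sharing a vertical band
    for det in sorted(detections, key=lambda d: d[2]):  # top to bottom
        for row in rows:
            if abs(row[0][2] - det[2]) <= row_tolerance:  # same row band
                row.append(det)
                break
        else:
            rows.append([det])
    # within each row, read digits left to right
    return "".join(ch for row in rows
                   for ch, _x, _y in sorted(row, key=lambda d: d[1]))

# Guide sign A2 (FIG. 3B): "155" in the top row, "26" and "1" in the bottom row
a2 = [("1", 40, 10), ("5", 55, 10), ("5", 70, 11),
      ("2", 10, 90), ("6", 25, 90), ("1", 120, 92)]
print(build_attribute_string(a2))  # "155261"
```

The scan order alone determines the string, so two signs with the same digits in different layouts can still yield different attribute strings.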
  • In step S3, the position of the photographed guide sign, that is, the photographing position information, is obtained, and the photographing position information and the character string information, that is, the attribute information obtained in step S2, are transmitted as detection data to the data center 2 by the communication unit 8. The photographing position information is estimated based on the own vehicle position detected by the position detection unit 5 at the time of photographing the guide sign and on the distance to the guide sign obtained from the position and size of the guide sign in the photographed image data.
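As a rough illustration of this estimation, the sketch below offsets the own-vehicle position by the estimated distance to the sign along the camera's viewing direction, using a local flat-earth approximation. The function name and heading convention are assumptions; the patent does not specify the actual computation.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def estimate_sign_position(lat_deg, lon_deg, heading_deg, distance_m):
    """Offset (lat, lon) by distance_m along heading_deg (0 = north,
    90 = east, clockwise), using a local flat-earth approximation."""
    d_north = distance_m * math.cos(math.radians(heading_deg))
    d_east = distance_m * math.sin(math.radians(heading_deg))
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    # longitude degrees shrink with latitude
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon

# e.g. vehicle at 35.0 N, 136.8 E heading due east, sign estimated 100 m ahead
print(estimate_sign_position(35.0, 136.8, 90.0, 100.0))
```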
  • Steps S4 and S5 are processes executed by the processing control device 14 of the data center 2.
  • In step S4, the detection data transmitted from the in-vehicle device 3 is received at the data center 2 by the communication unit 12 and written into the detection data storage unit 15.
  • In step S5, statistical processing is performed on the large number of received pieces of detection data, the location where each guide sign exists is specified, and a process of registering the guide sign data, including the position coordinates of the installation position of the guide sign and the attribute information, in the guide sign database 17 is performed (sign object data storing step).
  • the registration includes not only new registration but also update registration.
  • Specifically, detection data having the same attribute information, in this case the same numeric character string, are collected from the large amount of detection data accumulated in the detection data storage unit 15, the position coordinates serving as the photographing position information of those detection data are statistically processed, and the obtained position coordinates are used as the true installation position information.
  • the statistical processing at that time can be performed by, for example, excluding data having an outlier, that is, an abnormal value, and then calculating the average, median, mode, and the like of the installation position information.
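A minimal sketch of this statistical step, assuming planar (x, y) photographing coordinates: positions reported for the same attribute string are filtered for outliers with a z-score threshold (an illustrative choice) and then reduced to the median, one of the representative values the text mentions.

```python
import statistics

def determine_installation_position(positions, z_max=1.5):
    """positions: list of (x, y) photographing coordinates reported for one
    sign. Excludes outliers per axis, then returns the median coordinate."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]

    def filtered(values):
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values)
        if stdev == 0:
            return values
        kept = [v for v in values if abs(v - mean) / stdev <= z_max]
        return kept or values  # never discard everything

    return statistics.median(filtered(xs)), statistics.median(filtered(ys))

# Four consistent reports and one abnormal value per axis
reports = [(10.0, 20.0), (10.2, 19.9), (9.9, 20.1), (10.1, 20.0), (55.0, 80.0)]
print(determine_installation_position(reports))
```

The z-score filter is one simple way to "exclude data having an outlier"; a production system might instead use a median-based (MAD) filter, which is more robust when outliers are frequent.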
  • FIG. 5 shows an example in the case where update registration has been performed.
  • In this example, the installation position coordinates of the guide sign whose attribute information is "155261" (the first entry) have been updated from (X1, Y1) to (X11, Y11).
  • According to the present embodiment, the following effects can be obtained. In the in-vehicle apparatus 3, the guide sign is photographed by the in-vehicle camera 4 while the vehicle is traveling, and the control unit 11 extracts only a specific type of character in the guide sign from the photographed image data to obtain attribute information (extraction step). Then, the communication unit 8 transmits detection data including the photographing position information of the guide sign and the obtained attribute information to the data center 2. In the data center 2, the processing control device 14 receives and collects detection data from the vehicle-mounted devices 3 of a plurality of vehicles. Then, based on the collected detection data, guide sign data including the installation position information and attribute information of the guide sign is generated and registered in the guide sign database 17 (sign object data storing step).
  • Since the attribute information for specifying the guide sign is composed of character string data in which only a specific type of character in the guide sign is extracted, the amount of data transmitted from the vehicle-mounted device 3 to the data center 2 can be greatly reduced. Thereby, the communication time and the time required for data processing can be shortened. The data amount of the guide sign data in the road map database 16 is also reduced, so the required storage capacity is smaller and the data are easier to handle.
  • the guide sign recognition system 1 and recognition method of the present embodiment photograph a guide sign for road guidance as a sign object with the in-vehicle camera 4 mounted on the vehicle and recognize the guide sign based on the photographed image data. According to this system and method, the process of specifying which guide sign in the guide sign database 17 corresponds to the guide sign photographed by the vehicle-mounted camera 4 can be executed easily, which is an excellent effect.
  • Further, the processing control device 14 of the data center 2 collects detection data having the same attribute information from the received plurality of detection data, determines the installation position information by statistically processing the shooting position information of the collected detection data, and uses it as guide sign data. As a result, data including more accurate installation position information can be generated, and a highly accurate guide sign database 17 can be constructed.
  • numerals are used as the specific type of character constituting the attribute information. Since only the ten characters from 0 to 9 need to be extracted and recognized, character recognition is extremely simple: the character type identification processing can be performed with sufficient certainty and in a short time, and data processing becomes easier.
  • When extracting numerals, a rule was adopted whereby the photographed image data of the guide sign is searched for numerals from left to right, the search is repeated from top to bottom, and the character string in which the numerals are arranged in the order extracted becomes the attribute information. Thereby, the process of extracting the attribute information can be easily performed.
  • the in-vehicle camera 4 repeatedly captures images, for example every 100 msec, and it can be set at which point in time the captured image data is used for processing. For example, it is possible to use the photographed image data in which the sign appears largest, immediately before the sign leaves the screen. On the other hand, for purposes such as localization, recognition based on image data photographed relatively early is more useful; for example, it is also possible to set the processing to run on image data captured at a point 50 m before the sign.
  • Here, localization refers to the process of specifying the position coordinates of the own vehicle based on the relative position, with respect to the own vehicle, of the sign recognized by analyzing the captured image data, and on the position coordinates of that sign registered in the map data.
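In a local planar frame, the localization just defined reduces to subtracting the sign's measured offset relative to the vehicle from the sign's registered map coordinates. The coordinate frame and names below are illustrative assumptions, not part of the original text.

```python
# Minimal localization sketch in a local planar coordinate frame: the sign's
# map coordinates minus the sign's offset relative to the vehicle give the
# vehicle's own position.

def localize(sign_map_xy, sign_offset_xy):
    """sign_map_xy: registered coordinates of the sign (from the map data).
    sign_offset_xy: sign position relative to the own vehicle, measured by
    analyzing the captured image data."""
    sx, sy = sign_map_xy
    dx, dy = sign_offset_xy
    return sx - dx, sy - dy

# Sign registered at (500.0, 1200.0); seen 48 m ahead and 3 m to the right.
print(localize((500.0, 1200.0), (3.0, 48.0)))  # vehicle at (497.0, 1152.0)
```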
  • the guide sign recognition system 21 as the sign object recognition system according to the second embodiment can communicate between the data center 22 and the in-vehicle devices 23 mounted on a plurality of vehicles.
  • the in-vehicle device 23 includes an in-vehicle camera 4, a position detection unit 5, various on-vehicle sensors 6, a map data storage unit 7, a communication unit 24, an operation display unit 10, and a control unit 25 having a function as an extraction device.
  • the communication unit 24 has functions of a transmission unit 24a as a transmission device and a reception unit 24b as a reception device.
  • the control unit 25 uses the vehicle-mounted camera 4 to photograph the road condition ahead while the vehicle is running. When a guide sign as a sign object is detected in the photographed image data, a specific type of character in the guide sign, in this case only numerals, is recognized and extracted from the photographed image data of the guide sign, and the extracted character string is used as attribute information for specifying the guide sign.
  • the control unit 25 transmits detection data including the photographing position information of the guide sign and the obtained attribute information to the data center 22 by the communication unit 24.
  • the communication unit 24 receives the vehicle position data transmitted from the communication unit 26 of the data center 22.
  • the data center 22 includes a communication unit 26, an input operation unit 13, a processing control device 27, a road map database 16, and a guide sign database 28 as a sign object database.
  • the communication unit 26 receives the detection data transmitted from the communication unit 24 of the vehicle-mounted device 23 and transmits vehicle position data to the vehicle-mounted device 23 of the vehicle. Therefore, the communication unit 26 has the functions of a receiving unit 26a as a receiving device and a transmitting unit 26b as a transmitting device.
  • the guide sign database 28 stores guide sign data including installation position information and attribute information of a guide sign as a sign.
  • When the processing control device 27 receives detection data from the in-vehicle device 23 of a vehicle through the communication unit 26, it checks the attribute information of the detection data against the guide sign data in the guide sign database 28. When matching attribute information exists in the guide sign data of the guide sign database 28, the processing control device 27 determines the position of the vehicle by referring to the road map database 16 based on the installation position information of that guide sign. Therefore, the processing control device 27 has the functions of a collating unit 27a as a collating device and a vehicle position determining unit 27b as a vehicle position determining device.
  • the processing control device 27 performs the following processing when collating the attribute information of the detection data with the guide sign data in the guide sign database 28. That is, based on the shooting position information, collation is performed by searching, among the guide sign data whose installation position information lies within a predetermined range around the shooting position, for example within a circle of radius 100 m centered on the coordinates indicated by the shooting position information, for a guide sign whose attribute information matches. Further, in the present embodiment, the processing control device 27 causes the communication unit 26 to transmit data of the determined vehicle position to the vehicle-mounted device 23 of the vehicle. On receiving the vehicle position information, the in-vehicle device 23 can recognize the own vehicle position or update the own vehicle position used for navigation.
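The collation just described might be sketched as follows, assuming planar coordinates and a simple record layout for the guide sign data (both assumptions; the database schema is not given in the text): candidates are first narrowed to signs installed within the predetermined range of the shooting position, and attribute strings are then compared.

```python
import math

def collate(detection, sign_database, radius_m=100.0):
    """detection: {'position': (x, y), 'attribute': str}
    sign_database: list of {'position': (x, y), 'attribute': str}.
    Returns the first sign within radius_m whose attribute string matches."""
    px, py = detection["position"]
    for sign in sign_database:
        sx, sy = sign["position"]
        if (math.hypot(sx - px, sy - py) <= radius_m
                and sign["attribute"] == detection["attribute"]):
            return sign  # matched: its installation position locates the vehicle
    return None

db = [{"position": (30.0, 40.0), "attribute": "155261"},
      {"position": (60.0, 10.0), "attribute": "262"},
      {"position": (900.0, 900.0), "attribute": "155261"}]
hit = collate({"position": (0.0, 0.0), "attribute": "155261"}, db)
print(hit["position"])  # (30.0, 40.0): within 100 m and attribute matches
```

Note how the radius filter disambiguates duplicate attribute strings: the third record carries the same "155261" but lies far outside the circle, so it is never considered.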
  • steps S11 to S13 are processes executed by the control unit 25 of the vehicle-mounted device 23 while the vehicle is running.
  • In step S11, a guide sign is photographed by the vehicle-mounted camera 4, and in step S12, a specific type of character, in this case numerals, is recognized and extracted from the photographed image data to obtain the attribute information.
  • In step S13, the detection data including the shooting position information and the attribute information is transmitted to the data center 22 by the communication unit 24.
  • Steps S14 to S16 are processing executed by the processing control device 27 of the data center 22.
  • In step S14, the detection data transmitted from the in-vehicle device 23 is received at the data center 22 by the communication unit 26.
  • In step S15, the attribute information in the received detection data is compared with the attribute information in the guide sign data of the guide sign database 28, and the photographed guide sign is specified.
  • In step S16, the position of the vehicle is determined from the installation position information of the guide sign specified in step S15, the vehicle position information is transmitted to the vehicle, and the process ends.
  • FIG. 8 shows an example of a method when the processing control device 27 performs the collation in step S15.
  • Assume that the shooting position information of the detection data is, for example, (X0, Y0) and the attribute information is "155261".
  • the processing control device 27 first draws a circle R having a radius of 100 m centered on the photographing position information (X0, Y0) as a predetermined range, and extracts guide sign data located within the circle R. In this example, three pieces of guidance sign data of numbers 1, 2, and 3 are extracted.
  • In this way, when the detection data is transmitted from the vehicle-mounted device 23 to the data center 22, the attribute information in the received detection data is compared with the attribute information in the guide sign data of the guide sign database 28; the guide sign can thereby be specified, the position of the vehicle can be determined from its installation position information, and the on-vehicle device 23 can accurately recognize the own vehicle position.
  • In the present embodiment too, the attribute information for specifying the guide sign consists of character string data in which only a specific type of character in the guide sign, in this case only numerals, is extracted. Therefore, the amount of data transmitted from the vehicle-mounted device 23 to the data center 22 can be reduced, and the communication time and the time required for data processing can be shortened. Since the collation processing in the processing control device 27 also handles only a small amount of data, it can be performed easily and in a short time.
  • the matching process in the processing control device 27 is performed by searching, based on the shooting position in the detection data, the guide sign data within a predetermined range around the shooting position for a guide sign whose attribute information matches. Thereby, the photographed guide sign can be collated with the guide sign data in the guide sign database 28 easily and with sufficient certainty.
  • FIGS. 9 and 10 show a third embodiment.
  • a guide sign recognition system 31 as a sign object recognition system includes an in-vehicle device 32 mounted on a vehicle.
  • the in-vehicle device 32 includes an in-vehicle camera 4, a position detection unit 5, various on-vehicle sensors 6, a map data storage unit 7, a communication unit 8, a guide sign database 33 as a sign object database, an operation display unit 10, and a control unit 34.
  • The guide sign database 33 stores guide sign data as the latest high-precision sign object data. For example, those data are generated and updated with high accuracy in the data center 2 or the like described in the first embodiment, and are distributed to the in-vehicle devices 32.
  • the control unit 34 uses the on-board camera 4 to photograph the road condition ahead while the vehicle is running.
  • When a guide sign as a sign object is detected in the photographed image data, only a specific type of character in the guide sign, in this case numerals, is recognized and extracted from the photographed image data of the guide sign.
  • The extracted character string is used as attribute information for specifying the guide sign.
  • the control unit 34 checks the detection data including the photographing position information of the photographed guide sign and the extracted attribute information with the guide sign data of the guide sign database 33.
  • The position of the own vehicle is determined from the installation position information of the guide sign; this is so-called localization. Therefore, the control unit 34 has the function of the extraction unit 34a as the extraction device, the function of the collation unit 34b as the collation device, and the function of the own vehicle position determination unit 34c as the own vehicle position determination device.
  • FIG. 10 shows a procedure of a process executed by the control unit 34 of the in-vehicle device 32 from the photographing of the guide sign by the in-vehicle camera 4 to the determination of the position of the own vehicle.
  • First, a guide sign as a sign object is photographed by the vehicle-mounted camera 4.
  • Next, a specific type of character, in this case numerals, is recognized and extracted from the captured image data to obtain the attribute information.
  • In step S23, the obtained attribute information is compared with the attribute information in the guide sign data of the guide sign database 33, and the photographed guide sign is specified.
  • Finally, the position of the own vehicle is determined from the installation position information of the specified guide sign, and the process ends.
  • As the collation method, the same processing as in the second embodiment is performed. That is, the guide sign data located within a predetermined range around the shooting position, for example within a circle with a radius of 100 m, is extracted from the guide sign database 33, and if there is one whose attribute information matches, it can be determined that that guide sign is the one that was photographed.
  • According to the present embodiment, the in-vehicle device 32 photographs a guide sign as a sign object with the in-vehicle camera 4 while the vehicle is running, and only a specific type of character in the guide sign is recognized and extracted from the captured image data to obtain the attribute information.
  • The detection data, including the shooting position information of the guide sign and the obtained attribute information, is collated with the guide sign data of the guide sign database 33, and if matching attribute information is found, the own vehicle position is determined from the installation position information of that guide sign.
  • Even where similar sign objects, that is, guide signs and signboards, appear in succession, for example on an expressway or on a general road in an urban area, localization can be performed without confusing the sign objects.
  • In this case as well, the attribute information for specifying the guide sign consists of character string data in which only a specific type of character in the guide sign, in this case only numerals, is extracted. Therefore, the amount of data handled in the collation processing and the like is significantly reduced, and the data processing can be performed easily and in a short time. Since the data amount of the guide sign data in the guide sign database 33 is also small, a sufficiently accurate own vehicle position can be obtained with only a small storage capacity.
  • Some of the in-vehicle devices incorporated in the vehicle have a function of collecting probe data including positional information of the vehicle during traveling and image information of the in-vehicle camera at that time.
  • The probe data is transmitted to the center of the map data generation system, and the center collects and integrates a large number of probe data, based on which high-precision map data applicable to automatic driving is generated and updated.
  • A sign object such as a guide sign can serve as a landmark for aligning the probe data with the map data and for aligning pieces of probe data with one another.
  • FIG. 11 shows a fourth embodiment.
  • the vehicle-mounted device mounted on the vehicle includes a vehicle-mounted camera, a position detection unit, various vehicle-mounted sensors, a map data storage unit, a communication unit, a guide sign database as a sign object database, an operation display unit, and a control unit.
  • the control unit functions as an extraction device, a collation device, a vehicle position determination device, and the like.
  • the control unit recognizes only a specific type of character from photographed image data of a guide sign as a sign taken by a vehicle-mounted camera and extracts the character as attribute information.
  • Chinese characters are used in addition to numbers as specific types of characters.
  • In the present embodiment, the number of kanji in the character group to be recognized by the control unit is limited to a predetermined number, and the character group to be recognized is dynamically changed according to the own vehicle position detected by the position detection unit.
  • Among signs present at intersections and the like, there are guide signs indicating an intersection name, a point name, a facility name, and the like, and these are often described in kanji.
  • Therefore, based on the current position and traveling direction of the vehicle, the kanji expected to appear on upcoming guide signs are narrowed down to, for example, about ten to at most several tens of characters as the recognition target character group.
  • In step S31, rough position information of the current own vehicle position is acquired based on the detection by the position detection unit.
  • In step S32, the character group to be recognized is set based on the current position and traveling direction of the vehicle.
  • For example, characters such as "Kari," "Valley," "Station," and "West," that is, kanji, are added to the recognition dictionary.
  • the character group to be recognized may be distributed from the data center or may be extracted in the vehicle-mounted device.
  • In step S33, a guide sign as a sign object is photographed by the vehicle-mounted camera, and photographed image data is acquired.
  • In step S34, a process of recognizing and extracting the characters, that is, kanji and numerals, in the captured image data is executed. In this case, even when recognizing kanji, since the character group to be recognized is limited to a very small number, the recognition process can be performed easily in a short time.
  • the processing from step S31 is repeated toward the next guide sign.
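The dynamic narrowing of the recognition dictionary in steps S31 and S32 can be sketched as follows. This is an illustrative sketch only: the region table, the particular kanji, and all names (`REGION_KANJI`, `extract_attribute`) are assumptions, not part of the disclosure.

```python
# Sketch of steps S31-S32: limit the OCR dictionary to numerals plus a small,
# position-dependent kanji group selected from the rough vehicle position.
NUMERALS = set("0123456789")

# Hypothetical table: kanji expected on guide signs near each rough region
# (e.g. "Kari", "Valley", "Station", "West" for a district around Kariya).
REGION_KANJI = {
    "kariya_west": set("刈谷駅西"),
    "city_centre": set("中央駅南"),
}

def recognition_charset(region_key):
    """Return the character group the recogniser should accept at this position."""
    return NUMERALS | REGION_KANJI.get(region_key, set())

def extract_attribute(recognised_chars, region_key):
    """Keep only characters in the active dictionary, in reading order."""
    allowed = recognition_charset(region_key)
    return "".join(c for c in recognised_chars if c in allowed)
```

Because the dictionary holds numerals plus only a few tens of kanji at most, the per-character classification stays cheap compared with recognising the full kanji set.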
  • In this way, the process of specifying which guide sign in the database corresponds to the guide sign photographed as a sign object by the vehicle-mounted camera is performed.
  • In the above description, the mode in which the kanji to be recognized are dynamically changed according to the own vehicle position is disclosed, but the present disclosure is not limited to this.
  • a plurality of types of characters may be mixed as the types of characters to be recognized according to the own vehicle position.
  • the character group to be recognized may be a combination of hiragana, katakana, and kanji.
  • the character type to be recognized may be dynamically changed according to the position of the host vehicle.
  • For example, the configuration may be such that only numerals are recognized when traveling on a motorway such as an expressway, while numerals and alphabetic characters are recognized when traveling on a general road.
  • In any case, by limiting the characters to be recognized not to all the character groups used in the area where the vehicle is used but to a subset of them, the processing load on the CPU can be reduced, as in the other embodiments.
  • FIG. 12 shows a fifth embodiment.
  • Guide signs A6 and A7 are provided side by side as two sign objects on the left and right, but they are very similar; when only numerals are recognized, "26 1/2" is extracted from both, and no distinguishing point is found.
  • a sign A8 of “95” is provided above the left guide sign A6, and a sign A9 of “20” is provided above the right guide sign A7.
  • the guide sign A6 and the signboard A8 are originally handled as different things, and similarly, the guide sign A7 and the signboard A9 are handled as different things.
  • In this embodiment, the guide sign A6 and the signboard A8 are handled as an integral sign object, and the guide sign A7 and the signboard A9 are likewise treated as an integral sign object.
  • As a result, the extracted character strings, that is, the attribute information, become "95 26 1/2" and "20 26 1/2", and the two can easily be distinguished from each other. That is, by treating two guide signs or signboards arranged one above the other in the vertical direction, that is, the Z-axis direction, as an integral object, the advantage that attribute information can be more easily distinguished is obtained.
  • Among the character strings, it is also possible to adopt a configuration in which information giving "95" and "20" higher priority than the other numerals is added.
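The fifth-embodiment idea of treating a signboard stacked above a guide sign as one object can be sketched as follows; the data and names here are illustrative assumptions, not the disclosed implementation.

```python
# Fifth-embodiment sketch: concatenate the digit strings of vertically stacked
# sign objects, top to bottom (Z-axis direction), so that otherwise identical
# signs become distinguishable.
def combined_attribute(stacked_digit_strings):
    """stacked_digit_strings is ordered top to bottom."""
    return " ".join(stacked_digit_strings)

left  = combined_attribute(["95", "26 1/2"])   # signboard A8 over guide sign A6
right = combined_attribute(["20", "26 1/2"])   # signboard A9 over guide sign A7
```

Taken alone, "26 1/2" cannot tell the two guide signs apart; prefixing the route-number signboard's digits makes the attribute strings unique.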
  • FIG. 13 shows a sixth embodiment.
  • In the sixth embodiment, when a specific type of character, in this case a numeral, is extracted from a sign object, for example the guide sign A2, information on the position of the character within the sign, in this case coordinate information with the horizontal axis as the X axis and the vertical axis as the Y axis, is also included in the attribute information.
  • Thereby, the attribute information can be more easily distinguished from that of a sign having similar characters of the specific type, and the recognition processing can be performed more accurately and in a shorter time.
  • The position information may be rough position information such as upper left, center, or right end instead of coordinate information.
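A minimal sketch of the sixth-embodiment matching, where each numeral carries its position inside the sign face, is given below; the normalised coordinates, the tolerance value, and all names are assumptions for illustration.

```python
# Sixth-embodiment sketch: attribute information as a list of (char, x, y),
# with x and y normalised to the sign face. The tolerance is an assumed value.
def attrs_match(a, b, tol=0.05):
    """True if both signs show the same characters at (nearly) the same places."""
    if len(a) != len(b):
        return False
    return all(ca == cb and abs(xa - xb) <= tol and abs(ya - yb) <= tol
               for (ca, xa, ya), (cb, xb, yb) in zip(a, b))

sign_a = [("2", 0.10, 0.20), ("6", 0.18, 0.20)]
sign_b = [("2", 0.70, 0.80), ("6", 0.78, 0.80)]   # same digits, different layout
```

Two signs with the identical digit string "26" are still separated here, because the digits sit at different positions on the sign face.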
  • In addition, when a plurality of character strings are obtained as attribute information for one sign object, it is possible not only to use all the character strings uniformly as attribute information but also to make the following changes. That is, only the largest characters may be extracted from the plurality of character strings and the extracted character string used as attribute information.
  • Alternatively, a delimiter such as a space, a colon, or a slash can be inserted between character sets.
  • Furthermore, units such as "minute," "min," "km," and "m" can be included in the attribute information as specific types of characters. Including the position information of the characters in the sign and the font size information of the characters in the map data is also effective for localization.
  • In the above embodiments, numerals or kanji are adopted as the specific type of character, but hiragana or the like may also be adopted. A plurality of types may be used, such as both kanji and numerals, or both uppercase alphabetic characters and numerals. In addition, only a specific subset of the numerals, for example only 1 to 9, or only a specific subset of the alphabet, for example only A to N, may be used as the specific type of character.
  • The specific type of character may also be changed depending on the type of landmark or sign. For example, a direction sign may use numerals, a sign for a large commercial facility may use the characters of the store name, and an intersection sign may use numerals plus kanji or numerals plus alphabetic characters.
  • In the embodiments, guide signs on expressways are mainly used as examples, but the present disclosure can, needless to say, also be implemented with guide signs on general roads.
  • Further, the process may be stopped at the point where the own vehicle position is determined or grasped in step S15; that is, step S16 need not be executed.
  • a guide sign is used as an example of a sign, but various signs may be recognized as the sign.
  • For example, a signboard indicating the name of a large shopping center and the distance to it, a signboard indicating the name of a building or facility, or a signboard displaying the store name or logo of a gas station, a restaurant, or a fast food store with a drive-through can be used; signs installed mainly for commercial purposes can also be covered.
  • The control unit and the technique according to the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program.
  • The control unit and the technique described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits.
  • The control unit and the technique according to the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured with one or more hardware logic circuits.
  • The computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by a computer.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

This sign recognition system is a system (1, 21, 31) for photographing a prescribed sign by an on-vehicle camera (4) mounted on a vehicle and recognizing the sign on the basis of the photographed image data thereof, and comprises: an extraction device (11, 25, 34) that recognizes and extracts only a character of a specific type in the sign from the photographed image data; and a sign database (17, 28, 33) that sets information of a character string extracted by the extraction device (11, 25, 34) as attribute information for identifying the sign, and stores sign data including installation position information and attribute information of the sign.

Description

Sign object recognition system and sign object recognition method

Cross-reference of related applications
 This application is based on Japanese Patent Application No. 2018-163075 filed on August 31, 2018 and Japanese Patent Application No. 2019-136947 filed on July 25, 2019, the contents of which are incorporated herein by reference.
 The present disclosure relates to a sign object recognition system and a sign object recognition method that photograph a predetermined sign object, including guide signs and signboards for road guidance, with an on-vehicle camera mounted on a vehicle and recognize the sign object based on the photographed image data.
 In vehicles such as automobiles, it has been considered to mount an in-vehicle camera that photographs, for example, the area ahead in the traveling direction, and to use the images captured by the in-vehicle camera for driving assistance and the like. For example, Patent Document 1 proposes the following: the presence of a guide sign ahead on the road is detected from the image of the vehicle-mounted camera, and the captured image data of the guide sign portion is processed. Characters such as the place names of destination areas, directions, distances, and intersection indications are then recognized and displayed in simplified form on a monitor in the vehicle.
JP 2010-266383 A
 Incidentally, in recent years, momentum toward realizing autonomous driving technology for automobiles has been growing, and there is a demand to prepare high-precision road map data for that purpose. In this connection, a system is conceivable in which an in-vehicle camera mounted on a vehicle photographs the road ahead while the vehicle travels, and a road map is generated based on the captured images, so-called probe data. Alternatively, a system that detects the vehicle position by collating images captured by the in-vehicle camera with road map data is conceivable. Here, since the guide signs provided on roads differ in their individual display contents and are installed at appropriate intervals on main roads, it is effective to include these guide signs as landmarks in the road map data.
 However, using the entire photographed image data of a guide sign to specify, that is, identify, an individual guide sign among the many installed requires a large amount of data to be handled, lengthening the time needed for communication and image recognition. Therefore, how to identify the guide sign photographed by the on-vehicle camera becomes a problem, and it is desired that this identification be made easy.
 The present disclosure therefore aims to provide a sign object recognition system and a sign object recognition method that make it possible to easily execute the process of specifying which sign object in a database corresponds to a sign object photographed by the vehicle-mounted camera. Note that the sign objects here include, for example, guide signs and signboards for road guidance.
 In a first aspect of the present disclosure, a sign object recognition system photographs a predetermined sign object with an on-vehicle camera mounted on a vehicle and recognizes the sign object based on the photographed image data. The system includes an extraction device that recognizes and extracts only a specific type of character in the sign object from the photographed image data, and a sign object database that stores sign object data including the installation position information of the sign object and attribute information, where the information of the character string extracted by the extraction device serves as the attribute information for specifying the sign object.
 According to this, when a predetermined sign object is photographed by the on-vehicle camera mounted on the vehicle, the extraction device extracts only a specific type of character in the sign object from the photographed image data. The sign object database stores sign object data; the information of the character string extracted by the extraction device serves as the attribute information, and the sign object data includes the installation position information of the sign object and this attribute information.
 At this time, as the attribute information, the information of a character string extracted by recognizing only a specific type of character in the sign object can be used to specify, that is, identify, each sign object. Therefore, the amount of data to be handled in order to specify a sign object is significantly reduced, and the communication time and the time required for data processing such as image recognition can be shortened. Since the recognition target is limited to a specific type of character, the dictionary used for recognition can be small, enabling faster processing. As a result, there is an excellent effect that the process of specifying which sign object in the sign object database corresponds to the sign object photographed by the vehicle-mounted camera can be executed easily.
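As a minimal illustration of the sign object data described above (not the disclosed implementation; field names and values are assumptions), one database record pairing the installation position with the extracted-string attribute might look like this:

```python
from dataclasses import dataclass

# Minimal sketch of one record in the sign object database: installation
# position (longitude/latitude) plus the extracted character string used as
# attribute information. Field names are illustrative assumptions.
@dataclass(frozen=True)
class SignRecord:
    longitude: float
    latitude: float
    attribute: str            # e.g. digits extracted from the sign face

db = {
    "A2": SignRecord(longitude=136.88, latitude=35.17, attribute="155261"),
}

def lookup_by_attribute(db, attr):
    """Return the ids of records whose attribute string matches exactly."""
    return [k for k, rec in db.items() if rec.attribute == attr]
```

Storing only a short digit string per sign keeps each record a few bytes, which is what makes the transmission and collation described above cheap.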
 The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. In the drawings:
FIG. 1 is a block diagram schematically showing the overall configuration of a system according to the first embodiment;
FIG. 2 is a flowchart schematically illustrating the procedure of the guide sign data registration process performed by the control unit and the processing control device according to the first embodiment;
FIG. 3A is a diagram (part 1) illustrating a specific example of a guide sign according to the first embodiment;
FIG. 3B is a diagram (part 2) illustrating a specific example of a guide sign according to the first embodiment;
FIG. 3C is a diagram (part 3) illustrating a specific example of a guide sign according to the first embodiment;
FIG. 3D is a diagram (part 4) illustrating a specific example of a guide sign according to the first embodiment;
FIG. 3E is a diagram (part 5) illustrating a specific example of a guide sign according to the first embodiment;
FIG. 4 is a diagram for explaining a method of extracting only numerals from captured image data according to the first embodiment;
FIG. 5 is a diagram for explaining a case where the installation position information of guide sign data according to the first embodiment is updated and registered;
FIG. 6 is a block diagram schematically illustrating the overall configuration of a system according to the second embodiment;
FIG. 7 is a flowchart schematically illustrating the procedure of the guide sign data collation process executed by the control unit and the processing control device according to the second embodiment;
FIG. 8 is a diagram for explaining the collation method according to the second embodiment;
FIG. 9 is a block diagram schematically illustrating the configuration of an in-vehicle device according to the third embodiment;
FIG. 10 is a flowchart schematically illustrating the procedure of the guide sign data collation process executed by the control unit according to the third embodiment;
FIG. 11 is a flowchart schematically illustrating the procedure of the guide sign data collation process executed by the control unit according to the fourth embodiment;
FIG. 12 is a diagram illustrating a specific example of guide signs according to the fifth embodiment; and
FIG. 13 is a diagram for explaining a method of extracting numerals together with position information from captured image data according to the sixth embodiment.
 Hereinafter, several concrete embodiments will be described with reference to the drawings. Portions common to the embodiments are denoted by the same reference numerals, and repeated illustration and description are omitted. In the following description, a "guide sign" is a sign that a road administrator installs at a required position on a road to provide route guidance, point guidance, attached facility guidance, and the like, and is installed according to a prescribed format regarding installation position, shape, color, character size, and so on. A "signboard" is installed at a storefront, along a road, or the like, made mainly for commercial advertising purposes to be seen by passersby, and displays, for example, the name of a commercial facility and its location, including direction and distance, in characters on a board. "Sign objects" include both such guide signs for road guidance and signboards.
 (1) First Embodiment
 A first embodiment will be described with reference to FIGS. 1 to 5. FIG. 1 schematically shows the overall configuration of a guide sign recognition system 1 as a sign object recognition system according to the present embodiment. The guide sign recognition system 1 includes a data center 2 and a vehicle-mounted device 3. The data center 2 collects and analyzes data and generates a guide sign database as a highly accurate sign object database. The in-vehicle device 3 is provided in each of a plurality of vehicles, such as passenger cars and trucks, traveling on roads; only one is shown for convenience.
 The on-vehicle device 3 mounted on each vehicle includes an on-vehicle camera 4, a position detection unit 5, various on-vehicle sensors 6, a map data storage unit 7, a communication unit 8, a detection data storage unit 9, an operation display unit 10, and a control unit 11. The in-vehicle camera 4 is provided, for example, at the front of the vehicle and is configured to photograph at least the road conditions ahead in the traveling direction. The position detection unit 5 has a well-known configuration for detecting the own vehicle position based on data received by a GPS receiver and the like. The various in-vehicle sensors 6 include sensors for detecting the speed of the own vehicle and its traveling direction, that is, the orientation of the vehicle body.
 The map data storage unit 7 stores, for example, nationwide road map information. The communication unit 8 communicates with the data center 2 via a mobile communication network or using road-to-vehicle communication or the like. The communication unit 8 functions as a transmission unit 8a serving as a transmission device and a reception unit 8b serving as a reception device. As described later, the detection data storage unit 9 stores detection data consisting of the estimated shooting position information of a guide sign, the obtained attribute information, and the like. The operation display unit 10 has switches (not shown), for example a touch panel, and a display; it accepts operations by a user of the vehicle, for example the driver, and provides necessary displays to the user, such as a navigation screen.
 The control unit 11 includes a computer and controls the entire on-vehicle device 3. The control unit 11 photographs the road conditions ahead with the on-vehicle camera 4 while the vehicle is running. When a guide sign as a sign object is detected in the photographed image data, characters belonging to a specific type among the characters in the guide sign are recognized and extracted character by character from the photographed image data of the guide sign, using, for example, a well-known OCR technique. The extracted character string is used as attribute information for specifying the guide sign. The control unit 11 therefore functions as an extraction device. Details of the attribute information in the present embodiment will be described later.
 In addition, the control unit 11 estimates the position where the guide sign is installed from the own vehicle position, traveling speed, traveling direction, and the like detected by the position detection unit 5 at the time the guide sign as a sign object was photographed, and takes that position as the shooting position information. The control unit 11 then stores detection data consisting of the shooting position information of the guide sign and the obtained attribute information, together with data such as the shooting date and time, in the detection data storage unit 9. Thereafter, the communication unit 8 transmits the detection data stored in the detection data storage unit 9 to the data center 2.
 Meanwhile, the data center 2 includes a communication unit 12, an input operation unit 13, a processing control device 14, a detection data storage unit 15, a road map database 16, and a guide sign database 17 as a sign object database. The communication unit 12 receives the detection data through communication with the communication unit 8 of each vehicle, and functions as a receiving unit 12a serving as a receiving device and a transmitting unit 12b serving as a transmitting device. The input operation unit 13 allows an operator to perform necessary input operations.
 前記処理制御装置14は、コンピュータを主体として構成され、データセンタ2全体の制御を行う。これと共に、後に詳述するように、処理制御装置14は、道路地図データの生成処理等を行うと共に、標示物データとしての案内標識データ(図5参照)の生成、更新の処理等を実行する。このとき、前記検出データ記憶部15には、各車両から送信された検出データが収集され、一時的に記憶される。このとき、例えば日本全国を走行する多数台の一般の車両から、膨大な検出データが収集されるようになる。 The processing control device 14 is mainly composed of a computer, and controls the entire data center 2. At the same time, as described later in detail, the processing control device 14 performs a process of generating road map data and the like, and also performs a process of generating and updating guide sign data (see FIG. 5) as the sign object data. . At this time, the detection data transmitted from each vehicle is collected and temporarily stored in the detection data storage unit 15. At this time, for example, a huge amount of detection data is collected from a large number of ordinary vehicles traveling all over Japan.
 前記道路地図データベース16には、前記処理制御装置14により生成された高精度の道路地図データが記憶される。そして、案内標識データベース17中には、ランドマーク情報等に利用される標示物データとしての案内標識データが記憶される。この案内標識データは、図5に一部示すように、全国の主要な各道路に設置される案内標識や道路近傍に設置される商業的な看板を含む標示物の設置位置情報即ち経度、緯度からなる座標情報と、その案内標識を特定するための属性情報とが含まれる。尚、道路地図データベース16中に、標示物データベースである案内標識データベース17を含ませるように構成しても良い。道路地図データ中に、ランドマークとしての標示物データを含ませ、各標示物データ中に属性情報を含ませることもできる。 道路 The road map database 16 stores high-precision road map data generated by the processing control device 14. The guide sign database 17 stores guide sign data as sign data used for landmark information and the like. As shown in FIG. 5, the guide sign data includes installation position information of a sign including a guide sign installed on each major road in the whole country and a commercial signboard installed near the road, that is, longitude and latitude. , And attribute information for specifying the guide sign. The road map database 16 may include a guide sign database 17 as a sign object database. It is also possible to include sign object data as landmarks in the road map data and to include attribute information in each sign object data.
 As will also be described in the later explanation of operation, that is, the explanation of the flowchart, in the present embodiment the control unit 11 of the in-vehicle device 3 executes an extraction step of recognizing and extracting, from the captured image data of the on-vehicle camera 4, only the digits 0 to 9 as the specific type of characters in the guide sign serving as the sign object. The information of the extracted character string is used as the attribute information. In the present embodiment, the case of photographing a guide sign is taken as a concrete example of the sign object. In the digit extraction processing, the control unit 11 searches the captured image data of the guide sign for digits from the left to the right of the sign, repeating this row by row from top to bottom. A character string in which the digits are arranged in the order of extraction is used as the attribute information.
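The scan order described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the OCR stage has already produced per-digit detections as (x, y, digit) tuples in image coordinates, and the row-grouping tolerance is an assumed value.

```python
def attribute_string(detections, row_tolerance=20):
    """Build the attribute string from per-digit OCR detections.

    `detections` is a list of (x, y, digit) tuples, where (x, y) is the
    top-left corner of a recognized digit box in image coordinates.
    Digits are grouped into rows (top to bottom), then read left to right
    within each row, matching the left-to-right, top-to-bottom scan rule.
    """
    rows = []  # each entry: [representative_y, [(x, digit), ...]]
    for x, y, digit in sorted(detections, key=lambda d: d[1]):
        for row in rows:
            if abs(row[0] - y) <= row_tolerance:
                row[1].append((x, digit))
                break
        else:
            rows.append([y, [(x, digit)]])
    out = []
    for _, items in sorted(rows, key=lambda r: r[0]):
        out.extend(d for _, d in sorted(items))  # left to right in each row
    return "".join(out)
```

For the sign A2 of FIG. 3B, detections for "155" in the top row and "26", "1" in the bottom row would yield "155261".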
 The processing control device 14 of the data center 2 receives and collects the detection data from the in-vehicle device 3 of each vehicle and stores it in the detection data storage unit 15. Based on the collected detection data, it executes a sign object data storing step of registering and updating the guide sign data in the guide sign database 17. The processing control device 14 thus also functions as a collection unit 14a serving as a collection device and a registration unit 14b serving as a registration device. As will also be described in the following explanation of operation, when registering guide sign data, the processing control device 14 gathers, from the plurality of received detection data stored in the detection data storage unit 15, the detection data having the same attribute information. It then determines the installation position information by statistically processing the photographing position information of the gathered detection data, and uses the result as guide sign data.
 FIGS. 3A to 3E show examples of captured images of guide signs A1 to A5 installed on, for example, an expressway, as concrete examples of guide signs serving as sign objects. These guide signs A1 to A5 give advance notice of directions, exits, and the like on the expressway, and each consists of a rectangular signboard with mainly white characters on a green background. Specifically, the guide sign A1 shown in FIG. 3A indicates that the exit "Yatomi", with interchange number "26", is 2 km ahead.
 The guide sign A2 shown in FIG. 3B indicates that the exit for "Yatomi, Tsushima" is 1 km ahead and that the exit connects to National Route 155. The guide sign A3 shown in FIG. 3C indicates that the exit for "Yatomi, Tsushima" is 550 m ahead. The guide sign A4 shown in FIG. 3D indicates the exit for "Yatomi, Tsushima". The guide sign A5 shown in FIG. 3E indicates that highway radio can be heard from this point at a frequency of 1620 kHz.
 Next, the operation of the guide sign recognition system 1 configured as described above will be described with reference also to FIGS. 2 to 5. The flowchart of FIG. 2 shows the procedure of processing up to the registration of guide sign data, that is, each step of the sign object recognition method in the present embodiment, executed by the control unit 11 on the in-vehicle device 3 side and the processing control device 14 on the data center 2 side. In FIG. 2, steps S1 to S3 are processes executed by the control unit 11 of the in-vehicle device 3 while the vehicle is traveling. First, in step S1, the area ahead of the vehicle is photographed by the on-vehicle camera 4, and the presence or absence of a guide sign serving as a sign object in the captured image is constantly monitored. When a guide sign is photographed by the on-vehicle camera 4, in step S2, processing for recognizing and extracting characters of the specific type, in this case digits, from the captured image data, in this case a still image, is executed (extraction step).
 As described above, this extraction processing is performed by searching the captured image data for digits from the left to the right of the guide sign, repeating this from top to bottom, and arranging the digits in the order of extraction. FIG. 4 shows the processing method used by the control unit 11 to extract only the digits from the captured image data of a guide sign. Taking the guide sign A2 shown in FIG. 3B as a concrete example, tracing the top row from left to right first recognizes and extracts the number "155". No digits are recognized in the second and third rows from the top. In the bottom row, the number "26" is recognized and extracted on the left side, and the number "1" on the right side.
 Accordingly, the attribute information of the guide sign A2 is a character string consisting of the six digits "155261". Similarly, in the example of FIG. 3, the attribute information of the guide sign A1 shown in FIG. 3A is the character string "262". The attribute information of the guide sign A3 shown in FIG. 3C is the character string "15526550". The attribute information of the guide sign A4 shown in FIG. 3D is the character string "15526". The attribute information of the guide sign A5 shown in FIG. 3E is the character string "1620".
 Returning to FIG. 2, in the next step S3, the position of the photographed guide sign, that is, the photographing position information, is identified, and the photographing position information and the character string information, that is, the attribute information, obtained in step S2 are transmitted as detection data to the data center 2 by the communication unit 8. The photographing position information is estimated based on the own-vehicle position detected by the position detection unit 5 at the time the guide sign was photographed and on the distance to the guide sign obtained from the position, size, and the like of the guide sign in the captured image data.
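One way to estimate the sign's position from the vehicle position and the sign's apparent size is a simple pinhole-camera calculation. The sketch below is an illustration only, under assumed values: the patent does not specify the camera's focal length, the physical sign height, or the flat-earth offset approximation used here.

```python
import math

def estimate_sign_position(vehicle_lat, vehicle_lon, heading_deg,
                           sign_height_px, focal_px, sign_height_m=1.2):
    """Estimate a sign's installation position from the vehicle position.

    Pinhole-model distance: focal length (px) * real sign height (m)
    divided by the apparent height (px).  The real sign height and the
    focal length are assumed parameters, not values from the patent.
    """
    distance_m = focal_px * sign_height_m / sign_height_px
    # Offset the vehicle position by that distance along the heading,
    # using a local flat-earth approximation (adequate over tens of metres).
    lat_rad = math.radians(vehicle_lat)
    dlat = distance_m * math.cos(math.radians(heading_deg)) / 111_320.0
    dlon = (distance_m * math.sin(math.radians(heading_deg))
            / (111_320.0 * math.cos(lat_rad)))
    return vehicle_lat + dlat, vehicle_lon + dlon
```

For example, a 1.2 m sign appearing 60 px tall with a 1000 px focal length gives an estimated distance of 20 m ahead of the vehicle.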
 The following steps S4 and S5 are processes executed by the processing control device 14 on the data center 2 side. In step S4, the data center 2 receives the detection data transmitted from the in-vehicle device 3 via the communication unit 12 and writes it into the detection data storage unit 15. In step S5, statistical processing is performed on the large number of received detection data to identify the locations where guide signs exist, and the results are registered in the guide sign database 17 as guide sign data including the position coordinates of the installation position of each guide sign and its attribute information (sign object data storing step). Here, registration includes not only new registration but also update registration.
 In registering the guide sign data, data having the same attribute information, in this case the same digit string, is gathered from the large number of detection data collected in the detection data storage unit 15, and the position coordinates obtained by statistically processing the photographing position information of those detection data are taken as the true installation position information. The statistical processing can be performed by, for example, excluding outliers, that is, abnormal values, and then calculating the average, median, mode, or the like of the installation position information. FIG. 5 shows an example in which update registration has been performed: the installation position coordinates of guide sign No. 1, whose attribute information is "155261", have been updated from (X1, Y1) to (X11, Y11).
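The statistical consolidation described above can be sketched as follows. This is a minimal illustration, assuming planar coordinates in metres and a fixed 50 m outlier threshold (the patent names outlier exclusion and the average/median/mode only in general terms; here the median is used).

```python
from statistics import median

def consolidate_positions(detections):
    """Determine installation positions by statistical processing.

    `detections` is a list of (attribute, x, y) records collected from
    many vehicles.  For each attribute string, reports far from the
    per-axis median are discarded as outliers, and the median of the
    remaining reports is taken as the installation position.  The 50 m
    threshold is an assumed value, not specified in the patent.
    """
    by_attr = {}
    for attr, x, y in detections:
        by_attr.setdefault(attr, []).append((x, y))

    result = {}
    for attr, points in by_attr.items():
        mx = median(p[0] for p in points)
        my = median(p[1] for p in points)
        kept = [p for p in points
                if abs(p[0] - mx) <= 50 and abs(p[1] - my) <= 50]
        result[attr] = (median(p[0] for p in kept),
                        median(p[1] for p in kept))
    return result
```

An erroneous report far from the cluster of consistent reports is thereby excluded before the position is fixed.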
 According to the guide sign recognition system 1 and recognition method of the present embodiment, the following effects can be obtained. In the in-vehicle device 3, a guide sign is photographed by the on-vehicle camera 4 while the vehicle is traveling, and the control unit 11 extracts only the specific type of characters in the guide sign from the captured image data to obtain attribute information (extraction step). The communication unit 8 then transmits detection data consisting of the photographing position information of the guide sign and the obtained attribute information to the data center 2. In the data center 2, the processing control device 14 receives and collects the detection data from the in-vehicle devices 3 of a plurality of vehicles. Based on the collected detection data, guide sign data including the installation position information and attribute information of each guide sign is generated and registered in the guide sign database 17 (sign object data storing step).
 Since the attribute information for identifying a guide sign consists of character string data in which only a specific type of characters in the guide sign is extracted, the amount of data transmitted from the in-vehicle device 3 side to the data center 2 side can be greatly reduced. This shortens the communication time and the time required for data processing. The amount of guide sign data in the road map database 16 is also reduced, so that less storage capacity is required and the data is easier to handle.
 As described above, the guide sign recognition system 1 and recognition method of the present embodiment photograph a guide sign for road guidance serving as a sign object with the on-vehicle camera 4 mounted on a vehicle, and recognize the guide sign based on the captured image data. According to this guide sign recognition system 1 and recognition method, the excellent effect is obtained that the processing of identifying which guide sign in the guide sign database 17 corresponds to the guide sign photographed by the on-vehicle camera 4 can be executed easily.
 In the present embodiment, the processing control device 14 of the data center 2 collects detection data having the same attribute information from the plurality of received detection data, and determines the installation position information by statistically processing the photographing position information of the collected detection data to obtain guide sign data. As a result, data including more accurate installation position information can be generated, and a highly accurate guide sign database 17 can be constructed.
 Further, in the present embodiment, digits are adopted as the specific type of characters constituting the attribute information. Since only the ten characters 0 to 9 need to be extracted and recognized, character recognition is extremely simple and data processing is easy, while the character type identification processing can still be performed with sufficient certainty and in a short time. In extracting the digits, the rule is adopted that the captured image data of the guide sign is searched for digits from the left to the right of the sign, repeating from top to bottom, and that the character string in which the digits are arranged in the order of extraction is taken as the attribute information. This also makes the attribute information extraction processing easy to perform.
 Although not described in the above embodiment, the on-vehicle camera 4 repeatedly captures images, for example, every 100 msec, and it is necessary to set which captured image data is used for processing. From the viewpoint of improving recognition accuracy, it is basically preferable to use the captured image data in which the sign object appears largest, immediately before it leaves the frame. However, when the result is used for localization from a distance, recognition based on image data captured relatively early is more useful. For example, it is also possible to set the processing of captured image data to be performed at the timing when the vehicle is 50 m away from the sign object. Localization here refers to the processing of identifying the position coordinates of the own vehicle based on the position, relative to the own vehicle, of a sign object recognized by analyzing the captured image data, and on the position coordinates of that sign object registered in the map data.
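The localization step just defined can be sketched as a single coordinate transform. This is an illustration under assumed conventions, not the patented method: planar map coordinates in metres, and a vehicle frame whose axes are "forward" and "left" relative to the heading.

```python
import math

def localize(sign_map_x, sign_map_y, rel_forward_m, rel_left_m, heading_rad):
    """Recover the own-vehicle position from one recognized sign.

    The sign's position relative to the vehicle (estimated by image
    analysis) is rotated into map axes and subtracted from the sign's
    registered map coordinates, yielding the vehicle's map position.
    """
    # Rotate the vehicle-frame offset (forward, left) into map axes.
    dx = rel_forward_m * math.cos(heading_rad) - rel_left_m * math.sin(heading_rad)
    dy = rel_forward_m * math.sin(heading_rad) + rel_left_m * math.cos(heading_rad)
    return sign_map_x - dx, sign_map_y - dy
```

For a vehicle heading along the +x axis with a sign 50 m ahead and 3 m to the left, a sign registered at (150, 3) places the vehicle at (100, 0).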
 (2) Second Embodiment
 Next, a second embodiment will be described with reference to FIGS. 6 to 8. As shown in FIG. 6, the guide sign recognition system 21 serving as the sign object recognition system according to the second embodiment is configured by communicably connecting a data center 22 and in-vehicle devices 23 mounted on a plurality of vehicles. Each in-vehicle device 23 includes the on-vehicle camera 4, the position detection unit 5, the various on-vehicle sensors 6, the map data storage unit 7, a communication unit 24, the operation display unit 10, and a control unit 25 having the function of an extraction device. The communication unit 24 has the functions of a transmitting unit 24a serving as a transmitting device and a receiving unit 24b serving as a receiving device.
 While the vehicle is traveling, the control unit 25 photographs the road situation ahead with the on-vehicle camera 4. When a guide sign serving as a sign object is detected in the captured image data, only the specific type of characters in the guide sign, in this case digits, is recognized and extracted from the captured image data of the guide sign, and the extracted character string is used as attribute information for identifying the guide sign. The control unit 25 transmits detection data consisting of the photographing position information of the guide sign and the obtained attribute information to the data center 22 via the communication unit 24. As described later, in the present embodiment, the communication unit 24 also receives vehicle position data transmitted from a communication unit 26 of the data center 22.
 Meanwhile, the data center 22 includes the communication unit 26, the input operation unit 13, a processing control device 27, the road map database 16, and a guide sign database 28 serving as the sign object database. The communication unit 26 receives the detection data transmitted from the communication unit 24 of the in-vehicle device 23 and transmits vehicle position data to the in-vehicle device 23 of the corresponding vehicle. The communication unit 26 therefore has the functions of a receiving unit 26a serving as a receiving device and a transmitting unit 26b serving as a transmitting device. The guide sign database 28 stores guide sign data including the installation position information and attribute information of guide signs serving as sign objects.
 When the processing control device 27 receives detection data from the in-vehicle device 23 of a vehicle via the communication unit 26, it collates the attribute information of the detection data with the guide sign data in the guide sign database 28. When matching attribute information exists in the guide sign data of the guide sign database 28, the processing control device 27 determines the position of the vehicle from the installation position information of that guide sign, also referring to the road map database 16. The processing control device 27 therefore has the functions of a collation unit 27a serving as a collation device and a vehicle position determination unit 27b serving as a vehicle position determination device.
 In the present embodiment, the processing control device 27 performs the following processing when collating the attribute information of the detection data with the guide sign data in the guide sign database 28. That is, based on the photographing position information, collation is performed by searching for a guide sign with matching attribute information among the guide sign data whose installation position information lies within a predetermined range around the photographing position, for example within a circle with a radius of 100 m centered on the coordinates indicated by the photographing position information. Further, in the present embodiment, the processing control device 27 causes the communication unit 26 to transmit the data of the determined vehicle position to the in-vehicle device 23 of the vehicle. On receiving the vehicle position information, the in-vehicle device 23 side can recognize the own-vehicle position and update the own-vehicle position for navigation.
 The flowchart of FIG. 7 shows the procedure of processing, executed by the control unit 25 on the in-vehicle device 23 side and the processing control device 27 on the data center 22 side, from the photographing of a guide sign by the on-vehicle camera 4 to the determination of the vehicle position. In FIG. 7, steps S11 to S13 are processes executed by the control unit 25 of the in-vehicle device 23 while the vehicle is traveling. As in the first embodiment, in step S11, a guide sign is photographed by the on-vehicle camera 4, and in step S12, characters of the specific type, in this case digits, are recognized and extracted from the captured image data to obtain the attribute information. In step S13, the detection data including the photographing position information and attribute information is transmitted to the data center 22 by the communication unit 24.
 Steps S14 to S16 are processes executed by the processing control device 27 of the data center 22. In step S14, the data center 22 receives the detection data transmitted from the in-vehicle device 23 via the communication unit 26. In step S15, the attribute information in the received detection data is collated with the attribute information in the guide sign data of the guide sign database 28, and the photographed guide sign serving as a sign object is identified. In step S16, the position of the vehicle is determined from the installation position information of the guide sign identified in step S15, the vehicle position information is transmitted to the corresponding vehicle, and the processing ends.
 FIG. 8 shows an example of the method used when the processing control device 27 performs the collation in step S15. Suppose that the photographing position information of the detection data is, for example, (X0, Y0) and the attribute information is, for example, "155261". The processing control device 27 first draws, as the predetermined range, a circle R with a radius of 100 m centered on the photographing position (X0, Y0), and extracts the guide sign data located within the circle R. In this example, the three guide sign data of numbers 1, 2, and 3 are extracted.
 If any of those has matching attribute information, it is determined to be the photographed guide sign serving as the sign object. In this case, since the attribute information of the guide sign data of number 1 matches "155261", the guide sign of number 1 is recognized as the photographed guide sign. If no attribute information matching the guide sign data within the predetermined range exists, or if two or more corresponding guide sign data exist, identification is regarded as having failed.
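The collation rule of step S15, including the failure cases just described, can be sketched as follows. This is an illustration only, assuming planar database coordinates in metres and a simple list-of-records database layout.

```python
import math

def match_sign(shot_x, shot_y, attribute, database, radius_m=100.0):
    """Collation sketch: restrict candidates to signs registered within
    `radius_m` of the photographing position, then require a unique
    attribute match.  `database` is a list of (sign_id, x, y, attribute)
    records.  Returns the matching sign_id, or None when zero or two or
    more candidates match (identification failure, as stated above).
    """
    candidates = [
        sid for sid, x, y, attr in database
        if attr == attribute and math.hypot(x - shot_x, y - shot_y) <= radius_m
    ]
    return candidates[0] if len(candidates) == 1 else None
```

The position pre-filter is what keeps two distant signs that happen to share the same digit string, such as two different "155261" signs, from being confused with each other.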
 In the guide sign recognition system 21 of the second embodiment, detection data is transmitted from the in-vehicle device 23 to the data center 22. In the processing control device 27 of the data center 22, the attribute information in the received detection data is collated with the attribute information in the guide sign data of the guide sign database 28. This makes it possible to identify the guide sign and to determine the position of the vehicle from the installation position information of that guide sign. Further, by transmitting the determined vehicle position information to the in-vehicle device 23, the in-vehicle device 23 side can accurately recognize the own-vehicle position.
 In the guide sign recognition system 21 of the second embodiment as well, the attribute information for identifying a guide sign consists of character string data in which only the specific type of characters in the guide sign, in this case digits, is extracted. The amount of data transmitted from the in-vehicle device 23 side to the data center 22 side can therefore be kept small, and the communication time and the time required for data processing can be shortened. The collation processing in the processing control device 27 also handles only a small amount of data and can thus be performed easily and in a short time.
 In the present embodiment, the collation processing in the processing control device 27 is performed by searching the guide sign database 28 for a guide sign with matching attribute information among the guide sign data within a predetermined range around the photographing position, based on the photographing position in the detection data. This allows the collation of the photographed guide sign with the guide sign data in the guide sign database 28 to be performed easily and with sufficient certainty.
 (3) Third Embodiment
 FIGS. 9 and 10 show a third embodiment. As shown in FIG. 9, the guide sign recognition system 31 serving as the sign object recognition system according to the present embodiment includes an in-vehicle device 32 mounted on a vehicle. The in-vehicle device 32 includes the on-vehicle camera 4, the position detection unit 5, the various on-vehicle sensors 6, the map data storage unit 7, the communication unit 8, a guide sign database 33 serving as a sign object database, the operation display unit 10, and a control unit 34. The guide sign database 33 stores the latest high-precision guide sign data serving as sign object data. High-precision versions of these data are generated and updated in, for example, the data center 2 described in the first embodiment, and are distributed to each in-vehicle device 32.
 前記制御部34は、車両の走行中に、車載カメラ4により前方の道路状況を撮影する。その撮影画像データ中に標示物としての案内標識が検出された場合に、その案内標識の撮影画像データから、案内標識中の特定種類の文字この場合数字のみを認識して抽出し、抽出された文字列を、その案内標識を特定するための属性情報とする。更に、制御部34は、撮影した案内標識の撮影位置情報及び抽出した属性情報からなる検出データを、案内標識データベース33の案内標識データと照合する。案内標識データベース33の案内標識データ中に一致する属性情報が存在する場合に、当該案内標識の設置位置情報から自車両の位置の判断、いわゆるローカライズを行う。従って、制御部34は、抽出装置としての抽出部34aの機能を有すると共に、照合装置としての照合部34b及び自車位置判定装置としての自車位置判定部34cの機能をも有している。 (4) The control unit 34 uses the on-board camera 4 to photograph the road condition ahead while the vehicle is running. When a guide sign as a sign is detected in the photographed image data, a specific type of character in the guide sign, in this case, only a numeral, is recognized and extracted from the photographed image data of the guide sign, and extracted. The character string is used as attribute information for specifying the guide sign. Further, the control unit 34 checks the detection data including the photographing position information of the photographed guide sign and the extracted attribute information with the guide sign data of the guide sign database 33. When there is matching attribute information in the guide sign data of the guide sign database 33, the location of the own vehicle is determined from the installation position information of the guide sign, so-called localization is performed. Therefore, the control unit 34 has the function of the extracting unit 34a as the extracting device, and also has the function of the checking unit 34b as the checking device and the own vehicle position determining unit 34c as the own vehicle position determining device.
 The flowchart of FIG. 10 shows the procedure, executed by the control unit 34 of the in-vehicle device 32, from the photographing of a guide sign by the in-vehicle camera 4 to the determination of the position of the own vehicle. In FIG. 10, at step S21, a guide sign as a sign object is photographed by the in-vehicle camera 4. At step S22, a specific type of character, in this case numerals, is recognized and extracted from the photographed image data, and attribute information is obtained.
 Then, at step S23, the obtained attribute information is collated against the attribute information in the guide sign data of the guide sign database 33, and the photographed guide sign is specified. At the same time, the position of the own vehicle is determined from the installation position information of the specified guide sign, and the process ends. In this case as well, the same collation method as in the second embodiment is used. That is, guide sign data located within a predetermined range around the photographing position, for example within a circle having a radius of 100 m, is extracted from the guide sign database 33. If among those candidates there is one whose attribute information matches, it can be determined to be the photographed guide sign.
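 The step-S23 collation, restricting candidates to a circle around the photographing position and then requiring an attribute match, can be sketched as follows. The flat (x, y) coordinates in metres and the record layouts are simplifications assumed for this example; a real system would use map coordinates such as latitude and longitude.

```python
import math

def localize(detection, sign_db, search_radius_m=100.0):
    """Match one detection against the on-board guide sign database.

    `detection` is (attribute_string, approx_x, approx_y): the digit
    string extracted from the image plus the rough photographing
    position.  `sign_db` is a list of (attribute_string, install_x,
    install_y) records.  Returns the installation position of the
    unique matching sign within the search radius, or None when zero
    or several candidates match.
    """
    attr, px, py = detection
    candidates = [
        (sx, sy)
        for s_attr, sx, sy in sign_db
        if s_attr == attr and math.hypot(sx - px, sy - py) <= search_radius_m
    ]
    return candidates[0] if len(candidates) == 1 else None
```

Returning None on an ambiguous or empty match reflects the point of the embodiments: a nearby sign with identical attribute information would otherwise be confused with the photographed one.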
 In the third embodiment, in the in-vehicle device 32, a guide sign as a sign object is photographed by the in-vehicle camera 4 while the vehicle is traveling, only a specific type of character in the guide sign is extracted from the photographed image data, and attribute information is obtained. Detection data consisting of the photographing position information of the guide sign and the obtained attribute information is then collated against the guide sign data in the guide sign database 33, and when matching attribute information exists, the own vehicle position is determined from the installation position information of that guide sign. In this case, the system can detect, that is, localize, the position of the own vehicle with high accuracy using the in-vehicle device 32 alone, without requiring communication with the data center 2. As a result, in an environment where similar sign objects, that is, guide signs and signboards, appear in succession, for example on an expressway or on a general road in an urban area, localization is possible without confusing the sign objects.
 According to the guide sign recognition system 31 of the third embodiment, the attribute information for specifying a guide sign consists of character string data in which only a specific type of character in the guide sign, in this case numerals, is extracted. Accordingly, the amount of data to be handled in the collation processing and the like is greatly reduced, and the data processing can be performed simply and in a short time. Moreover, since the data amount of the guide sign data in the guide sign database 33 is also small, a sufficiently accurate own vehicle position can be obtained while requiring only a small storage capacity.
 Some in-vehicle devices incorporated in vehicles have a function of collecting probe data including position information during travel and image information from the in-vehicle camera at that time. The probe data is transmitted to the center of a map data generation system, and the center collects a large amount of probe data and integrates it; on that basis, high-precision map data that can also be applied to automated driving is generated and updated. In this case, the position data of sign objects such as guide signs can be used as landmarks for the alignment between probe data and map data, and for the alignment of probe data with each other. This enables high-accuracy alignment based on the association of landmarks with each other, and consequently the generation of high-precision map data.
 (4) Fourth Embodiment
 FIG. 11 shows a fourth embodiment. Hereinafter, the points different from the third embodiment will be described. The in-vehicle device mounted on the vehicle includes an in-vehicle camera, a position detection unit, various on-vehicle sensors, a map data storage unit, a communication unit, a guide sign database as a sign object database, an operation display unit, and a control unit. The control unit functions as an extraction device, a collation device, an own-vehicle position determination device, and the like. The control unit recognizes only a specific type of character from the photographed image data of a guide sign as a sign object photographed by the in-vehicle camera, and extracts the characters as attribute information.
 In the present embodiment, kanji (Chinese characters) are adopted in addition to numerals as the specific types of characters. The number of kanji in the character group that the control unit takes as recognition targets is limited to a predetermined number, and the recognition target character group is dynamically changed according to the position of the own vehicle detected by the position detection unit. Here, sign objects present at intersections and the like include guide signs indicating intersection names, point names, facility names, and so on, and intersection names are often written in kanji, for example "刈谷駅" (Kariya Station) or "刈谷駅西" (Kariya Station West). However, it is difficult to make all kanji the target of character recognition. In the present embodiment, the kanji expected to be used on guide signs are therefore narrowed down, based on the current position and traveling direction of the vehicle, to for example about ten to at most a few tens of characters, which form the recognition target character group.
 The flowchart of FIG. 11 shows the processing procedure, executed by the control unit, for recognizing the character string of a guide sign as a sign object from photographed image data. First, at step S31, rough position information on the current own vehicle position is acquired based on the detection by the position detection unit. At step S32, the character group to be recognized is set based on the current position and traveling direction of the vehicle. In the example above, the kanji "刈", "谷", "駅", and "西" are added to the recognition dictionary. The character group to be recognized may be distributed from the data center, or may be extracted in the in-vehicle device.
 At the next step S33, a guide sign as a sign object is photographed by the in-vehicle camera, and photographed image data is acquired. At step S34, processing for recognizing and extracting the characters, that is, the kanji and numerals, in the photographed image data is executed. In this case, even when recognizing kanji, the recognition process can be performed easily and in a short time because the character group to be recognized is limited to a very small number. When the vehicle passes the guide sign, the processing from step S31 is repeated for the next guide sign.
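 The step-S31/S32 dictionary construction can be sketched as follows. This is a hypothetical helper, not the disclosed implementation: the record format of the sign data (text, x, y), the search radius, and the cap of 30 characters are assumptions made to mirror the "about ten to a few tens of characters" limit described above; digits are always kept in the whitelist.

```python
import math

def build_recognition_set(position, sign_db, radius_m=2000.0, limit=30):
    """Assemble the dynamic character whitelist for the OCR stage.

    From sign records (text, x, y) near the current `position`,
    collect the characters actually used on nearby signs and cap the
    set at `limit` entries; the ten numerals are always included.
    """
    px, py = position
    chars = []
    for text, sx, sy in sign_db:
        if math.hypot(sx - px, sy - py) <= radius_m:
            for ch in text:
                if ch not in chars and not ch.isspace():
                    chars.append(ch)  # preserve first-seen order before capping
    return set("0123456789") | set(chars[:limit])
```

A whitelist this small is what keeps kanji recognition tractable at step S34: the OCR stage only has to discriminate among a few dozen shapes instead of thousands.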
 According to the fourth embodiment, as in the first to third embodiments, the effect is obtained that the process of specifying which guide sign in the database corresponds to the guide sign photographed by the in-vehicle camera can be executed simply. Furthermore, in the present embodiment, kanji can also be made recognition targets as specific types of characters, so that the range of application can be further expanded.
 Although the above embodiment discloses a mode in which the kanji to be recognized are dynamically changed according to the own vehicle position, the present invention is not limited to this. The character types to be recognized according to the own vehicle position may be a mixture of plural types of characters; for example, the recognition target character group may be a combination of hiragana, katakana, and kanji. The character types to be recognized may also be dynamically changed according to the own vehicle position: for example, only numerals may be recognition targets while traveling on a motor-vehicle-only road such as an expressway, while numerals and alphabetic characters are recognition targets while traveling on a general road. By limiting the characters to be recognized to a subset, rather than all character groups used in the region where the vehicle is used, the processing load on the CPU can be reduced, as in the other embodiments.
 (5) Fifth and Sixth Embodiments, Other Embodiments
 FIG. 12 shows a fifth embodiment. Here, two guide signs A6 and A7 as left and right sign objects are provided side by side, but they are very similar: when numerals are recognized, "26 1/2" is extracted from both, and no difference is seen. Here, a signboard A8 reading "95" is provided above the left guide sign A6, and a signboard A9 reading "20" is provided above the right guide sign A7. Originally, the guide sign A6 and the signboard A8 would be handled as separate objects, and likewise the guide sign A7 and the signboard A9.
 In such a case, however, the guide sign A6 and the signboard A8 are handled as one integral sign object, and likewise the guide sign A7 and the signboard A9 are handled as one integral sign object. The extracted character strings, that is, the attribute information, then become "95 26 1/2" and "20 26 1/2", which can easily be distinguished from each other. That is, by handling two guide signs or signboards arranged one above the other, that is, in the Z-axis direction, as an integral object, the advantage is obtained that the attribute information becomes easier to distinguish. Furthermore, of these character strings, information giving "95" and "20" a higher priority than the other numerals may be added.
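 The merging of vertically stacked sign faces can be sketched as follows. The geometry format, (attribute_string, lateral x, height z) per face, and the `x_tolerance` threshold for deciding that two faces share a post are assumptions for this example; faces judged to be stacked are concatenated upper face first.

```python
def merge_stacked_signs(signs, x_tolerance=5.0):
    """Treat vertically aligned sign faces as one sign object.

    `signs` is a list of (attribute_string, x, z) entries, where x is
    the lateral position and z the height of each face.  Faces whose x
    positions differ by at most `x_tolerance` are merged into a single
    attribute string, upper face first, so that "95" above "26 1/2"
    yields "95 26 1/2".
    """
    merged = []
    # Sort by lateral position, then by height descending (top first).
    for attr, x, z in sorted(signs, key=lambda s: (s[1], -s[2])):
        if merged and abs(merged[-1][1] - x) <= x_tolerance:
            merged[-1][0] = merged[-1][0] + " " + attr
        else:
            merged.append([attr, x])
    return [m[0] for m in merged]
```

Applied to the FIG. 12 scene, the two otherwise identical "26 1/2" signs become the distinguishable strings "95 26 1/2" and "20 26 1/2".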
 FIG. 13 shows a sixth embodiment. In the sixth embodiment, only a specific type of character, in this case numerals, is extracted from the photographed image data of a sign object, for example the guide sign A2. In addition, information on the positions of the characters within the sign object, in this case coordinate information with the horizontal axis as the X axis and the vertical axis as the Y axis, is also included in the attribute information. This makes the attribute information easier to distinguish for sign objects having similar specific types of characters, and the recognition processing can be performed accurately in a shorter time. When position information is included in the attribute information, rough position information such as upper left, center, or right end may be used instead of coordinate information.
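 The coarse variant of this position tagging can be sketched as follows. The token format (text, x, y) and the three equal-width zones are assumptions for the example; a finer variant would store the raw (x, y) coordinates instead of zone labels.

```python
def attribute_with_layout(tokens, width):
    """Attach coarse layout positions to extracted numerals.

    `tokens` is assumed OCR output (text, x, y) for one sign face of
    the given `width`; each numeral token is tagged "left", "center",
    or "right" according to its x position, the rough-position variant
    of the coordinate attribute information described above.
    """
    def zone(x):
        if x < width / 3:
            return "left"
        if x < 2 * width / 3:
            return "center"
        return "right"
    return [(t, zone(x)) for t, x, y in tokens if t.isdigit()]
```

Two signs that both carry a "26" can then be told apart when one shows it on the left edge and the other on the right, at the cost of only one extra label per character string.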
 In addition, when a plurality of character strings for one sign object are used as attribute information, the attribute information is not limited to treating all character strings uniformly; the following modifications are also possible. Of the plurality of character strings, only the characters with the largest size may be extracted and used as attribute information. When one sign object has a plurality of character sets, a delimiter such as a space, colon, or slash may be inserted between the character sets. Not only numerals but also the units attached to them, for example "分" (minutes), "min", "km", and "m", may be included in the attribute information as specific types of characters. Including the position information of the characters within the sign object and the font size information of the characters in the map data is effective for localization.
 In each of the embodiments described above, numerals or kanji are adopted as the specific types of characters, but besides numerals, alphabetic characters (for example uppercase only, lowercase only, or both) or kana, that is, katakana, hiragana, and the like, may also be adopted. Plural types may be used in combination, such as both kanji and numerals, or both uppercase alphabetic characters and numerals. A combination of only specific numerals, for example only 1 to 9, and only specific alphabetic characters, for example only A to N, may also be used as the specific types of characters. The specific types of characters may be changed according to the type of landmark or sign object. For example, numerals may be used for direction signboards, the characters of the store name for the signboard of a large commercial facility, and numerals plus kanji, or numerals plus alphabetic characters, for intersection name signboards.
 The above embodiments mainly took guide signs on expressways as examples, but the same can of course be implemented for guide signs on general roads. In the second embodiment, the processing may be stopped once the vehicle position has been determined, that is, grasped, at step S15; in other words, step S16 need not be executed. In each of the above embodiments, guide signs were taken as examples of sign objects, but various signboards may also be recognized as sign objects. Examples include a signboard displaying the name of a large shopping center and the distance to it, a signboard displaying a building name or facility name, and signboards displaying the store names or logos of gas stations, restaurants, fast food stores with drive-throughs, and the like. Such signboards, installed mainly for commercial purposes, can also be covered.
 In addition, various changes are possible in the hardware and software configurations of the in-vehicle device, the vehicle, and the data center. Although the present disclosure has been described with reference to embodiments, it is understood that the present disclosure is not limited to those embodiments or structures. The present disclosure also encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, and other combinations and forms including only one element, more elements, or fewer elements, also fall within the scope and spirit of the present disclosure.
 The control unit and the method described in the present disclosure may be implemented by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program. Alternatively, the control unit and the method described in the present disclosure may be implemented by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Alternatively, the control unit and the method described in the present disclosure may be implemented by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured with one or more hardware logic circuits. The computer program may also be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by a computer.

Claims (11)

  1.  A system (1, 21, 31) for photographing a predetermined sign object with a vehicle-mounted camera (4) mounted on a vehicle and recognizing the sign object based on the photographed image data, the system comprising:
     an extraction device (11, 25, 34) that recognizes and extracts only a specific type of character in the sign object from the photographed image data; and
     a sign object database (17, 28, 33) that stores sign object data including installation position information of the sign object and attribute information, the information of the character string extracted by the extraction device (11, 25, 34) serving as the attribute information for specifying the sign object.
  2.  The sign object recognition system according to claim 1, configured by communicably connecting in-vehicle devices mounted on a plurality of vehicles with a data center, wherein
     the in-vehicle device comprises the vehicle-mounted camera, the extraction device, and a transmission device that transmits to the data center detection data consisting of photographing position information of a sign object photographed by the vehicle-mounted camera while the vehicle is traveling and attribute information extracted by the extraction device, and
     the data center comprises the sign object database, a collection device that receives and collects the detection data from the in-vehicle devices of the respective vehicles, and a registration device that registers the sign object data in the sign object database based on the detection data collected by the collection device.
  3.  The sign object recognition system according to claim 2, wherein the registration device collects, from the plurality of received detection data, detection data having the same attribute information, determines installation position information based on statistical processing of the photographing position information of the collected detection data, and uses the result as the sign object data.
  4.  The sign object recognition system according to any one of claims 1 to 3, configured by communicably connecting in-vehicle devices mounted on a plurality of vehicles with a data center, wherein
     the in-vehicle device comprises the vehicle-mounted camera, the extraction device, and a transmission device that transmits to the data center detection data consisting of photographing position information of a sign object photographed by the vehicle-mounted camera while the vehicle is traveling and attribute information extracted by the extraction device, and
     the data center comprises the sign object database, a reception device that receives the detection data from the in-vehicle device, a collation device that collates the attribute information of the detection data against the sign object data of the sign object database, and a vehicle position determination device that, when matching attribute information exists in the sign object data of the sign object database, determines the position of the vehicle from the installation position information of that sign object.
  5.  The sign object recognition system according to any one of claims 1 to 3, wherein an in-vehicle device is mounted on the vehicle, and
     the in-vehicle device comprises the vehicle-mounted camera, the extraction device, the sign object database, a collation device that collates detection data, consisting of photographing position information of a sign object photographed by the vehicle-mounted camera while the vehicle is traveling and attribute information extracted by the extraction device, against the sign object data of the sign object database, and an own-vehicle position determination device that, when matching attribute information exists in the sign object data of the sign object database, determines the own vehicle position from the installation position information of that sign object.
  6.  The sign object recognition system according to claim 4 or 5, wherein the collation device performs collation by searching, based on the photographing position information, the sign object data within a predetermined range around the photographing position for a sign object whose attribute information matches.
  7.  The sign object recognition system according to any one of claims 1 to 6, wherein the specific type of character is a numeral.
  8.  The sign object recognition system according to claim 7, wherein the extraction device searches the photographed image data of the sign object for numerals from left to right across the sign object, repeating this in order from top to bottom, and uses a character string in which the numerals are arranged in the order of extraction as the attribute information.
  9.  The sign object recognition system according to any one of claims 1 to 6, wherein the characters that the extraction device takes as recognition targets are dynamically changed according to the position of the own vehicle.
  10.  The sign object recognition system according to claim 9, wherein the specific type of character is a kanji, and
     the character group that the extraction device takes as recognition targets is limited to a predetermined number, and the recognition target character group is dynamically changed according to the position of the own vehicle.
  11.  A method for photographing a predetermined sign object with a vehicle-mounted camera (4) mounted on a vehicle and recognizing the sign object based on the photographed image data, the method comprising:
     an extraction step of recognizing and extracting only a specific type of character in the sign object from the photographed image data; and
     a sign object data storage step of storing sign object data including installation position information of the sign object and attribute information, the information of the character string extracted in the extraction step serving as the attribute information for specifying the sign object.
PCT/JP2019/033315 2018-08-31 2019-08-26 Sign recognition system and sign recognition method WO2020045345A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201980056008.0A CN112639905B (en) 2018-08-31 2019-08-26 Marker identification system and marker identification method
DE112019004319.6T DE112019004319T5 (en) 2018-08-31 2019-08-26 METHOD AND SYSTEM FOR DETECTING A SIGN
US17/186,948 US11830255B2 (en) 2018-08-31 2021-02-26 Method and system for recognizing sign

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018163075 2018-08-31
JP2018-163075 2018-08-31
JP2019-136947 2019-07-25
JP2019136947A JP7088136B2 (en) 2018-08-31 2019-07-25 Marking object recognition system and marking object recognition method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/186,948 Continuation US11830255B2 (en) 2018-08-31 2021-02-26 Method and system for recognizing sign

Publications (1)

Publication Number Publication Date
WO2020045345A1 true WO2020045345A1 (en) 2020-03-05

Family

ID=69643285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/033315 WO2020045345A1 (en) 2018-08-31 2019-08-26 Sign recognition system and sign recognition method

Country Status (1)

Country Link
WO (1) WO2020045345A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009089239A (en) * 2007-10-02 2009-04-23 Nikon Corp Camera, and communication system
JP2009099125A (en) * 2007-09-27 2009-05-07 Aisin Aw Co Ltd Image recognition device, image recognition program, and point information collection device and navigation device using them
JP2012002595A (en) * 2010-06-15 2012-01-05 Sony Corp Information processing device, information processing method, information processing system, and program
WO2018142533A1 (en) * 2017-02-02 2018-08-09 三菱電機株式会社 Position/orientation estimating device and position/orientation estimating method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023053997A (en) * 2020-05-29 2023-04-13 トヨタ自動車株式会社 Map data collection apparatus and computer program for collecting map
JP7439969B2 (en) 2020-05-29 2024-02-28 トヨタ自動車株式会社 Map data collection device and computer program for map data collection


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19855517; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 19855517; Country of ref document: EP; Kind code of ref document: A1)