WO2021176820A1 - Processing device - Google Patents

Processing device Download PDF

Info

Publication number
WO2021176820A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processing
circle
center
processing apparatus
Prior art date
Application number
PCT/JP2020/048701
Other languages
French (fr)
Japanese (ja)
Inventor
ユイビン ツーン
永崎 健
雄飛 椎名
Original Assignee
Hitachi Astemo, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Astemo, Ltd.
Priority to DE112020005250.8T priority Critical patent/DE112020005250T8/en
Priority to JP2022504990A priority patent/JP7277666B2/en
Publication of WO2021176820A1 publication Critical patent/WO2021176820A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Definitions

  • The present invention relates to a processing device, for example a processing device for a sensing system such as a stereo camera system.
  • A stereo camera device simultaneously measures visual information from the image and distance information to objects in the image, so it can grasp in detail the various objects around the vehicle (people, cars, three-dimensional objects, road surfaces, road markings, signboards, etc.) and is said to contribute to improved safety during driving assistance.
  • Patent Document 1 can be cited as a technique focused on improving recognition accuracy.
  • The present invention has been made in view of the above circumstances, and its object is to provide a processing device capable of improving the recognition performance of an object, such as an illuminated sign, in an image.
  • The processing device of the present invention is a processing device that recognizes an object in an image. It acquires, from the image, a plurality of histograms that total the number of pixels brighter than a predetermined brightness, and performs a center estimation process that estimates, from the plurality of histograms, the position corresponding to the center of a circle of the object in the image.
  • According to the present invention, it is possible to detect an object that includes a bright region and a darker region, or an object in which one part emits light and the rest does not. More specifically, by detecting a self-luminous object, the detection performance for illuminated signs can be improved; hence the recognition performance for objects such as illuminated signs in an image can be improved.
  • FIG. 1: Block diagram showing the schematic configuration of the in-vehicle stereo camera device of this embodiment.
  • FIG. 2: Flow diagram explaining the content of the stereo camera processing on which this embodiment is based.
  • FIG. 3: Timing chart of the various processes of the stereo camera processing on which this embodiment is based.
  • FIG. 4: Flow diagram explaining the content of the sign recognition process.
  • FIG. 5: Diagram explaining the center estimation process and the radius estimation process in the sign detection process (circle detection process).
  • FIG. 6: Histogram used in the radius estimation process in the sign detection process (circle detection process).
  • FIG. 7: Flow diagram explaining the content of the sign detection process.
  • FIG. 8: Diagram explaining an image captured with the first shutter.
  • FIG. 9: Diagram explaining an image captured with the second shutter.
  • FIG. 10: Flow diagram explaining the content of the self-luminous object detection process.
  • FIG. 11: Diagram explaining the center estimation process in the self-luminous object detection process.
  • FIG. 12: Diagram explaining the region estimation process in the self-luminous object detection process.
  • FIG. 13: Diagram explaining the case where there is a bright region around the center of the estimated region.
  • FIG. 14: Diagram explaining the threshold determination method.
  • FIG. 1 is a block diagram showing a schematic configuration of an in-vehicle stereo camera device of the present embodiment.
  • The in-vehicle stereo camera device 100 of this embodiment is mounted on a vehicle and recognizes the environment outside the vehicle based on image information of a target area ahead of the vehicle.
  • The in-vehicle stereo camera device 100 recognizes, for example, road white lines, pedestrians, vehicles, other three-dimensional objects, traffic signals, signs, and lit lamps from the captured images (information), and adjusts the braking, steering, and so on of the vehicle (own vehicle) on which the stereo camera device 100 is mounted.
  • The in-vehicle stereo camera device 100 has two cameras 101 and 102 (left camera 101 and right camera 102) arranged side by side to acquire image information, and a processing device 110 that performs recognition processing of objects in the image based on the image information acquired by the cameras 101 and 102.
  • The processing device 110 is configured as a computer having a processor such as a CPU (Central Processing Unit) and memory such as ROM (Read Only Memory), RAM (Random Access Memory), and an HDD (Hard Disk Drive). Each function of the processing device 110 is realized by the processor executing programs stored in the ROM.
  • The RAM stores data including intermediate data of computations performed by the programs the processor executes.
  • The processing device 110 has an image input interface 103 that controls the imaging of the cameras 101 and 102 and captures the images they take.
  • An image captured through the image input interface 103 is sent over the internal bus 109 and processed by the image processing unit 104 and the arithmetic processing unit 105; intermediate results and final image data are stored in the storage unit 106, which serves as memory.
  • The image processing unit 104 compares the first image (left image) obtained from the image sensor of camera 101 with the second image (right image) obtained from the image sensor of camera 102, applies to each image corrections for device-specific deviations caused by the image sensors and image corrections such as noise interpolation, and stores the results in the storage unit 106. It then finds mutually corresponding points between the first and second images, computes parallax information, and likewise stores it in the storage unit 106.
  • The arithmetic processing unit 105 uses the images and parallax information (distance information for each point on the image) stored in the storage unit 106 to recognize the various objects necessary for perceiving the environment around the vehicle.
  • The various objects include people, cars, other obstacles, traffic lights, signs, car tail lamps and headlights, and the like. Some of these recognition results and intermediate computation results are recorded in the storage unit 106 as before. After performing the various object recognitions on the captured images, the arithmetic processing unit 105 computes a vehicle control policy using the recognition results.
  • The vehicle control policy obtained as a result of the computation, and part of the object recognition results, are conveyed through the CAN interface 107 to the in-vehicle network CAN 111, by which the vehicle (own vehicle) is braked. Regarding these operations, the control processing unit 108 monitors whether any processing unit is operating abnormally and whether errors occur during data transfer, providing a mechanism that prevents abnormal operation.
  • Via the internal bus 109, the image processing unit 104 is connected to the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the image input interface 103 (the input/output unit to the image sensors of cameras 101 and 102), and the CAN interface 107 (the input/output unit to the external in-vehicle network CAN 111).
  • The control processing unit 108, the image processing unit 104, the storage unit 106, the arithmetic processing unit 105, and the input/output units 103 and 107 are composed of one or more computer units.
  • The storage unit 106 is composed, for example, of memory that stores image information obtained by the image processing unit 104, image information produced as a result of scanning by the arithmetic processing unit 105, and the like.
  • The CAN interface 107, the input/output unit to the external in-vehicle network CAN 111, outputs information from the in-vehicle stereo camera device 100 to the control system of the own vehicle via the in-vehicle network CAN 111.
  • FIG. 2 shows a processing flow (in other words, processing content of stereo camera processing) in the vehicle-mounted stereo camera device 100 which is the basis of the present embodiment.
  • First, images are captured by the left and right cameras 101 and 102 (S201, S202), and image processing such as correction to absorb the intrinsic characteristics of the image sensors is applied to each of the captured image data 121 and 122 (S203).
  • The processing results are stored in the image buffer 161.
  • The image buffer 161 is provided in the storage unit 106 of FIG. 1.
  • The two corrected images are then collated against each other to obtain the parallax information of the images from the left and right cameras 101 and 102.
  • The parallax between the left and right images reveals where a given point of interest on the target object appears in the images of the left and right cameras 101 and 102, and the distance to the object can then be obtained by the principle of triangulation.
  • This is done by the parallax processing (S204).
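As a concrete illustration of this relationship, here is a minimal sketch of the standard stereo triangulation formula (the patent states only the principle; the function name and the example focal length, baseline, and disparity values are illustrative assumptions, not values from the document):

```python
def disparity_to_distance(disparity_px: float, focal_length_px: float,
                          baseline_m: float) -> float:
    """Standard pinhole-stereo triangulation: Z = f * B / d.

    disparity_px    -- horizontal shift of the same scene point between the
                       left and right images, in pixels
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between the two camera optical centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 1400 px, B = 0.35 m, d = 7 px  ->  Z = 70 m
print(disparity_to_distance(7, 1400, 0.35))
```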
  • The image processing (S203) and the parallax processing (S204) are performed by the image processing unit 104 of FIG. 1, and the finally obtained images and parallax information are stored in the storage unit 106.
  • The objects to be recognized include people, cars, other three-dimensional objects, signs, traffic lights, tail lamps, and the like; the recognition dictionary 162 is used as needed for recognition.
  • The recognition dictionary 162 stores and records, for example, the features of the objects to be recognized as machine learning data.
  • Taking the recognition results and the state of the own vehicle into account, the vehicle control process determines a policy, for example to warn the occupants or to brake the own vehicle or adjust its steering angle (S206), and the result is output externally through the CAN interface 107 (S207).
  • The various object recognition processes (S205) and the vehicle control process (S206) are performed by the arithmetic processing unit 105 of FIG. 1, and the output (S207) to the external in-vehicle network CAN 111 is performed by the CAN interface 107.
  • Each of these processes and means is composed of, for example, one or more computer units configured so that they can exchange data with one another.
  • FIG. 3 shows the timing of various processes in the in-vehicle stereo camera device 100 of the present embodiment.
  • The chart roughly shows two processing flows, 301 and 302.
  • Flow 301 indicates the processing timing of the image processing unit 104 of FIG. 1.
  • Flow 302 indicates the processing timing of the arithmetic processing unit 105 of FIG. 1.
  • First, the right image input process (S303) is performed. This corresponds to capturing an image with the right camera 102 of FIG. 2 (S202), performing the image processing (S203), and storing the right image in the image buffer 161.
  • Next, the left image input process (S304) is performed. This corresponds to capturing an image with the left camera 101 of FIG. 2 (S201), performing the image processing (S203), and storing the left image in the image buffer 161.
  • Next, the parallax processing (S204) is performed. This corresponds to reading the two left and right images from the image buffer 161 of FIG. 2, computing the parallax by collating the two images, and storing the computed parallax information in the storage unit 106. At this point, the images and the parallax information are assembled in the storage unit 106.
  • FIG. 4 shows a processing flow of the sign recognition processing (S205a).
  • The sign recognition process (S205a) of this embodiment is basically a process that recognizes signs, which are circular objects, by detecting circles.
  • As a premise, the cameras 101 and 102 are provided with a first shutter and a second shutter.
  • The exposure time of the first shutter is longer than that of the second shutter in order to recognize dark objects.
  • In other words, the shutter speed of the first shutter is slower than that of the second shutter.
  • Imaging with the second shutter (relatively short exposure time) is performed in a frame after imaging with the first shutter (relatively long exposure time).
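For illustration, the dual-shutter arrangement can be summarized in a small configuration sketch like the one below; the concrete exposure values and the strict frame alternation are assumptions, since the patent requires only that the first exposure be longer and that the second-shutter frame come later:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShutterConfig:
    name: str
    exposure_ms: float  # longer exposure = slower shutter speed

# Hypothetical values; only the ordering (first > second) is required.
FIRST_SHUTTER = ShutterConfig("first", exposure_ms=8.0)    # for dark regions (the circumference)
SECOND_SHUTTER = ShutterConfig("second", exposure_ms=1.0)  # for the bright self-luminous center

def shutter_for_frame(frame_index: int) -> ShutterConfig:
    """Alternate shutters so that each short-exposure (second) frame
    follows a long-exposure (first) frame."""
    return FIRST_SHUTTER if frame_index % 2 == 0 else SECOND_SHUTTER
```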
  • On this premise, the sign recognition process (S205a) of this embodiment basically includes, as shown in FIG. 4, using an image (left or right) that has passed through the image processing (S203) and been stored in the image buffer 161: a sign detection process (S401) that extracts circular objects, a sign identification process (S402) that identifies the type of circular object, a sign tracking process (S403) that associates circular objects between frames, and a sign determination process (S404) that makes a comprehensive determination over multiple frames (for example, 10 frames) of a circular object.
  • The sign detection process (S401) is executed using a circle detection process, which is divided into a center estimation process and a radius estimation process.
  • In the center estimation process, as shown in FIG. 5, a line segment 405 is drawn from each edge in the normal direction, and a point where at least a certain number of intersections of these line segments 405 overlap is estimated to be the center.
  • In FIG. 5, point 406 is estimated to be the center (candidate).
  • In the radius estimation process, the radius of the circle is estimated based on the histogram shown in FIG. 6, whose horizontal axis is the radius from the center 406 and whose vertical axis is the number of edges within a predetermined width: a circle of gradually increasing radius is assumed around the center 406, the number of edges coinciding with each circle is counted, and radii whose edge counts exceed a predetermined threshold become radius candidates.
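A minimal sketch of this two-stage circle detection is shown below, assuming the edge pixels and their normal directions have already been extracted from the edge-emphasized image; the function names, search range, and thresholds are illustrative rather than values from the patent:

```python
import numpy as np

def estimate_centers(edge_ys, edge_xs, normal_dirs, shape, min_votes=10, max_r=60):
    """Center estimation (cf. FIG. 5): draw a segment from each edge pixel
    along its normal direction and vote; cells where enough segments
    intersect become center candidates."""
    votes = np.zeros(shape, dtype=np.int32)
    for y, x, theta in zip(edge_ys, edge_xs, normal_dirs):
        for r in range(1, max_r):
            for sign in (1, -1):  # the normal points both toward and away from the center
                cy = int(round(y + sign * r * np.sin(theta)))
                cx = int(round(x + sign * r * np.cos(theta)))
                if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                    votes[cy, cx] += 1
    return np.argwhere(votes >= min_votes)  # (y, x) center candidates

def estimate_radii(edge_ys, edge_xs, center, max_r=60, bin_width=2, edge_thresh=20):
    """Radius estimation (cf. FIG. 6): histogram the edge pixels by their
    distance from the center; bins whose edge count exceeds the threshold
    become radius candidates."""
    cy, cx = center
    dists = np.hypot(np.asarray(edge_ys) - cy, np.asarray(edge_xs) - cx)
    counts, bins = np.histogram(dists, bins=np.arange(0, max_r + bin_width, bin_width))
    return [bins[i] for i in np.nonzero(counts >= edge_thresh)[0]]
```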
  • In the sign identification process (S402), an identification process using a classifier is performed on the image containing the circular object.
  • As a result, the type of the circular object is recognized, such as whether the circle is a sign, the kind of sign, and its content (for example, a speed limit).
  • In the sign tracking process (S403), the circular object is tracked, and in the sign determination process (S404) the circular object is judged based on the result of the tracking process.
  • In this way, the recognition of a sign, a circular object in the image, is executed and the result is used in the vehicle control process (S206).
  • FIG. 7 shows the processing flow of the sign detection process (S401).
  • As described above, the cameras 101 and 102 have a first shutter and a second shutter, and the sign detection process (S401) of this embodiment includes processing (S710) on the image obtained with the first shutter (relatively long exposure time) and processing (S720) on the image obtained with the second shutter (relatively short exposure time).
  • FIG. 8 is a diagram illustrating an image captured with the first shutter by one camera (for example, the right camera 102) of the in-vehicle stereo camera device 100. Since the exposure time of the first shutter is longer than that of the second shutter, the red circumferential portion 805 of the speed sign 801 in FIG. 8 can be imaged.
  • The circumferential portion 805 is a region darker than the self-luminous portion 804, the light-emitting (illuminated) portion near the center.
  • The first shutter can detect the circumferential portion 805, but because the first exposure time is set relatively long, the self-luminous portion 804, which is brighter than the circumferential portion 805, may be overexposed and undetectable.
  • FIG. 9 is a diagram illustrating an image captured by using the second shutter with one camera (for example, the right camera 102) of the vehicle-mounted stereo camera device 100.
  • The second exposure time of the second shutter is set shorter than the first exposure time of the first shutter so that the self-luminous portion 804, the light-emitting (illuminated) portion near the center of the speed sign 801, is not overexposed.
  • As a result, the circumferential portion 805 may not be detected, but the self-luminous portion 804, the light-emitting portion, can be detected.
  • The area 803 around the self-luminous portion 804 in FIG. 9 represents the prediction area described later.
  • In the following, the frame captured with the first shutter is called the first frame, and the frame captured with the second shutter is called the second frame.
  • In step S711, the processor searches for circles over the entire image (whole frame) obtained with the first shutter, for example the image of FIG. 8.
  • In the image processing (S203) that precedes step S711, differential processing is applied to the image of FIG. 8, for example, to emphasize the edges.
  • In step S711, the circle search is carried out by applying the above-described circle detection process to this edge-emphasized whole image.
  • In step S712, information on the detected circles (that is, information identifying the position of each circle), for example the coordinates of the center (candidate) of each found circle, its radius (candidate), and the number of detected circles, is stored in the storage unit 106 serving as memory.
  • In step S721 of step S720 of the sign detection process (S401), the center (candidate) of a prediction area (explained later) in the image obtained with the second shutter, for example that of FIG. 9, is calculated from the information of the previous frame, that is, from the circle obtained as a result of the first-shutter processing (S710). More specifically, the coordinates in the second frame of the center of the circular region detected in the first frame are calculated by applying the elapsed time and the motion information of the own vehicle (for example, vehicle speed and yaw rate) to the circle center coordinates stored in step S712.
  • In step S722, a prediction area 803 (see FIG. 9) containing the center coordinates calculated in step S721 is defined.
  • The prediction area 803 is defined so as to contain at least the circle of radius d2, which is determined from the radius d1 stored in step S712 in consideration of the elapsed time and the motion information of the own vehicle (for example, vehicle speed and yaw rate).
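A minimal sketch of steps S721 and S722 under a deliberately simplified ego-motion model (forward motion plus yaw) follows; the patent says only that elapsed time, vehicle speed, and yaw rate are taken into account, so the camera model, parameter names, and margin factor here are assumptions:

```python
import math

def predict_area(center_xy, radius_d1, distance_m, speed_mps, yaw_rate_rps,
                 dt_s, focal_px, margin=1.5):
    """S721/S722 sketch: project the circle center found in the first frame
    into the second frame and define a prediction area (803) that contains
    at least the circle of the updated radius d2."""
    x, y = center_xy
    # Ego yaw shifts the scene horizontally by roughly focal * yaw angle.
    x_pred = x - focal_px * math.tan(yaw_rate_rps * dt_s)
    # Driving toward the sign enlarges its image: scale by the distance ratio.
    new_dist = max(distance_m - speed_mps * dt_s, 0.1)
    radius_d2 = radius_d1 * distance_m / new_dist
    half = margin * radius_d2  # box half-size; must cover the d2 circle
    return (x_pred, y), radius_d2, (x_pred - half, y - half, x_pred + half, y + half)
```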
  • In step S723, the processor performs the above-described circle detection process on the inside of the prediction area 803 defined in step S722.
  • That is, the circle detection process, which detects the features of a circle, is performed on a limited area (in other words, a processing area smaller than the whole-image processing area of the circle detection process in step S711).
  • In step S724, the processor checks whether a circle has been detected in the prediction area 803. If no circle is detected in the prediction area 803 (S724: Yes), the process proceeds to step S725, which is described in detail later with reference to FIG. 10 and subsequent figures. If a circle is detected in the prediction area 803 (S724: No), the process proceeds to step S726.
  • In step S726, the processor stores the detected circle information in the storage unit 106 serving as memory, and the process returns to step S721. If a circle is detected, the process does not move to step S725, because step S725 presupposes that no circle has been detected. Step S725 is performed only on images acquired with the second shutter's second exposure time (in other words, shutter speed), which is shorter than the first shutter's first exposure time.
  • Next, the information of another circle stored in step S712 is read out, and steps S721 to S726 described above are repeated.
  • This series of processes is repeated for as many circles as were detected in the image taken with the first shutter's exposure time (in other words, shutter speed).
  • For the circle information stored in step S726, the identification process using the classifier is performed in the above-mentioned sign identification process (S402).
  • FIG. 10 shows the processing flow of the self-luminous object detection process in step S725 described above.
  • As described above, this self-luminous object detection process is executed when no circle is detected in the prediction area 803 (see FIG. 9) calculated in step S722 (in other words, when it is determined that the image of the prediction area 803 does not contain the features of a circle).
  • FIG. 11 is a diagram illustrating the following step S1001.
  • In step S1001, the processor counts the pixels in the prediction area 803 whose brightness exceeds a predetermined threshold (described later with FIG. 14), in other words the pixels brighter than the predetermined brightness, and creates (acquires) histograms.
  • Specifically, histograms 1008 and 1009 are created (acquired) for the mutually orthogonal x-axis and y-axis, respectively.
  • The processor then obtains the centroid position of histogram 1008 and the centroid position of histogram 1009.
  • From the centroid positions of the plural histograms 1008 and 1009, the coordinates of the centroid 1010 of the prediction area 803 are identified.
  • The centroid 1010 is estimated to be the position corresponding to the center of the circle of the object in the prediction area 803.
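A minimal sketch of step S1001 follows, assuming the prediction area 803 is available as a 2-D grayscale array and that the threshold has been chosen by the FIG. 14 method described later:

```python
import numpy as np

def estimate_center_from_brightness(area: np.ndarray, threshold: float):
    """S1001 sketch: total the pixels brighter than the threshold along each
    axis (histograms 1008 and 1009), then take the centroid of each
    histogram; together they give centroid 1010, the estimated position
    corresponding to the circle center."""
    bright = area > threshold           # mask of pixels brighter than the threshold
    hist_x = bright.sum(axis=0)         # per-column counts (histogram 1008)
    hist_y = bright.sum(axis=1)         # per-row counts (histogram 1009)
    if hist_x.sum() == 0:               # no bright pixels: no estimate
        return None
    cx = np.average(np.arange(area.shape[1]), weights=hist_x)
    cy = np.average(np.arange(area.shape[0]), weights=hist_y)
    return cx, cy                       # coordinates of centroid 1010
```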
  • FIG. 12 is a diagram illustrating the following step S1002.
  • In step S1002, a rectangular region 1013 is determined using the coordinates of the circle center estimated in step S1001 and the radius d2 used in step S722.
  • Specifically, the rectangular region 1013 is determined by drawing lines parallel to the x-axis and the y-axis through a first point 1011 and a second point 1012, respectively.
  • The first point 1011 is determined by subtracting the radius d2 from the coordinates (XC, YC) of the circle center estimated in step S1001.
  • The second point 1012 is determined by adding the radius d2 to the coordinates (XC, YC) of the circle center estimated in step S1001.
  • In step S1003, the brightness (valid/invalid) of the region 1013 is evaluated, and in step S1004 it is determined whether there is a region brighter than a predetermined brightness (predetermined threshold: described later with FIG. 14) around the center of the region 1013 (in other words, within a region of predetermined size containing the center). As shown in FIG. 13, if there is a bright region 1014, in other words if the result is not invalid (S1004: Yes), it is determined that a self-luminous object, a light-emitting (illuminated) portion, is present, and the process proceeds to step S1005.
  • In step S1005, information about the region 1013 (that is, information about the bright region 1014 near the center of the region 1013) is stored in the storage unit 106 serving as memory.
  • The information about the region 1013 (region information) is, for example, the center coordinates calculated in step S1001 and the radius d2.
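Continuing the sketch, steps S1002 to S1005 can be outlined as follows; the size of the central patch and the minimum bright-pixel ratio are assumed tuning parameters not specified in the patent:

```python
import numpy as np

def detect_self_luminous(area: np.ndarray, center, radius_d2: float,
                         threshold: float, core_frac=0.25, min_ratio=0.2):
    """S1002-S1005 sketch: form rectangular region 1013 between the points
    (center - d2) and (center + d2), then check whether a region brighter
    than the threshold (bright region 1014) exists around its center."""
    cx, cy = center
    x0, y0 = int(cx - radius_d2), int(cy - radius_d2)   # first point 1011
    x1, y1 = int(cx + radius_d2), int(cy + radius_d2)   # second point 1012
    region = area[max(y0, 0):y1, max(x0, 0):x1]         # rectangular region 1013
    if region.size == 0:
        return None
    h, w = region.shape
    dy, dx = max(1, int(h * core_frac)), max(1, int(w * core_frac))
    patch = region[h // 2 - dy:h // 2 + dy, w // 2 - dx:w // 2 + dx]
    if patch.size and (patch > threshold).mean() >= min_ratio:
        return {"center": (cx, cy), "radius": radius_d2}  # region info stored in S1005
    return None  # no bright region 1014 near the center (invalid)
```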
  • FIG. 14 is a diagram illustrating a threshold determination method for separating the foreground (character portion, light-emitting portion) from the background (black background portion, non-light-emitting portion) in the prediction area 803.
  • One example is to observe the ratio between the number of pixels in the foreground and the number of pixels on the boundary between background and foreground.
  • Image 1401 in FIG. 14 is a case where only very high-luminance pixels are taken as the foreground. The number of foreground pixels is small (three in image 1401) and the number of adjacent black pixels is large by comparison, so there are many isolated areas. From this ratio it can be estimated that the threshold is not suitable for extracting the character part of the foreground.
  • Image 1403 in FIG. 14 is an example where the number of pixels occupied by the foreground is extremely large compared with the number of boundary pixels. In such a case, it can be estimated from this ratio that the numerals of the character part (especially the zeros) are filled in and may be unsuitable for recognition.
  • Image 1402 in FIG. 14 is an example of a threshold suitable for subsequent recognition processing and the like: the foreground has a certain number of pixels, and its ratio to the number of background pixels is in the range expected from the character pattern.
  • In this way, a threshold can be set according to the quality of the image (contrast, average gradation).
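A minimal sketch of this ratio-based threshold selection follows; the candidate sweep, the 4-neighbor boundary definition, and the accepted ratio band are illustrative assumptions:

```python
import numpy as np

def choose_threshold(patch: np.ndarray, candidates, lo=0.5, hi=3.0):
    """FIG. 14 sketch: score each candidate threshold by the ratio of
    foreground pixels to background/foreground boundary pixels. Too few
    foreground pixels (image 1401) or an overwhelming foreground
    (image 1403) give extreme ratios; a ratio in the band expected for a
    character pattern (image 1402) is accepted."""
    best, best_score = None, float("inf")
    for t in candidates:
        fg = patch > t
        boundary = np.zeros_like(fg)
        for axis in (0, 1):
            for shift in (1, -1):  # 4-neighborhood; wrap-around at borders is ignored here
                boundary |= fg & np.roll(~fg, shift, axis=axis)
        n_fg, n_bd = int(fg.sum()), int(boundary.sum())
        if n_bd == 0:
            continue
        ratio = n_fg / n_bd
        if lo <= ratio <= hi:
            score = abs(ratio - (lo + hi) / 2)  # prefer mid-band ratios
            if score < best_score:
                best, best_score = t, score
    return best
```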
  • The self-luminous object detection process of step S725 described above may also be performed when the features of a circle are detected.
  • As described above, in this embodiment, a plurality of histograms 1008 and 1009 obtained by totaling the number of pixels brighter than a predetermined brightness are acquired from the images captured by the cameras 101 and 102, and the center estimation process (S1001), which estimates the position corresponding to the center of the circle of the object in the image, is performed from the plural histograms 1008 and 1009.
  • The image is one captured with an exposure time or shutter speed at which the light-emitting portion of the object can be detected, in order to perform the identification processing of the object.
  • In addition, a circle detection process (S711) is performed to acquire information identifying the position of a circle from another image captured with a longer exposure time or a slower shutter speed.
  • The processing area (limited area) of the image for the center estimation process (S1001) is smaller than the processing area (whole image) of the other image for the circle detection process (S711).
  • The image is captured in a frame after the other image.
  • In other words, this embodiment acquires a first image (an image obtained with the first shutter, a relatively long exposure time) suited to detecting a region of a first brightness, and a second image (an image obtained with the second shutter, a relatively short exposure time) suited to detecting a region of a second brightness brighter than the first brightness.
  • The first process (S710) is performed on the first image, the second process (S720) is performed on the second image, and the second process uses the result of the first process.
  • Then, when the circle detection process using the circumferential edge fails in the limited area (prediction area 803) of the image taken with the second shutter, a self-luminous object, for example an LED-type illuminated sign, is detected instead.
  • According to this embodiment, it therefore becomes possible to detect an object that includes a bright region and a darker region, or an object in which one part emits light and the rest does not. More specifically, by detecting self-luminous objects, the detection performance for illuminated signs can be improved. As a result, the recognition performance for objects such as illuminated signs in an image can be improved.
  • Although the above embodiment describes the in-vehicle stereo camera device 100 as composed of two cameras, one camera may be used, or three or more cameras may be used.
  • The present invention is not limited to the above-described embodiment and includes various modified forms.
  • The above embodiment has been described in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to one having all the described configurations.
  • The present invention is widely applicable, for example, to detecting an object having a plurality of optical properties, for example an object that partially emits light.
  • Each of the above configurations, functions, processing units, processing means, and so on may be realized in hardware by designing some or all of them as, for example, an integrated circuit. Each of the above configurations, functions, and so on may also be realized in software by a processor interpreting and executing a program that realizes each function. Information such as programs, tables, and files realizing each function can be stored in memory, a hard disk, a storage device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
  • The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines of the product are necessarily shown. In practice, almost all configurations can be considered mutually connected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided is a processing device capable of improving the ability to recognize an object, such as an illuminated road sign, in an image. The processing device acquires a plurality of histograms 1008, 1009 obtained by totaling the number of pixels brighter than a predetermined brightness from images acquired by cameras 101, 102, and performs a center estimation process (S1001) for estimating a position corresponding to the center of a circle of an object in the images from the plurality of histograms 1008, 1009.

Description

Processing device
The present invention relates to a processing device, for example a processing device for a sensing system such as a stereo camera system.
In recent years, with the spread of in-vehicle camera devices, demand has been growing for various recognition functions for safe driving and automated driving. Among them, a stereo camera device simultaneously measures visual information from the image and distance information to objects in the image, so it can grasp in detail the various objects around the vehicle (people, cars, three-dimensional objects, road surfaces, road markings, signboards, etc.) and is said to contribute to improved safety during driving assistance.
Signs are one of the targets of object recognition for in-vehicle camera devices. In general, sign recognition is used, in cooperation with map information, for accelerating and decelerating autonomous vehicles. EuroNCAP (2016-2020 update), an evaluation index for advanced driver assistance systems, also includes evaluation items related to SAS (Speed Assistance Systems), whose importance is increasing.
Various technologies and devices have conventionally been proposed for in-vehicle camera devices that are mounted on a vehicle and recognize the situation ahead of it. For such in-vehicle camera devices, with regard to improving sign recognition performance, Patent Document 1 can be cited as a technique focused on increasing recognition accuracy.
Patent Document 1: Japanese Unexamined Patent Publication No. 2019-125022
However, the prior art described in Patent Document 1 and elsewhere gives insufficient consideration to detecting an object that includes a bright region and a darker region, or an object in which one part emits light and the rest does not. More specifically, some illuminated signs stand above the road surface with the area near the center (for example, the speed limit display) emitting light while the circumferential portion does not; in places the headlights hardly reach (at night and in tunnels), such signs are difficult to detect with a conventional circle detection method that uses the circumferential edge.
The present invention has been made in view of the above circumstances, and its object is to provide a processing device capable of improving the recognition performance of an object, such as an illuminated sign, in an image.
To solve the above problems, the processing device of the present invention is a processing device that recognizes an object in an image; it acquires, from the image, a plurality of histograms that total the number of pixels brighter than a predetermined brightness, and performs a center estimation process that estimates, from the plurality of histograms, the position corresponding to the center of a circle of the object in the image.
According to the present invention, it is possible to detect an object that includes a bright region and a darker region, or an object in which one part emits light and the rest does not. More specifically, by detecting a self-luminous object, the detection performance for illuminated signs can be improved; hence the recognition performance for objects such as illuminated signs in an image can be improved.
Problems, configurations, and effects other than those described above will be clarified by the following description of embodiments.
FIG. 1 is a block diagram showing the schematic configuration of the in-vehicle stereo camera device of this embodiment.
FIG. 2 is a flow diagram explaining the content of the stereo camera processing on which this embodiment is based.
FIG. 3 is a timing chart of the various processes of the stereo camera processing on which this embodiment is based.
FIG. 4 is a flow diagram explaining the content of the sign recognition process.
FIG. 5 is a diagram explaining the center estimation process and the radius estimation process in the sign detection process (circle detection process).
FIG. 6 is a histogram used in the radius estimation process in the sign detection process (circle detection process).
FIG. 7 is a flow diagram explaining the content of the sign detection process.
FIG. 8 is a diagram explaining an image captured with the first shutter.
FIG. 9 is a diagram explaining an image captured with the second shutter.
FIG. 10 is a flow diagram explaining the content of the self-luminous object detection process.
FIG. 11 is a diagram explaining the center estimation process in the self-luminous object detection process.
FIG. 12 is a diagram explaining the region estimation process in the self-luminous object detection process.
FIG. 13 is a diagram explaining the case where there is a bright region around the center of the estimated region.
FIG. 14 is a diagram explaining the threshold determination method.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In each figure, parts having the same function are given the same reference numerals, and repeated description may be omitted.
(Schematic configuration of the in-vehicle stereo camera device)
FIG. 1 is a block diagram showing the schematic configuration of the in-vehicle stereo camera device of this embodiment. The in-vehicle stereo camera device 100 of this embodiment is mounted on a vehicle and recognizes the environment outside the vehicle based on image information of a target area ahead of the vehicle. The in-vehicle stereo camera device 100 recognizes, for example, road white lines, pedestrians, vehicles, other three-dimensional objects, traffic signals, signs, and lit lamps from the captured images (information), and adjusts the braking, steering, and so on of the vehicle (own vehicle) on which the stereo camera device 100 is mounted.
The in-vehicle stereo camera device 100 has two cameras 101 and 102 (left camera 101 and right camera 102) arranged side by side to acquire image information, and a processing device 110 that performs recognition processing of objects in the image based on the image information acquired by the cameras 101 and 102. The processing device 110 is configured as a computer having a processor such as a CPU (Central Processing Unit) and memory such as ROM (Read Only Memory), RAM (Random Access Memory), and an HDD (Hard Disk Drive). Each function of the processing device 110 is realized by the processor executing programs stored in the ROM. The RAM stores data including intermediate data of computations performed by the programs the processor executes.
Specifically, the processing device 110 has an image input interface 103 that controls the imaging of the cameras 101 and 102 and captures the images they take. An image captured through the image input interface 103 is sent over the internal bus 109 and processed by the image processing unit 104 and the arithmetic processing unit 105; intermediate results and final image data are stored in the storage unit 106, which serves as memory.
The image processing unit 104 compares the first image (left image) obtained from the image sensor of camera 101 with the second image (right image) obtained from the image sensor of camera 102, applies to each image corrections for device-specific deviations caused by the image sensors and image corrections such as noise interpolation, and stores the results in the storage unit 106. It then finds mutually corresponding points between the first and second images, computes parallax information, and likewise stores it in the storage unit 106.
The arithmetic processing unit 105 uses the images and parallax information (distance information for each point on the image) stored in the storage unit 106 to recognize the various objects necessary for perceiving the environment around the vehicle. The various objects include people, cars, other obstacles, traffic lights, signs, car tail lamps and headlights, and the like. Some of these recognition results and intermediate computation results are recorded in the storage unit 106 as before. After performing the various object recognitions on the captured images, the arithmetic processing unit 105 computes a vehicle control policy using the recognition results.
The vehicle control policy obtained as a result of the computation, and part of the object recognition results, are conveyed through the CAN interface 107 to the in-vehicle network CAN 111, by which the vehicle (own vehicle) is braked. Regarding these operations, the control processing unit 108 monitors whether any processing unit is operating abnormally and whether errors occur during data transfer, providing a mechanism that prevents abnormal operation.
Via the internal bus 109, the image processing unit 104 is connected to the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the image input interface 103 (the input/output unit to the image sensors of cameras 101 and 102), and the CAN interface 107 (the input/output unit to the external in-vehicle network CAN 111). The control processing unit 108, the image processing unit 104, the storage unit 106, the arithmetic processing unit 105, and the input/output units 103 and 107 are composed of one or more computer units. The storage unit 106 is composed, for example, of memory that stores image information obtained by the image processing unit 104, image information produced as a result of scanning by the arithmetic processing unit 105, and the like. The CAN interface 107, the input/output unit to the external in-vehicle network CAN 111, outputs information from the in-vehicle stereo camera device 100 to the control system of the own vehicle via the in-vehicle network CAN 111.
(Processing content of the in-vehicle stereo camera device)
FIG. 2 shows the processing flow (in other words, the content of the stereo camera processing) in the in-vehicle stereo camera device 100 on which this embodiment is based.
First, images are captured by the left and right cameras 101 and 102 (S201, S202), and image processing such as correction to absorb the intrinsic characteristics of the image sensors is applied to each of the captured image data 121 and 122 (S203). The processing results are stored in the image buffer 161, which is provided in the storage unit 106 of FIG. 1. The two corrected images are then collated against each other to obtain the parallax information of the images from the left and right cameras 101 and 102. The parallax between the left and right images reveals where a given point of interest on the target object appears in the images of the left and right cameras 101 and 102, and the distance to the object is obtained by the principle of triangulation. This is done by the parallax processing (S204). The image processing (S203) and the parallax processing (S204) are performed by the image processing unit 104 of FIG. 1, and the finally obtained images and parallax information are stored in the storage unit 106.
Furthermore, various object recognition processes are performed using the stored images and parallax information (S205). The objects to be recognized include people, cars, other three-dimensional objects, signs, traffic lights, tail lamps, and the like, and the recognition dictionary 162, in which the features of the objects to be recognized are stored and recorded, for example as machine learning data, is used as needed. Then, taking into account the object recognition results and the state of the own vehicle (speed, steering angle, etc.), the vehicle control process determines a policy, for example to warn the occupants, to brake the own vehicle or adjust its steering angle, or to perform avoidance control of an object (S206), and the result is output externally through the CAN interface 107 (S207). The various object recognition processes (S205) and the vehicle control process (S206) are performed by the arithmetic processing unit 105 of FIG. 1, and the output to the external in-vehicle network CAN 111 (S207) is performed by the CAN interface 107. Each of these processes and means is composed of, for example, one or more computer units configured to exchange data with one another.
FIG. 3 shows the timing of the various processes in the in-vehicle stereo camera device 100 of this embodiment.
The timing chart of FIG. 3 roughly shows two flows, 301 and 302. Flow 301 is the processing timing of the image processing unit 104 of FIG. 1, and flow 302 is the processing timing of the arithmetic processing unit 105 of FIG. 1.
First, in flow 301, the right image input process (S303) is performed. This corresponds to capturing an image with the right camera 102 of FIG. 2 (S202), passing it through the image processing (S203), and storing the right image in the image buffer 161. Next, the left image input process (S304) is performed, corresponding to capturing an image with the left camera 101 of FIG. 2 (S201), the image processing (S203), and storing the left image in the image buffer 161. Next, the parallax processing (S204) is performed. This corresponds to reading the two left and right images from the image buffer 161 of FIG. 2, computing the parallax by collating the two images, and storing the computed parallax information in the storage unit 106. At this point, the images and the parallax information are assembled in the storage unit 106.
Next, in flow 302, the various object recognition processes (S205) are performed using the images and parallax information stored in the storage unit 106, the vehicle control process (S206) is performed, and the results are output to the in-vehicle network CAN 111 through the CAN interface 107.
In the various object recognition processes (S205), people, cars, other three-dimensional objects, signs, traffic lights, tail lamps, and so on are recognized from the images and parallax information stored in the storage unit 106. Below, the sign recognition process (S205a), which recognizes signs installed along roads and the like and is a characteristic part of this embodiment, is described.
(Sign recognition process)
FIG. 4 shows the processing flow of the sign recognition process (S205a).
The sign recognition process (S205a) of this embodiment is basically a process that recognizes signs, which are circular objects, by circle detection. As a premise, the cameras 101 and 102 are provided with a first shutter and a second shutter. The exposure time of the first shutter is longer than that of the second shutter in order to recognize dark objects; in other words, the shutter speed of the first shutter is slower than that of the second shutter. Imaging with the second shutter (relatively short exposure time) is performed in a frame after imaging with the first shutter (relatively long exposure time).
On this premise, the sign recognition process (S205a) of this embodiment basically includes, as shown in FIG. 4, using an image (left or right) that has passed through the image processing (S203) and been stored in the image buffer 161: a sign detection process (S401) that extracts circular objects, a sign identification process (S402) that identifies the type of circular object, a sign tracking process (S403) that associates circular objects between frames, and a sign determination process (S404) that makes a comprehensive determination over multiple frames (for example, 10 frames) of a circular object.
The sign detection process (S401) is executed using a circle detection process, which is divided into a center estimation process and a radius estimation process. In the center estimation process, as shown in FIG. 5, a line segment 405 is drawn from each edge in the normal direction, and a point where at least a certain number of intersections of these line segments 405 overlap is estimated to be the center; in FIG. 5, point 406 is estimated to be the center (candidate). In the radius estimation process, the radius of the circle is estimated based on the histogram shown in FIG. 6, whose horizontal axis is the radius from the center 406 and whose vertical axis is the number of edges within a predetermined width. The histogram of FIG. 6 is obtained as follows. Assume a circle whose radius gradually increases from the center 406 of FIG. 5. When this circle coincides with edges (for example, the first edge 407 or the second edge 408), the number of coinciding edges is counted and plotted on the vertical axis of the histogram of FIG. 6. Where a circle exists, as at 407, the edge count 410 of the histogram is large; where no circle exists, as at 409, the edge count 411 is small. Radii whose histogram value exceeds the predetermined threshold 412 become radius candidates. Through this processing, circular objects having the circle features (center, radius) are extracted from the image (left or right) stored in the image buffer 161.
In the sign identification process (S402), identification using a classifier is performed on the image containing the circular object. As a result of the identification process, the type of the circular object is recognized, such as whether the circle is a sign, the kind of sign, and its content (for example, a speed limit).
In the sign tracking process (S403), the circular object is tracked, and in the sign determination process (S404) the circular object is judged based on the result of the tracking process. In this way, recognition of a sign, a circular object in the image, is executed and the result is used in the vehicle control process (S206).
(Sign detection process)
Next, the sign detection process (S401) described above will be explained in more detail.
FIG. 7 shows the processing flow of the sign detection process (S401). As described above, the cameras 101 and 102 have a first shutter and a second shutter, and the sign detection process (S401) of the present embodiment includes processing (S710) on the image obtained with the first shutter (relatively long exposure time) and processing (S720) on the image obtained with the second shutter (relatively short exposure time).
FIG. 8 illustrates an image captured with the first shutter by one camera (for example, the right camera 102) of the in-vehicle stereo camera device 100. Since the exposure time of the first shutter is longer than that of the second shutter, the red circumferential portion 805 of the speed sign 801 can be imaged in FIG. 8. The circumferential portion 805 is a region darker than the self-luminous portion 804, the light-emitting (electric-light) portion near the center. The first shutter can detect the circumferential portion 805, but because the first exposure time is set relatively long, the self-luminous portion 804, which is brighter than the circumferential portion 805, may be overexposed and undetectable.
FIG. 9 illustrates an image captured with the second shutter by one camera (for example, the right camera 102) of the in-vehicle stereo camera device 100. The second exposure time of the second shutter is set shorter than the first exposure time of the first shutter so that the self-luminous portion 804, the light-emitting (electric-light) portion near the center of the speed sign 801, is not overexposed. As a result, as shown in FIG. 9, the circumferential portion 805 may not be detected, but the self-luminous portion 804, the light-emitting portion, can be. Therefore, even when the self-luminous portion 804 cannot be detected in the first-shutter image described above, it is present in the second-shutter image and can be detected there. The region 803 around the self-luminous portion 804 in FIG. 9 represents the prediction area described later.
In the following, the frame captured with the first shutter is referred to as the first frame, and the frame captured with the second shutter as the second frame.
As shown in FIG. 7, in step S711 of the first-shutter processing (S710) of the sign detection process (S401), the processor searches for circles over the entire image (whole area) obtained with the first shutter, for example the image of FIG. 8. In the image processing (S203) preceding step S711, differential processing is applied, for example to the image of FIG. 8, to emphasize edges. In step S711, circles are searched for by applying the circle detection process described above to this entire edge-emphasized image.
In step S712, information on the detected circles (that is, information for specifying the position of each circle), for example the coordinates of the center (candidate) of each detected circle, its radius (candidate), and the number of detected circles, is stored in the storage unit 106 serving as a memory.
Using the circle information stored in step S712, the sign identification process (S402) described above performs an identification process using a classifier, for example on the image of FIG. 8.
In step S721 of the second-shutter processing (S720) of the sign detection process (S401), the center (candidate) of the prediction area (explained later) in the image obtained with the second shutter, for example the image of FIG. 9, is calculated from the circle information obtained in the previous frame, that is, as a result of the first-shutter processing (S710). More specifically, by applying the elapsed time and the motion information of the own vehicle (for example, vehicle speed and yaw rate) to the circle center coordinates stored in step S712, the coordinates at which the center of the circular region detected in the first frame will lie in the second frame are calculated.
In step S722, a prediction area 803 (see FIG. 9) containing the center coordinates calculated in step S721 is defined. The prediction area 803 is defined so as to contain at least the circle of radius d2, where d2 is determined from the radius d1 stored in step S712 in consideration of the elapsed time and the motion information of the own vehicle (for example, vehicle speed and yaw rate).
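Steps S721 and S722 can be sketched as follows. Since the embodiment does not specify the projection model, the pinhole small-angle yaw correction, the linear size-growth model, and the constants focal_px and margin are all assumptions made for this sketch.

```python
def predict_area(center, d1, dt, speed, yaw_rate, focal_px=1400.0, margin=1.2):
    """Sketch of S721/S722: project the circle center stored in step S712
    into the second frame and derive the prediction-area radius d2 from d1.
    focal_px, the growth model, and margin are assumed values."""
    x, y = center
    # Yaw of the own vehicle shifts the sign horizontally in the image;
    # small-angle approximation: pixel shift ~ focal length * rotation angle.
    x2 = x - focal_px * yaw_rate * dt
    # As the vehicle approaches the sign, its apparent size grows; a simple
    # linear growth with travelled distance is assumed here.
    scale = 1.0 + 0.01 * speed * dt
    d2 = d1 * scale * margin
    return (x2, y), d2
```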
In step S723, the processor performs the circle detection process described above on the inside of the prediction area 803 defined in step S722. Performing the circle detection process on such a limited region (in other words, a processing region smaller than the one used for the circle detection process in step S711) makes the processing load on the processor lighter than when the circle detection process is performed on the entire image (S711).
In step S724, the processor checks whether no circle has been detected in the prediction area 803. If no circle has been detected in the prediction area 803 (S724: Yes), the process proceeds to step S725, which is described in detail later with reference to FIG. 10 and subsequent figures. If a circle has been detected in the prediction area 803 (S724: No), the process proceeds to step S726, in which the processor saves the information on the detected circle in the storage unit 106 serving as a memory, and the process returns to step S721. When a circle is detected, the process does not proceed to step S725, because step S725 presupposes that no circle has been detected. Step S725 is performed only on images acquired with the second exposure time (in other words, shutter speed) of the second shutter, which is shorter than the first exposure time of the first shutter.
Then, the information on another circle stored in step S712 is read out, and steps S721 to S726 described above are repeated. This series of processing is performed as many times as the number of circles detected with the exposure time (in other words, shutter speed) of the first shutter.
After that, using the circle information stored in step S726, the identification process using the classifier is performed in the sign identification process (S402) described above.
(Self-luminous object detection process)
FIG. 10 shows the processing flow of the self-luminous object detection process of step S725 described above. As mentioned above, this process is carried out when no circle is detected within the prediction area 803 (see FIG. 9) calculated in step S722 (in other words, when it is judged that the image of the prediction area 803 does not contain the features of a circle).
FIG. 11 is a diagram illustrating step S1001 below.
In step S1001, the processor counts, within the prediction area 803, the number of pixels whose luminance exceeds a predetermined threshold (explained later with reference to FIG. 14), in other words pixels brighter than a predetermined brightness, and creates (acquires) histograms. Here, histograms 1008 and 1009 are created (acquired) for the intersecting (orthogonal) x-axis and y-axis, respectively. The processor then obtains the centroid position of histogram 1008 and the centroid position of histogram 1009. From the centroid positions of the plurality of histograms 1008 and 1009, the coordinates of the centroid 1010 of the prediction area 803 are specified. This centroid 1010 is estimated to be (the position corresponding to) the center of the circle of the object in the prediction area 803.
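A minimal sketch of this center estimation follows, assuming the prediction area 803 is available as a two-dimensional luminance array; the function name is an assumption for illustration.

```python
import numpy as np

def estimate_center(area, brightness_thresh):
    """Sketch of step S1001: count pixels brighter than the threshold per
    column and per row (histograms 1008/1009) and take the centroid of each
    histogram; together they give the estimated circle center 1010."""
    bright = area > brightness_thresh
    hist_x = bright.sum(axis=0).astype(np.float64)  # per-column counts (1008)
    hist_y = bright.sum(axis=1).astype(np.float64)  # per-row counts    (1009)
    if hist_x.sum() == 0:
        return None                                 # no bright pixels at all
    xc = (np.arange(hist_x.size) * hist_x).sum() / hist_x.sum()
    yc = (np.arange(hist_y.size) * hist_y).sum() / hist_y.sum()
    return xc, yc                                   # centroid 1010 (Xc, Yc)
```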
FIG. 12 is a diagram illustrating step S1002 below.
In step S1002, a rectangular region 1013 is determined using the coordinates of the circle center estimated in step S1001 and the radius d2 used in step S722. The rectangular region 1013 is determined by drawing lines parallel to the x-axis and the y-axis through a first point 1011 and a second point 1012. The first point 1011 is obtained by subtracting the radius d2 from the coordinates (XC, YC) of the circle center estimated in step S1001, and the second point 1012 by adding the radius d2 to them.
In step S1003, the brightness (valid/invalid) of the region 1013 is judged, and in step S1004 it is judged whether or not a region brighter than a predetermined brightness (a predetermined threshold, explained later with reference to FIG. 14) exists around the center of the region 1013 (in other words, in a region of a predetermined size including the center). As shown in FIG. 13, if a bright region 1014 exists, in other words if the region is not invalid (S1004: Yes), it is judged that a self-luminous object, that is, a light-emitting (electric-light) portion, is present, and the process proceeds to step S1005.
In step S1005, information on the region 1013 (that is, information on the bright region 1014 near the center of the region 1013) is saved and stored in the storage unit 106 serving as a memory. The information on the region 1013 (region information) is, for example, the center coordinates calculated in step S1001 and the radius d2.
If no bright region exists, in other words if the region is invalid (S1004: No), it is judged that no self-luminous object (light-emitting or electric-light portion) is present; no information is saved in the storage unit 106, and the process ends.
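Steps S1002 to S1005 can be sketched as follows, reusing estimate_center from the previous sketch; the size of the central sub-region and the brightness ratio used for the valid/invalid decision are not specified in the embodiment and are assumed here.

```python
import numpy as np

def detect_self_luminous(area, center, d2, brightness_thresh,
                         core_frac=0.25, min_bright_ratio=0.5):
    """Sketch of S1002-S1005: build rectangle 1013 from the estimated center
    and radius d2, then test for a bright region 1014 around its center.
    core_frac and min_bright_ratio are illustrative assumptions."""
    xc, yc = center
    h, w = area.shape
    # First point 1011 = (Xc - d2, Yc - d2); second point 1012 = (Xc + d2, Yc + d2).
    x1, y1 = max(int(xc - d2), 0), max(int(yc - d2), 0)
    x2, y2 = min(int(xc + d2), w), min(int(yc + d2), h)
    rect = area[y1:y2, x1:x2]                        # rectangular region 1013
    if rect.size == 0:
        return None
    # Central sub-region of a predetermined size including the center (S1004).
    ch, cw = rect.shape
    mh, mw = max(int(ch * core_frac), 1), max(int(cw * core_frac), 1)
    core = rect[ch // 2 - mh: ch // 2 + mh, cw // 2 - mw: cw // 2 + mw]
    if (core > brightness_thresh).mean() >= min_bright_ratio:  # S1004: Yes
        return {"center": (xc, yc), "radius": d2}    # region info saved in S1005
    return None                                      # S1004: No, nothing saved
```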
FIG. 14 is a diagram illustrating a method of determining the threshold for separating the foreground (character portion, light-emitting portion) from the background (black portion, non-light-emitting portion) within the prediction area 803.
There are several methods for separating the foreground portion from the background portion. One example is to examine the ratio between the number of pixels in the foreground portion and the number of pixels on the background/foreground boundary.
For example, image 1401 in FIG. 14 shows the case where only pixels of very high luminance are taken as foreground. The number of foreground pixels is small (three in image 1401), and relative to that number, the number of black pixels adjacent to the foreground is large, so there are many isolated regions. From this ratio it can be estimated that such a threshold is unsuitable for extracting the character portion of the foreground.
Image 1403 in FIG. 14 is an example in which the number of pixels occupied by the foreground is extremely large compared with the number of pixels on the black boundary. In such a case, it can be estimated from this ratio that the digits of the character portion (in particular the digit zero) are filled in and may be unsuitable for recognition.
Image 1402 in FIG. 14 is an example of a threshold suitable for the subsequent recognition processing. The foreground contains a certain number of pixels, and its ratio to the number of background pixels also has the balance expected from the character pattern. By such means, a threshold can be set according to the quality of the image (contrast, average gradation).
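A sketch of this ratio-based threshold selection follows; the candidate threshold range, the 4-neighborhood boundary definition, and the acceptable ratio band (lo, hi) are assumptions chosen to mimic the balance of image 1402.

```python
import numpy as np

def choose_threshold(area, candidates=range(80, 240, 10), lo=1.0, hi=3.0):
    """Sketch of the FIG. 14 idea: for each candidate threshold, compare the
    number of foreground pixels with the number of background pixels lying on
    the foreground/background boundary; keep a threshold whose ratio is
    balanced (avoiding isolated dots as in 1401 and crushed digits as in 1403)."""
    for t in candidates:
        fg = area > t
        n_fg = int(fg.sum())
        if n_fg == 0:
            continue                       # nothing left in the foreground
        # Background pixels 4-adjacent to a foreground pixel form the boundary.
        pad = np.pad(fg, 1)
        neighbor = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                    pad[1:-1, :-2] | pad[1:-1, 2:])
        n_boundary = int((~fg & neighbor).sum())
        ratio = n_fg / max(n_boundary, 1)
        if lo <= ratio <= hi:              # balanced, as in image 1402
            return t
    return None                            # no candidate produced a balance
```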
It goes without saying that the self-luminous object detection process of step S725 described above may also be performed when the features of a circle are detected.
As explained above, the present embodiment acquires, from the images captured by the cameras 101 and 102, a plurality of histograms 1008 and 1009 in which the numbers of pixels brighter than a predetermined brightness are totaled, and performs a center estimation process (S1001) that estimates, from the plurality of histograms 1008 and 1009, the position corresponding to the center of the circle of the object in the image.
Furthermore, based on the judgment (S724) of whether or not the image contains the features of a circle, it is decided whether or not to perform the center estimation process (S1001); when it is judged that the image does not contain the features of the circle (S724: Yes), the center estimation process (S1001) is performed.
The image is one captured with an exposure time or shutter speed at which the light-emitting portion in the object can be detected for the purpose of identifying the object. A circle detection process (S711) is performed to acquire information for specifying the position of the circle from another image captured with another exposure time longer than that exposure time or another shutter speed slower than that shutter speed. The processing region (limited area) of the image for the center estimation process (S1001) is smaller than the processing region (entire image) of the other image for the circle detection process (S711), and the image is captured in a frame later than the other image.
It is also judged whether or not a region brighter than a predetermined brightness exists around the position corresponding to the center of the circle (S1004), and if such a bright region exists (S1004: Yes), information on the bright region is saved in the storage unit (S1005).
In other words, one aspect of the present embodiment is, for example, to acquire a first image (an image obtained with the first shutter, i.e., a relatively long exposure time) suited to detecting a region of a first brightness, and to acquire a second image (an image obtained with the second shutter, i.e., a relatively short exposure time) suited to detecting a region of a second brightness brighter than the first. A first process (S710) is performed on the first image and a second process (S720) on the second image; in particular, the second process uses the result of the first process to detect bright regions. When the circle detection process using the circumferential edge cannot succeed in the limited area (prediction area 803) of the second-shutter image, an LED-type electric sign, for example, is detected instead.
With such a configuration, according to the present embodiment, it is possible to detect an object that includes a bright region and a region darker than it, or an object in which one part emits light while the other regions do not. More specifically, detecting self-luminous objects improves the detection performance for electric signs, for example. It is therefore possible to improve the recognition performance for objects in an image, such as electric signs.
Although the embodiment described above uses the in-vehicle stereo camera device 100 composed of two cameras, a single camera may be used, or three or more cameras may be used.
The present invention is not limited to the embodiment described above and includes various modifications. For example, the embodiment has been described in detail for ease of understanding and is not necessarily limited to one having all of the described configurations. The present invention is widely applicable, for example, to the detection of objects having a plurality of optical characteristics, such as objects that partially emit light.
Each of the configurations, functions, processing units, processing means, and the like described above may be realized partly or wholly in hardware, for example by designing them as integrated circuits. They may also be realized in software, with a processor interpreting and executing programs that implement the respective functions. Information such as the programs, tables, and files that realize each function can be placed in a memory, a storage device such as a hard disk or SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines of a product are necessarily shown. In practice, almost all configurations may be regarded as mutually connected.
100 in-vehicle stereo camera device
101, 102 camera
103 image input interface
104 image processing unit
105 arithmetic processing unit
106 storage unit
107 CAN interface
108 control processing unit
109 internal bus
110 processing device
111 in-vehicle network CAN

Claims (7)

1. A processing device that performs recognition processing of an object in an image, wherein
the processing device acquires, from the image, a plurality of histograms in which the numbers of pixels brighter than a predetermined brightness are totaled, and performs a center estimation process of estimating, from the plurality of histograms, a position corresponding to the center of a circle of the object in the image.
2. The processing device according to claim 1, wherein
whether or not to perform the center estimation process is determined based on a determination of whether or not the image includes a feature of a circle.
3. The processing device according to claim 2, wherein
the center estimation process is performed when it is determined that the image does not include the feature of the circle.
4. The processing device according to claim 1, wherein
the image is an image captured with an exposure time or a shutter speed at which a light-emitting portion in the object can be detected in order to perform identification processing of the object.
5. The processing device according to claim 4, wherein
a circle detection process is performed to acquire information for specifying the position of the circle from another image captured with another exposure time longer than the exposure time or another shutter speed slower than the shutter speed, and
a processing region of the image for the center estimation process is smaller than a processing region of the other image for the circle detection process.
6. The processing device according to claim 5, wherein
the image is an image captured in a frame later than the other image.
7. The processing device according to claim 1, wherein
it is determined whether or not a region brighter than a predetermined brightness exists around the position corresponding to the center of the circle, and
when the bright region exists, information on the bright region is stored in a storage unit.