CN112840349A - Mark recognition device - Google Patents


Info

Publication number
CN112840349A
CN112840349A (application CN201980063009.8A)
Authority
CN
China
Prior art keywords
marker
sign
unit
dimensional object
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980063009.8A
Other languages
Chinese (zh)
Inventor
椎名雄飞
永崎健
岩崎启佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Showa Corp
Hitachi Astemo Ltd
Original Assignee
Hitachi Astemo Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Astemo Ltd filed Critical Hitachi Astemo Ltd
Publication of CN112840349A publication Critical patent/CN112840349A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions

Abstract

The present invention aims to provide a sign recognition device that suppresses erroneous recognition, as a road sign, of an object that closely resembles one, such as a speed compliance sticker. A sign recognition device (1) is characterized by comprising: at least one imaging unit (101, 102); a sign estimation unit (111) that estimates a sign candidate (603) from the images obtained by the imaging units (101, 102); a three-dimensional object recognition unit (112) that recognizes a three-dimensional object (602) included in the imaging range of the imaging units (101, 102); and a sign determination unit (113) that determines whether or not the sign candidate (603) is a road sign, based on information on the estimation of the sign candidate (603) by the sign estimation unit (111) and information on the recognition of the three-dimensional object (602) by the three-dimensional object recognition unit (112).

Description

Mark recognition device
Technical Field
The present invention relates to a sign recognition apparatus for recognizing a road sign.
Background
As social awareness of safe driving and automated driving increases, demand for object recognition and distance measurement in vehicle-mounted camera devices has grown.
The in-vehicle camera device can simultaneously measure the visual information of an image and the distance to an object, and can thereby grasp various objects around the car (people, cars, three-dimensional objects, white lines, road surfaces, signs, and the like) in detail, which contributes to improved safety in driving support.
A road sign is one of the objects to be recognized by the in-vehicle camera device. Generally, the sign recognition function is used for acceleration and deceleration of an automatically driven vehicle in cooperation with map information. The evaluation indexes of the advanced driver assistance system rating Euro NCAP (2016 to 2020 update) also include evaluation items for Speed Assist Systems (SAS), and their importance is increasing.
Patent document 1 discloses a device for recognizing a mark.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open Publication No. 2017-26430
Disclosure of Invention
Problems to be solved by the invention
A challenge for the road sign recognition function of a vehicle-mounted camera device is to improve accuracy under various driving conditions, such as urban areas and expressways: the recognition result should be output for genuine road signs and should not be output for objects other than road signs.
However, since the device described in patent document 1 recognizes signs from their appearance by image processing, an object whose appearance closely resembles a target road sign (such as a signboard installed in a downtown area or on an expressway, or a speed compliance sticker attached to the back of a large vehicle such as a truck) may be erroneously recognized as a road sign.
The present invention has been made to solve the above problems, and its object is to provide a sign recognition device that suppresses sign recognition operations that erroneously recognize an object closely resembling a road sign, such as a speed compliance sticker, as a road sign.
Means for solving the problems
In order to achieve the above object, a mark recognition device according to the present invention includes: at least one image pickup section; a marker estimation unit that estimates marker candidates from the image obtained by the imaging unit; a three-dimensional object recognition unit that recognizes a predetermined three-dimensional object included in an imaging range of the imaging unit; and a sign determination unit that determines whether or not the sign candidate is a road sign based on information on the estimation of the sign candidate by the sign estimation unit and information on the recognition of the three-dimensional object by the three-dimensional object recognition unit.
ADVANTAGEOUS EFFECTS OF INVENTION
According to the sign recognition device of the present invention, it is possible to suppress erroneous sign recognition results from being output for objects that closely resemble a target sign and that appear under various driving conditions such as urban areas and expressways.
Drawings
Fig. 1 is a block diagram showing a schematic configuration of a marker recognition device according to an embodiment of the present invention.
Fig. 2 is a flowchart showing a process of the camera device.
Fig. 3 is a flowchart showing the processing contents of the sign determination in fig. 2.
Fig. 4A is a diagram of an example of a road sign candidate set on the back of a bus.
Fig. 4B is an enlarged image of the marker candidate.
Fig. 4C is an enlarged image of the marker candidate.
Fig. 5 is a diagram of road sign candidates set on a road.
Fig. 6 is a diagram of another example of road sign candidates set on the back of a bus.
Fig. 7 is a top view of a road sign candidate disposed behind a bus.
Detailed Description
System configuration
Fig. 1 is a block diagram showing a schematic configuration of a marker recognition device according to an embodiment of the present invention. As shown in fig. 1, a marker recognition device 1 according to the present embodiment is a device that is mounted on a vehicle and recognizes an environment outside the vehicle based on image processing of a photographic subject area in front of the vehicle. The sign recognition device 1 has a function of recognizing white lines of roads, pedestrians, vehicles, other three-dimensional objects, traffic lights, road signs, illumination lamps, and the like, for example, by image processing to determine the authenticity of road signs. The mark recognition device 1 also has a function of adjusting braking, steering, and the like of the host vehicle on which the mark recognition device is mounted, but this function may be omitted.
The sign recognition device 1 is a road sign recognition system mounted on a vehicle, and includes a camera device 100, a CAN (Controller Area Network) 110, and an output device 114.
The camera device 100 includes: a left camera 101, a right camera 102, an image input interface 103, an image processing unit 104, an arithmetic processing unit 105, a storage unit 106, a CAN interface 107, a control processing unit 108, and a bus 109.
The image input interface 103, the image processing unit 104, the arithmetic processing unit 105, the storage unit 106, the CAN interface 107, and the control processing unit 108 may be configured by one or more computer units. In the present embodiment, a case where the camera device is configured by a single computer unit is exemplified, and the image processing unit 104, the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the image input interface 103, and the CAN interface 107 are connected to each other via the internal bus 109 of the camera device 100.
The camera device 100 is also connected to other computer units and to the output device 114 via the CAN 110. Each portion is described in order below.
The left camera 101 and the right camera 102 are disposed on the left and right sides to acquire image information. The left camera 101 and the right camera 102 are mounted on the vehicle with a set interval left and right so that predetermined regions are accommodated in the respective fields of view.
The image input interface 103 controls the imaging of the left camera 101 and the right camera 102, and receives captured images. The image data received through the image input interface 103 is transmitted to, for example, the image processing unit 104 and the arithmetic processing unit 105 through the bus 109.
The image processing unit 104 compares a first image obtained from the imaging element of the left camera 101 with a second image obtained from the imaging element of the right camera 102, performs corrections for each image, such as correction of device-specific variation of the imaging elements and noise interpolation, and stores the results in the storage unit 106. Further, mutually corresponding portions of the first and second images are found, and parallax information (the difference between the directions in which the same point is seen from the two viewpoints, that is, the angle between those directions) is calculated and stored in the storage unit 106.
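The correspondence search between the two images can be illustrated with a minimal block-matching sketch. This is not the patent's implementation: the function name `disparity_map`, the sum-of-absolute-differences cost, and all parameters are illustrative assumptions about how parallax (disparity) could be computed from a rectified stereo pair.

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=16):
    """Dense disparity by SAD block matching on a rectified
    left/right pair: for each left patch, search max_disp shifts
    in the right image and keep the lowest-cost one."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # best-matching shift
    return disp
```

Real in-vehicle implementations use hardware-accelerated matching with sub-pixel refinement; this sketch only shows the principle that a larger shift (disparity) means a nearer object.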
The arithmetic processing unit 105 includes a marker estimation unit 111, a three-dimensional object recognition unit 112, and a marker determination unit 113.
The marker estimation unit 111 estimates marker candidates from the images captured by the left camera 101 and the right camera 102 and stored in the storage unit 106. Here, the image that is the basis of the estimation of the marker candidate may be either the image captured by the left camera 101 or the image captured by the right camera 102, or the images captured by both cameras may be used. As for the estimation method, for example, the shape recognized by the image processing is compared with, for example, a database of road signs stored in the storage unit 106, and a road sign having a degree of matching with any one road sign of a certain degree or more is estimated as a candidate of the road sign.
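As a rough illustration of this estimation step (matching a detected shape against a stored database of road signs and keeping shapes whose degree of matching reaches a threshold), the following sketch scores matches with normalized cross-correlation. The function name, database layout, and threshold are hypothetical; the patent does not specify the matching measure.

```python
import numpy as np

def estimate_sign_candidates(shapes, database, threshold=0.8):
    """Compare each detected shape against road-sign templates;
    shapes whose best normalized-correlation score meets the
    threshold become sign candidates (name, score)."""
    candidates = []
    for shape in shapes:
        s = (shape - shape.mean()) / (shape.std() + 1e-9)
        best_name, best_score = None, -1.0
        for name, tmpl in database.items():
            t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
            score = float((s * t).mean())  # normalized cross-correlation
            if score > best_score:
                best_name, best_score = name, score
        if best_score >= threshold:
            candidates.append((best_name, best_score))
    return candidates
```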
The three-dimensional object recognition unit 112 recognizes a three-dimensional object included in the imaging ranges of the left camera 101 and the right camera 102 based on the parallax information calculated by the image processing unit 104. Specifically, a three-dimensional object is recognized by three-dimensional image processing from images obtained by the left camera 101 and the right camera 102. The three-dimensional object recognition unit 112 recognizes a predetermined three-dimensional object necessary for perceiving the environment around the vehicle, using the image and the parallax information (distance information for each point on the image) stored in the storage unit 106. The three-dimensional object to be recognized is a person, a vehicle, another obstacle, for example, a traffic light, a road sign, a tail light and a headlight of the vehicle. A database of these three-dimensional objects as objects (for example, standard data for identification) is stored in, for example, the storage unit 106, and the three-dimensional object recognition unit 112 recognizes the three-dimensional object based on the database from the clipped image of each object and parallax information thereof, for example, with respect to the input captured image.
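One simple way to realize the grouping of parallax information into three-dimensional object regions is a flood fill over neighbouring pixels with similar disparity. This is an illustrative sketch only (function name, tolerance, and output format are assumptions); the patent does not disclose the clustering method.

```python
import numpy as np

def detect_objects(disp, min_disp=1, tol=1):
    """Group neighbouring pixels with similar disparity into
    object regions (4-connected flood fill); return one bounding
    box plus mean disparity per region."""
    h, w = disp.shape
    seen = np.zeros((h, w), dtype=bool)
    objects = []
    for y in range(h):
        for x in range(w):
            if seen[y, x] or disp[y, x] < min_disp:
                continue
            stack, pixels = [(y, x)], []
            seen[y, x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                            and disp[ny, nx] >= min_disp
                            and abs(int(disp[ny, nx]) - int(disp[cy, cx])) <= tol):
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            objects.append({
                "box": (min(xs), min(ys), max(xs), max(ys)),
                "disparity": float(np.mean([disp[p] for p in pixels])),
            })
    return objects
```

Each region would then be checked against the stored standard data to decide whether it is a person, vehicle, or other predetermined three-dimensional object.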
The sign determination unit 113 determines whether or not the sign candidate estimated by the sign estimation unit 111 is a road sign based on the information on the estimation of the sign by the sign estimation unit 111 and the information on the recognition of the three-dimensional object by the three-dimensional object recognition unit 112. The recognition result of the road sign or the intermediate calculation result (for example, the calculation result of the sign estimation unit 111 or the three-dimensional object recognition unit 112) by the arithmetic processing unit 105 is appropriately stored in the storage unit 106. The decision algorithm will be described later.
The storage unit 106 is, for example, a memory, and stores information obtained by the image processing unit 104 and the arithmetic processing unit 105.
The CAN interface 107 is an input/output unit for the CAN 110; the operation information of the camera device 100 is output via the CAN interface 107 to the CAN 110 and, through it, to the vehicle control system of the vehicle. Specifically, when the sign determination unit 113 determines that a sign candidate is a road sign, the CAN interface 107 outputs the sign recognition result to the output device 114 via the CAN 110.
The control processing unit 108 monitors each processing unit for abnormal operation, and data transmission for errors, during the above operation of the camera device 100, thereby guarding against abnormal operation.
As described above, the determination result for a road sign and the like calculated by the camera device 100 is transmitted via the CAN 110 to the output device 114 or to another on-vehicle computing unit (for example, a unit performing vehicle control). The output device 114 is a device mounted in the cab that performs display output or sound output, such as a display, an indicator lamp, a buzzer, or a speaker, and visually or audibly notifies the occupant of the road sign based on information input from the camera device 100. The notification may report only the presence or absence of a road sign, or may also report its type; at least one of the visual and auditory output modes may be selected. When the braking or steering of the vehicle is controlled by another computer unit, the road sign information calculated by the camera device 100 is added to the basic information used for that unit's control.
Processing flow
Next, the processing flow of the camera device 100 will be explained based on fig. 2. Fig. 2 is a processing flowchart of the camera device 100. The in-vehicle camera device 200 shown in fig. 2 schematically represents the camera device 100 of fig. 1. First, image capture 201 by the left camera 101 and image capture 202 by the right camera 102 are performed. Then, for each of the image data 203 captured by the left camera 101 and the image data 204 captured by the right camera 102, image processing that absorbs the intrinsic characteristics of the imaging elements, and stereo image processing 205 that calculates the measured distance (parallax information) to the object photographed ahead, are performed. The stereo image processing 205 is executed by the image processing unit 104.
Next, three-dimensional object detection 206 is performed, which is image processing such as predetermined clipping of a three-dimensional image or a grayscale image. The result of the three-dimensional object detection 206 and the like are stored in the storage unit 106 of fig. 1. Then, based on the measured distance result 207 obtained by the stereo image processing 205 and the processing result of the three-dimensional object detection 206, object recognition 208 is performed as image processing (calculation processing for object recognition). The processes of the three-dimensional object detection 206 and the object recognition 208 are performed by the three-dimensional object recognition unit 112 of fig. 1. As described above, the three-dimensional objects recognized here are people, vehicles, and other obstacles, for example traffic lights, road signs, and tail lights and headlights of vehicles. Then, a storing process 215 is performed in which the object recognition result 216 (including information on the recognized three-dimensional objects and other parallax information not recognized as a three-dimensional object) obtained by the three-dimensional object detection 206 and the object recognition 208 is stored in the storage unit 106. In the present embodiment, the object recognition result 216 of the three-dimensional object recognition unit 112 is stored in the storage unit 106 and is output to another computer unit via the CAN 110 or to the sign determination unit 113.
In the camera device 100, a monocular image is input in parallel with the above-described object recognition processing, and sign recognition processing 209 is performed. The sign recognition processing 209 consists of the following four processes: sign detection 210, which extracts circular objects from an input image; sign discrimination 211, which determines the category of a circular object; sign tracking 212, which establishes correspondence between images; and sign determination 213, which makes an integrated determination over multiple frames.
That is, in the present embodiment, the outer shape of the road sign to be determined is assumed to be a circle, and in the sign detection 210 process, a shape whose estimated degree of conformity with a perfect circle is at or above a threshold is detected as a sign candidate by image processing. In the subsequent sign discrimination 211 process, the sign candidate detected in sign detection 210 is compared with, for example, a database (recognition dictionary) of signs stored in the storage unit 106, and the type of the sign candidate (which road sign it is a candidate for) is discriminated. In the sign tracking 212 process, sign candidates, which may be displaced in the camera's field of view as the vehicle travels, are tracked through image-processing correspondence between consecutive frames (camera images). The processes of sign detection 210, sign discrimination 211, and sign tracking 212 are executed by the sign estimation unit 111, for example.
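The circle-conformity test of sign detection 210 could, for example, measure how evenly the boundary of a candidate region spreads around its centroid: for a perfect circle, all boundary points lie at the same radius. The sketch below is one such hypothetical measure, not the patent's algorithm, and the tolerance value is an assumption.

```python
import numpy as np

def is_circular(mask, tol=0.15):
    """Decide whether a binary region is close enough to a circle:
    the spread of boundary-point distances from the centroid,
    relative to the mean radius, must stay below tol."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # boundary pixels: region pixels with at least one background neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    r = np.hypot(by - cy, bx - cx)
    return float(r.std() / r.mean()) <= tol
```

A filled disk passes this test while an elongated rectangle (e.g. a signboard) fails, which is the kind of filtering the circular-shape assumption provides.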
In the sign determination 213 process, it is determined from, for example, images of a plurality of frames whether or not a sign candidate is a true road sign. The sign determination 213 process is executed by the sign determination unit 113 and is characteristically executed based on the object recognition result 216 obtained by the object recognition 208 process of the three-dimensional object recognition unit 112. In the present embodiment, whether or not a candidate is a true road sign is determined based on the inclusion relationship and the positional relationship between the parallax information of the three-dimensional objects recognized by the three-dimensional object recognition unit 112 and the sign candidate estimated by the sign estimation unit 111. The details of the sign determination processing will be described later.
Finally, a storing process 214 is performed to store the sign recognition result obtained by the sign recognition processing 209 in the storage unit 106. The sign recognition result is stored in the storage unit 106 and is output via the CAN 110 to another computer unit or to the output device 114. In the present embodiment, the sign recognition result is not output to the output device 114 for an object that merely resembles a target sign (that is, an object estimated not to be a true road sign); only an object estimated to be a true road sign is output to the output device 114 and notified to the occupant.
Fig. 3 is a flowchart showing the processing contents of the sign determination in fig. 2. The flow of the sign determination 213 process of the present invention will be described below based on fig. 3.
(step 501)
The result of the sign tracking 212 in fig. 2 is input, and the discrimination results of the sign candidate in a plurality of images are obtained. The process then transitions to step 502.
(step 502)
The final determination result is output in consideration of the discrimination results in the acquired images. As a determination method, a majority vote over the per-image discrimination results may be taken, or the reliability obtained together with each discrimination result may additionally be taken into account. Further, the lower the travel speed of the host vehicle, the more per-image discrimination results can be acquired and the less the images blur, which is favorable. The process then transitions to step 503.
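A minimal sketch of the integration described in this step, a reliability-weighted majority vote over per-frame discrimination results, might look as follows. The function name and data layout are assumptions; the patent leaves the exact integration method open.

```python
from collections import defaultdict

def integrate_over_frames(frame_results):
    """Integrate per-frame discrimination results into one decision.
    Each entry is (sign_type, reliability); the type with the largest
    accumulated reliability wins. With all reliabilities equal, this
    reduces to a plain majority vote."""
    votes = defaultdict(float)
    for sign_type, reliability in frame_results:
        votes[sign_type] += reliability
    return max(votes, key=votes.get)
```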
(step 503)
As the first check, the overlap between the sign candidate and the three-dimensional objects recognized by the three-dimensional object recognition unit 112 is examined. When the sign candidate is entirely included in a three-dimensional object, it is determined to be unlikely to be a road sign installed on the road, and it is not output as a sign result. The process then ends.
On the other hand, when the sign candidate is not included in any three-dimensional object, it is judged that it may be a road sign installed on the road, and the process proceeds to step 504.
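For bounding boxes, the inclusion check of step 503 reduces to a containment test. The sketch below assumes axis-aligned boxes in image coordinates; the box representation and function names are illustrative, not taken from the patent.

```python
def contains(obj_box, cand_box):
    """True when the candidate box lies entirely inside the object
    box; boxes are (x_min, y_min, x_max, y_max) in image pixels."""
    ox1, oy1, ox2, oy2 = obj_box
    cx1, cy1, cx2, cy2 = cand_box
    return ox1 <= cx1 and oy1 <= cy1 and cx2 <= ox2 and cy2 <= oy2

def step_503(candidate_box, object_boxes):
    """Keep a sign candidate only if it is NOT fully contained in any
    recognized three-dimensional object (e.g. a sticker on the back
    of a bus is contained in the bus and is rejected)."""
    return not any(contains(b, candidate_box) for b in object_boxes)
```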
(step 504)
This step detects sign candidates that passed step 503 but are not actually road signs (that is, omissions of the determination in step 503), by determining whether a sign candidate lies outside a blocking object (for example, to its left or right in the image). A blocking object is parallax information of a real three-dimensional object that the three-dimensional object recognition unit 112 could not match against the three-dimensional object database and therefore did not recognize. A typical example is part of an object whose parallax information is input but which is not entirely reflected in the captured image, so that it cannot be recognized as a predetermined three-dimensional object, while that part itself is fully reflected in the image (the side surface 802 of the bus 801 in fig. 6, and the like). If such a blocking object lies between the host vehicle (camera) and the position of an assumed real road sign, the whole sign could not be reflected in the image, so the sign candidate is highly unlikely to be a true road sign. To discriminate this situation, it is determined whether the sign candidate is located outside the blocking object in the left-right direction of the image (whether the blocking object exists inward of the sign candidate in the left-right direction). When the sign candidate is located outside the blocking object, as in the example of figs. 6 and 7 described later, it is determined to be unlikely to be a road sign installed on the road, and it is not output as a sign result. That is, when a blocking object is located between the sign candidate and the host vehicle, the sign candidate is determined not to be a road sign. The process then ends.
On the other hand, when the sign candidate is not located outside a blocking object (when no blocking object lies inward of the sign candidate in the left-right direction), it is judged to be a road sign installed on the road, and the process proceeds to step 505.
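The left-right positional check of step 504 can be sketched as follows. One plausible reading (an assumption, since the patent describes the check only in words) is that a candidate is rejected when a blocking object lies laterally inward of it, between the candidate and the image centre on the same side, since a real roadside sign would then be hidden behind the blocker.

```python
def step_504(candidate_box, mask_boxes, image_width):
    """Reject a sign candidate when an unrecognized parallax region
    (a blocking object such as a bus side surface) lies between the
    candidate and the image centre in the left-right direction.
    Boxes are (x_min, y_min, x_max, y_max)."""
    centre = image_width / 2
    cx = (candidate_box[0] + candidate_box[2]) / 2
    for box in mask_boxes:
        mx = (box[0] + box[2]) / 2
        # blocker laterally inward of the candidate, on the same side
        if (cx < centre and cx < mx <= centre) or (cx > centre and centre <= mx < cx):
            return False
    return True
```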
(step 505)
The sign determination 213 outputs a determination result that the sign candidate is a road sign. The process then ends.
Example 1
Fig. 4A is a diagram showing an example of road sign candidates set on the back of a bus. As shown in fig. 4A, the bus 601 carries a sign candidate 603 and a sign candidate 604 on its rear surface. The rear surface of the bus 601 is recognized as a three-dimensional object 602 by the camera device 100.
In Europe, the rear surfaces of large vehicles such as trucks and buses carry speed compliance stickers indicating a maximum speed of 80 km/h on general roads (sign candidate 603) and 100 km/h on expressways (sign candidate 604).
As the enlarged images of the sign candidates 603 and 604 in figs. 4B and 4C show, these objects closely resemble speed limit signs, and conventional camera-device processing would output them as sign recognition results.
On the other hand, since the sign candidates 603 and 604 are included in the three-dimensional object 602, the sign recognition device 1 of the present invention determines in step 503 of fig. 3 that they are unlikely to be road signs installed on the road, and suppresses their output as sign recognition results. In the example of fig. 4A, the sign candidates 603 and 604 are thus discriminated not to be road signs, avoiding a false alarm to the occupant.
Example 2
Fig. 5 is a diagram showing a road sign candidate set on a road. As shown in fig. 5, the sign candidate 701 is included neither in the three-dimensional object 702 nor in the three-dimensional object 703. In this example, the process therefore proceeds from step 503 to step 504 in fig. 3. In step 504, whether the sign candidate 701 is located outside a blocking object is determined, but since no blocking object is detected, the condition is not satisfied; the sequence transitions to step 505, where the sign candidate 701 is output to the output device 114 as a true road sign set on the road.
Example 3
Fig. 6 is a diagram showing another example of a road sign candidate set on the back of a bus. In the example of fig. 6, a speed compliance sticker affixed to the rear surface 804 of the bus 801 is detected as a sign candidate 803. When a large vehicle such as the bus 801 travels in an adjacent lane, or is parked at the roadside, the rear surface 804 of the bus 801 partially leaves the imaging range of the camera device 100; because the entire rear surface 804 is not captured, the bus 801 is not recognized as a bus. In the example of fig. 6, the sign candidate 803 is included in the rear surface 804 of the bus 801, but since the rear surface 804 is not recognized as a predetermined three-dimensional object, the sign candidate 803, a speed compliance sticker, would pass the determination of step 503 of fig. 3 as a road sign.
However, the side surface 802 of the bus 801, although entirely captured in the image, is not recognized as a predetermined three-dimensional object and is therefore classified as a blocking object. In the example of fig. 6, since the sign candidate 803 is located outside the side surface 802 serving as the blocking object, the determination of step 504 of fig. 3 is satisfied, and erroneously outputting the sign candidate 803 to the output device 114 as a road sign is avoided.
Fig. 7 is a plan view of the road sign candidate set on the back of the bus (a schematic top view of the example of fig. 6). As shown in fig. 7, observing the positional relationship between the side surface 802 of the bus 801 and the sign candidate 803: if the sign candidate 803 were a road sign 805 installed on the road, the road sign 805 would be shielded by the bus 801 with its side surface 802 and could not be reflected in the captured image. Thus, as in step 504 of fig. 3, confirming the positional relationship between the sign candidate 803 and the side surface 802 prevents an erroneous sign recognition result from being output.
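The plan-view reasoning of fig. 7 amounts to a segment-intersection test: if the line of sight from the camera to the assumed sign position crosses the blocking surface, a real sign could not be visible there. The sketch below uses a standard 2-D orientation (cross-product) test; all names and coordinates are illustrative, not from the patent.

```python
def segment_intersects(p1, p2, q1, q2):
    """True when segments p1-p2 and q1-q2 properly intersect,
    decided by signed-area (orientation) tests."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def sign_would_be_hidden(camera, sign_pos, blocker_a, blocker_b):
    """Top-view check of fig. 7: if the line of sight from the camera
    to the assumed road-sign position crosses the blocking surface
    (the bus side), the candidate cannot be a real sign."""
    return segment_intersects(camera, sign_pos, blocker_a, blocker_b)
```

With the camera at the origin, a candidate projected to the roadside behind the bus side surface is flagged as hidden, while one clear of the blocker is not.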
Modification example
In the present embodiment, the camera device 100 is provided with the left camera 101 and the right camera 102, but the present invention is not limited to this: either the left camera 101 or the right camera 102 may be omitted, and three-dimensional object recognition may be performed by monocular processing. In the example of fig. 3, a flow having both steps 503 and 504 is described, but step 504 may be omitted, for example, when sufficient accuracy of the authenticity determination for the road sign is obtained in step 503 alone.
Description of the symbols
1 … sign recognition device, 100 … camera device, 101 … left camera, 102 … right camera, 103 … image input interface, 104 … image processing unit, 105 … arithmetic processing unit, 106 … storage unit, 107 … CAN interface, 108 … control processing unit, 109 … bus, 110 … CAN, 111 … sign estimation unit, 112 … three-dimensional object recognition unit, 113 … sign determination unit, 114 … output device, 200 … camera device, 201 … image capture, 202 … image capture, 203 … image data, 204 … image data, 205 … stereo image processing, 206 … three-dimensional object detection, 207 … measured distance result, 208 … object recognition, 209 … sign recognition processing, 210 … sign detection, 211 … sign discrimination, 212 … sign tracking, 213 … sign determination, 214 … storage processing, 215 … storage processing, 216 … object recognition result, 601 … bus, 602 … three-dimensional object, 603 … sign candidate, 604 … sign candidate, 701 … sign candidate, 702 … three-dimensional object, 703 … three-dimensional object, 801 … bus, 802 … side surface (blocking object), 803 … sign candidate, 804 … rear surface, 805 … road sign

Claims (7)

1. A sign recognition device, characterized by comprising:
at least one imaging unit;
a sign estimation unit that estimates a sign candidate from the image obtained by the imaging unit;
a three-dimensional object recognition unit that recognizes a predetermined three-dimensional object included in an imaging range of the imaging unit; and
a sign determination unit that determines whether or not the sign candidate is a road sign based on the information on the estimation of the sign candidate by the sign estimation unit and the information on the recognition of the three-dimensional object by the three-dimensional object recognition unit.
2. The logo recognition device according to claim 1,
the marker estimation unit estimates the marker candidate by monocular image processing based on the image obtained by the single imaging unit.
3. The logo recognition device according to claim 1,
the three-dimensional object recognition unit recognizes the three-dimensional object by three-dimensional image processing based on the images obtained by the plurality of imaging units.
4. The sign recognition device according to claim 3, wherein
the sign determination unit determines an inclusion relationship between the three-dimensional object recognized by the three-dimensional object recognition unit and the sign candidate acquired by the sign estimation unit, and determines that the sign candidate is not a road sign when the sign candidate is included in the three-dimensional object.
5. The sign recognition device according to claim 3, wherein
the sign determination unit determines a positional relationship between the sign candidate acquired by the sign estimation unit and an occluding object, which is parallax information of the three-dimensional object obtained in the stereo image processing, and determines that the sign candidate is not a road sign when the sign candidate is located outside the occluding object.
6. The sign recognition device according to claim 1, further comprising
an output device that outputs a sign recognition result of the sign determination unit.
7. The sign recognition device according to claim 6, wherein
the output device is a display or a buzzer.
CN201980063009.8A 2018-10-04 2019-09-19 Mark recognition device Pending CN112840349A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018189045 2018-10-04
JP2018-189045 2018-10-04
PCT/JP2019/036681 WO2020071133A1 (en) 2018-10-04 2019-09-19 Sign recognition device

Publications (1)

Publication Number Publication Date
CN112840349A true CN112840349A (en) 2021-05-25

Family

ID=70055884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980063009.8A Pending CN112840349A (en) 2018-10-04 2019-09-19 Mark recognition device

Country Status (3)

Country Link
JP (1) JP7145227B2 (en)
CN (1) CN112840349A (en)
WO (1) WO2020071133A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568236A (en) * 2010-12-08 2012-07-11 罗伯特·博世有限公司 Method and device for recognizing road signs and comparing with road signs information
CN103459227A (en) * 2011-04-08 2013-12-18 丰田自动车株式会社 Road shape inferring system
CN105740877A (en) * 2014-12-09 2016-07-06 比亚迪股份有限公司 Traffic sign recognition method and device, and vehicle
CN105989725A (en) * 2015-03-17 2016-10-05 本田技研工业株式会社 Traffic sign determination device
JP2017026430A (en) * 2015-07-21 2017-02-02 日本電信電話株式会社 Marker detection device, method, and program
JP2017091232A (en) * 2015-11-11 2017-05-25 日立オートモティブシステムズ株式会社 Object detector
CN107220583A (en) * 2016-03-21 2017-09-29 伊莱比特汽车有限责任公司 Method and apparatus for recognizing traffic sign
CN107458375A (en) * 2016-06-02 2017-12-12 丰田自动车株式会社 Vehicle limitation speed display device
JP2017228131A (en) * 2016-06-23 2017-12-28 日産自動車株式会社 Reflection target detection method and reflection target detection device
CN107949874A (en) * 2015-09-15 2018-04-20 马自达汽车株式会社 Landmark identification display device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010086268A (en) * 2008-09-30 2010-04-15 Mazda Motor Corp Display object recognition device for vehicle
US10402665B2 (en) * 2014-05-14 2019-09-03 Mobileye Vision Technologies, Ltd. Systems and methods for detecting traffic signs
JP6834964B2 (en) * 2015-09-30 2021-02-24 ソニー株式会社 Image processing equipment, image processing methods, and programs
JP6143831B2 (en) * 2015-11-12 2017-06-07 三菱電機株式会社 Driving assistance device
US10859395B2 (en) * 2016-12-30 2020-12-08 DeepMap Inc. Lane line creation for high definition maps for autonomous vehicles


Also Published As

Publication number Publication date
JPWO2020071133A1 (en) 2021-09-02
WO2020071133A1 (en) 2020-04-09
JP7145227B2 (en) 2022-09-30

Similar Documents

Publication Publication Date Title
JP4420011B2 (en) Object detection device
US10032085B2 (en) Method and system to identify traffic lights by an autonomous vehicle
US9180814B2 (en) Vehicle rear left and right side warning apparatus, vehicle rear left and right side warning method, and three-dimensional object detecting device
CN114375467B (en) System and method for detecting an emergency vehicle
US10691125B2 (en) Blinker judgment device and autonomous driving system
CN110400478A (en) A kind of road condition notification method and device
CN107408338A (en) Driver assistance system
JP2013057992A (en) Inter-vehicle distance calculation device and vehicle control system using the same
CN105378815A (en) Method and device for signalling traffic object that is at least partially visually concealed to driver of vehicle
CN110969843B (en) Traffic sign identification alarm method with inhibition strategy
US11893802B2 (en) Systems and methods for traffic light identification
JP5097681B2 (en) Feature position recognition device
KR101721442B1 (en) Avoiding Collision Systemn using Blackbox Rear Camera for vehicle and Method thereof
KR20160133386A (en) Method of Avoiding Collision Systemn using Blackbox Rear Camera for vehicle
CN115497282B (en) Information processing device, information processing method, and storage medium
CN113591673A (en) Method and device for recognizing traffic signs
CN112840349A (en) Mark recognition device
KR20150076532A (en) System for measuring distance and Method thereof
US20220237926A1 (en) Travel management device, travel management method, and recording medium
CN114333414A (en) Parking yield detection device, parking yield detection system, and recording medium
CN112026700A (en) Automobile anti-collision early warning method and system and storage medium
JP2019074776A (en) Reverse-run notification device
JP2014115799A (en) License plate determination device
CN112334944B (en) Mark recognition method and mark recognition device for camera device
US20230045706A1 (en) System for displaying attention to nearby vehicles and method for providing an alarm using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination