WO2021250934A1 - Image processing device and image processing method

Image processing device and image processing method

Info

Publication number
WO2021250934A1
Authority
WO
WIPO (PCT)
Prior art keywords
identification
image
class identification
unit
class
Prior art date
Application number
PCT/JP2021/004261
Other languages
French (fr)
Japanese (ja)
Inventor
拓紀 茂泉
宏治 土井
健 永崎
Original Assignee
Hitachi Astemo, Ltd. (日立Astemo株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Astemo, Ltd.
Priority to CN202180039409.2A (CN115769253A)
Priority to DE112021002170.2T (DE112021002170T5)
Priority to JP2022530019A (JP7323716B2)
Publication of WO2021250934A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • This disclosure relates to an image processing apparatus and an image processing method.
  • Inventions relating to an object detection device, an object detection method, and a program are conventionally known (see Patent Document 1). Patent Document 1 discloses an object detection device including a detection unit and a nonlinear processing unit (same document: abstract, claim 1, paragraph 0006).
  • The detection unit detects one or more object candidate regions from the captured image.
  • The nonlinear processing unit inputs at least the part of the captured image containing an object candidate region, or the whole image, to a neural network that simultaneously estimates the posture of the object in the candidate region and the distance to that object. Using the output of the neural network, the nonlinear processing unit then outputs object information that includes at least the distance to the object.
  • The conventional object detection device described in Patent Document 1 detects objects present in the shooting range from images taken by an in-vehicle camera and outputs object information including at least the distance to each detected object.
  • Objects detected by the object detection device include other vehicles, pedestrians, two-wheeled vehicles such as bicycles and motorcycles, traffic lights, signs, utility poles, signboards and other roadside installations around the vehicle on which the device is mounted, and, more generally, obstacles that may hinder the travel of the own vehicle (same document, paragraph 0008).
  • Detection of object candidate regions by the detection function of the object detection device is fundamentally based on judging the presence or absence of an object using a scanning rectangle, sized to match the object to be detected, moved over the image of the in-vehicle camera (same document, paragraph 0021). An image feature amount is then calculated for the image area inside the scanning rectangle, and a classifier learned in advance either judges whether another vehicle is present in the rectangle or outputs a likelihood that it contains another vehicle (same document, paragraph 0022).
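  • As a rough illustration of this prior-art scheme (not code from the cited document), the sketch below scans a fixed-size rectangle over an image and scores each window with a pre-trained classifier; `extract_features` and `clf` are hypothetical stand-ins for the image feature amount and the learned classifier.

```python
import numpy as np

def scan_for_vehicles(image, clf, extract_features,
                      win_h=64, win_w=64, stride=16):
    """Slide a scanning rectangle over `image` and score each window.

    `clf` is assumed to expose an sklearn-style decision_function whose
    score plays the role of the 'likelihood of being another vehicle'.
    """
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            window = image[y:y + win_h, x:x + win_w]
            score = clf.decision_function([extract_features(window)])[0]
            if score > 0.0:  # positive score: window likely contains a vehicle
                detections.append((x, y, win_w, win_h, float(score)))
    return detections
```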
  • Classifiers that judge whether an object to be detected is present in the scanning rectangle, as in the above object detection device, include two-class classifiers, which discriminate one class from everything else (for example, vehicle versus non-vehicle), and multi-class classifiers, which discriminate several objects at once, such as vehicles, pedestrians, and other objects.
  • However, with the development of advanced driver assistance systems (ADAS) and automated driving systems (ADS), the number of types of identification targets tends to increase further.
  • The present disclosure provides an image processing apparatus and an image processing method that can reduce the processing load of image processing that identifies a plurality of identification targets from an image, while improving identification accuracy.
  • One aspect of the present disclosure is an image processing apparatus having: a multi-class identification unit that performs multi-class identification processing on an image captured by an imaging device to identify a plurality of types of identification targets; a tracking processing unit that performs image tracking with each identification target identified by the multi-class identification processing as a tracking target, and calculates, based on the image at an earlier time, the predicted position of the tracking target in the image at a later time; and an identification unit that performs, on that predicted position in the image at the later time, two-class identification processing according to the type of the tracking target to identify the type of the tracking target.
  • According to this aspect, an image processing apparatus and an image processing method can be provided that reduce the processing load of image processing identifying a plurality of identification targets from an image and improve identification accuracy.
  • Brief description of the drawings: FIG. 1 is a block diagram showing an embodiment of the image processing apparatus according to the present disclosure; FIGS. 2A and 2B are flow diagrams showing an embodiment of the image processing method according to the present disclosure.
  • FIG. 1 is a block diagram showing an embodiment of the image processing apparatus according to the present disclosure.
  • The image processing apparatus IPA of the present embodiment is, for example, an apparatus that identifies a plurality of types of identification targets from images taken by an imaging device ID. More specifically, the image processing apparatus IPA is, for example, an apparatus that is mounted on a vehicle and identifies a plurality of different objects around the vehicle from images taken by an imaging device ID such as a monocular camera or a stereo camera.
  • The images captured by the imaging device ID are not particularly limited; for example, color images or grayscale images can be selected as appropriate.
  • In the example shown in FIG. 1, the imaging device ID is a stereo camera mounted on a vehicle.
  • The image processing apparatus IPA includes, for example, a processing unit 100 including a processing device such as a CPU, a storage unit 200 including storage devices such as a ROM and a RAM, and a computer program stored in the storage unit 200 and executed by the processing unit 100. Although not shown, the image processing apparatus IPA also includes, for example, an input/output unit for inputting and outputting signals.
  • The processing unit 100 of the image processing apparatus IPA has, for example, a signal processing unit 110 and a recognition processing unit 150.
  • The signal processing unit 110 includes, for example, an image acquisition unit 111 and a parallax calculation unit 112.
  • The recognition processing unit 150 includes, for example, a first recognition processing unit 120, a second recognition processing unit 130, and an output processing unit 140.
  • The first recognition processing unit 120 includes, for example, an image area selection unit 121 and a multi-class identification unit 122.
  • The second recognition processing unit 130 includes, for example, a tracking processing unit 131 and an identification unit 132.
  • The identification unit 132 includes, for example, a plurality of two-class identification units 132a and 132b.
  • Each unit of the processing unit 100 is, for example, a functional block realized by the processing unit 100 executing the computer program stored in the storage unit 200.
  • Each of these units may be realized by its own dedicated processing device, or a plurality of functional blocks may be realized by one processing device.
  • The storage unit 200 may be composed of, for example, one storage device or a plurality of storage devices of one or more types.
  • In the example shown in FIG. 1, the image processing apparatus IPA includes the storage unit 200, but the image processing apparatus IPA may instead be connected to an external storage unit 200. Further, in the example shown in FIG. 1, the image processing apparatus IPA is connected to an external imaging device ID, but the image processing apparatus IPA may include the imaging device ID. Also, in the example shown in FIG. 1, the identification unit 132 has two two-class identification units 132a and 132b, but it may have three or more two-class identification units.
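  • As a structural sketch only (the names mirror the reference numerals of FIG. 1; none of this code appears in the source), one frame of the processing flow could be wired together as follows:

```python
def process_frame(ipa, right_image, left_image):
    """One processing cycle, sketching FIG. 1 / FIG. 2A.

    `ipa` bundles the functional blocks of the processing unit 100;
    every attribute and method name here is illustrative.
    """
    disparity = ipa.parallax_calculation(right_image, left_image)     # P1 (112)
    areas = ipa.image_area_selection(disparity)                       # P2 (121)
    registry = ipa.multi_class_identification(right_image, areas)     # P4 (122)
    results = []
    for target in ipa.tracking(right_image, registry):                # P6a (131)
        two_class = ipa.two_class_identifiers[target.type_candidate]  # 132a/132b
        if two_class(right_image, target.predicted_position):         # P6c/P6h
            results.append(target)                                    # output 208
    return results
```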
  • The identification targets 202 that the image processing apparatus IPA identifies from the images captured by the imaging device ID are, for example, stored in the storage unit 200 in advance.
  • The identification targets 202 include a plurality of types, such as other vehicles, pedestrians, moving objects, obstacles, roads, road markings, road signs, and traffic signals around the own vehicle on which the image processing apparatus IPA is mounted.
  • Furthermore, the other vehicles to be identified by the image processing apparatus IPA may include a plurality of types, such as light vehicles (e.g., bicycles), motorized bicycles, motorcycles, light automobiles, standard-sized automobiles, large vehicles, buses, and trucks.
  • The other vehicles may also include types based on their position, posture, traveling direction, speed, acceleration, angular velocity, and the like relative to the own vehicle, such as preceding vehicles, following vehicles, oncoming vehicles, crossing vehicles, right-turning vehicles, and left-turning vehicles.
  • Next, referring to FIGS. 2A and 2B, an embodiment of the image processing method according to the present disclosure is described together with the operation of the image processing apparatus IPA shown in FIG. 1. FIGS. 2A and 2B are flow diagrams of the image processing method IPM of the present embodiment using the image processing apparatus IPA shown in FIG. 1.
  • The imaging device ID captures images, for example, at a predetermined cycle and with a predetermined imaging time.
  • The image processing apparatus IPA processes each image captured at the predetermined cycle by the imaging device ID using the image processing method IPM shown in FIG. 2A.
  • When the image processing apparatus IPA starts the image processing method IPM shown in FIG. 2A, it first executes the image acquisition process P1.
  • In the image acquisition process P1, the image acquisition unit 111, for example, acquires an image from the imaging device ID and stores it in the storage unit 200 as part of the image information 201.
  • When the imaging device ID is a stereo camera, the image information 201 includes, for example, the image information of the right image taken by the right camera and of the left image taken by the left camera.
  • Also in the image acquisition process P1, the parallax calculation unit 112 takes, for example, the right image and the left image as input and searches for the region in the left image that is similar to a specific region in the right image, thereby obtaining the parallax.
  • The parallax calculation unit 112 outputs a parallax image by performing this process over the entire area of the right image.
  • The parallax calculation unit 112 stores the parallax image in the storage unit 200 as part of the image information 201.
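  • A minimal brute-force sketch of this parallax step, assuming rectified grayscale NumPy arrays (in practice a dedicated stereo matcher such as OpenCV's StereoBM would be used):

```python
import numpy as np

def disparity_image(right, left, block=9, max_disp=64):
    """For each block of the right image, search the same row of the left
    image for the most similar block (sum of absolute differences) and
    record the horizontal shift as the parallax."""
    h, w = right.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = right[y - half:y + half + 1,
                        x - half:x + half + 1].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(0, min(max_disp, w - half - 1 - x) + 1):
                cand = left[y - half:y + half + 1,
                            x + d - half:x + d + half + 1].astype(np.int32)
                sad = int(np.abs(ref - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```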
  • Next, the image processing apparatus IPA executes, for example, the image area selection process P2.
  • In the image area selection process P2, the image processing apparatus IPA selects, from the image captured by the imaging device ID, image areas that may contain any of the plurality of types of identification targets.
  • More specifically, the first recognition processing unit 120 acquires, for example, the parallax image output by the parallax calculation unit 112, either from the parallax calculation unit 112 itself or from the image information 201 stored in the storage unit 200.
  • In the image area selection process P2, the image area selection unit 121 takes, for example, the parallax image as input, groups parallax values in the parallax image that are adjacent to and close to one another, and generates a rectangular frame surrounding each grouped set of parallax values. The image area selection unit 121 then selects each rectangular frame whose vertical and horizontal sizes are equal to or larger than predetermined sizes as an image area that may contain one of the plurality of types of identification targets.
  • The image area selection unit 121 outputs the position information of each selected image area, that is, its coordinates on the parallax image together with its height and width, and stores it in the storage unit 200 as an image area 203 that may contain an identification target. Here, when a plurality of image areas are selected from the parallax image, the image area selection unit 121 assigns each image area an identification number N from 1 to n (a natural number) and stores them in the storage unit 200 as the image areas 203.
  • The image area selection unit 121 may also estimate, for example from the aspect ratio of the rectangular frame, the type of identification target that the enclosed image area of the parallax image may contain, and select only image areas that may contain a specific type of identification target.
  • When the imaging device ID is a monocular camera, the image area selection unit 121 may select the image areas that may contain any of the plurality of types of identification targets from the images of the monocular camera.
  • In that case, the image area selection unit 121 may, for example, use the detection results of objects by a millimeter-wave radar mounted on the vehicle to select the image areas. Alternatively, the image area selection unit 121 may, for example, designate a specific area of the image of the imaging device ID in advance and perform a raster scan of that area with a window of arbitrary size to select the image areas that may contain any of the plurality of types of identification targets.
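  • One possible sketch of the parallax-grouping variant of process P2, using connected-component labeling (SciPy is an assumed implementation choice; quantizing into bins is one way of treating "close" parallax values as belonging together):

```python
import numpy as np
from scipy import ndimage

def select_image_areas(disp, n_bins=32, min_h=20, min_w=20):
    """Group mutually adjacent, similar parallax values and keep the
    bounding rectangles at least min_h x min_w pixels, numbered N = 1..n."""
    areas = []
    edges = np.linspace(disp.min(), disp.max() + 1e-6, n_bins)
    bins = np.digitize(disp, edges)
    for b in np.unique(bins):
        labels, _ = ndimage.label(bins == b)
        for sl in ndimage.find_objects(labels):
            h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
            if h >= min_h and w >= min_w:
                areas.append({"N": len(areas) + 1,
                              "x": sl[1].start, "y": sl[0].start,
                              "w": w, "h": h})
    return areas
```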
  • Next, the multi-class identification unit 122 identifies, for example, the plurality of types of identification targets from the image areas selected by the image area selection unit 121.
  • More specifically, the multi-class identification unit 122 sequentially executes the multi-class identification process P4 for each image area with identification number N from 1 to n selected in the above selection process P2.
  • The multi-class identification process P4 includes, for example, a registration count determination process P4a, a multi-class identification process P4b, a type determination process P4c, type candidate registration processes P4d and P4e, and an increment process P4f.
  • In the registration count determination process P4a, the multi-class identification unit 122 determines whether the number of tracking targets registered for the tracking process P6a, described later, is less than an upper limit.
  • When the multi-class identification unit 122 determines in the registration count determination process P4a that the number of registrations is not less than the upper limit (NO), that is, the number of registrations has reached the upper limit, it does not execute the processing from the multi-class identification process P4b onward and proceeds to the next process P5.
  • The multi-class identification unit 122 executes the multi-class identification process P4b when, for example, it determines in the registration count determination process P4a that the number of registrations is less than the upper limit (YES).
  • In the multi-class identification process P4b, the multi-class identification unit 122 identifies, from the image area, the plurality of types of identification targets stored in the storage unit 200 as the identification targets 202.
  • More specifically, the multi-class identification unit 122 evaluates, for example, the similarity between an image area 203 selected by the image area selection unit 121 and stored in the storage unit 200 and the multi-class identification learning data 204 stored in the storage unit 200.
  • The multi-class identification learning data 204 is, for example, learning data obtained by machine learning on a large number of input images of the automobiles and motorcycles to be identified and images of other objects. That is, the multi-class identification unit 122 performs the multi-class identification processing using multi-class identification learning data 204 obtained by machine learning with the plurality of types of identification targets as input.
  • In other words, the multi-class identification unit 122 performs the multi-class identification processing using multi-class identification learning data 204 machine-learned with at least automobiles, the identification target of the first type, and motorcycles, the identification target of the second type, as input.
  • In the following, a case in which the types of identification targets of the multi-class identification process P4b are two, automobiles and motorcycles, is described; however, the types of identification targets and their number are not particularly limited.
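  • As one hypothetical realization of the multi-class identification learning data 204 (the source does not specify a learning method), an off-the-shelf multi-class classifier could be fit on labeled feature vectors of automobile, motorcycle, and other images:

```python
from sklearn.svm import SVC

def learn_multiclass_data(features, labels):
    """Machine-learn data 204 from many input images; `features` holds one
    feature vector per image and `labels` contains strings such as
    "car", "motorcycle", or "other". SVC is an assumed choice."""
    clf = SVC(probability=True)
    clf.fit(features, labels)
    return clf
```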
  • In the multi-class identification process P4b, the multi-class identification unit 122 calculates, for example, an evaluation value of the similarity between the image area 203 and the multi-class identification learning data 204. More specifically, the multi-class identification unit 122 calculates, for example, evaluation values of the similarity between the image area 203 and the multi-class identification learning data 204 for the identification target (i) of the first type, an automobile, and for the identification target (ii) of the second type, a motorcycle.
  • Next, the multi-class identification unit 122 executes, based on the similarity evaluation values, the process P4c of determining the type of identification target present in the image area 203.
  • In the type determination process P4c, the multi-class identification unit 122 identifies the type of identification target present in the image area 203 when, for example, the similarity evaluation value is equal to or higher than a predetermined threshold. More specifically, when the evaluation value of the similarity between the image area 203 and the automobile, the identification target (i) of the first type, is equal to or higher than the predetermined threshold, the multi-class identification unit 122 identifies from the image area 203 the automobile, that is, the identification target (i) of the first type, and its position information.
  • In that case, the multi-class identification unit 122 executes the process P4d of assigning a registration number to the automobile identified from the image area 203 and registering it in the storage unit 200 as a tracking target and type candidate 205.
  • Similarly, when the evaluation value of the similarity between the image area 203 and the motorcycle, the identification target (ii) of the second type, is equal to or higher than the predetermined threshold, the multi-class identification unit 122 identifies from the image area 203 the motorcycle, that is, the identification target (ii) of the second type, and its position information, and executes the process P4e of assigning a registration number to that motorcycle and registering it in the storage unit 200 as a tracking target and type candidate 205. After the end of the process P4d or the process P4e, the multi-class identification unit 122 executes, for example, the increment process P4f.
  • On the other hand, when the evaluation values of the similarity between the image area 203 and both the first- and second-type identification targets (i) and (ii) are below the predetermined threshold, the multi-class identification unit 122 determines that the image area 203 contains no identification target. In this case as well, the multi-class identification unit 122 executes, for example, the increment process P4f.
  • In the increment process P4f, the multi-class identification unit 122 increments, for example, the identification number N of the image area 203 to be processed in the next multi-class identification process P4 to N + 1.
  • The multi-class identification unit 122 repeatedly executes the multi-class identification process P4, including the above processes P4a to P4f, until the incremented identification number N of the image area 203 exceeds the number n of image areas 203 selected in the above selection process P2.
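  • Put together, the P4 loop might look like the following sketch, where `multiclass_clf` stands in for a classifier built from the learning data 204 and the threshold and registration cap are illustrative values:

```python
def multi_class_identification(image_areas, image, multiclass_clf, registry,
                               threshold=0.5, max_registrations=32):
    """Processes P4a-P4f over the image areas N = 1..n.

    `multiclass_clf(patch)` is assumed to return similarity evaluation
    values per type, e.g. {"car": 0.8, "motorcycle": 0.1}; `registry`
    accumulates tracking targets and type candidates (storage 205)."""
    for area in image_areas:                     # P4f: advance N each pass
        if len(registry) >= max_registrations:   # P4a: registration cap
            break
        patch = image[area["y"]:area["y"] + area["h"],
                      area["x"]:area["x"] + area["w"]]
        scores = multiclass_clf(patch)           # P4b: similarity evaluation
        best_type = max(scores, key=scores.get)
        if scores[best_type] >= threshold:       # P4c: type determination
            registry.append({"R": len(registry) + 1,  # P4d/P4e: register
                             "type_candidate": best_type,
                             "position": (area["x"], area["y"],
                                          area["w"], area["h"])})
    return registry
```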
  • After the multi-class identification process P4 has been completed for all the image areas 203 selected in the above selection process P2, the tracking processing unit 131 executes, for example, the process P5 of setting to 1 the registration number R of the tracking target to be processed in the identification process P6, described later. The tracking processing unit 131 then executes the identification process P6, which calculates the predicted position of each tracking target registered as a tracking target and type candidate 205 in the storage unit 200 and determines the type of the tracking target.
  • More specifically, the tracking processing unit 131 sequentially executes, for example, the identification process P6 for each tracking target with registration number R from 1 to m (a natural number) registered as a tracking target and type candidate 205 in the storage unit 200 in the above type candidate registration processes P4d and P4e.
  • The identification process P6 includes, for example, a tracking process P6a, a type candidate determination process P6b, two-class identification processes P6c and P6h, type determination processes P6d and P6i, registration processes P6e and P6j, predicted position calculation processes P6f and P6k, a registration deletion process P6g, and an increment process P6l.
  • In the tracking process P6a, the tracking processing unit 131 performs image tracking with the identification target identified by the multi-class identification process P4 as the tracking target, and calculates, based on the image at the earlier time, the predicted position of the tracking target in the image at the later time.
  • More specifically, the tracking processing unit 131 calculates, for example, the predicted position of the tracking target at the current time based on the position information of the tracking target at the previous time.
  • For this, the tracking processing unit 131 uses, for example, methods such as template matching, which searches for the tracking target at the current time using the image of the tracking target at the previous time as a template, and optical flow, which estimates the amount of movement of each pixel in the tracking target area. The tracking processing unit 131 then predicts the movement of the tracking target at the current time from the position of the tracking target at the previous time and the past amounts of movement of the tracking target.
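  • A sketch of the tracking step with OpenCV template matching as the concrete search method (the paragraph above names template matching and optical flow only as examples):

```python
import cv2

def track(prev_patch, current_image, prev_pos, past_velocity=(0, 0)):
    """Tracking process P6a: find the previous frame's target patch in the
    current frame, falling back on the motion-based prediction."""
    (px, py), (vx, vy) = prev_pos, past_velocity
    # Prediction from the previous position and the past amount of movement.
    predicted = (int(px + vx), int(py + vy))
    # Refine by template matching (a real system would restrict the search
    # to a window around `predicted` instead of scanning the whole frame).
    result = cv2.matchTemplate(current_image, prev_patch,
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if score > 0.5 else predicted
```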
  • In the type candidate determination process P6b, the tracking processing unit 131 refers, for example, to the tracking targets and type candidates 205 registered in the storage unit 200 and determines whether the type of the tracking target identified by the multi-class identification process P4 is the automobile of the first type (i) or the motorcycle of the second type (ii).
  • When the type candidate of the tracking target is the identification target (i) of the first type, the identification unit 132 performs, on the predicted position in the image at the later time, two-class identification processing according to the type of the tracking target to identify the type of the tracking target. More specifically, the identification unit 132 performs the two-class identification process P6c using the two-class identification unit 132a corresponding to the identification target (i) of the first type.
  • In the two-class identification process P6c, the two-class identification unit 132a performs two-class identification processing on the predicted position of the identification target and its surroundings using the two-class identification learning data 206 stored in the storage unit 200, to identify the type of the tracking target.
  • The two-class identification learning data 206 is two-class identification learning data for automobiles, obtained by machine learning on a large number of input images of the identification target (i) of the first type among the plurality of identification targets of the image processing apparatus IPA, that is, automobiles, and of the other identification targets.
  • More specifically, the two-class identification unit 132a calculates, for example, an evaluation value of the similarity between the image area at and around the predicted position of the tracking target and the two-class identification learning data 206.
  • Next, the two-class identification unit 132a executes the type determination process P6d.
  • In the type determination process P6d, when the similarity evaluation value is, for example, equal to or higher than a predetermined threshold, the two-class identification unit 132a determines that the type of the tracking target is the identification target (i) of the first type, that is, an automobile, and executes the registration process P6e.
  • In the registration process P6e, the two-class identification unit 132a registers, for example, the automobile, the identification target (i) of the first type, as the type of the tracking target in the output information 208 of the storage unit 200. Also in the registration process P6e, the two-class identification unit 132a registers the predicted position of the tracking target in the output information 208 of the storage unit 200 as the position of the automobile, the identification target (i) of the first type. Next, the two-class identification unit 132a executes, for example, the predicted position calculation process P6f.
  • In the predicted position calculation process P6f, the two-class identification unit 132a calculates the moving speed of the tracking target, for example, by taking the difference between the position information of the tracking target at the previous time and at the current time and dividing that difference by the frame imaging interval. The two-class identification unit 132a then calculates, for example, the predicted position of the tracking target at the next time based on the position information of the tracking target at the current time and its moving speed. The predicted position calculated here is used, for example, in the tracking process P6a at the next time.
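  • This predicted position calculation is a constant-velocity extrapolation; a minimal sketch (treating positions as 2D points and the frame imaging interval as seconds):

```python
def predict_next_position(pos_prev, pos_now, frame_interval):
    """P6f/P6k: moving speed = position difference / frame imaging interval;
    predicted next position = current position + speed * interval."""
    vx = (pos_now[0] - pos_prev[0]) / frame_interval
    vy = (pos_now[1] - pos_prev[1]) / frame_interval
    return (pos_now[0] + vx * frame_interval,
            pos_now[1] + vy * frame_interval)
```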
  • On the other hand, when the similarity evaluation value is below the predetermined threshold, the two-class identification unit 132a determines that the tracking target is not the identification target (i) of the first type, and the registration deletion process P6g is executed.
  • In the registration deletion process P6g, the two-class identification unit 132a deletes, for example, the corresponding tracking target and type candidate 205 registered in the storage unit 200.
  • When the type candidate of the tracking target is the identification target (ii) of the second type, the identification unit 132 performs, on the predicted position in the image at the later time, two-class identification processing according to the type of the tracking target to identify the type of the tracking target. More specifically, the identification unit 132 performs the two-class identification process P6h using the two-class identification unit 132b corresponding to the identification target (ii) of the second type.
  • In the two-class identification process P6h, the two-class identification unit 132b performs two-class identification processing on the predicted position of the identification target and its surroundings using the two-class identification learning data 207 stored in the storage unit 200, to identify the type of the tracking target.
  • The two-class identification learning data 207 is two-class identification learning data for motorcycles, obtained by machine learning on a large number of input images of the identification target (ii) of the second type among the plurality of identification targets of the image processing apparatus IPA, that is, motorcycles, and of the other identification targets.
  • The two-class identification learning data 206 and the two-class identification learning data 207 may be learned by different machine learning methods.
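  • Each of the two-class identification learning data sets 206 and 207 could accordingly be realized as a binary classifier trained on one identification target versus all others; a sketch under the same assumed sklearn setting as above:

```python
from sklearn.linear_model import LogisticRegression

def learn_two_class_data(features, labels, positive_type):
    """Learn data 206 (positive_type="car") or 207 (positive_type=
    "motorcycle"): one identification target versus all other targets."""
    binary_labels = [1 if label == positive_type else 0 for label in labels]
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, binary_labels)
    return clf
```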
  • More specifically, the two-class identification unit 132b calculates, for example, an evaluation value of the similarity between the image area at and around the predicted position of the tracking target and the two-class identification learning data 207.
  • Next, the two-class identification unit 132b executes the type determination process P6i.
  • In the type determination process P6i, when the similarity evaluation value is, for example, equal to or higher than a predetermined threshold, the two-class identification unit 132b determines that the type of the tracking target is the identification target (ii) of the second type, that is, a motorcycle, and executes the registration process P6j.
  • In the registration process P6j, the two-class identification unit 132b registers, for example, the motorcycle, the identification target (ii) of the second type, as the type of the tracking target in the output information 208 of the storage unit 200. Also in the registration process P6j, the two-class identification unit 132b registers the predicted position of the tracking target in the output information 208 of the storage unit 200 as the position of the motorcycle, the identification target (ii) of the second type. Next, the two-class identification unit 132b executes, for example, the predicted position calculation process P6k.
  • In the predicted position calculation process P6k, the two-class identification unit 132b calculates the moving speed of the tracking target, for example, by taking the difference between the position information of the tracking target at the previous time and at the current time and dividing that difference by the frame imaging interval. The two-class identification unit 132b then calculates, for example, the predicted position of the tracking target at the next time based on the position information of the tracking target at the current time and its moving speed. The predicted position calculated here is used, for example, in the tracking process P6a at the next time.
  • On the other hand, when the similarity evaluation value is below the predetermined threshold, the two-class identification unit 132b determines that the tracking target is not the identification target (ii) of the second type, and the registration deletion process P6g is executed.
  • In the registration deletion process P6g, the two-class identification unit 132b deletes, for example, the corresponding tracking target and type candidate 205 registered in the storage unit 200.
  • After the identification process for one tracking target is completed, the tracking processing unit 131 executes, for example, the increment process P6l.
  • In the increment process P6l, the tracking processing unit 131 increments, for example, the registration number R of the tracking target and type candidate 205 to be processed in the next identification process P6 to R + 1.
  • The tracking processing unit 131 repeatedly executes the identification process P6, from the above tracking process P6a through the increment process P6l, until the incremented registration number R of the tracking target and type candidate 205 exceeds the number m of tracking targets and type candidates 205 registered in the multi-class identification process P4.
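  • The per-target dispatch of the identification process P6 could be sketched as below, with `two_class_clfs` mapping each type candidate to its binary classifier (132a for automobiles, 132b for motorcycles); all names and thresholds are illustrative:

```python
def identification_process(registry, image, two_class_clfs, output_info,
                           threshold=0.5):
    """P6 for R = 1..m: two-class identification at each tracking target's
    predicted position, then confirm (P6d/P6i, P6e/P6j) or delete (P6g)."""
    for target in list(registry):                # P6l: advance R each pass
        clf = two_class_clfs[target["type_candidate"]]  # P6b: 132a or 132b
        x, y, w, h = target["position"]          # predicted position (P6a)
        patch = image[y:y + h, x:x + w]          # predicted area + surroundings
        if clf(patch) >= threshold:              # P6c/P6h: two-class check
            output_info.append({"type": target["type_candidate"],
                                "position": target["position"]})  # 208
        else:
            registry.remove(target)              # P6g: registration deletion
    return output_info
```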
  • After the identification process P6 has been completed for all the registered tracking targets, the predicted position and type of each tracking target are output from the second recognition processing unit 130 or the storage unit 200 to the output processing unit 140 as the output information 208 shown in FIG. 1.
  • The output processing unit 140 outputs the output information 208 to, for example, a vehicle control device constituting an ADS, an ADAS, or the like, where the output information 208 is used in signal generation processing for automated driving or advanced driver assistance.
  • In recent years, ADAS and ADS that use an imaging device ID such as an in-vehicle camera together with external recognition sensors such as radar have been attracting attention.
  • In image processing that identifies objects from the images of the imaging device ID, coping with an increase in the number of types of identification targets requires improving identification accuracy by, for example, combining a large number of classifiers or deepening the hierarchy of each classifier.
  • However, when the number of classifiers or the depth of their hierarchy is increased, the load of the object identification processing increases, and the processing time may no longer fit within the required time.
  • In contrast, the image processing apparatus IPA of the present embodiment has the multi-class identification unit 122, the tracking processing unit 131, and the identification unit 132.
  • The multi-class identification unit 122 performs the multi-class identification process P4 on the image captured by the imaging device ID to identify a plurality of types of identification targets.
  • The tracking processing unit 131 performs image tracking with each identification target identified by the multi-class identification process P4 as a tracking target, and calculates, based on the image at the earlier time, the predicted position of the tracking target in the image at the later time.
  • The identification unit 132 performs, on the predicted position in the image at the later time, the two-class identification process P6c or P6h according to the type of the tracking target to identify the type of the tracking target.
  • Likewise, the image processing method IPM of the present embodiment performs the multi-class identification process P4 on the image captured by the imaging device ID to identify a plurality of types of identification targets. Further, the image processing method IPM performs image tracking with each identification target identified by the multi-class identification process P4 as a tracking target, and calculates, based on the image at the earlier time, the predicted position of the tracking target in the image at the later time. The image processing method IPM then performs, on the predicted position in the image at the later time, the two-class identification process P6c or P6h according to the type of the tracking target to identify the type of the tracking target.
  • With the image processing apparatus IPA and the image processing method IPM of the present embodiment, it is therefore possible to reduce the processing load of the image processing that identifies a plurality of identification targets from an image and to improve the identification accuracy. More specifically, compared with identifying a plurality of identification targets from an image using only a multi-class classifier or only two-class classifiers, the processing load of the image processing can be reduced.
  • The reason is that using the multi-class identification processing of the multi-class identification unit 122 together with the two-class identification processing of the identification unit 132 allows the hierarchy of the multi-class identification processing to be made shallower than when multi-class identification processing is used alone, which reduces the processing load. Even if making the hierarchy of the multi-class identification processing shallower degrades its identification accuracy, the identification accuracy can be improved by taking the identification targets identified by the multi-class identification processing as tracking targets and performing two-class identification processing on those tracking targets.
  • Moreover, since the identification target identified by the multi-class identification process P4 is set as the tracking target, and the two-class identification processing according to the type of the tracking target is performed on the predicted position of the tracking target in the image at the later time, the two-class identification processing can be limited to an extremely restricted image area and to specific types. This makes it possible to reduce the processing amount and hence the processing load of the two-class identification processing, and to catch misrecognitions of the multi-class identification processing, improving the identification accuracy for the plurality of types of identification targets.
  • Also, in the image processing apparatus IPA of the present embodiment, the multi-class identification unit 122 performs the multi-class identification process P4 using the multi-class identification learning data 204 obtained by machine learning with a plurality of types of identification targets as input. With this configuration, the multi-class identification processing that identifies a plurality of types of identification targets from an image can be performed with high accuracy based on the results of machine learning.
  • More specifically, in the image processing apparatus IPA of the present embodiment, the multi-class identification unit 122 performs the multi-class identification process P4 using multi-class identification learning data 204 machine-learned with at least the identification target (i) of the first type and the identification target (ii) of the second type as input, where the identification target (i) of the first type is an automobile and the identification target (ii) of the second type is a motorcycle.
  • Also, in the image processing apparatus IPA of the present embodiment, the identification unit 132 has the plurality of two-class identification units 132a and 132b, and each of the two-class identification units 132a and 132b performs its two-class identification processing using the two-class identification learning data 206 or 207 obtained by machine learning with one of the plurality of types of identification targets as input. With this configuration, each of the two-class identification unit 132a and the two-class identification unit 132b can accurately determine whether or not the tracking target is one specific type among the plurality of types of identification targets.
  • Also, in the image processing apparatus IPA of the present embodiment, the number of two-class identification units 132a and 132b is equal to the number of types identified by the multi-class identification unit 122. More specifically, in the image processing apparatus IPA of the present embodiment, the multi-class identification unit 122 identifies two types, the identification target (i) of the first type and the identification target (ii) of the second type, and the identification unit 132 accordingly has the two two-class identification units 132a and 132b. With this configuration, two-class identification processing can be performed on the identification targets of all the types identified by the multi-class identification unit 122, improving the identification accuracy of the types of the identification targets.
  • Also, the image processing apparatus IPA of the present embodiment has the image area selection unit 121, which selects from the image the image areas that may contain any of the plurality of types of identification targets, and the multi-class identification unit 122 identifies the plurality of types of identification targets from the image areas selected by the image area selection unit 121. With this configuration, the multi-class identification processing of the multi-class identification unit 122 is performed only on limited image areas, which reduces the processing amount and the processing load of the multi-class identification processing.
  • As described above, according to the present embodiment, it is possible to provide an image processing apparatus IPA and an image processing method IPM capable of reducing the processing load of image processing that identifies a plurality of identification targets from an image and of improving the identification accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided are an image processing device and an image processing method with which it is possible to reduce processing load and improve identification accuracy in image processing in which a plurality of identification targets are identified from an image. An image processing device IPA comprises a multi-class identification unit 122, a tracking processing unit 131, and an identification unit 132. The multi-class identification unit 122 performs multi-class identification processing on an image captured by an imaging device ID to identify multiple types of identification targets. The tracking processing unit 131 performs image tracking in which each of the identification targets identified in the multi-class identification processing is tracked as a tracking target, to calculate a predicted position of the tracking target in an image at a later point in time, the predicted position being based on an image at an earlier point in time. The identification unit 132 performs two-class identification processing on the predicted position in the image at the later point in time in accordance with the type of tracking target to identify the type of tracking target.

Description

Patent Document 1: JP 2019-008460 A (Japanese Unexamined Patent Application Publication No. 2019-008460)
 上記物体検出装置のように、走査矩形内に検出する物体があるか否かを判別する識別器には、たとえば、車両とそれ以外の物体を判別する2クラス識別器や、車両と歩行者とそれ以外の物体など、複数の物体を一度に判別する多クラス識別器がある。しかし、先進運転支援システム(Advanced Driver Assistance System:ADAS)や自動運転システム(Automated Driving System:ADS)の発展にともなって、識別対象の種別はさらに増加する傾向にある。 As the classifier that determines whether or not there is an object to be detected in the scanning rectangle, such as the object detection device, for example, a two-class classifier that discriminates between a vehicle and other objects, a vehicle and a pedestrian, and the like. There are multi-class classifiers that discriminate between multiple objects at once, such as other objects. However, with the development of Advanced Driver Assistance System (ADAS) and Automated Driving System (ADS), the types of identification targets tend to increase further.
 撮像装置の画像から識別対象の物体を識別する画像処理において、識別対象の種別の増加に対応するには、たとえば、多数の識別器を併用したり、各々の識別器の階層を増加させたりして、識別精度を向上させる必要がある。しかし、識別器の数や、識別器の階層を増加させると、物体の識別処理の負荷が増加して処理時間が必要な時間内に収まらなくなるおそれがある。 In image processing for identifying an object to be identified from an image of an image pickup device, in order to cope with an increase in the types of identification targets, for example, a large number of classifiers may be used together or the hierarchy of each classifier may be increased. Therefore, it is necessary to improve the identification accuracy. However, if the number of classifiers or the hierarchy of classifiers is increased, the load of object discrimination processing increases and the processing time may not be within the required time.
 本開示は、画像から複数の識別対象を識別する画像処理の処理負荷の低減と識別精度の向上が可能な画像処理装置および画像処理方法を提供する。 The present disclosure provides an image processing apparatus and an image processing method capable of reducing the processing load of image processing for identifying a plurality of identification targets from an image and improving the identification accuracy.
 本開示の一態様は、撮像装置によって撮影された画像に対して多クラス識別処理を行って複数の種別の識別対象を識別する多クラス識別部と、前記多クラス識別処理により識別された前記識別対象を追跡対象とする画像追跡を行って前時刻の前記画像に基づく後時刻の前記画像における前記追跡対象の予測位置を算出する追跡処理部と、前記後時刻の前記画像の前記予測位置に対して前記追跡対象の前記種別に応じた2クラス識別処理を行って前記追跡対象の前記種別を識別する識別部と、を有することを特徴とする画像処理装置である。 One aspect of the present disclosure is a multi-class identification unit that performs multi-class identification processing on an image captured by an image pickup device to identify a plurality of types of identification targets, and the identification identified by the multi-class identification process. With respect to the tracking processing unit that performs image tracking with the target as the tracking target and calculates the predicted position of the tracking target in the image at the later time based on the image at the previous time, and the predicted position of the image at the later time. The image processing apparatus is characterized by having an identification unit for identifying the type of the tracking target by performing two-class identification processing according to the type of the tracking target.
 本開示の上記一態様によれば、画像から複数の識別対象を識別する画像処理の処理負荷の低減と識別精度の向上が可能な画像処理装置および画像処理方法を提供することができる。 According to the above aspect of the present disclosure, it is possible to provide an image processing apparatus and an image processing method capable of reducing the processing load of image processing for identifying a plurality of identification targets from an image and improving the identification accuracy.
本開示に係る画像処理装置の一実施形態を示すブロック図。The block diagram which shows one Embodiment of the image processing apparatus which concerns on this disclosure. 本開示に係る画像処理方法の一実施形態を示すフロー図。The flow diagram which shows one Embodiment of the image processing method which concerns on this disclosure. 本開示に係る画像処理方法の一実施形態を示すフロー図。The flow diagram which shows one Embodiment of the image processing method which concerns on this disclosure.
 以下、図面を参照して本開示の画像処理装置および画像処理方法の実施形態を説明する。 Hereinafter, embodiments of the image processing apparatus and the image processing method of the present disclosure will be described with reference to the drawings.
 図1は、本開示に係る画像処理装置の一実施形態を示すブロック図である。本実施形態の画像処理装置IPAは、たとえば、撮像装置IDによって撮影された画像から複数の種別の識別対象を識別する装置である。より具体的には、画像処理装置IPAは、たとえば、車両に搭載され、単眼カメラやステレオカメラなどの撮像装置IDによって撮影された画像から、車両の周囲の複数の異なる物体を識別する装置である。なお、撮像装置IDによって撮影される画像は特に限定されず、たとえば、カラー画像または濃淡画像などを適宜選択することができる。 FIG. 1 is a block diagram showing an embodiment of the image processing apparatus according to the present disclosure. The image processing device IPA of the present embodiment is, for example, a device that identifies a plurality of types of identification targets from images taken by an image pickup device ID. More specifically, the image processing device IPA is a device mounted on a vehicle and discriminating a plurality of different objects around the vehicle from images taken by an image pickup device ID such as a monocular camera or a stereo camera. .. The image captured by the image pickup apparatus ID is not particularly limited, and for example, a color image or a shade image can be appropriately selected.
 図1に示す例において、撮像装置IDは、車両に搭載されたステレオカメラである。画像処理装置IPAは、たとえば、CPUなどの処理装置を含む処理部100と、ROMやRAMなどの記憶装置を含む記憶部200と、その記憶部200に記憶されて処理部100によって実行されるコンピュータ・プログラムと、を備えている。また、図示を省略するが、画像処理装置IPAは、たとえば、信号の入出力を行う入出力部を備えている。 In the example shown in FIG. 1, the image pickup device ID is a stereo camera mounted on a vehicle. The image processing device IPA is, for example, a processing unit 100 including a processing device such as a CPU, a storage unit 200 including a storage device such as a ROM or RAM, and a computer stored in the storage unit 200 and executed by the processing unit 100.・ It has a program. Further, although not shown, the image processing apparatus IPA includes, for example, an input / output unit for inputting / outputting signals.
 画像処理装置IPAの処理部100は、たとえば、信号処理部110と、認識処理部150とを有している。信号処理部110は、たとえば、画像取得部111と、視差算出部112とを含む。認識処理部150は、たとえば、第1認識処理部120と、第2認識処理部130と、出力処理部140とを含む。第1認識処理部120は、たとえば、画像領域選択部121と、多クラス識別部122とを含む。第2認識処理部130は、たとえば、追跡処理部131と、識別部132とを含む。識別部132は、たとえば、複数の2クラス識別部132a,132bを含む。 The processing unit 100 of the image processing apparatus IPA has, for example, a signal processing unit 110 and a recognition processing unit 150. The signal processing unit 110 includes, for example, an image acquisition unit 111 and a parallax calculation unit 112. The recognition processing unit 150 includes, for example, a first recognition processing unit 120, a second recognition processing unit 130, and an output processing unit 140. The first recognition processing unit 120 includes, for example, an image area selection unit 121 and a multi-class identification unit 122. The second recognition processing unit 130 includes, for example, a tracking processing unit 131 and an identification unit 132. The identification unit 132 includes, for example, a plurality of two- class identification units 132a and 132b.
 処理部100の各部は、たとえば、記憶部200に記憶されたコンピュータ・プログラムを処理部100によって実行することによって実現される処理部100の機能ブロックである。これら処理部100の各部は、たとえば、それぞれ専用の処理装置によって実現されてもよく、複数の機能ブロックが一つの処理装置によって実現されてもよい。また、記憶部200は、たとえば、一種または多種の複数の記憶装置によって構成してもよく、一つの記憶装置によって構成してもよい。 Each unit of the processing unit 100 is, for example, a functional block of the processing unit 100 realized by executing the computer program stored in the storage unit 200 by the processing unit 100. Each of these processing units 100 may be realized by, for example, a dedicated processing device, or a plurality of functional blocks may be realized by one processing device. Further, the storage unit 200 may be configured by, for example, a plurality of storage devices of one type or various types, or may be configured by one storage device.
 なお、図1に示す例において、画像処理装置IPAは記憶部200を含んでいるが、画像処理装置IPAは外部の記憶部200に接続されていてもよい。また、図1に示す例において、画像処理装置IPAは外部の撮像装置IDに接続されているが、画像処理装置IPAは撮像装置IDを含んでもよい。また、図1に示す例において、識別部132は、2つの2クラス識別部132a,132bを有しているが、3以上の2クラス識別部を有してもよい。 In the example shown in FIG. 1, the image processing device IPA includes a storage unit 200, but the image processing device IPA may be connected to an external storage unit 200. Further, in the example shown in FIG. 1, the image processing device IPA is connected to the external image pickup device ID, but the image processing device IPA may include the image pickup device ID. Further, in the example shown in FIG. 1, the identification unit 132 has two two- class identification units 132a and 132b, but may have three or more two-class identification units.
 画像処理装置IPAが撮像装置IDによって撮影された画像から識別する識別対象202は、たとえば、予め記憶部200に記憶されている。識別対象202は、画像処理装置IPAが搭載された自車両の周囲の他車両、歩行者、移動体、障害物、道路、道路標示、道路標識、および信号などの複数の種別を含む。さらに、画像処理装置IPAの識別対象である他車両は、たとえば、自転車などの軽車両、原動機付き自転車、自動二輪車、軽自動車、普通自動車、大型自動車、バス、トラックなど、複数の種別を含んでもよい。さらに、他車両は、たとえば、先行車、後続車、対向車、横断車両、右折車両、左折車両など、自車両に対する位置、姿勢、進行方向、速度、加速度、および角速度などに基づく種別を含んでもよい。 The identification target 202 identified by the image processing device IPA from the image captured by the image pickup device ID is stored in, for example, in the storage unit 200 in advance. The identification target 202 includes a plurality of types such as other vehicles, pedestrians, moving objects, obstacles, roads, road signs, road signs, and signals around the own vehicle equipped with the image processing device IPA. Further, the other vehicles to be identified by the image processing device IPA may include a plurality of types such as light vehicles such as bicycles, motorized bicycles, motorcycles, light vehicles, ordinary vehicles, large vehicles, buses, and trucks. good. Further, the other vehicle may include a type based on the position, attitude, direction of travel, speed, acceleration, angular velocity, etc. with respect to the own vehicle, such as a preceding vehicle, a following vehicle, an oncoming vehicle, a crossing vehicle, a right turn vehicle, and a left turn vehicle. good.
 次に、図2Aおよび図2Bを参照して、図1に示す画像処理装置IPAの動作とともに、本開示に係る画像処理方法の一実施形態を説明する。図2Aおよび図2Bは、図1に示す画像処理装置IPAを用いた本実施形態の画像処理方法IPMのフロー図である。 Next, with reference to FIGS. 2A and 2B, an embodiment of the image processing method according to the present disclosure will be described together with the operation of the image processing apparatus IPA shown in FIG. 2A and 2B are flow charts of an image processing method IPM of the present embodiment using the image processing apparatus IPA shown in FIG.
 撮像装置IDは、たとえば、所定の周期および所定の撮像時間で画像を撮影する。画像処理装置IPAは、撮像装置IDによって所定の周期で撮影された各画像を、図2Aに示す画像処理方法IPMによって処理する。画像処理装置IPAは、図2Aに示す画像処理方法IPMを開始すると、まず画像取得処理P1を実行する。 The image pickup device ID captures an image in a predetermined cycle and a predetermined imaging time, for example. The image processing apparatus IPA processes each image taken at a predetermined cycle by the image pickup apparatus ID by the image processing method IPM shown in FIG. 2A. When the image processing apparatus IPA starts the image processing method IPM shown in FIG. 2A, the image processing apparatus IPA first executes the image acquisition process P1.
 画像取得処理P1において、画像取得部111は、たとえば、撮像装置IDから画像を取得して、記憶部200に画像情報201の一部として記憶させる。なお、画像情報201は、たとえば、撮像装置IDがステレオカメラである場合、右カメラによって撮影した右画像と、左カメラによって撮影した左画像のそれぞれの画像情報を含む。 In the image acquisition process P1, the image acquisition unit 111 acquires, for example, an image from the image pickup apparatus ID and stores it in the storage unit 200 as a part of the image information 201. The image information 201 includes, for example, image information of a right image taken by the right camera and a left image taken by the left camera when the image pickup device ID is a stereo camera.
Also in the image acquisition process P1, the parallax calculation unit 112 takes, for example, the right image and the left image as inputs and obtains the parallax by searching the left image for a region similar to a specific region in the right image. The parallax calculation unit 112 outputs a parallax image by performing this process over the entire area of the right image, and stores the parallax image in the storage unit 200 as part of the image information 201.
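For illustration only, the block-matching search described above can be sketched with OpenCV's stereo matcher. Note that OpenCV's convention takes the left image as the reference, whereas the text above uses the right image (the principle is symmetric), and the parameter values below are assumptions, not values from the present disclosure.

import cv2

def compute_parallax_image(left_gray, right_gray):
    # Inputs are 8-bit grayscale images from the stereo pair.
    # For each block of the reference image, the matcher searches the other
    # image along the same scan line for the most similar block and records
    # the horizontal offset, i.e., the parallax (disparity).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray)
    # OpenCV returns fixed-point disparities scaled by 16.
    return disparity.astype("float32") / 16.0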
Next, the image processing device IPA executes, for example, an image region selection process P2. In the image region selection process P2, the image processing device IPA selects, from the image captured by the image pickup device ID, image regions that may contain any of the plurality of types of identification targets. More specifically, the first recognition processing unit 120 acquires, for example, the parallax image output by the parallax calculation unit 112, either from the parallax calculation unit 112 or from the image information 201 stored in the storage unit 200.
In the image region selection process P2, the image region selection unit 121 takes, for example, the parallax image as input, groups parallax values that are adjacent to and close to each other in the parallax image, and generates a rectangular frame enclosing each grouped set of parallax values. The image region selection unit 121 then selects, as an image region that may contain any of the plurality of types of identification targets, each rectangular frame whose vertical and horizontal dimensions are equal to or larger than predetermined sizes.
The image region selection unit 121 outputs the position information of each selected image region, i.e., its coordinates in the parallax image, together with its vertical and horizontal dimensions (height and width), and stores them in the storage unit 200 as image regions 203 that may contain an identification target. When a plurality of image regions are selected from the parallax image, the image region selection unit 121 assigns, for example, an identification number N from 1 to n (a natural number) to each image region and stores them in the storage unit 200 as the image regions 203.
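As a rough sketch of this grouping step, pixels that are adjacent and have mutually close parallax values can be clustered, for example, by quantizing the parallax and labeling connected components. The helper below, including its threshold values, is an illustrative assumption rather than the implementation of the present disclosure.

import numpy as np
from scipy import ndimage

def select_candidate_regions(disparity, min_w=20, min_h=20, tol=1.0):
    # Quantize the parallax so that mutually close values share a bin.
    valid = disparity > 0
    bins = np.where(valid, np.round(disparity / tol).astype(int), -1)
    regions = []
    for value in np.unique(bins[bins >= 0]):
        labels, _ = ndimage.label(bins == value)   # group adjacent pixels
        for s in ndimage.find_objects(labels):
            h, w = s[0].stop - s[0].start, s[1].stop - s[1].start
            if w >= min_w and h >= min_h:          # keep frames of sufficient size
                regions.append((s[1].start, s[0].start, w, h))  # x, y, w, h
    return regions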
The image region selection unit 121 may also estimate, for example from the aspect ratio of a rectangular frame, the type of identification target that may be contained in the image region of the parallax image enclosed by that frame, and may select only image regions that may contain a specific type of identification target. When the image pickup device ID is a monocular camera, the image region selection unit 121 may select, from the image of the monocular camera, image regions that may contain any of the plurality of types of identification targets.
In this case, the image region selection unit 121 may use, for example, the object detection result of a millimeter-wave radar mounted on the vehicle for the selection of image regions. Alternatively, the image region selection unit 121 may, for example, designate a specific area of the image of the image pickup device ID in advance and select image regions that may contain any of the plurality of types of identification targets by performing a raster scan of that area with a window of arbitrary size.
Next, the multi-class identification unit 122 executes, for example, a process P3 that sets the identification number N of the image region to be processed by the multi-class identification process to N = 1. The multi-class identification unit 122 then performs a multi-class identification process P4 that applies multi-class identification to the image captured by the image pickup device ID to identify the plurality of types of identification targets.
More specifically, in the multi-class identification process P4, the multi-class identification unit 122 identifies, for example, the plurality of types of identification targets in the image regions selected by the image region selection unit 121. The multi-class identification unit 122 executes the multi-class identification process P4 sequentially, for example, for each of the image regions with identification numbers N from 1 to n selected in the selection process P2 described above.
The multi-class identification process P4 includes, for example, a registration count determination process P4a, a multi-class identification process P4b, a type determination process P4c, type candidate registration processes P4d and P4e, and an increment process P4f. In the registration count determination process P4a, the multi-class identification unit 122 first determines whether the number of tracking targets registered for the tracking process P6a, described later, is below an upper limit.
When the multi-class identification unit 122 determines in the registration count determination process P4a that the number of registrations is not below the upper limit (NO), i.e., the number of registrations has reached the upper limit, it skips the multi-class identification process P4b and the subsequent processes and proceeds to the next process P5. On the other hand, when the multi-class identification unit 122 determines in the registration count determination process P4a that the number of registrations is below the upper limit (YES), it executes the multi-class identification process P4b.
In the multi-class identification process P4b, the multi-class identification unit 122 identifies, for example, the plurality of types of identification targets stored in the storage unit 200 as the identification targets 202 from the image region. The multi-class identification unit 122 evaluates, for example, the similarity between the image region 203 selected by the image region selection unit 121 and stored in the storage unit 200 and the multi-class identification learning data 204 stored in the storage unit 200.
The multi-class identification learning data 204 is, for example, learning data obtained by performing machine learning on a large number of input images of automobiles, which are identification targets, images of motorcycles, and images of other objects. That is, the multi-class identification unit 122 performs the multi-class identification process using the multi-class identification learning data 204 obtained by machine learning on input of the plurality of types of identification targets.
More specifically, in the present embodiment, the multi-class identification unit 122 performs the multi-class identification process using, for example, the multi-class identification learning data 204 machine-learned on input of at least automobiles as the first type of identification target and motorcycles as the second type of identification target. In the present embodiment, a case where the identification targets of the multi-class identification process P4b are of two types, automobiles and motorcycles, is described, but the types of identification targets and their number are not particularly limited.
In the multi-class identification process P4b, the multi-class identification unit 122 calculates, for example, an evaluation value of the similarity between the image region 203 and the multi-class identification learning data 204. Specifically, the multi-class identification unit 122 calculates, for example, the evaluation value of the similarity between the image region 203 and the multi-class identification learning data 204 in which the first type of identification target (i) is an automobile and the second type of identification target (ii) is a motorcycle. Next, based on this similarity evaluation value, the multi-class identification unit 122 executes a determination process P4c for the type of identification target present in the image region 203.
In this determination process P4c, the multi-class identification unit 122 identifies the type of identification target present in the image region 203, for example, when the similarity evaluation value is equal to or greater than a predetermined threshold. More specifically, when the evaluation value of the similarity between the image region 203 and the automobile, which is the first type of identification target (i), is equal to or greater than the predetermined threshold, the multi-class identification unit 122 identifies, from the image region 203, the automobile that is the first type of identification target (i) and its position information. Further, the multi-class identification unit 122 executes a process P4d that assigns a registration number to the automobile identified from the image region 203 as the first type of identification target (i) and registers it in the storage unit 200 as a tracking target and type candidate 205.
Also in the determination process P4c, when the evaluation value of the similarity between the image region 203 and the motorcycle, which is the second type of identification target (ii), is equal to or greater than the predetermined threshold, the multi-class identification unit 122 identifies, from the image region 203, the motorcycle that is the second type of identification target (ii) and its position information. Further, the multi-class identification unit 122 executes a process P4e that assigns a registration number to the motorcycle identified from the image region 203 as the second type of identification target (ii) and registers it in the storage unit 200 as a tracking target and type candidate 205. After the process P4d or the process P4e ends, the multi-class identification unit 122 executes, for example, the increment process P4f.
Also in the determination process P4c, when the evaluation values of the similarity between the image region 203 and the first and second types of identification targets (i) and (ii) are below the predetermined threshold, the multi-class identification unit 122 identifies that the image region 203 contains no identification target. In this case, the multi-class identification unit 122 executes, for example, the increment process P4f.
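Processes P4a through P4e can be summarized in the following schematic sketch. The names score_classes and register_candidate are hypothetical helpers standing in for the learned multi-class classifier and for registration into the storage unit 200, and the threshold value is illustrative since the text only calls it "predetermined".

THRESHOLD = 0.5  # illustrative; the disclosure only says "predetermined"

def multi_class_identification(region, learning_data, tracked, upper_limit):
    if len(tracked) >= upper_limit:                  # P4a: registration count check
        return
    scores = score_classes(region, learning_data)    # P4b: similarity per type (hypothetical)
    best_type = max(scores, key=scores.get)          # e.g. "car" or "motorcycle"
    if scores[best_type] >= THRESHOLD:               # P4c: threshold judgment
        register_candidate(tracked, best_type, region)  # P4d / P4e (hypothetical)
    # Below the threshold, the region is judged to contain no identification target.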
In the increment process P4f, the multi-class identification unit 122 increments, for example, the identification number N of the image region 203 to be processed by the next multi-class identification process P4 to N + 1. The multi-class identification unit 122 repeatedly executes the multi-class identification process P4, including the processes P4a through P4f described above, for example, until the incremented identification number N of the image region 203 exceeds the number n of image regions 203 selected in the selection process P2 described above.
After the multi-class identification process P4 has finished for all the image regions 203 selected in the selection process P2 described above, the tracking processing unit 131 executes, for example, a process P5 that sets the registration number R of the tracking target to be processed in the identification process P6, described later, to 1. The tracking processing unit 131 then executes the identification process P6, which calculates the predicted position of each tracking target registered in the storage unit 200 as a tracking target and type candidate 205 and finalizes the type of that tracking target.
The tracking processing unit 131 executes the identification process P6 sequentially, for example, for each tracking target whose registration number R, assigned in the type candidate registration processes P4d and P4e described above when it was registered in the storage unit 200 as a tracking target and type candidate 205, runs from 1 to m (a natural number). The identification process P6 includes, for example, a tracking process P6a, a type candidate determination process P6b, two-class identification processes P6c and P6h, type determination processes P6d and P6i, registration processes P6e and P6j, predicted position calculation processes P6f and P6k, a registration deletion process P6g, and an increment process P6l.
In the tracking process P6a, the tracking processing unit 131 first performs image tracking with the identification target identified by the multi-class identification process P4 as the tracking target, and calculates the predicted position of the tracking target in the image at a later time based on the image at an earlier time. The tracking processing unit 131 calculates, for example, the predicted position of the tracking target at the current time based on the position information of the tracking target at the previous time.
In the tracking process P6a, the tracking processing unit 131 uses, for example, a method of searching for the tracking target at the current time by template matching, with the image of the tracking target at the previous time as the template, or a method of estimating the movement of each pixel within the region of the tracking target by optical flow or the like. The tracking processing unit 131 then predicts the motion of the tracking target at the current time from the position of the tracking target at the previous time and the past movement of the tracking target.
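The template-matching variant of the tracking process P6a can be sketched as follows, assuming OpenCV is available; normalized cross-correlation is one reasonable similarity measure, and the search window would be placed around the position predicted from the previous frame.

import cv2

def track_by_template(curr_frame, prev_patch, search_window):
    x, y, w, h = search_window            # region around the predicted position
    roi = curr_frame[y:y + h, x:x + w]
    result = cv2.matchTemplate(roi, prev_patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)
    # Best-match position in full-frame coordinates and its similarity score.
    return (x + loc[0], y + loc[1]), score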
Further, in the type candidate determination process P6b, the tracking processing unit 131 refers, for example, to the tracking targets and type candidates 205 registered in the storage unit 200 and determines whether the type of the tracking target identified by the multi-class identification process P4 is an automobile of the first type (i) or a motorcycle of the second type (ii).
When the tracking processing unit 131 determines in the type candidate determination process P6b that the type of the tracking target is an automobile of the first type (i), the identification unit 132 performs, on the predicted position in the later-time image, a two-class identification process corresponding to the type of the tracking target to identify the type of the tracking target. More specifically, the identification unit 132 performs the two-class identification process P6c using the two-class identification unit 132a corresponding to the first type of identification target (i).
The two-class identification unit 132a uses the two-class identification learning data 206 stored in the storage unit 200 to perform the two-class identification process on the predicted position of the identification target and its surroundings, thereby identifying the type of the tracking target. Here, the two-class identification learning data 206 is two-class identification learning data for automobiles obtained by machine learning on a large number of input images of the first type of identification target (i), i.e., automobiles, which is one of the plurality of types of identification targets of the image processing device IPA, and images of everything else.
In the two-class identification process P6c, the two-class identification unit 132a calculates, for example, an evaluation value of the similarity between the image region at and around the predicted position of the tracking target and the two-class identification learning data 206. Next, the two-class identification unit 132a executes the type determination process P6d. In the type determination process P6d, when the similarity evaluation value is equal to or greater than a predetermined threshold, the two-class identification unit 132a determines, for example, that the type of the tracking target is the first type of identification target (i), i.e., an automobile, and executes the registration process P6e.
In this registration process P6e, the two-class identification unit 132a registers, for example, the automobile, which is the first type of identification target (i), in the output information 208 of the storage unit 200 as the type of the tracking target. Also in the registration process P6e, the two-class identification unit 132a registers the predicted position of the tracking target in the output information 208 of the storage unit 200 as the position of the automobile, which is the first type of identification target (i). Next, the two-class identification unit 132a executes, for example, the predicted position calculation process P6f.
In this predicted position calculation process P6f, the two-class identification unit 132a obtains, for example, the difference between the position information of the tracking target at the previous time and the position information of the tracking target at the current time, and calculates the moving speed of the tracking target by dividing that difference by the frame imaging interval. The two-class identification unit 132a then calculates, for example, the predicted position of the tracking target at the next time based on the position information of the tracking target at the current time and the moving speed of the tracking target. The predicted position of the tracking target calculated here is used, for example, in the tracking process P6a at the next time.
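The arithmetic of the predicted position calculation processes P6f and P6k transcribes directly into code; the (x, y) tuple layout and the names below are assumptions for illustration.

def predict_next_position(prev_pos, curr_pos, frame_interval):
    # Moving speed = positional difference divided by the frame imaging interval.
    vx = (curr_pos[0] - prev_pos[0]) / frame_interval
    vy = (curr_pos[1] - prev_pos[1]) / frame_interval
    # Predicted position at the next time = current position + speed * interval.
    return (curr_pos[0] + vx * frame_interval,
            curr_pos[1] + vy * frame_interval)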
In the type determination process P6d described above, when the similarity evaluation value is below the predetermined threshold, the two-class identification unit 132a determines, for example, that the tracking target is an identification target of another type, other than the automobile that is the first type of identification target (i), or the background, and executes the registration deletion process P6g. In the registration deletion process P6g, the two-class identification unit 132a deletes, for example, the tracking target and type candidate 205 registered in the storage unit 200.
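A single two-class identification unit such as 132a can be pictured as in the sketch below, assuming the similarity to the learning data reduces to one score; the similarity function is a hypothetical stand-in for the learned two-class model (data 206 for automobiles, data 207 for motorcycles).

class TwoClassIdentifier:
    def __init__(self, learning_data, threshold):
        self.learning_data = learning_data  # e.g. data 206 (car) or 207 (motorcycle)
        self.threshold = threshold          # the "predetermined threshold"

    def evaluate(self, image_patch):
        # P6c / P6h: similarity between the patch at and around the predicted
        # position and the two-class identification learning data.
        return similarity(image_patch, self.learning_data)  # hypothetical model call

    def is_target(self, image_patch):
        # P6d / P6i: the target type versus everything else (or background).
        return self.evaluate(image_patch) >= self.threshold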
Also, when the tracking processing unit 131 determines in the type candidate determination process P6b described above that the type of the tracking target is a motorcycle of the second type (ii), the identification unit 132 performs, on the predicted position in the later-time image, a two-class identification process corresponding to the type of the tracking target to identify the type of the tracking target. More specifically, the identification unit 132 performs the two-class identification process P6h using the two-class identification unit 132b corresponding to the second type (ii).
The two-class identification unit 132b uses the two-class identification learning data 207 stored in the storage unit 200 to perform the two-class identification process on the predicted position of the identification target and its surroundings, thereby identifying the type of the tracking target. Here, the two-class identification learning data 207 is two-class identification learning data for motorcycles obtained by machine learning on a large number of input images of the second type of identification target (ii), i.e., motorcycles, which is one of the plurality of types of identification targets of the image processing device IPA, and images of everything else. The two-class identification learning data 206 and the two-class identification learning data 207 may be learned by different machine learning methods.
In the two-class identification process P6h, the two-class identification unit 132b calculates, for example, an evaluation value of the similarity between the image region at and around the predicted position of the tracking target and the two-class identification learning data 207. Next, the two-class identification unit 132b executes the type determination process P6i. In the type determination process P6i, when the similarity evaluation value is equal to or greater than a predetermined threshold, the two-class identification unit 132b determines, for example, that the type of the tracking target is the second type of identification target (ii), i.e., a motorcycle, and executes the registration process P6j.
In this registration process P6j, the two-class identification unit 132b registers, for example, the motorcycle, which is the second type of identification target (ii), in the output information 208 of the storage unit 200 as the type of the tracking target. Also in the registration process P6j, the two-class identification unit 132b registers the predicted position of the tracking target in the output information 208 of the storage unit 200 as the position of the motorcycle, which is the second type of identification target (ii). Next, the two-class identification unit 132b executes, for example, the predicted position calculation process P6k.
In this predicted position calculation process P6k, the two-class identification unit 132b obtains, for example, the difference between the position information of the tracking target at the previous time and the position information of the tracking target at the current time, and calculates the moving speed of the tracking target by dividing that difference by the frame imaging interval. The two-class identification unit 132b then calculates, for example, the predicted position of the tracking target at the next time based on the position information of the tracking target at the current time and the moving speed of the tracking target. The predicted position of the tracking target calculated here is used, for example, in the tracking process P6a at the next time.
In the type determination process P6i described above, when the similarity evaluation value is below the predetermined threshold, the two-class identification unit 132b determines, for example, that the tracking target is an identification target of another type, other than the motorcycle that is the second type of identification target (ii), or the background, and executes the registration deletion process P6g. In the registration deletion process P6g, the two-class identification unit 132b deletes, for example, the tracking target and type candidate 205 registered in the storage unit 200.
After the predicted position calculation process P6f or P6k or the registration deletion process P6g described above ends, the tracking processing unit 131 executes, for example, the increment process P6l. In the increment process P6l, the tracking processing unit 131 increments, for example, the registration number R of the tracking target and type candidate 205 to be processed in the next identification process P6 to R + 1. The tracking processing unit 131 repeatedly executes the identification process P6, including the processes from the tracking process P6a through the increment process P6l described above, for example, until the incremented registration number R of the tracking target and type candidate 205 exceeds the number m of tracking targets and type candidates 205 registered by the multi-class identification process P4.
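Putting the identification process P6 together, one plausible control flow is sketched below; it reuses the TwoClassIdentifier and predict_next_position sketches above, assumes each candidate registered in P4 carries its type, and all attribute and helper names (patch_at, type_candidate, and so on) are hypothetical.

def identification_process(candidates, identifiers, frames, frame_interval):
    # identifiers: {"car": TwoClassIdentifier(...), "motorcycle": TwoClassIdentifier(...)}
    confirmed = []
    for target in list(candidates):                      # R = 1 .. m
        patch = frames.patch_at(target.predicted_pos)    # P6a: image tracking
        identifier = identifiers[target.type_candidate]  # P6b: select 132a or 132b
        if identifier.is_target(patch):                  # P6c/P6h + P6d/P6i
            confirmed.append((target.type_candidate, target.predicted_pos))  # P6e/P6j
            target.predicted_pos = predict_next_position(
                target.prev_pos, target.predicted_pos, frame_interval)  # P6f/P6k
        else:
            candidates.remove(target)                    # P6g: delete the registration
    return confirmed  # becomes the output information 208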
After the identification process P6 ends, the predicted position and type of each tracking target are output as output information 208 from the second recognition processing unit 130 or the storage unit 200 to the output processing unit 140, as shown in FIG. 1. The output processing unit 140 outputs the output information 208, for example, to a vehicle control device constituting an ADS, ADAS, or the like, so that the output information 208 is used in signal generation processing for automated driving or advanced driving assistance.
The operation and effects of the image processing device IPA of the present embodiment and the image processing method IPM using it will be described below.
In recent years, ADAS and ADS using image pickup devices ID such as in-vehicle cameras and external-environment recognition sensors such as radar have been attracting attention. In image processing that identifies target objects in the image of an image pickup device ID, coping with an increasing number of identification target types requires improving the identification accuracy by, for example, combining many classifiers or increasing the number of layers of each classifier. However, increasing the number of classifiers or the number of classifier layers increases the load of the object identification processing, and the processing time may no longer fit within the required time.
As described above, the image processing device IPA of the present embodiment has the multi-class identification unit 122, the tracking processing unit 131, and the identification unit 132. The multi-class identification unit 122 performs the multi-class identification process P4 on the image captured by the image pickup device ID to identify a plurality of types of identification targets. The tracking processing unit 131 performs image tracking with the identification target identified by the multi-class identification process P4 as the tracking target and calculates the predicted position of the tracking target in the image at a later time based on the image at an earlier time. The identification unit 132 performs the two-class identification process P6c or P6h, corresponding to the type of the tracking target, on the predicted position in the later-time image to identify the type of the tracking target.
Similarly, the image processing method IPM of the present embodiment performs the multi-class identification process P4 on the image captured by the image pickup device ID to identify a plurality of types of identification targets. The image processing method IPM further performs image tracking with the identification target identified by the multi-class identification process P4 as the tracking target and calculates the predicted position of the tracking target in the image at a later time based on the image at an earlier time.
The image processing method IPM then performs the two-class identification process P6c or P6h, corresponding to the type of the tracking target, on the predicted position in the later-time image to identify the type of the tracking target.
According to the image processing device IPA and the image processing method IPM of the present embodiment described above, it is possible to reduce the processing load of image processing that identifies a plurality of identification targets in an image and to improve the identification accuracy. More specifically, according to the image processing device IPA and the image processing method IPM of the present embodiment, the processing load of the image processing can be reduced compared with the case where a plurality of identification targets are identified in an image using only a multi-class classifier or only two-class classifiers.
The reason is that, by combining the multi-class identification process of the multi-class identification unit 122 with the two-class identification processes of the identification unit 132, the layers of the multi-class identification process can be made shallower than when only a multi-class identification process is used, reducing the processing load. Even if making the layers of the multi-class identification process shallower in this way lowers its identification accuracy, the identification accuracy can still be improved by treating each identification target identified by the multi-class identification process as a tracking target and applying a two-class identification process to that tracking target.
Furthermore, by treating each identification target identified by the multi-class identification process P4 as a tracking target and performing a two-class identification process, corresponding to the type of the tracking target, on the predicted position of the tracking target in the later-time image, the two-class identification processes can be applied with limited types and only to extremely limited image regions. This reduces the amount, and hence the load, of the two-class identification processing, while also detecting misrecognitions of the multi-class identification process, thereby improving the identification accuracy for the plurality of types of identification targets.
Also, in the image processing device IPA of the present embodiment, the multi-class identification unit 122 performs the multi-class identification process P4 using the multi-class identification learning data 204 obtained by machine learning on input of the plurality of types of identification targets. With this configuration, the multi-class identification process that identifies the plurality of types of identification targets in an image can be performed accurately based on the results of machine learning.
Also, in the image processing device IPA of the present embodiment, the multi-class identification unit 122 performs the multi-class identification process P4 using the multi-class identification learning data 204 machine-learned on input of at least the first type of identification target (i) and the second type of identification target (ii). With this configuration, the first type of identification target (i) and the second type of identification target (ii) can be accurately distinguished among the plurality of types of identification targets contained in the image.
Also, in the image processing device IPA of the present embodiment, the first type of identification target (i) is an automobile, and the second type of identification target (ii) is a motorcycle. With this configuration, the automobile that is the first type of identification target (i) and the motorcycle that is the second type of identification target (ii) can be accurately distinguished among the plurality of types of identification targets contained in the image.
Also, in the image processing device IPA of the present embodiment, the identification unit 132 has the plurality of two-class identification units 132a and 132b. Each of the two-class identification units 132a and 132b performs its two-class identification process using the two-class identification learning data 206 or 207 obtained by machine learning on input of the identification target of one of the plurality of types. With this configuration, each of the two-class identification units 132a and 132b can accurately determine whether a target is the identification target of one of the plurality of types.
Also, in the image processing device IPA of the present embodiment, the number of two-class identification units 132a and 132b is equal to the number of types identified by the multi-class identification unit 122. More specifically, in the image processing device IPA of the present embodiment, the multi-class identification unit 122 identifies two types, the first type of identification target (i) and the second type of identification target (ii), and the identification unit 132 has the two two-class identification units 132a and 132b. With this configuration, a two-class identification process can be performed for every type of identification target identified by the multi-class identification unit 122, improving the identification accuracy of the types of identification targets.
Also, the image processing device IPA of the present embodiment has the image region selection unit 121, which selects, from the image, image regions that may contain any of the plurality of types of identification targets, and the multi-class identification unit 122 identifies the plurality of types of identification targets in the image regions selected by the image region selection unit 121. With this configuration, the multi-class identification process of the multi-class identification unit 122 can be performed only on limited image regions, reducing the amount, and hence the load, of the multi-class identification processing.
As described above, the present embodiment can provide the image processing device IPA and the image processing method IPM capable of reducing the processing load of image processing that identifies a plurality of identification targets in an image and of improving the identification accuracy.
Although embodiments of the image processing device and the image processing method according to the present disclosure have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and design changes and the like within a scope that does not depart from the gist of the present disclosure are included in the present disclosure.
121  Image region selection unit
122  Multi-class identification unit
131  Tracking processing unit
132  Identification unit
132a Two-class identification unit
132b Two-class identification unit
204  Multi-class identification learning data
206  Two-class identification learning data
207  Two-class identification learning data
ID   Image pickup device
IPA  Image processing device
IPM  Image processing method
P4   Multi-class identification process
P6c  Two-class identification process
P6h  Two-class identification process

Claims (8)

  1.  An image processing device comprising:
      a multi-class identification unit that performs a multi-class identification process on an image captured by an image pickup device to identify a plurality of types of identification targets;
      a tracking processing unit that performs image tracking with the identification target identified by the multi-class identification process as a tracking target and calculates a predicted position of the tracking target in the image at a later time based on the image at an earlier time; and
      an identification unit that performs a two-class identification process corresponding to the type of the tracking target on the predicted position of the image at the later time to identify the type of the tracking target.
  2.  The image processing device according to claim 1, wherein the multi-class identification unit performs the multi-class identification process using multi-class identification learning data obtained by machine learning on input of the identification targets of the plurality of types.
  3.  The image processing device according to claim 2, wherein the multi-class identification unit performs the multi-class identification process using the multi-class identification learning data machine-learned on input of at least the identification target of a first type and the identification target of a second type.
  4.  The image processing device according to claim 3, wherein the identification target of the first type is an automobile and the identification target of the second type is a motorcycle.
  5.  The image processing device according to claim 1, wherein the identification unit has a plurality of two-class identification units, and
      each of the plurality of two-class identification units performs the two-class identification process using two-class identification learning data obtained by machine learning on input of the identification target of one of the plurality of types.
  6.  The image processing device according to claim 5, wherein the number of the two-class identification units is equal to the number of the types identified by the multi-class identification unit.
  7.  The image processing device according to claim 1, further comprising an image region selection unit that selects, from the image, an image region that may contain any of the identification targets of the plurality of types, wherein
      the multi-class identification unit identifies the identification targets of the plurality of types in the image region selected by the image region selection unit.
  8.  An image processing method comprising:
      performing a multi-class identification process on an image captured by an image pickup device to identify a plurality of types of identification targets;
      performing image tracking with the identification target identified by the multi-class identification process as a tracking target and calculating a predicted position of the tracking target in the image at a later time based on the image at an earlier time; and
      performing a two-class identification process corresponding to the type of the tracking target on the predicted position of the image at the later time to identify the type of the tracking target.
PCT/JP2021/004261 2020-06-11 2021-02-05 Image processing device and image processing method WO2021250934A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180039409.2A CN115769253A (en) 2020-06-11 2021-02-05 Image processing apparatus and image processing method
DE112021002170.2T DE112021002170T5 (en) 2020-06-11 2021-02-05 Image processing device and image processing method
JP2022530019A JP7323716B2 (en) 2020-06-11 2021-02-05 Image processing device and image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-101500 2020-06-11
JP2020101500 2020-06-11

Publications (1)

Publication Number Publication Date
WO2021250934A1 (en)

Family

ID=78847176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/004261 WO2021250934A1 (en) 2020-06-11 2021-02-05 Image processing device and image processing method

Country Status (4)

Country Link
JP (1) JP7323716B2 (en)
CN (1) CN115769253A (en)
DE (1) DE112021002170T5 (en)
WO (1) WO2021250934A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002133421A (en) * 2000-10-18 2002-05-10 Fujitsu Ltd Moving body recognition method and device
WO2019220622A1 (en) * 2018-05-18 2019-11-21 日本電気株式会社 Image processing device, system, method, and non-transitory computer readable medium having program stored thereon

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6833630B2 (en) 2017-06-22 2021-02-24 株式会社東芝 Object detector, object detection method and program


Also Published As

Publication number Publication date
JPWO2021250934A1 (en) 2021-12-16
CN115769253A (en) 2023-03-07
DE112021002170T5 (en) 2023-03-02
JP7323716B2 (en) 2023-08-08

Similar Documents

Publication Publication Date Title
EP1930863B1 (en) Detecting and recognizing traffic signs
US8305431B2 (en) Device intended to support the driving of a motor vehicle comprising a system capable of capturing stereoscopic images
US8670592B2 (en) Clear path detection using segmentation-based method
US8611585B2 (en) Clear path detection using patch approach
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
US8634593B2 (en) Pixel-based texture-less clear path detection
US11003928B2 (en) Using captured video data to identify active turn signals on a vehicle
JP6626410B2 (en) Vehicle position specifying device and vehicle position specifying method
US10442438B2 (en) Method and apparatus for detecting and assessing road reflections
EP2575078B1 (en) Front vehicle detecting method and front vehicle detecting apparatus
CN105825185A (en) Early warning method and device against collision of vehicles
RU2635280C2 (en) Device for detecting three-dimensional objects
JP4951481B2 (en) Road marking recognition device
CN113435237A (en) Object state recognition device, recognition method, recognition program, and control device
Kim et al. An intelligent and integrated driver assistance system for increased safety and convenience based on all-around sensing
Álvarez et al. Perception advances in outdoor vehicle detection for automatic cruise control
WO2021250934A1 (en) Image processing device and image processing method
Beresnev et al. Automated Driving System based on Roadway and Traffic Conditions Monitoring.
CN114677658A (en) Billion-pixel dynamic large-scene image acquisition and multi-target detection method and device
JP2018073049A (en) Image recognition device, image recognition system, and image recognition method
Wangsiripitak et al. Traffic light and crosswalk detection and localization using vehicular camera
Rosebrock et al. Real-time vehicle detection with a single camera using shadow segmentation and temporal verification
Wu et al. Color vision-based multi-level analysis and fusion for road area detection
JP7193942B2 (en) vehicle detector
CN108460323B (en) Rearview blind area vehicle detection method fusing vehicle-mounted navigation information

Legal Events

Code 121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21822264; Country of ref document: EP; Kind code of ref document: A1)
Code ENP: Entry into the national phase (Ref document number: 2022530019; Country of ref document: JP; Kind code of ref document: A)
Code 122: Ep: pct application non-entry in european phase (Ref document number: 21822264; Country of ref document: EP; Kind code of ref document: A1)