WO2022201776A1 - Information processing method and information processing device - Google Patents

Information processing method and information processing device

Info

Publication number
WO2022201776A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging device
model
information processing
camera
area
Prior art date
Application number
PCT/JP2022/001336
Other languages
French (fr)
Japanese (ja)
Inventor
寿伸 杉山
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Priority to JP2023508671A
Publication of WO2022201776A1 publication Critical patent/WO2022201776A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • This technology relates to the technical field of information processing methods and information processing devices for verifying the consistency between the real environment and the simulation environment.
  • In the development of in-vehicle camera systems used for autonomous driving and safety support systems, evaluations using simulation environments are performed to assess safety in various driving environments. For example, the driving environment is reproduced with a CG (Computer Graphics) model, data based on the CG model is input into a camera model imitating an actual camera, and recognition processing is applied to the image data output from the camera model; evaluation is then performed using the obtained recognition results.
  • In Patent Document 1 listed below (U.S. Patent Application Publication No. 2019/0171223), input data for a camera model is generated by converting actual image data captured in a real environment into virtual image data.
  • This technology was created in view of these problems, and aims to propose a method for efficiently verifying the consistency between the real environment and the simulation environment.
  • An information processing method according to the present technology verifies the consistency between a first imaging device and a camera model by comparing the characteristics of the first imaging device and the characteristics of the camera model, using first image data output from the first imaging device that has captured an image of a specific subject and second image data output from the camera model to which two-dimensional input data based on measurement results of measuring light from the specific subject is input.
  • The comparison of the characteristics of the first imaging device and the camera model may be performed, for example, by comparing the first image data with the second image data, or by comparing the recognition result obtained by applying image recognition processing to the first image data with the recognition result obtained by applying image recognition processing to the second image data.
  • Calibration is performed by adjusting the yaw direction and pitch direction of the first imaging device so that the centers of first targets placed at at least two locations at the same height as the first imaging device coincide with the center of the angle of view.
  • FIG. 1 is a diagram showing the flow of processing executed in the in-vehicle camera system for consistency verification.
  • FIG. 2 is a diagram showing the flow of processing executed in the simulator system for consistency verification.
  • FIG. 3 is a diagram showing a configuration example of the camera model.
  • FIG. 4 is a flowchart showing the overall flow of consistency verification.
  • FIG. 5 is a flowchart showing a specific example of processing for verifying the consistency of the camera model.
  • FIG. 6 is a schematic diagram showing a state in which a luminance box is imaged by the first imaging device.
  • FIG. 7 is a diagram showing an example of verification areas set in the RAW data output from the first imaging device.
  • FIG. 8 is a diagram showing an example of the RAW data output from the first imaging device.
  • FIG. 9 is a diagram showing a state in which measurement areas set on the luminance box are measured with a spectral radiance meter.
  • FIG. 10 is a diagram showing an example of input data to the camera model using spectral radiance values.
  • FIG. 11 is a diagram showing an example of the RAW data output from the camera model.
  • FIG. 12 is a diagram showing an example of a color chart.
  • FIG. 13 is a diagram showing an example of a gray chart.
  • FIG. 14 is a diagram showing an example of a method of measuring spectral radiance values for a color chart or a gray chart.
  • FIG. 15 is a flowchart showing, together with FIG. 16, a specific example of processing for calibrating the camera posture.
  • FIG. 16 is a flowchart showing, together with FIG. 15, a specific example of processing for calibrating the camera posture.
  • FIG. 17 is a schematic diagram showing a state in which two indices are arranged in front of the vehicle.
  • FIG. 18 is a schematic diagram showing a state in which indices are spaced apart in the width direction of the vehicle.
  • FIG. 19 is an explanatory diagram of calibration in the roll direction of the first imaging device.
  • FIG. 20 is a schematic diagram showing a state in which an index for adjusting the depression angle of the first imaging device is arranged.
  • FIG. 21 is a diagram showing an example of a filter provided in the first imaging device, namely a Bayer-array filter.
  • FIG. 22 is a diagram showing an example of a filter provided in the first imaging device, namely an RCCB-array filter.
  • FIG. 23 is a diagram showing an example of a filter provided in the first imaging device, namely an RGBIR-array filter.
  • FIG. 24 is a diagram showing an example of a filter provided in the first imaging device, namely a complementary color filter.
  • FIG. 25 is a block diagram of a computer device.
  • The embodiment will be described in the following order: <1. Consistency verification>, <2. Process flow of each system>, and the subsequent sections.
  • <1. Consistency verification> This embodiment will be described below with reference to the accompanying drawings.
  • In this embodiment, consistency verification is performed between an in-vehicle camera system S1 equipped with a first imaging device 1 mounted on a vehicle 100 and a simulator system S2 equipped with a camera model 2 that simulates an actual imaging device.
  • control algorithms are improved by actually running the vehicle 100 under various driving environments, acquiring data and control results obtained from various sensors, and repeating examinations.
  • FIG. 1 shows the flow of processing executed in the in-vehicle camera system S1 for consistency verification.
  • FIG. 2 shows the flow of processing executed in the simulator system S2.
  • the in-vehicle camera system S1 uses the first imaging device 1 to capture an actual driving environment scene A1.
  • the first imaging device 1 includes an optical member such as a lens, an image sensor, and the like.
  • a signal obtained by imaging by the first imaging device 1 is input to the camera signal processing model A2 as an imaging signal (RAW data).
  • The camera signal processing model A2 executes processing such as generation of RGB (Red, Green, Blue) images, white balance adjustment, sharpness adjustment, and contrast adjustment.
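  • As an illustration only (not part of the patent disclosure), the sketch below shows one minimal way such a camera signal processing chain could be arranged: a rough RGB generation step from Bayer RAW data followed by white balance and a contrast (gamma) adjustment. All gain and gamma values are hypothetical placeholders.

```python
import numpy as np

def demosaic_rough(raw: np.ndarray) -> np.ndarray:
    """Very rough RGB generation from an assumed RGGB Bayer RAW frame:
    average each 2x2 cell into one RGB pixel (half resolution)."""
    r = raw[0::2, 0::2].astype(np.float32)
    g = (raw[0::2, 1::2].astype(np.float32) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, g, b], axis=-1)

def white_balance(rgb: np.ndarray, gains=(1.8, 1.0, 1.5)) -> np.ndarray:
    """Per-channel gains; the values here are arbitrary placeholders."""
    return rgb * np.asarray(gains, dtype=np.float32)

def contrast(rgb: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Simple gamma curve standing in for contrast/tone adjustment."""
    norm = rgb / max(float(rgb.max()), 1e-6)
    return np.clip(norm, 0.0, 1.0) ** gamma

def camera_signal_processing(raw: np.ndarray) -> np.ndarray:
    """Toy stand-in for a camera signal processing model such as A2."""
    return contrast(white_balance(demosaic_rough(raw)))

if __name__ == "__main__":
    raw = np.random.default_rng(1).integers(0, 4096, size=(8, 8), dtype=np.uint16)
    print(camera_signal_processing(raw).shape)  # (4, 4, 3)
```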
  • the captured image data output from the camera signal processing model A2 is input to the recognition model A3. Captured image data output from the camera signal processing model A2 is referred to as a first image G1.
  • the recognition model A3 performs a process of identifying and labeling the imaged subject based on the input image data. That is, the recognition model A3 performs processing for detecting the subject captured in the image. Objects to be detected include, for example, pedestrians, traffic lights, signs, and vehicles. The detection result of the recognition model A3 is input to the subsequent integrated control model A4.
  • the integrated control model A4 controls the vehicle 100 using recognition results based on image data. This realizes a collision mitigation braking function, an ACC (Adaptive Cruise Control) function, and various warning functions.
  • environment setting B1, scenario generation B2 and rendering B3 are performed in order to simulate the actual driving environment scene A1.
  • In the environment setting B1, a data set simulating the environment is set. This data set includes shape information, color information, material information, spectral reflectance information, and the like of the polygons and other elements used in ray tracing in the subsequent rendering B3.
  • In the scenario generation B2, a travel plan that simulates the travel mode of the vehicle 100 is set.
  • the travel plan includes not only the route that the vehicle 100 travels, but also operations such as steering wheel operation and accelerator operation during travel. This can also be rephrased as the behavior of vehicle 100 based on the driving operation.
  • the driving plan generated in the scenario generation B2 is input to the subsequent rendering B3.
  • In the rendering B3, a 3DCG (3-Dimensional CG) model is generated by executing rendering processing based on ray tracing.
  • An output result of rendering processing is input to the camera model 2 .
  • The output data of the rendering process is a spectral radiance value for each pixel, developed in a two-dimensional array corresponding to the two-dimensional pixel array of the image sensor model provided in the camera model 2. Furthermore, the spectral radiance values are output at regular time intervals corresponding to the frame rate of the first imaging device 1. For example, when the imaging of the first imaging device 1 to be simulated by the camera model 2 is performed at 30 fps (frames per second), the spectral radiance values of each pixel, developed in a two-dimensional array, are output 30 times per second.
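  • For illustration only, the sketch below treats this rendering output as a per-pixel spectral radiance array emitted once per frame. The resolution, wavelength sampling, and random values are hypothetical assumptions, not values from the patent.

```python
import numpy as np

HEIGHT, WIDTH = 108, 192                   # assumed pixel array, downscaled for the sketch
WAVELENGTHS_NM = np.arange(400, 701, 10)   # assumed spectral sampling

def render_frame(rng: np.random.Generator) -> np.ndarray:
    """Stand-in for ray-tracing output: a spectral radiance value for every
    pixel and every sampled wavelength."""
    return rng.random((HEIGHT, WIDTH, len(WAVELENGTHS_NM)))

def frame_stream(fps: int = 30, seconds: int = 1):
    """Yield one spectral-radiance array per frame, fps frames per second."""
    rng = np.random.default_rng(0)
    for _ in range(fps * seconds):
        yield render_frame(rng)

if __name__ == "__main__":
    stream = frame_stream(fps=30, seconds=1)
    first = next(stream)
    count = 1 + sum(1 for _ in stream)
    print(count, first.shape)  # 30 frames of shape (108, 192, 31)
```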
  • the camera model 2 is a model for simulating the imaging function of the first imaging device 1 of the in-vehicle camera system S1, and as shown in FIG. 3, includes an optical model B7 and a sensor model B8.
  • The optical model B7 is a model imitating the optical members of the first imaging device 1, such as the various lens systems, the shutter mechanism, and the IR (Infrared) cut filter. It calculates the effect of these optical members on the spectral radiance values input to the camera model 2 and converts them into a spectral irradiance for each pixel.
  • the optical model B7 performs projection correction processing, aperture correction processing, shading correction processing, IR cut filter correction processing, and the like.
  • the spectral irradiance for each pixel is input to a sensor model B8 that imitates the image sensor provided in the first imaging device 1.
  • the sensor model B8 includes a filter model simulating various optical filters such as color filters provided in the image sensor, a pixel array model simulating light receiving operation and readout operation in the pixel unit, and the like.
  • In the sensor model B8, the effect of the color filters and the pixel unit on the spectral irradiance is calculated, and the result is output to the subsequent camera signal processing model B4 as RAW data for each pixel.
  • the sensor model B8 performs color filter correction processing, photoelectric conversion processing, AD (Analog to Digital) conversion processing, and the like.
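  • The following is a minimal, hypothetical sketch (not the patent's implementation) of how such an optical model and sensor model could be chained: the optical model scales per-pixel spectral radiance into spectral irradiance, and the sensor model integrates it through a color-filter response and quantizes the result into RAW codes. The f-number, transmittance, filter responses, and exposure are made-up placeholders.

```python
import numpy as np

WAVELENGTHS_NM = np.arange(400, 701, 10)

def optical_model(radiance: np.ndarray, f_number: float = 2.0,
                  transmittance: float = 0.95) -> np.ndarray:
    """Toy optical model: convert spectral radiance to spectral irradiance on
    the sensor plane by simple aperture/transmittance scaling (a real model
    would also apply projection, shading and IR-cut corrections)."""
    return radiance * transmittance * np.pi / (4.0 * f_number ** 2)

def sensor_model(irradiance: np.ndarray, exposure_s: float = 1 / 60,
                 full_scale: float = 1.0, bits: int = 12) -> np.ndarray:
    """Toy sensor model: apply a Bayer color-filter response per pixel,
    integrate over wavelength (photoelectric conversion), then AD-convert."""
    h, w, n = irradiance.shape
    # Made-up smooth filter responses for R, G and B over WAVELENGTHS_NM.
    resp = {
        "R": np.exp(-((WAVELENGTHS_NM - 610) / 40.0) ** 2),
        "G": np.exp(-((WAVELENGTHS_NM - 540) / 40.0) ** 2),
        "B": np.exp(-((WAVELENGTHS_NM - 460) / 40.0) ** 2),
    }
    bayer = np.empty((h, w, n))
    bayer[0::2, 0::2] = resp["R"]   # assumed RGGB mosaic
    bayer[0::2, 1::2] = resp["G"]
    bayer[1::2, 0::2] = resp["G"]
    bayer[1::2, 1::2] = resp["B"]
    d_lambda = float(WAVELENGTHS_NM[1] - WAVELENGTHS_NM[0])
    signal = (irradiance * bayer).sum(axis=2) * d_lambda * exposure_s
    codes = np.clip(signal / full_scale, 0.0, 1.0) * (2 ** bits - 1)
    return codes.astype(np.uint16)  # RAW data, one code per pixel

if __name__ == "__main__":
    radiance = np.random.default_rng(0).random((4, 4, len(WAVELENGTHS_NM)))
    raw = sensor_model(optical_model(radiance))
    print(raw.shape, raw.dtype)  # (4, 4) uint16
```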
  • the RAW data output from the first imaging device 1 of the in-vehicle camera system S1 and the RAW data output from the camera model 2 of the simulator system S2 have the same data format. Therefore, the camera signal processing model A2 of the vehicle-mounted camera system S1 to which RAW data is input and the camera signal processing model B4 of the simulator system S2 can be the same. Note that the captured image data output from the camera signal processing model B4 is referred to as a second image G2.
  • the recognition model A3 of the in-vehicle camera system S1 and the recognition model B5 of the simulator system S2 are the same, and the integrated control model A4 of the in-vehicle camera system S1 and the integrated control model B6 of the simulator system S2 are the same.
  • The flow from the environment setting B1 to the recognition model B5 in the simulator system S2 must appropriately simulate the flow from the actual driving environment scene A1 to the recognition model A3 in the in-vehicle camera system S1 (the broken-line portion in FIG. 1). Verifying this point is the consistency verification between the in-vehicle camera system S1 and the simulator system S2.
  • Consistency verification is performed using the RAW data output from the first imaging device 1 in the in-vehicle camera system S1, the RAW data output from the camera model 2 in the simulator system S2, the detection result output from the recognition model A3 in the in-vehicle camera system S1, and the detection result output from the recognition model B5 in the simulator system S2.
  • the device that performs each process for verifying consistency may be a computer device included in the simulator system S2, or may be a device separate from the computer device included in the simulator system S2.
  • the computer device is configured to include an arithmetic processing section including a CPU (Central Processing Unit) or the like in order to execute each process for consistency verification.
  • In step S101 of FIG. 4, the arithmetic processing unit of the computer device verifies whether or not the camera model 2 appropriately simulates the optical members and the image sensor of the first imaging device 1, which is an actual device. That is, the arithmetic processing unit verifies the consistency between the camera model 2 and the first imaging device 1.
  • The consistency verification between the camera model 2 and the first imaging device 1 determines whether or not the RAW data output from the camera model 2 and the RAW data output from the first imaging device 1 match. Note that it is difficult to make the RAW data output from the first imaging device 1, which is an actual device, completely match the RAW data output from the camera model 2 using 3DCG, so consistency is verified using statistical data on the RAW data. Specific details of the processing for verifying the consistency of the RAW data will be described later.
  • In step S102, the arithmetic processing unit performs determination processing based on the result of comparing the RAW data output from the camera model 2 with the RAW data output from the first imaging device 1. If the difference between the RAW data does not fall within a predetermined range, that is, if it is determined that the consistency verification fails, the arithmetic processing unit performs factor analysis and model correction processing in step S103.
  • In step S103, factor analysis is performed, along with processing such as presenting the result of the factor analysis to the operator and presenting the model to be corrected. A process of changing and overwriting the variables given to a model may also be performed in step S103. Alternatively, the arithmetic processing unit may present the data used for factor analysis to the operator in step S103, and the operator may perform the factor analysis based on the presented data.
  • After finishing the factor analysis and model correction processing, the arithmetic processing unit returns to step S101 and again verifies the consistency between the camera model 2 and the first imaging device 1.
  • If it is determined in step S102 that the consistency verification passes, that is, if it is determined that the difference between the RAW data is within the predetermined range, the arithmetic processing unit next performs consistency verification using the detection results output from the recognition model A3 of the in-vehicle camera system S1 and the recognition model B5 of the simulator system S2.
  • In step S104, the arithmetic processing unit performs calibration of the orientation of the first imaging device 1 and the orientation of the camera model 2.
  • the posture of the first imaging device 1 is calibrated so that the first imaging device 1 captures an image in a predetermined direction.
  • the optical axis direction set in the camera model 2 is changed so as to match the optical axis direction of the first imaging device 1 . Specific processing contents of step S104 will be described later.
  • In step S105, the arithmetic processing unit verifies the consistency of the camera system in the driving environment.
  • the matching verification of the camera system is to verify the matching between the dashed line portion shown in FIG. 1 in the in-vehicle camera system S1 and the dashed line portion shown in FIG. 3 in the simulator system S2. Specifically, a process of confirming consistency between the subject detection result output from the recognition model A3 of the in-vehicle camera system S1 and the subject detection result output from the recognition model B5 of the simulator system S2 is executed.
  • the detection result is, for example, data in which information specifying the pixel area of the first image G1 input to the recognition model A3 and label information that is the result of classifying the subject imaged there are linked. Specifically, it is assumed that a label "car" is associated with a certain predetermined pixel area.
  • the detection result may include a plurality of pairs of pixel area information and label information for one first image G1.
  • Similarly, for the simulator system S2, the detection result is data in which information specifying a pixel area of the second image G2 input to the recognition model B5 is linked with label information about the imaged subject.
  • the detection result may contain information other than the label.
  • For example, the detection result may contain brightness information of pixels or pixel regions. Specifically, information specifying a certain pixel area, label information specifying the subject imaged in that area, and brightness information about the subject, such as the maximum value, minimum value, average value, and variance of the brightness, may be linked together. If the subjects imaged in corresponding regions of the first image G1 and the second image G2 are classified into the same category and the brightness of the imaged subjects shows a similar tendency, the detection results obtained by the simulator system S2 can be used to verify the control algorithm in the same way as with the actual equipment.
  • Conversely, if the subjects are classified into different categories or if the brightness tendencies differ significantly, it can be judged that the detection results obtained by the simulator system S2 cannot be used to verify the control algorithm in the same way as with the actual machine.
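  • As an illustration of this detection-result comparison, the sketch below defines a hypothetical detection record (pixel area, label, brightness statistics) and a check that the real and simulated results agree; the field names and tolerance values are assumptions, not the patent's data format.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # (x, y, width, height) of the pixel area, a class label, and brightness
    # statistics of the subject imaged in that area.
    box: tuple
    label: str
    mean_brightness: float
    var_brightness: float

def results_consistent(real: Detection, sim: Detection,
                       mean_tol: float = 10.0, var_tol: float = 50.0) -> bool:
    """True if the same category was detected and the brightness of the
    imaged subject shows a similar tendency (tolerances are placeholders)."""
    return (real.label == sim.label
            and abs(real.mean_brightness - sim.mean_brightness) <= mean_tol
            and abs(real.var_brightness - sim.var_brightness) <= var_tol)

if __name__ == "__main__":
    from_real = Detection((120, 80, 64, 48), "car", 142.0, 210.0)
    from_sim = Detection((118, 82, 66, 46), "car", 138.5, 198.0)
    print(results_consistent(from_real, from_sim))  # True
```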
  • In step S106, the arithmetic processing unit performs determination processing based on the result of comparing the detection result output from the recognition model A3 with the detection result output from the recognition model B5. If the two detection results differ significantly, that is, if it is determined that the consistency verification fails, the arithmetic processing unit performs factor analysis and model correction processing in step S107. However, since the consistency between the camera model 2 and the first imaging device 1 has already been ensured by the processes of steps S101 and S102, the factor analysis and model correction in step S107 target the parts other than those indicated by the dashed lines in FIG. 1 and FIG. 3.
  • In step S107, similarly to the process of step S103, factor analysis, a process of presenting the factor analysis result to the operator, and a process of presenting the model to be corrected may be performed; the variables given to a model may be changed and overwritten; or the data used for factor analysis may be presented to the operator to prompt the operator to perform the factor analysis work.
  • After completing the factor analysis and model correction processing in step S107, the arithmetic processing unit returns to step S105 and again verifies the consistency of the camera system in the driving environment.
  • An example in which the arithmetic processing unit described above executes each process shown in FIG. 5 will be described, but the present invention is not limited to this; each process shown in FIG. 5 may be executed by a different processing device, or by a system configured to include other devices.
  • In step S201, the arithmetic processing unit determines whether or not an instruction to start imaging has been received.
  • The instruction to start imaging may be input by an operator to the computer device having the arithmetic processing unit, or may be input from the first imaging device 1 when the first imaging device 1 detects that the subject to be imaged is positioned at a predetermined position within its angle of view.
  • the arithmetic processing unit repeatedly executes the processing of step S201 until an instruction to start shooting is received.
  • the instruction to start photographing is given by the operator, for example, when predetermined conditions are met. Satisfying a predetermined condition means, for example, completion of installation of a subject to be measured within the angle of view of the first imaging device 1 .
  • FIG. 6 shows a state in which the luminance box 3 is arranged within the angle of view of the first imaging device 1. In this state, it is possible to image the luminance box 3 as the object to be measured. After installing the luminance box 3 at a predetermined position in front of the first imaging device 1, the operator instructs the arithmetic processing unit to start imaging.
  • the luminance box 3 has a luminous body disposed therein, and one surface is formed as a diffusion surface 3a for transmitting and diffusing light emitted from the luminous body disposed therein.
  • The luminance box 3 is arranged so that the diffusion surface 3a faces the first imaging device 1.
  • the arithmetic processing unit transmits an imaging operation instruction to the first imaging device 1 in step S202. Thereby, the imaging operation in the first imaging device 1 is executed.
  • the arithmetic processing unit acquires the RAW data output from the first imaging device 1 in step S203.
  • a predetermined area is set as the verification area Ar.
  • In step S204, the arithmetic processing unit performs statistical processing for each verification area Ar set as a predetermined area in the RAW data.
  • the RAW data is data corresponding to the two-dimensional arrangement of pixels formed in the image sensor of the first imaging device 1 . That is, as shown in FIG. 7, RAW data can be regarded as two-dimensional data.
  • In this two-dimensional data, a verification area Ar1 is set in the center, a verification area Ar2 is set near the upper left corner, a verification area Ar3 is set near the upper right corner, a verification area Ar4 is set near the lower left corner, and a verification area Ar5 is set near the lower right corner.
  • each verification area Ar is, for example, 100 pixels vertically and 100 pixels horizontally.
  • The RAW data includes data Dr indicating the intensity of the red light component, data Dg indicating the intensity of the green light component, and data Db indicating the intensity of the blue light component of the light received at each pixel.
  • That is, the RAW data is two-dimensional data in which the data Dr, Dg, and Db are developed in a predetermined pattern as shown in FIG. 8. Therefore, the verification areas Ar1 to Ar5 are also two-dimensional data in which the data Dr, Dg, and Db are developed vertically and horizontally.
  • For example, the mean value and variance of the data Dr included in the verification area Ar1 are calculated, the mean value and variance of the data Dg included in the verification area Ar1 are calculated, and the mean value and variance of the data Db included in the verification area Ar1 are calculated; the same statistical processing is performed for each of the other verification areas Ar.
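  • A minimal sketch of this statistical processing is shown below, assuming an RGGB Bayer layout, 100 x 100 pixel verification areas, and the five area positions described above; the resolution and the random RAW values are assumptions used only for illustration.

```python
import numpy as np

AREA = 100  # verification area size in pixels (100 x 100, as in the text)

def area_slices(h: int, w: int, size: int = AREA):
    """Center and four corners, as with verification areas Ar1 to Ar5."""
    return {
        "Ar1": (slice((h - size) // 2, (h + size) // 2),
                slice((w - size) // 2, (w + size) // 2)),
        "Ar2": (slice(0, size), slice(0, size)),              # upper left
        "Ar3": (slice(0, size), slice(w - size, w)),          # upper right
        "Ar4": (slice(h - size, h), slice(0, size)),          # lower left
        "Ar5": (slice(h - size, h), slice(w - size, w)),      # lower right
    }

def channel_stats(raw_area: np.ndarray) -> dict:
    """Mean and variance of Dr, Dg, Db inside one verification area,
    assuming an RGGB mosaic (six statistics per area)."""
    dr = raw_area[0::2, 0::2]
    dg = np.concatenate([raw_area[0::2, 1::2].ravel(),
                         raw_area[1::2, 0::2].ravel()])
    db = raw_area[1::2, 1::2]
    return {"Dr": (dr.mean(), dr.var()),
            "Dg": (dg.mean(), dg.var()),
            "Db": (db.mean(), db.var())}

if __name__ == "__main__":
    raw = np.random.default_rng(0).integers(0, 4096, (1080, 1920)).astype(float)
    stats = {name: channel_stats(raw[sl])
             for name, sl in area_slices(*raw.shape).items()}
    print(stats["Ar1"]["Dr"])  # (mean, variance) of Dr in the central area
```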
  • the arithmetic processing unit completes acquisition of data used for matching verification with respect to the first imaging device 1 as a real device.
  • the arithmetic processing unit executes the processes from step S205 onward, thereby starting acquisition of data used for consistency verification for the camera model 2 of the simulator system S2. Specifically, in step S205, the arithmetic processing unit determines whether or not preparation for measurement has been completed.
  • The state in which the measurement preparation is completed is, for example, a state in which the spectral radiance meter 4 is installed at a position facing the diffusion surface 3a of the luminance box 3, as shown in FIG. 9. That is, it is determined that the preparation for measurement is completed when the spectral radiance value of a specific region can be measured using the spectral radiance meter 4.
  • the process of step S205 is repeatedly executed until the preparation for measurement is completed.
  • the arithmetic processing unit acquires the measured spectral radiance value in step S206. Note that the spectral radiance meter 4 may be instructed to start measurement before step S206.
  • The spectral radiance meter 4 does not measure the spectral radiance value of the entire luminance box 3, but measures the spectral radiance value of a narrow area subtending a minute solid angle. Therefore, in order to obtain spectral radiance values for comparison with the five verification areas Ar in FIG. 7, it is necessary to measure a spectral radiance value for each of the measurement areas Br set on the diffusion surface 3a.
  • The arithmetic processing unit waits in step S205 until the optical axis of the spectral radiance meter 4 is aligned with the center of the measurement area Br1, and then performs the processing of step S206 in that state. Then, in response to the operator aligning the optical axis of the spectral radiance meter 4 with the center of the measurement area Br2, the arithmetic processing unit performs the processing of step S206 again. By repeating the processing of steps S205 and S206 in this way, a spectral radiance value is obtained for each measurement area Br.
  • Each measurement area Br is provided at a position corresponding to a verification area Ar. That is, the light passing through the central portion of the measurement area Br1 on the diffusion surface 3a affects the RAW data located in the central portion of the verification area Ar1 in the RAW data shown in FIG. 7, and likewise affects the spectral radiance value measured for the measurement area Br1.
  • In step S207 of FIG. 5, the arithmetic processing unit generates input data to the camera model 2 using the acquired spectral radiance values.
  • The generated data is two-dimensional data corresponding to the pixel array of the sensor model B8 included in the camera model 2. An example is shown in FIG. 10.
  • the obtained spectral radiance values are arranged in corresponding areas Cr1 to Cr5 corresponding to the respective measurement areas Br.
  • Specifically, the spectral radiance values obtained from the measurement area Br1 are arranged in the corresponding area Cr1, the values obtained from the measurement area Br2 in the corresponding area Cr2, the values obtained from the measurement area Br3 in the corresponding area Cr3, the values obtained from the measurement area Br4 in the corresponding area Cr4, and the values obtained from the measurement area Br5 in the corresponding area Cr5.
  • dummy data is used for areas other than the corresponding area Cr in the input data using the spectral radiance values.
  • the area where dummy data is placed is an area that is not used for consistency verification.
  • the dummy data may be zero values or other values.
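  • A hypothetical sketch of this input-data generation is shown below: the spectral radiance measured for each measurement area Br is copied into the corresponding area Cr of a two-dimensional array, and dummy data (zeros here) fills the rest. The resolution, area size, area positions, and wavelength count are assumptions chosen only for illustration.

```python
import numpy as np

H, W, N_WAVELENGTHS = 270, 480, 31  # assumed pixel array (downscaled for the sketch)
AREA = 50                           # assumed corresponding-area size

CORRESPONDING_AREAS = {             # (top, left) of Cr1..Cr5, assumed positions
    "Cr1": ((H - AREA) // 2, (W - AREA) // 2),
    "Cr2": (0, 0),
    "Cr3": (0, W - AREA),
    "Cr4": (H - AREA, 0),
    "Cr5": (H - AREA, W - AREA),
}

def build_input(measured: dict, dummy_value: float = 0.0) -> np.ndarray:
    """measured maps 'Cr1'..'Cr5' to a spectral radiance vector obtained from
    the matching measurement area Br1..Br5; everything else is dummy data."""
    data = np.full((H, W, N_WAVELENGTHS), dummy_value)
    for name, (top, left) in CORRESPONDING_AREAS.items():
        data[top:top + AREA, left:left + AREA, :] = measured[name]
    return data

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    measured = {name: rng.random(N_WAVELENGTHS) for name in CORRESPONDING_AREAS}
    x = build_input(measured)
    print(x.shape, x[0, W // 2, 0], x[H // 2, W // 2, 0])  # dummy vs measured value
```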
  • The arithmetic processing unit acquires the RAW data output from the camera model 2 in step S208 of FIG. 5.
  • The spectral radiance values input to the camera model 2 are subjected to calculation by the optical model B7 and converted into spectral irradiance, and calculation by the sensor model B8 is further applied to obtain the RAW data.
  • a predetermined area in the RAW data is provided as a verification area Ar'.
  • the area of the RAW data output from the pixel area to which the spectral radiance values arranged in the corresponding area Cr1 are input is set as the verification area Ar'1.
  • Similarly, the area of the RAW data output from the pixel area to which the spectral radiance values arranged in the corresponding area Cr2 are input is set as the verification area Ar'2, the area output from the pixel area to which the values arranged in the corresponding area Cr3 are input is set as the verification area Ar'3, the area output from the pixel area to which the values arranged in the corresponding area Cr4 are input is set as the verification area Ar'4, and the area output from the pixel area to which the values arranged in the corresponding area Cr5 are input is set as the verification area Ar'5.
  • In step S209 of FIG. 5, the arithmetic processing unit performs statistical processing for each verification area Ar'. For example, the average value, variance, and the like are calculated for each of the verification areas Ar'1 to Ar'5.
  • The size of the verification area Ar'1 is the same as the size of the verification area Ar in FIG. 7.
  • the RAW data output from the sensor model B8 is two-dimensional data in which the data Dr, Dg, and Db are expanded in the same manner as the RAW data output from the first imaging device 1 (see FIG. 8).
  • In step S209, average values and variances are calculated for each of the data Dr, Dg, and Db for one verification area Ar'. That is, when the average value and variance are calculated, six statistical values are obtained for one verification area Ar'.
  • In step S210, the arithmetic processing unit performs comparison processing between the verification areas Ar and the verification areas Ar'. Specifically, the statistical data calculated in step S204 for the verification area Ar1 is compared with the statistical data calculated in step S209 for the corresponding verification area Ar'1, and likewise for the other areas.
  • If the differences between the statistical data are small, it can be determined that the camera model 2 appropriately simulates the first imaging device 1.
  • In step S102, the arithmetic processing unit performs branch processing based on the comparison result. Specifically, if it is determined as a result of the comparison processing in step S210 that the differences between all the statistical data are small, it is determined that the consistency between the first imaging device 1 and the camera model 2 is ensured, the consistency verification is regarded as passed, and the series of processes shown in FIG. 5 is terminated.
  • Otherwise, factor analysis and various model correction processes are performed in step S103, and the process returns to step S201 again. That is, the series of processes shown in FIG. 5 is repeated until the consistency between the first imaging device 1 and the camera model 2 can be ensured.
  • the threshold used in the branching process in step S102 may be different for each type of statistical data. Specifically, the threshold used when comparing mean values and the threshold used when comparing variances may be different values.
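  • A minimal sketch of this branch decision, assuming separate thresholds per type of statistic, is shown below; the threshold values and the statistics dictionaries are placeholders, not values from the patent.

```python
def stats_match(stats_real: dict, stats_model: dict,
                mean_threshold: float = 5.0,
                var_threshold: float = 20.0) -> bool:
    """stats_* map channel names ('Dr', 'Dg', 'Db') to (mean, variance) from a
    verification area Ar and its counterpart Ar'. Returns True only if every
    difference is within its threshold (separate thresholds per statistic)."""
    for channel, (mean_r, var_r) in stats_real.items():
        mean_m, var_m = stats_model[channel]
        if abs(mean_r - mean_m) > mean_threshold:
            return False
        if abs(var_r - var_m) > var_threshold:
            return False
    return True

if __name__ == "__main__":
    ar1 = {"Dr": (812.0, 130.0), "Dg": (1530.0, 150.0), "Db": (640.0, 110.0)}
    ar1p = {"Dr": (814.5, 141.0), "Dg": (1528.0, 162.0), "Db": (642.0, 118.0)}
    print(stats_match(ar1, ar1p))  # True: this area passes the verification
```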
  • In the above example, the luminance box 3 was used as the object to be measured, but other objects may be used.
  • a color chart 5 as shown in FIG. 12 or a gray chart 6 as shown in FIG. 13 may be used instead of using the luminance box 3 as the object to be measured.
  • In that case, a light source 7 for irradiating the color chart 5 or the gray chart 6 with light may be arranged as shown in FIG. 14. That is, the light emitted from the light source 7 may be reflected by the color chart 5 or the gray chart 6 and input to the first imaging device 1 or the spectral radiance meter 4.
  • A specific example of processing for calibrating the camera posture will be described with reference to FIGS. 15 and 16; the two flowcharts are connected at the portion shown as connector C1.
  • The arithmetic processing unit that executes each process shown in FIGS. 15 and 16 may be the same as the arithmetic processing unit that executes each process shown in FIG. 4, or may be a different processing device.
  • First, the operator installs an index A and an index B at predetermined positions in front of the vehicle 100. Specifically, as shown in FIG. 17, the index A is installed so that its horizontal distance from the first imaging device 1 mounted on the vehicle 100 is a distance L1 and its height from the ground matches the height Hc of the first imaging device 1. The operator then installs the index B so that its horizontal distance from the first imaging device 1 is a distance L2 longer than the distance L1 and its height from the ground is the height Hc.
  • In step S301 of FIG. 15, the arithmetic processing unit adjusts the yaw direction and pitch direction of the first imaging device 1 so that the center of the index A, the center of the index B, and the center of the angle of view overlap.
  • This adjustment is realized, for example, by the arithmetic processing unit transmitting a control signal for adjusting the imaging direction of the first imaging device 1 to the first imaging device 1 .
  • Alternatively, instead of transmitting a control signal to the first imaging device 1, the arithmetic processing unit may transmit a control signal to a device that holds the first imaging device 1 in a state in which its imaging direction can be adjusted, or to a control device that controls the imaging direction of the first imaging device 1. The same applies to the other control signals described later.
  • By this adjustment, the optical axis of the first imaging device 1 can be made horizontal.
  • the operator places the index C at a predetermined position. Specifically, as shown in FIG. 18, an index C is installed at a position moved in parallel in the vehicle width direction of the vehicle 100 from the position where the index A was installed. At this time, the height of the center of the index C is made to be the height Hc.
  • In step S302 of FIG. 15, the arithmetic processing unit transmits a control signal for adjusting the roll direction of the first imaging device 1 so that the center of the index C is positioned on the horizontal line passing through the center of the angle of view of the first imaging device 1. Specifically, as shown in FIG. 19, the roll direction is adjusted so that the center of the index A is imaged at the center of the angle of view of the first imaging device 1 and the center of the index C is imaged at the intermediate position in the vertical direction of the angle of view.
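  • For illustration only: if the center of the index C appears at pixel (u, v) while the image center is (cx, cy), the residual roll can be estimated from the vertical offset of C relative to its horizontal offset under an assumed simple pinhole model. The pixel coordinates below are hypothetical.

```python
import math

def roll_error_deg(u: float, v: float, cx: float, cy: float) -> float:
    """Angle between the image horizontal and the line from the image center
    to the imaged center of index C; ideally 0 after roll calibration."""
    return math.degrees(math.atan2(v - cy, u - cx))

if __name__ == "__main__":
    # Index C imaged 400 px to the right of the image center and 6 px below it.
    print(round(roll_error_deg(1360.0, 546.0, 960.0, 540.0), 2))  # ~0.86 deg
```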
  • In step S303 of FIG. 15, the arithmetic processing unit determines whether or not to give the posture of the first imaging device 1 a depression angle. Whether or not to provide a depression angle may be set by the operator, or may be set automatically according to the subjects, such as pedestrians and preceding vehicles, to be imaged within the angle of view of the first imaging device 1.
  • When providing the depression angle, the operator sets an index D at a predetermined position, for example, as shown in FIG. 20.
  • the index D has a distance L3 from the first imaging device 1 and a height Hd at the center position. Any value may be set for the distance L3 and the height Hd.
  • In step S304 of FIG. 15, the arithmetic processing unit transmits a control signal for adjusting the pitch direction of the first imaging device 1 so that the center of the index D is positioned on the horizontal line passing through the center of the angle of view. At this time, no adjustment is made in the yaw direction or the roll direction. Thereby, the imaging direction of the first imaging device 1 can be adjusted without changing the already set yaw direction and roll direction.
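  • Also for illustration only: under the simple geometry described above (camera height Hc, index D at distance L3 with center height Hd), pointing the optical axis at the center of the index D corresponds to a depression angle of atan((Hc - Hd) / L3). The numerical values below are hypothetical.

```python
import math

def depression_angle_deg(hc_m: float, hd_m: float, l3_m: float) -> float:
    """Depression angle that places the center of index D at the vertical
    middle of the angle of view (simple pinhole geometry, no lens distortion)."""
    return math.degrees(math.atan2(hc_m - hd_m, l3_m))

if __name__ == "__main__":
    print(round(depression_angle_deg(hc_m=1.4, hd_m=0.8, l3_m=10.0), 2))  # 3.43 deg
```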
  • the arithmetic processing unit performs the processing from step S305 onward to calibrate the camera model 2 for the simulator system S2. Specifically, in step S305 of FIG. 15, the arithmetic processing unit sets the index A' and the index B' at predetermined positions on the environment map of the simulator system S2.
  • The set positions of the index A' and the index B' are based on the positional relationships of the index A and the index B with respect to the first imaging device 1. That is, the positional relationship between the camera model 2 and the index A' is made the same as the positional relationship between the first imaging device 1 and the index A, and the positional relationship between the camera model 2 and the index B' is made the same as the positional relationship between the first imaging device 1 and the index B.
  • the index A' and the index B' are set so that the distances from the position of the vehicle on the environment map are different.
  • In step S306 of FIG. 15, the arithmetic processing unit adjusts the yaw direction and pitch direction of the camera model 2 so that the center of the index A', the center of the index B', and the center of the angle of view of the camera model 2 overlap.
  • In step S307 of FIG. 16, the arithmetic processing unit sets an index C' at a predetermined position on the environment map of the simulator system S2. Specifically, the center position of the index C' is set so that the positional relationship between the index A' and the index C' matches the positional relationship between the index A and the index C.
  • In step S308, the arithmetic processing unit adjusts the roll direction of the camera model 2 so that the center of the index C' is positioned on the horizontal line passing through the center of the angle of view of the camera model 2.
  • In step S309, the arithmetic processing unit determines whether or not to give the posture of the camera model 2 a depression angle.
  • If a depression angle has been provided for the posture of the first imaging device 1, a depression angle is also provided for the posture of the camera model 2 in order to simulate a similar state.
  • the arithmetic processing unit sets an index D' at a predetermined position on the environment map of the simulator system S2 in step S310.
  • the position and height of the index D' with respect to the camera model 2 are made to match the position and height of the index D with respect to the first imaging device 1 .
  • In step S311, the arithmetic processing unit adjusts the pitch direction of the camera model 2 so that the center of the index D' is positioned on the horizontal line passing through the center of the angle of view of the camera model 2.
  • no adjustments are made in the yaw or roll direction.
  • the imaging direction of the camera model 2 can be adjusted without changing the already set yaw direction and roll direction.
  • the camera model 2 that appropriately simulates the first imaging device 1 can be prepared.
  • In the above description, the first imaging device 1 has a Bayer-array color filter (see FIG. 21), and the camera model 2 has a sensor model B8 imitating that color filter.
  • However, the filter provided in the first imaging device 1 may be one other than the Bayer array.
  • For example, the first imaging device 1 may be provided with an RCCB-array filter (see FIG. 22) in which filters corresponding to R (Red), C (Clear), and B (Blue) are arranged, an RGBIR-array filter (see FIG. 23) in which filters corresponding to R (Red), G (Green), B (Blue), and IR (Infrared) are arranged, or a complementary color filter (see FIG. 24) in which filters corresponding to Cy (Cyan), Ye (Yellow), G (Green), and Mg (Magenta) are arranged.
  • In these cases, the sensor model B8 of the camera model 2 is also provided with an RCCB-array filter model, an RGBIR-array filter model, or a complementary color filter model, respectively.
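  • For illustration, the sketch below shows one way a filter model could describe the 2x2 repeating unit of each array mentioned above; the exact unit-cell ordering used by a given sensor is an assumption.

```python
# Hypothetical 2x2 unit cells for the filter arrays mentioned above.
FILTER_PATTERNS = {
    "bayer":      [["R",  "G"],  ["G",  "B"]],
    "rccb":       [["R",  "C"],  ["C",  "B"]],
    "rgbir":      [["R",  "G"],  ["IR", "B"]],
    "complement": [["Cy", "Ye"], ["G",  "Mg"]],
}

def filter_at(pattern: str, row: int, col: int) -> str:
    """Filter type covering the pixel at (row, col) for a repeating 2x2 cell."""
    cell = FILTER_PATTERNS[pattern]
    return cell[row % 2][col % 2]

if __name__ == "__main__":
    print(filter_at("rccb", 3, 2))   # 'C'
    print(filter_at("rgbir", 1, 0))  # 'IR'
```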
  • In FIGS. 7 and 9 to 11, five verification areas Ar, measurement areas Br, corresponding areas Cr, and verification areas Ar' are set at predetermined positions in the angle of view and in the image. However, this is only an example, and other setting examples are also conceivable. For example, if it is not necessary to check the consistency of the peripheral portions of the angle of view or the image, only one predetermined area may be set in the center of each. Specifically, only the verification area Ar1 may be set as the verification area Ar, only the measurement area Br1 as the measurement area Br, only the corresponding area Cr1 as the corresponding area Cr, and only the verification area Ar'1 as the verification area Ar'.
  • Alternatively, each predetermined area may be set only in the areas near the corners.
  • six or more predetermined areas may be set.
  • the angle of view may be vertically divided into three and horizontally divided into three, for a total of nine divisions, and the verification area Ar may be provided in the central portion of each area.
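  • As a small sketch of the three-by-three alternative just described, the function below divides the image into nine regions and centers a verification area in each; the area size is a placeholder.

```python
def nine_verification_areas(height: int, width: int, size: int = 100):
    """Return (top, left) of a size x size verification area centered in each
    of the nine regions of a 3 x 3 division of the image."""
    areas = {}
    for i in range(3):          # vertical third
        for j in range(3):      # horizontal third
            cy = (2 * i + 1) * height // 6
            cx = (2 * j + 1) * width // 6
            areas[f"Ar{3 * i + j + 1}"] = (cy - size // 2, cx - size // 2)
    return areas

if __name__ == "__main__":
    print(nine_verification_areas(1080, 1920)["Ar5"])  # center region: (490, 910)
```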
  • <Computer device> The configuration of a computer device including the arithmetic processing unit that executes each process shown in FIGS. 4, 5, 15, and 16 will be described with reference to FIG. 25.
  • The CPU 71 of the computer device functions as the arithmetic processing unit that performs the various processes described above, and executes various processes according to programs stored in the ROM 72 or in a non-volatile memory unit 74 such as an EEP-ROM (Electrically Erasable Programmable Read-Only Memory), or according to programs loaded from the storage unit 79 into the RAM 73.
  • the RAM 73 also appropriately stores data necessary for the CPU 71 to execute various processes.
  • the CPU 71 , ROM 72 , RAM 73 and nonvolatile memory section 74 are interconnected via a bus 83 .
  • An input/output interface (I/F) 75 is also connected to this bus 83 .
  • the input/output interface 75 is connected to an input section 76 including operators and operating devices.
  • various operators and operation devices such as a keyboard, mouse, key, dial, touch panel, touch pad, remote controller, etc. are assumed.
  • a user's operation is detected by the input unit 76 , and a signal corresponding to the input operation is interpreted by the CPU 71 .
  • the input/output interface 75 is connected integrally or separately with a display unit 77 such as an LCD or an organic EL panel, and an audio output unit 78 such as a speaker.
  • the display unit 77 is a display unit that performs various displays, and is configured by, for example, a display device provided in the housing of the computer device, a separate display device connected to the computer device, or the like.
  • The display unit 77 displays images for various types of image processing, moving images to be processed, and the like on the display screen based on instructions from the CPU 71. Further, the display unit 77 displays various operation menus, icons, messages, and the like, that is, a GUI (Graphical User Interface), based on instructions from the CPU 71.
  • the input/output interface 75 may be connected to a storage unit 79 made up of a hard disk, solid-state memory, etc., and a communication unit 80 made up of a modem or the like.
  • the communication unit 80 performs communication processing via a transmission line such as the Internet, wired/wireless communication with various devices, bus communication, and the like.
  • a drive 81 is also connected to the input/output interface 75 as required, and a removable storage medium 82 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is appropriately mounted.
  • Data files such as programs used for each process can be read from the removable storage medium 82 by the drive 81 .
  • the read data file is stored in the storage unit 79 , and the image and sound contained in the data file are output by the display unit 77 and the sound output unit 78 .
  • Computer programs and the like read from the removable storage medium 82 are installed in the storage unit 79 as required.
  • software for the processing of this embodiment can be installed via network communication by the communication unit 80 or via the removable storage medium 82 .
  • the software may be stored in advance in the ROM 72, the storage unit 79, or the like.
  • the CPU 71 performs processing operations based on various programs, thereby executing necessary information processing and communication processing as an information processing apparatus having the arithmetic processing unit described above.
  • The information processing apparatus is not limited to being composed of a single computer device as shown in FIG. 25, and may be configured by systematizing a plurality of computer devices.
  • the plurality of computer devices may be systematized by a LAN (Local Area Network) or the like, or may be remotely located by a VPN (Virtual Private Network) or the like using the Internet or the like.
  • the plurality of computing devices may include computing devices as a group of servers (cloud) available through a cloud computing service.
  • As described above, the information processing method of the present technology verifies the consistency between the first imaging device 1 and the camera model 2 by comparing the characteristics of the first imaging device 1 and the characteristics of the camera model 2, using the first image G1 output from the first imaging device 1 that captures an image of a specific subject (such as the above-described luminance box, charts, a subject in an actual driving environment scene, or a preceding vehicle) and the second image G2 output from the camera model 2 to which two-dimensional input data (for example, spectral radiance values) based on measurement results of measuring light from the specific subject is input.
  • The comparison of the characteristics of the first imaging device 1 and the camera model 2 may be performed, for example, by comparing the first image G1 with the second image G2, or by comparing the recognition result (label information) obtained by applying image recognition processing to the first image G1 with the recognition result obtained by applying image recognition processing to the second image G2.
  • the data input to the camera model 2 is based on the measurement results (spectral radiance value, spectral irradiance, etc.) of measuring the light from the subject.
  • Since the first image G1 is obtained from an actual device and the second image G2 is obtained from a model, a perfect match is difficult. Therefore, the statistical data obtained from the first image G1 and similar statistical data obtained from the second image G2 may be compared, and if the difference is small, it may be determined that the consistency has been ensured. As a result, consistency verification can be performed easily, and various costs can be reduced.
  • the recognition result obtained by applying image recognition processing to the first image G1 may be compared with the recognition result for the second image G2. Even if the first image G1 and the second image G2 are slightly different, if the recognition results are the same, the subsequent control can be performed correctly.
  • automatic driving control and driving support control can be performed appropriately if the matching between the first imaging device 1 and the camera model 2 can be ensured to the extent that the same recognition result can be output. Ensuring the matching between the first imaging device and the camera model to the extent that the same recognition result can be output eliminates the need for excessive matching between the first imaging device and the camera model. Therefore, the camera model can be verified efficiently.
  • the input data input to the camera model 2 may be spectral radiance values of light from a specific subject.
  • the camera model 2 can output a signal similar to that of the first imaging device 1 that actually exists. As a result, consistency verification can be performed appropriately.
  • the camera model may have an optical model B7 and an image sensor model (sensor model B8).
  • the optical model B7 is a model imitating various lenses, projection correction, aperture correction, shading correction, IR cut filter, and the like provided in the first imaging device 1 .
  • the image sensor model is a model imitating a color filter provided in each pixel, photoelectric conversion processing, AD (Analog to Digital) conversion processing, and the like.
  • By configuring each model to include the various corrections and processes incorporated in the first imaging device 1, which is an actual imaging device, a camera model 2 that appropriately imitates the first imaging device 1 can be constructed. Further, when some of these elements are changed in the actual first imaging device 1, the required changes to the camera model 2 can be made easily.
  • In the information processing method described above, the measurement result of measuring the light from the subject may be the measurement result of a predetermined area (the area corresponding to the verification area Ar) set within the angle of view of the first imaging device 1, and the two-dimensional input data may have that measurement result arranged in an area (corresponding area Cr) corresponding to the predetermined area.
  • In this case, consistency may be verified by comparing the pixel region (verification area Ar) corresponding to the predetermined region in the first image G1 with the region (verification area Ar') corresponding to that pixel region in the second image G2. Consistency can be verified simply by performing the comparison using data (statistical data) output from only a part of the pixel regions. Therefore, the amount of calculation required for consistency verification can be reduced.
  • The two-dimensional input data (input data using spectral radiance values) may have dummy data arranged in areas other than the area (corresponding area Cr) corresponding to the predetermined area.
  • The image sensor model (sensor model B8) may have a filter model, and consistency verification may be performed by performing statistical processing for each pixel group of each filter type in the second image G2.
  • In this way, consistency verification is performed in accordance with the actual configuration of the first imaging device 1, which has color filters and the like. Therefore, consistency verification can be performed appropriately.
  • Consistency verification may be performed by calculating the average value and variance for each pixel group (that is, for each of the data Dr, Dg, and Db) through statistical processing, and comparing the average values and variances with respective thresholds.
  • Consistency can be verified with simple processing by using the average value, variance, and the like. Therefore, the amount of calculation needed to realize consistency verification can be reduced. In addition, using the average value, variance, and the like can prevent erroneous determination due to noise and the like.
  • the filter model may be a model that employs at least one of the Bayer array, RCCB array, RGBIR array, and array using complementary color filters.
  • The information processing method of the present technology also includes preprocessing for verifying the consistency between the first imaging device 1 as an in-vehicle camera and the camera model 2 as a simulated camera.
  • In this preprocessing, a computer device (for example, an information processing device having an arithmetic processing unit such as the CPU 71) performs calibration by adjusting the yaw direction and pitch direction of the first imaging device 1 so that the centers of the first targets (indices A and B), installed at at least two locations in front of the first imaging device 1 and at the same height (height Hc) as the first imaging device 1, coincide with the center of the angle of view.
  • By performing such calibration, the camera model 2 can be used to properly test the in-vehicle camera, and various costs can thereby be reduced.
  • Calibration may also be performed by adjusting the roll direction of the first imaging device 1 so that the center of a second target (index C), installed at a position offset in the horizontal direction with respect to the optical axis of the first imaging device 1 and at the same height as the first imaging device 1, coincides with the middle position in the vertical direction of the angle of view.
  • This makes it possible to calibrate the roll direction with a simple method. Therefore, the accuracy of consistency verification can be improved.
  • Furthermore, the depression angle may be adjusted as calibration by adjusting the pitch direction so that the center of a third target (index D) installed in front of the first imaging device 1 coincides with the vertical intermediate position of the angle of view.
  • By setting the optical axis direction of the camera model 2 according to the optical axis direction of the first imaging device 1 calibrated in this way, a camera model 2 that appropriately simulates the first imaging device 1 set in various orientations can be provided. Therefore, the simulation for the first imaging device 1 can be performed properly.
  • the optical axis direction of the camera model 2 may be adjusted based on the optical axis direction of the first imaging device 1 adjusted by calibration. Thereby, the optical axis direction of the first imaging device 1 and the optical axis direction of the camera model 2 can be matched. Therefore, it is possible to prepare a camera model 2 that appropriately simulates the first imaging device 1, and it is possible to perform a simulation using the camera model 2 instead of the first imaging device 1, and it is used for driving support control. Algorithm verification work can be performed efficiently.
  • The information processing apparatus of the present technology includes an arithmetic processing unit (for example, the CPU 71) that verifies the consistency between the first imaging device 1 and the camera model 2 by comparing the characteristics of the first imaging device 1 and the characteristics of the camera model 2, using the first image G1 output from the first imaging device 1 that captures a specific subject (such as the above-described luminance box, charts, a subject in an actual driving environment scene, or a preceding vehicle) and the second image G2 output from the camera model 2 to which two-dimensional input data (for example, spectral radiance values) based on measurement results of measuring light from the specific subject is input.
  • Another information processing apparatus of the present technology includes an arithmetic processing unit (for example, the CPU 71) that, as preprocessing for verifying the consistency between the first imaging device 1 as an in-vehicle camera and the camera model 2 as a simulated camera, performs calibration by adjusting the yaw direction and pitch direction of the first imaging device 1 so that the centers of the first targets (indices A and B), set at at least two locations in front of the first imaging device 1 and at the same height (height Hc) as the first imaging device 1, are aligned with the center of the angle of view.
  • the program to be executed by the information processing apparatus described above can be recorded in advance in a HDD (Hard Disk Drive) as a recording medium built in a device such as a computer device, or in a ROM or the like in a microcomputer having a CPU.
  • Alternatively, the program can be temporarily or permanently stored (recorded) in a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a Blu-ray Disc (registered trademark), a magnetic disk, a semiconductor memory, or a memory card.
  • Such removable recording media can be provided as so-called package software.
  • it can also be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.
  • The present technology can also adopt the following configurations.
  • The camera model includes an optical model and an image sensor model.
  • The information processing method according to any one of (1) to (3) above, wherein the measurement result of the light is the measurement result of a predetermined area set within the angle of view of the first imaging device, and the two-dimensional input data includes the measurement result arranged in an area corresponding to the predetermined area.
  • The image sensor model has a filter model, and the filter model employs at least one of a Bayer array, an RCCB array, an RGBIR array, and an array using complementary color filters.
  • 1 First imaging device, 2 Camera model, B7 Optical model, B8 Sensor model (image sensor model), G1 First image, G2 Second image


Abstract

This information processing method verifies the consistency of a first image capturing device with a camera model by comparing the characteristics of the first image capturing device and the characteristics of the camera model by using first image data output from the first image capturing device that captures a specific subject and second image data output from the camera model to which two-dimensional input data, which is based on a measurement result obtained by measuring light from the specific subject, is input.

Description

Information processing method and information processing device

The present technology relates to the technical field of information processing methods and information processing devices for verifying the consistency between a real environment and a simulation environment.

In the development of in-vehicle camera systems used for autonomous driving and safety support systems, evaluations using simulation environments are performed in order to evaluate safety in various driving environments. For example, the driving environment is reproduced with a CG (Computer Graphics) model, data based on the CG model is input to a camera model imitating an actual camera, recognition processing is applied to the image data output from the camera model, and evaluation is performed using the obtained recognition results.
In Patent Document 1 listed below, input data for a camera model is generated by converting actual image data captured in a real environment into virtual image data.
U.S. Patent Application Publication No. 2019/0171223

Incidentally, since various virtual models such as driving environment models and optical models are used in the simulation process, the process of verifying consistency with the real environment is complicated, and time costs and personnel costs tend to increase.
Furthermore, in a simulation using a camera model, there are various sensors used by users, so a model corresponding to each sensor used needs to be incorporated into the camera model. However, every time the camera model is changed, the virtual image data to be input must be changed and the consistency must be verified again, which leads to an increase in cost.
The present technology has been devised in view of such problems, and an object thereof is to propose a method for efficiently verifying the consistency between a real environment and a simulation environment.

An information processing method according to the present technology verifies the consistency between a first imaging device and a camera model by comparing the characteristics of the first imaging device and the characteristics of the camera model, using first image data output from the first imaging device that has captured a specific subject and second image data output from the camera model to which two-dimensional input data based on measurement results of measuring the light from the specific subject is input.
The comparison of the characteristics of the first imaging device and the camera model may be performed, for example, by comparing the first image data and the second image data, or by comparing a recognition result obtained by applying image recognition processing to the first image data with a recognition result obtained by applying image recognition processing to the second image data.

In an information processing method according to the present technology, as preprocessing for verifying the consistency between the first imaging device as an in-vehicle camera and the camera model as a simulated camera, calibration is performed by adjusting the yaw direction and the pitch direction of the first imaging device so that the centers of first targets installed in front of the first imaging device at at least two locations at the same height position as the first imaging device coincide with the center of the angle of view.
By performing such calibration, it is possible to verify the consistency between the in-vehicle camera system equipped with the in-vehicle camera and the simulator system equipped with the camera model.
FIG. 1 is a diagram showing the flow executed in the in-vehicle camera system for consistency verification.
FIG. 2 is a diagram showing the flow executed in the simulator system for consistency verification.
FIG. 3 is a diagram showing a configuration example of the camera model.
FIG. 4 is a flowchart showing the overall flow of consistency verification.
FIG. 5 is a flowchart showing a specific processing example of consistency verification for the camera model.
FIG. 6 is a schematic diagram showing a state in which the luminance box is imaged by the first imaging device.
FIG. 7 is a diagram showing an example of verification areas set in the RAW data output from the first imaging device.
FIG. 8 is a diagram showing an example of the RAW data output from the first imaging device.
FIG. 9 is a diagram showing a state in which a measurement area set on the luminance box is measured using a spectral radiance meter.
FIG. 10 is a diagram showing an example of input data to the camera model using spectral radiance values.
FIG. 11 is a diagram showing an example of the RAW data output from the camera model.
FIG. 12 is a diagram showing an example of a color chart.
FIG. 13 is a diagram showing an example of a gray chart.
FIG. 14 is a diagram showing an example of a method of measuring spectral radiance values for the color chart and the gray chart.
FIG. 15 is a flowchart showing, together with FIG. 16, a specific processing example of camera posture calibration.
FIG. 16 is a flowchart showing, together with FIG. 15, a specific processing example of camera posture calibration.
FIG. 17 is a schematic diagram showing a state in which two indices are arranged in front of the vehicle.
FIG. 18 is a schematic diagram showing a state in which indices are spaced apart in the vehicle width direction.
FIG. 19 is an explanatory diagram of calibration of the first imaging device in the roll direction.
FIG. 20 is a schematic diagram showing a state in which an index for adjusting the depression angle of the first imaging device is arranged.
FIG. 21 is a diagram showing an example of a filter included in the first imaging device, namely a Bayer array filter.
FIG. 22 is a diagram showing an example of a filter included in the first imaging device, namely an RCCB array filter.
FIG. 23 is a diagram showing an example of a filter included in the first imaging device, namely an RGBIR array filter.
FIG. 24 is a diagram showing an example of a filter included in the first imaging device, namely a complementary color filter.
FIG. 25 is a block diagram of a computer device.
Hereinafter, embodiments according to the present technology will be described in the following order with reference to the accompanying drawings.
<1. Consistency verification>
<2. Process flow of each system>
<3. Consistency verification flow>
<4. Consistency verification of camera model>
<5. Camera system calibration>
<6. Variation>
<7. Computer device>
<8. Summary>
<9. This technology>
<1. Consistency verification>
The present embodiment will be described with reference to the accompanying drawings. In the present embodiment, consistency verification is performed between an in-vehicle camera system S1 including a first imaging device 1 mounted on a vehicle 100 and a simulator system S2 including a camera model 2 that simulates an actual imaging device.
In order to realize driving support functions such as the collision mitigation braking function and the automatic driving function of the vehicle 100, the control algorithms of the vehicle 100 need to be verified. These control algorithms are improved by actually driving the vehicle 100 in various driving environments, acquiring data obtained from various sensors and control results, and repeating examinations.
However, preparing an actual environment in which to drive the vehicle 100 incurs significant costs, such as securing a location. Therefore, algorithms are generally evaluated using a simulation approach called MBD (Model Based Design).
MBD uses a driving environment model that simulates the actual driving environment, a sensor model that simulates the image sensor of the first imaging device 1, such as a CMOS (Complementary Metal-Oxide Semiconductor) or CCD (Charge Coupled Device) sensor, an optical model that simulates optical members such as the lenses of the first imaging device 1, and the like.
Therefore, in order to appropriately verify control algorithms using MBD, the various models included in MBD need to match the actual ones to a certain extent.
In the present embodiment, a method for efficiently verifying the consistency between these various models and the actual environment will be described. Specifically, consistency verification is performed between the in-vehicle camera system S1 and the simulator system S2.
<2. Process flow of each system>
FIG. 1 shows the flow of processing executed in the in-vehicle camera system S1 for consistency verification, and FIG. 2 shows the flow of processing executed in the simulator system S2.
The in-vehicle camera system S1 captures the actual driving environment scene A1 with the first imaging device 1. The first imaging device 1 contains optical members such as lenses, an image sensor, and the like.
The signal obtained by imaging with the first imaging device 1 is input to the camera signal processing model A2 as an imaging signal (RAW data).
The camera signal processing model A2 executes processing for generating an RGB (Red, Green, Blue) image, white balance adjustment, sharpness adjustment, contrast adjustment, and the like. The captured image data output from the camera signal processing model A2 is input to the recognition model A3; this captured image data is referred to as the first image G1.
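The following is a minimal, non-authoritative sketch of the kind of processing the camera signal processing model A2 performs. It assumes an RGGB Bayer RAW input; the demosaicing shortcut, the white-balance gains, and the contrast curve are placeholder assumptions and are not taken from the original description.

```python
import numpy as np

def camera_signal_processing(raw: np.ndarray, wb_gains=(1.8, 1.0, 1.6)) -> np.ndarray:
    """Simplified stand-in for the camera signal processing model A2.

    `raw` is an H x W Bayer (RGGB assumed) array; the result is an H/2 x W/2
    RGB image. A real pipeline would demosaic at full resolution and apply
    tuned white balance, sharpness, and contrast processing."""
    raw = raw.astype(np.float64)
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
    b = raw[1::2, 1::2]
    rgb = np.stack([r * wb_gains[0], g * wb_gains[1], b * wb_gains[2]], axis=-1)
    rgb = rgb / (rgb.max() + 1e-9)                       # normalize to [0, 1]
    return np.clip((rgb - 0.5) * 1.1 + 0.5, 0.0, 1.0)    # crude contrast adjustment
```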
The recognition model A3 performs processing for identifying and labeling the imaged subjects based on the input image data. That is, the recognition model A3 performs processing for detecting the subjects captured in the image, such as pedestrians, traffic lights, signs, and vehicles. The detection result of the recognition model A3 is input to the subsequent integrated control model A4.
The integrated control model A4 controls the vehicle 100 using the recognition results based on the image data, thereby realizing a collision mitigation braking function, an ACC (Adaptive Cruise Control) function, and various warning functions.
On the other hand, in the simulator system S2, environment setting B1, scenario generation B2, and rendering B3 are performed in order to simulate the actual driving environment scene A1.
In the environment setting B1, map information corresponding to the actual driving environment scene A1 in which the vehicle 100 runs, and data sets imitating subjects such as other vehicles placed on the driving route, pedestrians, signs, traffic lights, and road surfaces are set.
These data sets include shape information such as polygons, color information, material information, spectral reflectance information, and the like, which are used for ray tracing in the subsequent rendering B3.
In the subsequent scenario generation B2, a driving plan that simulates the driving behavior of the vehicle 100 is set. The driving plan includes not only the route that the vehicle 100 travels but also operations such as steering and accelerator operations during driving; in other words, it represents the behavior of the vehicle 100 based on the driving operations.
The driving plan generated in the scenario generation B2 is input to the subsequent rendering B3. In the rendering B3, a 3DCG (3-Dimensional CG) model is generated by executing rendering processing based on ray tracing. The output of the rendering processing is input to the camera model 2.
Here, the output data of the rendering processing is a spectral radiance value for each pixel, developed in a two-dimensional array corresponding to the two-dimensional pixel array of the image sensor model included in the camera model 2. Furthermore, the spectral radiance values are output at regular time intervals corresponding to the frame rate of the first imaging device 1. For example, when the first imaging device 1 to be simulated by the camera model 2 captures images at 30 fps (frames per second), the spectral radiance values for each pixel, developed in a two-dimensional array, are output 30 times per second.
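As an illustrative sketch only, the rendering output described above can be thought of as a stream of per-pixel spectral radiance maps. The resolution, the wavelength sampling, and the random placeholder values below are assumptions; the actual renderer ray-traces the driving environment model.

```python
import numpy as np

# Assumed for illustration only: a 1920 x 1080 sensor model and spectral
# radiance sampled every 10 nm from 400 nm to 700 nm.
HEIGHT, WIDTH = 1080, 1920
WAVELENGTHS_NM = np.arange(400, 701, 10)

def render_spectral_radiance_frames(num_frames: int, fps: int = 30):
    """Yield (timestamp, radiance) pairs at the frame rate of the simulated
    first imaging device, e.g. 30 frames per second."""
    for frame_index in range(num_frames):
        # A real implementation would ray-trace the 3DCG driving environment
        # here; random data merely stands in for one frame of output.
        radiance = np.random.rand(HEIGHT, WIDTH, WAVELENGTHS_NM.size)
        yield frame_index / fps, radiance
```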
The camera model 2 is a model for simulating the imaging function of the first imaging device 1 of the in-vehicle camera system S1 and, as shown in FIG. 3, includes an optical model B7 and a sensor model B8.
The optical model B7 imitates the optical members of the first imaging device 1, such as the various lens systems, the shutter mechanism, and the IR (Infrared) cut filter. It calculates the influence of the optical members on the spectral radiance values input to the camera model 2 and converts them into per-pixel spectral irradiance. Specifically, the optical model B7 performs projection correction processing, aperture correction processing, shading correction processing, IR cut filter correction processing, and the like.
The per-pixel spectral irradiance is input to the sensor model B8, which imitates the image sensor of the first imaging device 1. The sensor model B8 includes a filter model imitating the various optical filters of the image sensor, such as the color filters, and a pixel array model imitating the light-receiving and readout operations of the pixel unit. It calculates the influence of the color filters and the pixel unit on the input spectral irradiance and outputs per-pixel RAW data to the subsequent camera signal processing model B4. Specifically, the sensor model B8 performs color filter correction processing, photoelectric conversion processing, AD (Analog to Digital) conversion processing, and the like.
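Purely as a hedged sketch of the data flow through the camera model 2, the two stages can be written as functions that turn per-pixel spectral radiance into spectral irradiance and then into RAW data. The shading map, IR-cut curve, color-filter sensitivities, and bit depth are illustrative parameters, not values from the original description.

```python
import numpy as np

def optical_model(radiance, shading_map, ir_cut, aperture_factor=0.7):
    """Sketch of optical model B7: apply illustrative shading, aperture, and
    IR-cut-filter factors to spectral radiance (H x W x L) to obtain
    per-pixel spectral irradiance."""
    return radiance * aperture_factor * shading_map[..., None] * ir_cut[None, None, :]

def sensor_model(irradiance, cfa_sensitivities, cfa_pattern, exposure=1.0, bit_depth=12):
    """Sketch of sensor model B8: integrate the irradiance through the color
    filter of each pixel, apply photoelectric conversion (a simple gain here)
    and AD conversion, and return a Bayer-like RAW image.

    `cfa_pattern` is an H x W array of color labels and `cfa_sensitivities`
    maps each label to a spectral sensitivity curve of length L."""
    h, w, _ = irradiance.shape
    raw = np.zeros((h, w))
    for color, sensitivity in cfa_sensitivities.items():
        signal = (irradiance * sensitivity[None, None, :]).sum(axis=-1)  # spectral integration
        mask = (cfa_pattern == color)
        raw[mask] = signal[mask] * exposure
    raw = raw / (raw.max() + 1e-9) * (2 ** bit_depth - 1)
    return np.round(raw).astype(np.uint16)   # AD conversion
```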
Here, the RAW data output from the first imaging device 1 of the in-vehicle camera system S1 and the RAW data output from the camera model 2 of the simulator system S2 have the same data format.
Therefore, the camera signal processing model A2 of the in-vehicle camera system S1 and the camera signal processing model B4 of the simulator system S2, to which the RAW data is input, can be the same. The captured image data output from the camera signal processing model B4 is referred to as the second image G2.
Similarly, the recognition model A3 of the in-vehicle camera system S1 and the recognition model B5 of the simulator system S2 are the same, and the integrated control model A4 of the in-vehicle camera system S1 and the integrated control model B6 of the simulator system S2 are the same.
Therefore, if the detection results input to the integrated control model A4 and the integrated control model B6 are the same, the data output from the integrated control model A4 and the integrated control model B6 also match.
Accordingly, the flow from the environment setting B1 to the recognition model B5 in the simulator system S2 (the broken-line portion in FIG. 2) needs to appropriately simulate the flow from the actual driving environment scene A1 to the recognition model A3 in the in-vehicle camera system S1 (the broken-line portion in FIG. 1). Verifying this point is the consistency verification between the in-vehicle camera system S1 and the simulator system S2.
In the present embodiment, the consistency verification determines whether the intermediate data and the like in the two systems match.
Specifically, the consistency verification is performed using the RAW data output from the first imaging device 1 in the in-vehicle camera system S1, the RAW data output from the camera model 2 in the simulator system S2, the detection result output from the recognition model A3 in the in-vehicle camera system S1, and the detection result output from the recognition model B5 in the simulator system S2.
By performing the consistency verification in this way, it becomes possible to perform, in the simulator system S2, verification equivalent to verifying the algorithms in the in-vehicle camera system S1 that includes the first imaging device 1 as an actual imaging device.
<3. Consistency verification flow>
The flow of the consistency verification will be described with reference to FIG. 4. The device that performs each process of the consistency verification may be a computer device included in the simulator system S2 or a device separate from the computer device included in the simulator system S2.
The computer device includes an arithmetic processing unit such as a CPU (Central Processing Unit) in order to execute each process of the consistency verification.
In step S101 of FIG. 4, the arithmetic processing unit of the computer device verifies whether the camera model 2 appropriately simulates the optical members and the image sensor of the first imaging device 1 as the actual device. That is, the arithmetic processing unit verifies the consistency between the camera model 2 and the first imaging device 1.
In the consistency verification between the camera model 2 and the first imaging device 1, it is determined whether the RAW data output from the camera model 2 and the RAW data output from the first imaging device 1 match. Since it is difficult to make the RAW data output from the first imaging device 1, which is an actual device, completely match the RAW data output from the camera model 2, which uses 3DCG, consistency is verified in the present embodiment using statistical data on the respective RAW data. The specific processing for verifying the consistency of the RAW data will be described later.
In step S102, the arithmetic processing unit performs determination processing based on the result of comparing the RAW data output from the camera model 2 with the RAW data output from the first imaging device 1. If the difference between the RAW data does not fall within a predetermined range, that is, if it is determined that the consistency verification has failed, the arithmetic processing unit performs factor analysis and model correction processing in step S103.
In the process of step S103, factor analysis is performed, and processing such as presenting the factor analysis result to the operator and presenting the model to be corrected is performed. A process of changing and overwriting the variables given to a model may also be performed in step S103.
Alternatively, in step S103 the arithmetic processing unit may perform a process of presenting the data used for factor analysis to the operator, and the operator may perform the factor analysis based on the presented data.
After finishing the factor analysis and the model correction processing, the arithmetic processing unit returns to step S101 and verifies the consistency between the camera model 2 and the first imaging device 1.
On the other hand, if it is determined in step S102 that the consistency verification has passed, that is, that the difference between the RAW data falls within the predetermined range, the arithmetic processing unit performs consistency verification using the detection results output from the recognition model A3 of the in-vehicle camera system S1 and from the recognition model B5 of the simulator system S2.
Specifically, in step S104, the arithmetic processing unit calibrates the posture of the first imaging device 1 and the posture of the camera model 2. In the process of step S104, the posture of the first imaging device 1 is calibrated so that the first imaging device 1 captures images in a predetermined direction, and the optical axis direction set in the camera model 2 is then changed to match the optical axis direction of the first imaging device 1.
The specific processing of step S104 will be described later.
In step S105, the arithmetic processing unit verifies the consistency of the camera systems in the driving environment. The consistency verification of the camera systems verifies the consistency between the broken-line portion of the in-vehicle camera system S1 shown in FIG. 1 and the broken-line portion of the simulator system S2 shown in FIG. 2.
Specifically, a process of confirming the consistency between the subject detection result output from the recognition model A3 of the in-vehicle camera system S1 and the subject detection result output from the recognition model B5 of the simulator system S2 is executed.
The detection result is, for example, data in which information specifying a pixel area of the first image G1 input to the recognition model A3 is linked with label information that is the result of classifying the subject imaged there. Specifically, a label such as "car" is associated with a certain pixel area.
The detection result may include a plurality of pairs of pixel area information and label information for one first image G1.
The same applies to the detection result for the simulator system S2. That is, the detection result is obtained by linking information specifying a pixel area of the second image G2 input to the recognition model B5 with label information about the imaged subject.
The detection result may also include information other than the label, for example brightness information of the pixels or the pixel area. Specifically, information for specifying a certain pixel area, label information for specifying the subject imaged in that area, and brightness information about the subject, such as the maximum value, the minimum value, the average value, and the variance of the brightness, may be linked. If the subjects imaged in the corresponding areas of the first image G1 and the second image G2 are classified into the same category and the brightness of the imaged subjects shows a similar tendency, the detection results obtained by the simulator system S2 can be used to verify the control algorithms in the same way as with the actual device. If a subject is detected by only one of the recognition models, if the classification results of a detected subject differ, or if the tendencies such as the brightness of a subject differ, it can be judged that the detection results obtained by the simulator system S2 cannot be used to verify the control algorithms in the same way as with the actual device.
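The structure of such a detection result, and the kind of comparison described above, can be illustrated by the following sketch. The field names and the tolerance values are assumptions introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detection entry: a pixel region, its class label, and brightness
    statistics of the detected subject (field names are illustrative)."""
    region: tuple           # (x, y, width, height) in pixels
    label: str              # e.g. "car", "pedestrian"
    brightness_mean: float
    brightness_var: float

def detections_consistent(real: Detection, simulated: Detection,
                          mean_tol: float = 10.0, var_tol: float = 20.0) -> bool:
    """Judge whether a detection from recognition model A3 and the matching
    detection from recognition model B5 carry the same label and a similar
    brightness tendency; the tolerances are placeholders."""
    return (real.label == simulated.label
            and abs(real.brightness_mean - simulated.brightness_mean) <= mean_tol
            and abs(real.brightness_var - simulated.brightness_var) <= var_tol)
```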
In step S106, the arithmetic processing unit performs determination processing based on the result of comparing the detection result output from the recognition model A3 with the detection result output from the recognition model B5. If the two detection results differ significantly, that is, if it is determined that the consistency verification has failed, the arithmetic processing unit performs factor analysis and model correction processing in step S107.
However, since the consistency between the camera model 2 and the first imaging device 1 has already been ensured by the processes of steps S101 and S102, the factor analysis and model correction in step S107 are performed on the portions other than those indicated by the broken lines in FIGS. 1 and 2.
In the process of step S107, as in the process of step S103, factor analysis, presentation of the factor analysis result to the operator, and presentation of the model to be corrected may be performed, a process of changing and overwriting the variables given to a model may be performed, or the data used for factor analysis may be presented to the operator to prompt the operator to perform the factor analysis work.
After finishing the factor analysis and the model correction processing in step S107, the arithmetic processing unit returns to step S105 and verifies the consistency of the camera systems in the driving environment.
<4. Consistency verification of camera model>
The flow of processing for the consistency verification of the camera model will be described with reference to FIG. 5. The flow of consistency verification shown in FIG. 5 corresponds to the processes of steps S101, S102, and S103 in FIG. 4, and in particular shows the specific processing executed in step S101.
An example in which the arithmetic processing unit described above executes each process shown in FIG. 5 will be described; however, the present technology is not limited to this, and each process shown in FIG. 5 may be executed by cooperation between the arithmetic processing unit of the computer device and the control unit of the first imaging device 1, or by a system that includes other devices.
In step S201 of FIG. 5, the arithmetic processing unit determines whether an instruction to start imaging has been received. The instruction to start imaging may be input to the computer device having the arithmetic processing unit by the operator, for example, or may be input from the first imaging device 1 upon detecting that the subject to be imaged is located at a predetermined position within the angle of view of the first imaging device 1.
The arithmetic processing unit repeatedly executes the process of step S201 until the instruction to start imaging is received. The instruction to start imaging is given by the operator, for example, when predetermined conditions are met.
The predetermined conditions are met when, for example, installation of the subject to be measured within the angle of view of the first imaging device 1 is completed.
A specific description will be given with reference to FIG. 6. FIG. 6 shows a state in which the luminance box 3 is arranged within the angle of view of the first imaging device 1; in this state, the luminance box 3 can be imaged as the subject to be measured. After installing the luminance box 3 at a predetermined position in front of the first imaging device 1, the operator instructs the arithmetic processing unit to start imaging.
The luminance box 3 contains a light emitter inside, and one of its surfaces is formed as a diffusion surface 3a that transmits and diffuses the light emitted from the internal light emitter. The luminance box 3 is arranged so that the diffusion surface 3a faces the first imaging device 1.
Returning to the description of FIG. 5, the arithmetic processing unit transmits an imaging operation instruction to the first imaging device 1 in step S202, whereby the imaging operation of the first imaging device 1 is executed.
In step S203, the arithmetic processing unit acquires the RAW data output from the first imaging device 1. In the RAW data, predetermined areas are set as verification areas Ar.
Subsequently, in step S204, the arithmetic processing unit performs statistical processing for each verification area Ar set as a predetermined area in the RAW data.
A specific description will be given with reference to FIG. 7. The RAW data corresponds to the two-dimensional arrangement of the pixels formed in the image sensor of the first imaging device 1; that is, as shown in FIG. 7, the RAW data can be regarded as two-dimensional data. In this two-dimensional RAW data, a verification area Ar1 is set in the center, a verification area Ar2 is set near the upper-left corner, a verification area Ar3 is set near the upper-right corner, a verification area Ar4 is set near the lower-left corner, and a verification area Ar5 is set near the lower-right corner.
In the statistical processing of step S204, the average value, the variance, and the like are calculated for each of the verification areas Ar1 to Ar5. The size of each verification area Ar is, for example, 100 pixels vertically by 100 pixels horizontally.
Here, when the image sensor includes a Bayer array color filter or the like, the RAW data includes, for each pixel, data Dr indicating the intensity of the red light component of the received light, data Dg indicating the intensity of the green light component, and data Db indicating the intensity of the blue light component. Specifically, as shown in FIG. 8, the RAW data is two-dimensional data in which the data Dr, Dg, and Db are arranged in a predetermined pattern. The verification areas Ar1 to Ar5 are therefore likewise two-dimensional data in which the data Dr, Dg, and Db are arranged vertically and horizontally.
Taking the verification area Ar1 as an example, the statistical processing of step S204 calculates the average value and the variance of the data Dr included in the verification area Ar1, the average value and the variance of the data Dg included in the verification area Ar1, and the average value and the variance of the data Db included in the verification area Ar1.
That is, when the average value and the variance are calculated, six statistical values are calculated for one verification area Ar.
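A minimal sketch of this per-area statistical processing is shown below, assuming an RGGB Bayer layout and NumPy; the exact channel layout of the actual image sensor is not specified in the description.

```python
import numpy as np

def area_statistics(raw: np.ndarray, top: int, left: int, size: int = 100) -> dict:
    """Compute the six statistics (mean and variance of Dr, Dg, and Db) for
    one verification area of a Bayer (RGGB assumed) RAW image.

    `top` and `left` give the upper-left corner of the verification area and
    `size` its side length in pixels (100 x 100 in the description)."""
    area = raw[top:top + size, left:left + size].astype(np.float64)
    dr = area[0::2, 0::2]                                  # red samples
    dg = np.concatenate([area[0::2, 1::2].ravel(),
                         area[1::2, 0::2].ravel()])        # green samples
    db = area[1::2, 1::2]                                  # blue samples
    return {
        "Dr": (dr.mean(), dr.var()),
        "Dg": (dg.mean(), dg.var()),
        "Db": (db.mean(), db.var()),
    }
```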
Returning to the description of FIG. 5, by calculating the statistical data for each verification area Ar and for each color channel, the arithmetic processing unit finishes acquiring the data used for the consistency verification with respect to the first imaging device 1 as the actual device.
Subsequently, the arithmetic processing unit executes the processes from step S205 onward to start acquiring the data used for the consistency verification with respect to the camera model 2 of the simulator system S2. Specifically, in step S205, the arithmetic processing unit determines whether preparation for measurement has been completed.
The state in which the preparation for measurement is completed is, for example, a state in which the spectral radiance meter 4 is installed at a position facing the diffusion surface 3a of the luminance box 3, as shown in FIG. 9. That is, it is determined that the preparation for measurement is complete when the spectral radiance value of a specific area can be measured using the spectral radiance meter 4.
The process of step S205 is repeatedly executed until the preparation for measurement is completed.
If it is determined that the preparation for measurement is complete, or if a user operation indicating that the preparation for measurement is complete is accepted, the arithmetic processing unit acquires the measured spectral radiance value in step S206. An instruction to start measurement may be given to the spectral radiance meter 4 before step S206.
Note that the spectral radiance meter 4 does not measure the spectral radiance of the entire luminance box 3 but measures the spectral radiance of a narrow area corresponding to a minute solid angle. Therefore, in order to obtain spectral radiance values to be compared with the five verification areas Ar in FIG. 7, which are set at the center and near each corner, it is necessary to measure the spectral radiance of measurement areas Br provided at positions corresponding to the verification areas Ar.
Therefore, as shown in FIG. 9, the arithmetic processing unit performs the process of step S206 with the optical axis of the spectral radiance meter 4 aligned with the center of the measurement area Br1, and then waits again in step S205 until the next measurement is ready. When the operator aligns the optical axis of the spectral radiance meter 4 with the center of the measurement area Br2, the arithmetic processing unit performs the process of step S206 again. By repeating the processes of steps S205 and S206 in this way, the spectral radiance value for each measurement area Br is acquired.
As described above, the measurement areas Br are provided at positions corresponding to the verification areas Ar. That is, the light passing through the center of the measurement area Br1 on the diffusion surface 3a affects the RAW data located at the center of the verification area Ar1 shown in FIG. 7 and also affects the spectral radiance value measured for the measurement area Br1.
In step S207 of FIG. 5, the arithmetic processing unit generates input data for the camera model 2 using the acquired spectral radiance values. The generated data is two-dimensional data corresponding to the pixel array of the sensor model B8 included in the camera model 2.
An example is shown in FIG. 10. In the input data, the acquired spectral radiance values are arranged in corresponding areas Cr1 to Cr5, each corresponding to one of the measurement areas Br. Specifically, the spectral radiance value obtained from the measurement area Br1 is arranged in the corresponding area Cr1, the value obtained from the measurement area Br2 in the corresponding area Cr2, the value obtained from the measurement area Br3 in the corresponding area Cr3, the value obtained from the measurement area Br4 in the corresponding area Cr4, and the value obtained from the measurement area Br5 in the corresponding area Cr5.
Dummy data is used for the areas other than the corresponding areas Cr in the input data based on the spectral radiance values. The areas in which the dummy data is placed are not used for the consistency verification. The dummy data may be zero values or any other values.
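Generating the two-dimensional input data of step S207 can be sketched as follows. The image size, the area size, the wavelength count, and the mapping of corner coordinates to corresponding areas are placeholders chosen only for illustration.

```python
import numpy as np

def build_input_data(measurements: dict, area_size: int, height: int, width: int,
                     num_wavelengths: int, dummy_value: float = 0.0) -> np.ndarray:
    """Arrange measured spectral radiance values in the corresponding areas Cr
    and fill everything else with dummy data.

    `measurements` maps the (top, left) corner of each corresponding area Cr
    to the spectral radiance vector measured for the matching measurement
    area Br."""
    data = np.full((height, width, num_wavelengths), dummy_value)
    for (top, left), radiance in measurements.items():
        data[top:top + area_size, left:left + area_size, :] = radiance
    return data
```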
In step S208 of FIG. 5, the arithmetic processing unit acquires the RAW data output from the camera model 2. In this RAW data, the spectral radiance values input to the camera model 2 have been converted into spectral irradiance through the calculation of the optical model B7 and have further undergone the calculation of the sensor model B8.
Predetermined areas in this RAW data are provided as verification areas Ar'. For example, as shown in FIG. 11, the area of the RAW data output from the pixel area to which the spectral radiance value arranged in the corresponding area Cr1 is input is set as verification area Ar'1. Similarly, the areas of the RAW data output from the pixel areas to which the spectral radiance values arranged in the corresponding areas Cr2, Cr3, Cr4, and Cr5 are input are set as verification areas Ar'2, Ar'3, Ar'4, and Ar'5, respectively.
In step S209 of FIG. 5, the arithmetic processing unit performs statistical processing for each verification area Ar'. For example, the average value, the variance, and the like are calculated for each of the verification areas Ar'1 to Ar'5. The size of the verification area Ar'1 is the same as that of the verification area Ar in FIG. 7.
Like the RAW data output from the first imaging device 1, the RAW data output from the sensor model B8 is two-dimensional data in which the data Dr, Dg, and Db are arranged (see FIG. 8). The statistical processing of step S209 calculates the average value and the variance for each of the data Dr, Dg, and Db in one verification area Ar'.
That is, when the average value and the variance are calculated, six statistical values are calculated for one verification area Ar'.
In step S210, the arithmetic processing unit performs comparison processing between the verification areas Ar and the verification areas Ar'. Specifically, the statistical data calculated in step S204 for the verification area Ar1 is compared with the statistical data calculated in step S209 for the corresponding verification area Ar', and so on for the other areas.
If the statistical data of all the verification areas Ar and the corresponding verification areas Ar' match, or if the differences are small, it can be determined that the camera model 2 appropriately simulates the first imaging device 1.
In step S102, the arithmetic processing unit performs branch processing based on the comparison results. Specifically, if it is determined as a result of the comparison processing of step S210 that the differences in all the statistical data are small, the consistency between the first imaging device 1 and the camera model 2 is regarded as ensured, that is, the consistency verification has passed, and the series of processes shown in FIG. 5 ends.
On the other hand, if it is determined that the difference is larger than a threshold for at least some of the statistical data, the consistency between the first imaging device 1 and the camera model 2 is regarded as insufficient; factor analysis and correction of the various models are performed in step S103, and the processing returns to step S201. That is, the series of processes shown in FIG. 5 is repeated until the consistency between the first imaging device 1 and the camera model 2 is ensured.
The threshold used in the branch processing of step S102 may differ for each type of statistical data. Specifically, the threshold used when comparing average values and the threshold used when comparing variances may be different values.
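The comparison of step S210 and the branch of step S102 can be sketched, for example, as below, using the statistics format from the earlier sketch; the two threshold values are placeholders for the per-statistic-type thresholds mentioned above.

```python
def compare_verification_areas(real_stats: dict, model_stats: dict,
                               mean_threshold: float = 5.0,
                               var_threshold: float = 15.0) -> bool:
    """Compare the statistics of a verification area Ar (first imaging device)
    with those of the corresponding verification area Ar' (camera model).

    Both arguments use the {"Dr": (mean, var), ...} format returned by the
    `area_statistics` sketch above."""
    for channel in ("Dr", "Dg", "Db"):
        real_mean, real_var = real_stats[channel]
        model_mean, model_var = model_stats[channel]
        if abs(real_mean - model_mean) > mean_threshold:
            return False
        if abs(real_var - model_var) > var_threshold:
            return False
    return True
```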
In the above example of the consistency verification between the first imaging device 1 and the camera model 2, the luminance box 3 was used, but other subjects may be used.
For example, instead of using the luminance box 3 as the subject to be measured, a color chart 5 as shown in FIG. 12 or a gray chart 6 as shown in FIG. 13 may be used.
When the color chart 5 or the gray chart 6 is used as the subject to be measured, a light source 7 that irradiates the color chart 5 or the gray chart 6 with light may be arranged as shown in FIG. 14. That is, the light emitted from the light source 7 may be reflected by the color chart 5 or the gray chart 6 and input to the first imaging device 1 and the spectral radiance meter 4.
<5. Camera system calibration>
An example of the processing executed by the arithmetic processing unit to calibrate the camera posture in step S104, which is the preliminary preparation for verifying the consistency of the camera systems in the driving environment in step S105 of FIG. 4, will be described with reference to FIGS. 15 and 16. The connection between the flows of FIGS. 15 and 16 is indicated by connector C1.
The arithmetic processing unit that executes the series of processes shown in FIGS. 15 and 16 may be the same as the arithmetic processing unit that executes each process shown in FIG. 4, or may be a separate processing device.
First, before the arithmetic processing unit executes the process of step S301, the operator installs an index A and an index B at predetermined positions in front of the vehicle 100. Specifically, as shown in FIG. 17, the index A is installed so that its horizontal distance from the first imaging device 1 is a distance L1 and its height from the ground matches the height Hc of the first imaging device 1 mounted on the vehicle 100.
Next, as shown in FIG. 17, the operator installs the index B so that its horizontal distance from the first imaging device 1 is a distance L2, which is longer than the distance L1, and its height from the ground is the height Hc.
After the state shown in FIG. 17 is established, in step S301 of FIG. 15, the arithmetic processing unit adjusts the yaw direction and the pitch direction of the first imaging device 1 so that the center of the index A, the center of the index B, and the center of the angle of view overlap. This adjustment is realized, for example, by the arithmetic processing unit transmitting to the first imaging device 1 a control signal for adjusting the imaging direction of the first imaging device 1.
Instead of transmitting the control signal to the first imaging device 1, the arithmetic processing unit may transmit the control signal to a holding device that holds the first imaging device 1 in a state in which its imaging direction can be adjusted, or to a control device that controls the imaging direction of the first imaging device 1. The same applies to the other control signals described later.
By adjusting the yaw direction and the pitch direction so that the center of the index A and the center of the index B overlap the center of the angle of view, the optical axis of the first imaging device 1 can be made horizontal.
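The yaw and pitch adjustment of step S301 can be sketched as a simple feedback loop; the camera interface, the marker-detection function, the gain, and the tolerance are hypothetical and serve only to illustrate the alignment logic.

```python
def align_yaw_pitch(camera, detect_marker_center, image_center,
                    tolerance_px=1.0, gain=0.05, max_iterations=100):
    """Iteratively adjust yaw and pitch until the detected centers of index A
    and index B coincide with the center of the angle of view.

    `camera.adjust(yaw=..., pitch=...)` and `detect_marker_center(name)`
    returning an (x, y) pixel position are hypothetical interfaces."""
    cx, cy = image_center
    for _ in range(max_iterations):
        ax, ay = detect_marker_center("A")
        bx, by = detect_marker_center("B")
        err_x = (ax + bx) / 2 - cx        # horizontal error -> yaw correction
        err_y = (ay + by) / 2 - cy        # vertical error -> pitch correction
        if abs(err_x) <= tolerance_px and abs(err_y) <= tolerance_px:
            return True
        camera.adjust(yaw=-gain * err_x, pitch=-gain * err_y)
    return False
```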
 続いて演算処理部がステップS302の処理を実行する前に、作業者は所定の位置に指標Cを設置する。具体的には、図18に示すように、指標Aが設置された位置から車両100の車幅方向に並行に移動させた位置に指標Cを設置する。このとき、指標Cの中心の高さが高さHcとなるようにされる。 Then, before the arithmetic processing unit executes the process of step S302, the operator places the index C at a predetermined position. Specifically, as shown in FIG. 18, an index C is installed at a position moved in parallel in the vehicle width direction of the vehicle 100 from the position where the index A was installed. At this time, the height of the center of the index C is made to be the height Hc.
In step S302 of FIG. 15, the arithmetic processing unit transmits a control signal for adjusting the roll direction of the first imaging device 1 so that the center of the index C is positioned on the horizontal line passing through the center of the angle of view of the first imaging device 1.
Specifically, as shown in FIG. 19, the roll direction is adjusted so that the center of the index A is imaged at the center of the angle of view of the first imaging device 1 and the center of the index C is imaged at the intermediate position in the vertical direction of the angle of view.
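The roll check can be sketched in the same spirit: if the camera has no roll, the centers of the index A and the index C share an image row, so the angle of the line joining them indicates the residual roll. The coordinates and tolerance below are again illustrative assumptions.

```python
import math

def roll_error(center_a_px, center_c_px):
    """Roll angle (deg) implied by two markers that should share an image row.

    center_a_px, center_c_px: pixel coordinates of index A (at the image
    center) and index C (offset horizontally). With no roll, both centers
    lie on the same horizontal line and the angle is zero.
    """
    du = center_c_px[0] - center_a_px[0]
    dv = center_c_px[1] - center_a_px[1]
    return math.degrees(math.atan2(dv, du))

roll = roll_error((960.0, 540.0), (1490.0, 543.2))  # illustrative values
needs_adjustment = abs(roll) > 0.05  # tolerance is an assumption
```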
In step S303 of FIG. 15, the arithmetic processing unit determines whether or not to provide the posture of the first imaging device 1 with a depression angle. Whether or not to provide the depression angle may be set by the operator, or may be set automatically according to subjects such as pedestrians and preceding vehicles captured within the angle of view of the first imaging device 1.
When a depression angle is provided, as shown in FIG. 20 for example, the operator installs an index D at a predetermined position. The index D is placed at a distance L3 from the first imaging device 1, with the height of its center set to Hd. The distance L3 and the height Hd may be set to any values.
When the depression angle is provided, in step S304 of FIG. 15 the arithmetic processing unit transmits a control signal for adjusting the pitch direction of the first imaging device 1 so that the center of the index D is positioned on the horizontal line passing through the center of the angle of view of the first imaging device 1. At this time, no adjustment is made in the yaw direction or the roll direction. This allows the imaging direction of the first imaging device 1 to be adjusted without changing the yaw direction and roll direction that have already been set.
The arithmetic processing unit then performs the processing from step S305 onward to calibrate the camera model 2 of the simulator system S2.
Specifically, in step S305 of FIG. 15, the arithmetic processing unit sets an index A' and an index B' at predetermined positions on the environment map of the simulator system S2. The set positions of the index A' and the index B' are based on the positional relationship of the index A and the index B with respect to the first imaging device 1. That is, the positional relationship between the camera model 2 and the index A' is the same as that between the first imaging device 1 and the index A, and the positional relationship between the camera model 2 and the index B' is the same as that between the first imaging device 1 and the index B.
Note that the distance L1 and the distance L2 may be changed, but the index A' and the index B' are set so that their distances from the position of the vehicle on the environment map differ from each other.
In step S306 of FIG. 15, the arithmetic processing unit adjusts the yaw direction and the pitch direction of the camera model 2 so that the center of the index A', the center of the index B', and the center of the angle of view of the camera model 2 overlap.
In step S307 of FIG. 16, the arithmetic processing unit sets an index C' at a predetermined position on the environment map of the simulator system S2. Specifically, the center position of the index C' is set so that the positional relationship between the index A' and the index C' matches the positional relationship between the index A and the index C.
In step S308, the arithmetic processing unit adjusts the roll direction of the camera model 2 so that the center of the index C' is positioned on the horizontal line passing through the center of the angle of view of the camera model 2.
In step S309, the arithmetic processing unit determines whether or not to provide the posture of the camera model 2 with a depression angle. When the depression angle is provided for the posture of the first imaging device 1, the depression angle is also provided for the posture of the camera model 2 in order to simulate the same state.
When the depression angle is provided, in step S310 the arithmetic processing unit sets an index D' at a predetermined position on the environment map of the simulator system S2. The position and height of the index D' with respect to the camera model 2 are made to match the position and height of the index D with respect to the first imaging device 1.
Subsequently, in step S311, the arithmetic processing unit adjusts the pitch direction of the camera model 2 so that the center of the index D' is positioned on the horizontal line passing through the center of the angle of view of the camera model 2. At this time, no adjustment is made in the yaw direction or the roll direction. This allows the imaging direction of the camera model 2 to be adjusted without changing the yaw direction and roll direction that have already been set.
By performing calibration so that the imaging direction of the camera model 2 matches the imaging direction of the first imaging device 1, a camera model 2 that appropriately simulates the first imaging device 1 can be prepared.
By appropriately calibrating the first imaging device 1 and the camera model 2 as shown in FIGS. 15 and 16, the consistency verification in step S105 of FIG. 4 can be performed appropriately.
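For illustration, mirroring the real-world marker layout into the environment map can be sketched as follows; the coordinate convention, distances, and function name are assumptions made for this example rather than details taken from the embodiment.

```python
def place_virtual_indexes(cam_pos, l1, l2, lateral_offset, hc, l3=None, hd=None):
    """Place indexes A', B', C' (and optionally D') on the environment map.

    cam_pos: (x, y, z) of the camera model, with x pointing forward,
    y to the left, and z up (axis convention is an assumption).
    The offsets mirror the real-world placement of indexes A, B, C, D.
    """
    x, y, z = cam_pos
    indexes = {
        "A_prime": (x + l1, y, hc),                   # straight ahead, camera height
        "B_prime": (x + l2, y, hc),                   # farther ahead, camera height
        "C_prime": (x + l1, y + lateral_offset, hc),  # shifted in the width direction
    }
    if l3 is not None and hd is not None:
        indexes["D_prime"] = (x + l3, y, hd)          # target for the depression angle
    return indexes

virtual = place_virtual_indexes((0.0, 0.0, 1.2), l1=10.0, l2=30.0,
                                lateral_offset=3.0, hc=1.2, l3=15.0, hd=0.6)
```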
<6. Variation>
In the example described above, the first imaging device 1 includes a Bayer-array color filter (see FIG. 21), and accordingly the camera model 2 has a sensor model B8 imitating that color filter. However, the filter provided in the first imaging device 1 may be of a type other than the Bayer array.
Specifically, the first imaging device 1 may include an RCCB-array filter in which filters corresponding to R (Red), C (Clear), and B (Blue) are arranged (see FIG. 22), an RGBIR-array filter in which filters corresponding to R (Red), G (Green), B (Blue), and IR (Infrared) are arranged (see FIG. 23), or a complementary color filter in which filters corresponding to Cy (Cyan), Ye (Yellow), G (Green), and Mg (Magenta) are arranged (see FIG. 24). In that case, the sensor model B8 of the camera model 2 is also provided with an RCCB-array filter model, an RGBIR-array filter model, or a complementary color filter model, respectively.
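For illustration, the unit cells of these filter arrays can be tiled over a sensor as in the following sketch; the exact ordering inside each 2x2 cell is an assumption, since real sensors may arrange the filters differently.

```python
import numpy as np

# 2x2 unit cells for several color filter arrays. The ordering within each
# cell is assumed for illustration only.
CFA_CELLS = {
    "bayer":         [["R", "G"], ["G", "B"]],
    "rccb":          [["R", "C"], ["C", "B"]],
    "rgbir":         [["R", "G"], ["IR", "B"]],
    "complementary": [["Cy", "Ye"], ["G", "Mg"]],
}

def cfa_pattern(name, height, width):
    """Tile the 2x2 unit cell of the named CFA over a height x width sensor."""
    cell = np.array(CFA_CELLS[name])
    reps = ((height + 1) // 2, (width + 1) // 2)
    return np.tile(cell, reps)[:height, :width]

pattern = cfa_pattern("rccb", 4, 6)  # small patch just to inspect the layout
```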
In the examples shown in FIG. 7 and FIGS. 9 to 11, five verification areas Ar, five measurement areas Br, five corresponding areas Cr, and five verification areas Ar' are set at predetermined positions within the angle of view or within the image. However, this is merely an example, and other settings are also conceivable. For example, when it is not necessary to confirm consistency for the peripheral regions of the angle of view or the image, only one predetermined area may be set in the central portion of each region. Specifically, a verification area Ar1 may be set as the verification area Ar, a measurement area Br1 as the measurement area Br, a corresponding area Cr1 as the corresponding area Cr, and a verification area Ar'1 as the verification area Ar'.
Alternatively, when confirming consistency only for the peripheral regions, predetermined areas may be set in the regions around the corners. Six or more predetermined areas may also be set. For example, the angle of view may be divided into three vertically and three horizontally, nine regions in total, and a verification area Ar may be provided in the central portion of each region, as sketched below.
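A minimal sketch of such a nine-area layout, assuming each verification area occupies a fixed fraction of its grid cell, might look like this:

```python
def grid_verification_areas(width, height, rows=3, cols=3, area_frac=0.2):
    """Return one verification rectangle per grid cell, centered in the cell.

    width, height: image (or angle-of-view) size in pixels.
    area_frac: side length of each rectangle as a fraction of the cell size
    (an assumed value purely for illustration).
    """
    areas = []
    cell_w, cell_h = width / cols, height / rows
    aw, ah = cell_w * area_frac, cell_h * area_frac
    for r in range(rows):
        for c in range(cols):
            cx = (c + 0.5) * cell_w
            cy = (r + 0.5) * cell_h
            areas.append((int(cx - aw / 2), int(cy - ah / 2), int(aw), int(ah)))
    return areas  # list of (x, y, w, h) rectangles

areas = grid_verification_areas(1920, 1080)  # nine areas for a 3x3 division
```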
<7. Computer device>
The configuration of a computer device including the arithmetic processing unit that executes the processes shown in FIGS. 4, 5, 15, and 16 described above will be described with reference to FIG. 25.
The CPU 71 of the computer device functions as the arithmetic processing unit that performs the various processes described above, and executes various processes in accordance with programs stored in the ROM 72 or in a nonvolatile memory unit 74 such as an EEP-ROM (Electrically Erasable Programmable Read-Only Memory), or programs loaded from a storage unit 79 into the RAM 73. The RAM 73 also stores, as appropriate, data necessary for the CPU 71 to execute the various processes.
The CPU 71, the ROM 72, the RAM 73, and the nonvolatile memory unit 74 are interconnected via a bus 83. An input/output interface (I/F) 75 is also connected to the bus 83.
The input/output interface 75 is connected to an input unit 76 including operators and operating devices.
For example, as the input unit 76, various operators and operating devices such as a keyboard, a mouse, keys, a dial, a touch panel, a touch pad, and a remote controller are assumed.
A user operation is detected by the input unit 76, and a signal corresponding to the input operation is interpreted by the CPU 71.
The input/output interface 75 is also connected, integrally or separately, to a display unit 77 such as an LCD or an organic EL panel and to an audio output unit 78 such as a speaker.
The display unit 77 performs various displays, and is configured by, for example, a display device provided in the housing of the computer device or a separate display device connected to the computer device.
The display unit 77 displays images for various types of image processing, moving images to be processed, and the like on its display screen based on instructions from the CPU 71. The display unit 77 also displays various operation menus, icons, messages, and the like, that is, a GUI (Graphical User Interface), based on instructions from the CPU 71.
The input/output interface 75 may also be connected to a storage unit 79 constituted by a hard disk, a solid-state memory, or the like, and to a communication unit 80 constituted by a modem or the like.
The communication unit 80 performs communication processing via a transmission line such as the Internet, wired/wireless communication with various devices, bus communication, and the like.
A drive 81 is also connected to the input/output interface 75 as necessary, and a removable storage medium 82 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory is mounted as appropriate.
The drive 81 can read data files such as programs used for each process from the removable storage medium 82. A read data file is stored in the storage unit 79, and images and sounds contained in the data file are output by the display unit 77 and the audio output unit 78. Computer programs and the like read from the removable storage medium 82 are installed in the storage unit 79 as necessary.
In this computer device, for example, software for the processing of the present embodiment can be installed via network communication by the communication unit 80 or via the removable storage medium 82. Alternatively, the software may be stored in advance in the ROM 72, the storage unit 79, or the like.
The CPU 71 performs processing operations based on various programs, whereby necessary information processing and communication processing are executed as an information processing device including the arithmetic processing unit described above.
Note that the information processing device is not limited to a single computer device as shown in FIG. 2, and may be configured as a system of a plurality of computer devices. The plurality of computer devices may be systematized by a LAN (Local Area Network) or the like, or may be remotely located and connected by a VPN (Virtual Private Network) or the like using the Internet. The plurality of computer devices may include computer devices serving as a server group (cloud) available through a cloud computing service.
<8. Summary>
As described in each of the examples above, in the information processing method of the present technology, a computer device (an information processing device including an arithmetic processing unit such as the CPU 71) verifies the consistency between the first imaging device 1 and the camera model 2 by comparing the characteristics of the first imaging device 1 with the characteristics of the camera model 2, using a first image G1 output from the first imaging device 1 that has imaged a specific subject (the above-described luminance box, the charts, a subject or preceding vehicle in an actual driving environment scene, and the like) and a second image G2 output from the camera model 2 to which two-dimensional input data (for example, spectral radiance values) based on measurement results of measuring light from the specific subject is input.
The comparison of the characteristics of the first imaging device 1 and the camera model 2 may be performed, for example, by comparing the first image G1 and the second image G2, or by comparing the recognition result (label information) obtained by applying image recognition processing to the first image G1 with the recognition result obtained by applying image recognition processing to the second image G2.
By verifying the consistency between the first imaging device 1 and the camera model 2 through such comparison processing, a simulation using the camera model 2 can be performed appropriately. Note that when the first image G1 and the second image G2 match, the recognition results of the image recognition processing for the respective images are the same, and the simulation using the camera model 2 can appropriately simulate the real environment. However, since the data input to the camera model 2 is based on measurement results of measuring light from the subject (spectral radiance values, spectral irradiance, and the like), it is difficult to make the first image G1 and the second image G2 match completely. Therefore, statistical data obtained from the first image G1 and the corresponding statistical data obtained from the second image G2 may be compared, and when the difference is small, it may be determined that consistency has been ensured. This makes the consistency verification easy to perform and reduces various costs; a minimal sketch of such a comparison is given below.
Alternatively, as described above, the recognition result obtained by applying image recognition processing to the first image G1 may be compared with the recognition result for the second image G2. Even if the first image G1 and the second image G2 differ slightly, the subsequent control can be performed correctly as long as the recognition results are the same. That is, automatic driving control and driving support control can be performed appropriately if the consistency between the first imaging device 1 and the camera model 2 is ensured to the extent that the same recognition result can be output. Ensuring consistency to that extent also means that excessive matching between the first imaging device and the camera model need not be required, so the camera model can be verified efficiently.
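A minimal sketch of the statistical comparison described above, assuming rectangular verification areas and hand-picked tolerances, might look like the following:

```python
import numpy as np

def area_stats(image, area):
    """Mean and variance of pixel values inside a verification area.

    image: 2D array of RAW pixel values; area: (x, y, w, h) rectangle.
    """
    x, y, w, h = area
    patch = image[y:y + h, x:x + w].astype(np.float64)
    return patch.mean(), patch.var()

def consistent(img_real, img_model, areas, mean_tol, var_tol):
    """Judge consistency by comparing area statistics of G1 and G2.

    mean_tol and var_tol are assumed tolerances; actual thresholds would be
    chosen from the sensor's noise characteristics.
    """
    for area in areas:
        m1, v1 = area_stats(img_real, area)
        m2, v2 = area_stats(img_model, area)
        if abs(m1 - m2) > mean_tol or abs(v1 - v2) > var_tol:
            return False
    return True
```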
As described with reference to FIG. 9 and the like, the input data input to the camera model 2 may be spectral radiance values of the light from the specific subject.
By inputting two-dimensional spectral radiance values to the camera model 2, the camera model 2 can output a signal similar to that of the actually existing first imaging device 1.
This makes it possible to perform the consistency verification appropriately.
As described with reference to FIG. 3 and the like, the camera model may have an optical model B7 and an image sensor model (sensor model B8).
For example, the optical model B7 is a model imitating the various lenses provided in the first imaging device 1 as well as projection correction, aperture correction, shading correction, an IR cut filter, and the like. The image sensor model is a model imitating the color filter provided for each pixel, photoelectric conversion processing, AD (Analog to Digital) conversion processing, and the like.
By configuring each model to include the various corrections and processes incorporated in the first imaging device 1, which is an actual imaging device, a camera model 2 that appropriately imitates the first imaging device 1 can be constructed. In addition, when some of these elements are changed in the actual first imaging device 1, the corresponding change to the camera model 2 can be made easily.
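For illustration only, this two-stage structure can be sketched as a pair of functions composed into a camera model; the specific corrections, array shapes, and parameter names below are assumptions and stand in for the far richer processing of the actual optical model B7 and sensor model B8.

```python
import numpy as np

def optical_model(radiance, vignetting, ir_cut):
    """Rough optical stage: apply lens shading and an IR-cut weighting.

    radiance: (H, W, num_wavelengths) spectral radiance map.
    vignetting: (H, W) relative transmission; ir_cut: (num_wavelengths,) weights.
    These inputs are placeholders for the corrections the optical model is
    described as imitating (lenses, shading, IR cut, ...).
    """
    return radiance * vignetting[..., None] * ir_cut[None, None, :]

def sensor_model(light, cfa_response, gain, bit_depth=12):
    """Rough sensor stage: CFA spectral response, gain, and AD conversion.

    light: spectral light reaching the sensor, same shape as radiance.
    cfa_response: (H, W, num_wavelengths) per-pixel spectral sensitivity
    implied by the color filter array.
    """
    electrons = (light * cfa_response).sum(axis=-1) * gain
    return np.clip(np.round(electrons), 0, 2 ** bit_depth - 1).astype(np.uint16)

def camera_model(radiance, vignetting, ir_cut, cfa_response, gain):
    """Compose the two stages into a RAW-like second image G2."""
    return sensor_model(optical_model(radiance, vignetting, ir_cut),
                        cfa_response, gain)
```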
As described with reference to FIGS. 7 and 10 and the like, the measurement result of measuring the light from the subject may be a measurement result for a predetermined area (the area corresponding to the verification area Ar) set within the angle of view of the first imaging device 1, and in the two-dimensional input data, the measurement result may be arranged in the area (corresponding area Cr) corresponding to the verification area Ar.
This eliminates the need to generate input data corresponding to every subject imaged within the angle of view of the first imaging device 1.
Accordingly, the cost required to generate the two-dimensional input data can be reduced.
As described with reference to FIGS. 7 and 11 and the like, the consistency verification may be performed by comparing the pixel area (verification area Ar) corresponding to the predetermined area with the area (verification area Ar') in the second image G2 that corresponds to that pixel area.
By performing the comparison using data (statistical data) output from only some of the pixel areas, the consistency verification can be performed concisely.
Accordingly, the amount of computation required for the consistency verification can be reduced.
Note that it is possible to compare the data of the verification area Ar in the RAW data with the data of the corresponding verification area Ar', and to perform the image recognition processing and the comparison of the recognition results only when a certain degree of matching is found. In this case, when it is determined, at the time of comparing the data of the verification areas Ar and Ar', that consistency between the first imaging device 1 and the camera model 2 cannot be ensured, the image recognition processing need not be performed, so the processing load can be reduced.
As described with reference to FIG. 10 and the like, in the two-dimensional input data (input data using the spectral radiance values), dummy data may be arranged in the areas other than the area (corresponding area Cr) corresponding to the predetermined area.
This eliminates the need to use measurement results to generate the data arranged in the areas other than the area corresponding to the predetermined area.
Accordingly, the time cost and computation cost of generating the two-dimensional input data can be reduced. A sketch of assembling such input data is given below.
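A minimal sketch of such an input map, assuming a constant dummy value outside the corresponding areas and a single spectrum per area, might look like this:

```python
import numpy as np

def build_input_map(height, width, num_wavelengths, measured, dummy_value=0.0):
    """Assemble a two-dimensional spectral input map for the camera model.

    measured: list of ((x, y, w, h), radiance) pairs, where radiance is a
    (num_wavelengths,) spectral radiance measurement for that corresponding
    area Cr. Everywhere else is filled with dummy data (a constant here;
    the choice of dummy value is an assumption).
    """
    data = np.full((height, width, num_wavelengths), dummy_value, dtype=np.float64)
    for (x, y, w, h), radiance in measured:
        data[y:y + h, x:x + w, :] = radiance  # same spectrum across the area
    return data

spectrum = np.linspace(0.01, 0.05, 31)  # 400-700 nm in 10 nm steps (illustrative)
input_map = build_input_map(1080, 1920, 31,
                            [((860, 440, 200, 200), spectrum)])
```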
As described with reference to FIGS. 3 and 7 and the like, the image sensor model (sensor model B8) may have a filter model, and the consistency verification may be performed by performing statistical processing for each pixel group of each filter type in the second image G2.
As a result, the consistency verification is performed in accordance with the configuration of the actual first imaging device 1, which includes color filters and the like.
Accordingly, the consistency verification can be performed appropriately.
As described with reference to FIG. 8 and the like, the statistical processing may calculate an average value and a variance for each pixel group (that is, for each Dr, Dg, and Db), and the consistency verification may be performed by comparing the average value and the variance with their respective thresholds, as in the sketch below.
By using the average value, the variance, and the like, the consistency can be verified with simple processing, so the amount of computation required to realize the consistency verification can be reduced.
Using the average value, the variance, and the like also prevents erroneous determinations caused by noise and the like.
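A minimal sketch of this per-filter-type statistical check, assuming a label map that marks each pixel's filter and hand-chosen per-channel tolerances, might look like the following:

```python
import numpy as np

def channel_stats(raw, cfa):
    """Mean and variance per filter type in a RAW image.

    raw: 2D array of RAW values; cfa: 2D array of the same shape holding the
    filter label of each pixel (e.g. "R", "G", "B" for a Bayer sensor).
    """
    return {ch: (raw[cfa == ch].mean(), raw[cfa == ch].var())
            for ch in np.unique(cfa)}

def verify(raw_real, raw_model, cfa, mean_tol, var_tol):
    """Compare per-channel statistics of G1 and G2 against tolerances.

    mean_tol and var_tol are dicts of assumed per-channel thresholds.
    """
    s1, s2 = channel_stats(raw_real, cfa), channel_stats(raw_model, cfa)
    return all(abs(s1[ch][0] - s2[ch][0]) <= mean_tol[ch] and
               abs(s1[ch][1] - s2[ch][1]) <= var_tol[ch]
               for ch in s1)
```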
As described with reference to FIGS. 21 to 24, the filter model may be a model that employs at least one of a Bayer array, an RCCB array, an RGBIR array, and an array using complementary color filters.
By using various models as the filter model, the consistency verification can be performed for a camera model 2 that meets the user's needs.
Accordingly, flexible consistency verification matching a variety of requirements can be performed, which improves convenience.
As described with reference to the drawings in the examples above, in the information processing method of the present technology, as preprocessing for verifying the consistency between the first imaging device 1 as an in-vehicle camera and the camera model 2 as a simulated camera, a computer device (for example, an information processing device including an arithmetic processing unit such as the CPU 71) performs calibration by adjusting the yaw direction and the pitch direction of the first imaging device 1 so that the centers of first targets (indexes A and B) installed at at least two locations in front of the first imaging device 1 and at the same height (height Hc) as the first imaging device 1 coincide with the center of the angle of view.
By performing such calibration, the consistency between the in-vehicle camera system S1 equipped with the first imaging device 1 and the simulator system S2 equipped with the camera model 2 can be verified.
Accordingly, if it can be guaranteed that the camera model 2 appropriately simulates the first imaging device 1 mounted on the vehicle, the in-vehicle camera can be tested appropriately using the camera model 2, and various costs can be reduced.
As described with reference to FIGS. 18 and 19, the calibration may be performed by adjusting the roll direction of the first imaging device 1 so that the center of a second target (index C), installed at a position offset horizontally from the optical axis of the first imaging device 1 and at the same height as the first imaging device 1, coincides with the intermediate position in the vertical direction of the angle of view.
This allows the roll direction to be calibrated by a simple method.
Accordingly, the accuracy of the consistency verification can be improved.
As described with reference to FIG. 20 and the like, a depression angle adjustment as part of the calibration may be performed by adjusting the pitch direction so that the center of a third target (index D) installed in front of the first imaging device 1 coincides with the intermediate position in the vertical direction.
This makes it possible to calibrate not only a first imaging device 1 whose optical axis direction is substantially horizontal, but also a first imaging device 1 that captures the area in front of the vehicle 100 from a slightly downward-looking angle.
By setting the optical axis direction of the camera model 2 to match the optical axis direction of the first imaging device 1 calibrated in this way, a camera model 2 that appropriately simulates a first imaging device 1 set in various orientations can be prepared. Accordingly, the simulation of the first imaging device 1 can be performed appropriately.
As described with reference to FIGS. 15 and 16 and the like, the optical axis direction of the camera model 2 may be adjusted based on the optical axis direction of the first imaging device 1 adjusted by the calibration.
This allows the optical axis direction of the first imaging device 1 and the optical axis direction of the camera model 2 to be matched.
Accordingly, a camera model 2 that appropriately simulates the first imaging device 1 can be prepared, a simulation can be performed using the camera model 2 instead of the first imaging device 1, and the work of verifying the algorithms used for driving support control can be performed efficiently.
The information processing device of the present embodiment includes an arithmetic processing unit (for example, the CPU 71) that verifies the consistency between the first imaging device 1 and the camera model 2 by comparing the characteristics of the first imaging device 1 with the characteristics of the camera model 2, using the first image G1 output from the first imaging device 1 that has imaged a specific subject (the above-described luminance box, the charts, a subject or preceding vehicle in an actual driving environment scene, and the like) and the second image G2 output from the camera model 2 to which two-dimensional input data (for example, spectral radiance values) based on measurement results of measuring light from the specific subject is input.
The information processing device of the present embodiment includes an arithmetic processing unit (for example, the CPU 71) that, as preprocessing for verifying the consistency between the first imaging device 1 as an in-vehicle camera and the camera model 2 as a simulated camera, performs calibration by adjusting the yaw direction and the pitch direction of the first imaging device 1 so that the centers of first targets (indexes A and B) installed at at least two locations in front of the first imaging device 1 and at the same height (height Hc) as the first imaging device 1 coincide with the center of the angle of view.
The program to be executed by the information processing device described above can be recorded in advance in an HDD (Hard Disk Drive) as a recording medium built into a device such as a computer device, in a ROM in a microcomputer having a CPU, or the like. Alternatively, the program can be stored (recorded) temporarily or permanently in a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a Blu-ray Disc (registered trademark), a magnetic disk, a semiconductor memory, or a memory card. Such a removable recording medium can be provided as so-called package software.
Such a program can be installed from the removable recording medium onto a personal computer or the like, or can be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.
Note that the effects described in this specification are merely examples and are not limited, and other effects may also be obtained.
The examples described above may be combined in any way, and the various effects described above can be obtained even when such combinations are used.
<9. This technology>
The present technology can also adopt the following configurations.
(1)
An information processing method including: verifying consistency between a first imaging device and a camera model by comparing characteristics of the first imaging device with characteristics of the camera model, using first image data output from the first imaging device that has imaged a specific subject and second image data output from the camera model to which two-dimensional input data based on a measurement result of measuring light from the specific subject is input.
(2)
The information processing method according to (1) above, in which the input data is spectral radiance values of the light from the specific subject.
(3)
The information processing method according to (1) or (2) above, in which the camera model has an optical model and an image sensor model.
(4)
The information processing method according to any one of (1) to (3) above, in which the measurement result of the light is a measurement result for a predetermined area set within the angle of view of the first imaging device, and in the two-dimensional input data, the measurement result is arranged in an area corresponding to the predetermined area.
(5)
The information processing method according to (4) above, in which the consistency verification is performed by comparing a pixel area corresponding to the predetermined area with an area in the second image data corresponding to the pixel area.
(6)
The information processing method according to (4) or (5) above, in which, in the two-dimensional input data, dummy data is arranged in areas other than the area corresponding to the predetermined area.
(7)
The information processing method according to (3) above, in which the image sensor model has a filter model, and the consistency verification is performed by performing statistical processing for each pixel group of each filter type in the second image data.
(8)
The information processing method according to (7) above, in which an average value and a variance are calculated for each pixel group by the statistical processing, and the consistency verification is performed by comparing the average value and the variance with respective thresholds.
(9)
The information processing method according to (7) above, in which the filter model is a model that employs at least one of a Bayer array, an RCCB array, an RGBIR array, and an array using complementary color filters.
(10)
An information processing method including: as preprocessing for verifying consistency between a first imaging device as an in-vehicle camera and a camera model as a simulated camera, performing calibration by adjusting a yaw direction and a pitch direction of the first imaging device so that centers of first targets installed at at least two locations in front of the first imaging device and at the same height as the first imaging device coincide with the center of the angle of view.
(11)
The information processing method according to (10) above, in which the calibration is performed by adjusting a roll direction of the first imaging device so that the center of a second target, installed at a position offset horizontally from the optical axis of the first imaging device and at the same height as the first imaging device, coincides with an intermediate position in the vertical direction of the angle of view.
(12)
The information processing method according to (11) above, in which a depression angle adjustment as the calibration is performed by adjusting the pitch direction so that the center of a third target installed in front of the first imaging device coincides with the intermediate position in the vertical direction.
(13)
The information processing method according to any one of (10) to (12) above, in which the optical axis direction of the camera model is adjusted based on the optical axis direction of the first imaging device adjusted by the calibration.
(14)
An information processing device including an arithmetic processing unit that verifies consistency between a first imaging device and a camera model by comparing characteristics of the first imaging device with characteristics of the camera model, using first image data output from the first imaging device that has imaged a specific subject and second image data output from the camera model to which two-dimensional input data based on a measurement result of measuring light from the specific subject is input.
(15)
An information processing device including an arithmetic processing unit that, as preprocessing for verifying consistency between a first imaging device as an in-vehicle camera and a camera model as a simulated camera, performs calibration by adjusting a yaw direction and a pitch direction of the first imaging device so that centers of first targets installed at at least two locations in front of the first imaging device and at the same height as the first imaging device coincide with the center of the angle of view.
1  First imaging device
2  Camera model
B7  Optical model
B8  Sensor model (image sensor model)
G1  First image
G2  Second image
Ar, Ar1, Ar2, Ar3, Ar4, Ar5  Verification area
Cr, Cr1, Cr2, Cr3, Cr4, Cr5  Corresponding area
Ar', Ar'1, Ar'2, Ar'3, Ar'4, Ar'5  Verification area
A, B  Index (first target)
C  Index (second target)
D  Index (third target)

Claims (15)

1. An information processing method comprising: verifying consistency between a first imaging device and a camera model by comparing characteristics of the first imaging device with characteristics of the camera model, using first image data output from the first imaging device that has imaged a specific subject and second image data output from the camera model to which two-dimensional input data based on a measurement result of measuring light from the specific subject is input.

2. The information processing method according to claim 1, wherein the input data is spectral radiance values of the light from the specific subject.

3. The information processing method according to claim 1, wherein the camera model has an optical model and an image sensor model.

4. The information processing method according to claim 1, wherein the measurement result of the light is a measurement result for a predetermined area set within the angle of view of the first imaging device, and in the two-dimensional input data, the measurement result is arranged in an area corresponding to the predetermined area.

5. The information processing method according to claim 4, wherein the consistency verification is performed by comparing a pixel area corresponding to the predetermined area with an area in the second image data corresponding to the pixel area.

6. The information processing method according to claim 4, wherein, in the two-dimensional input data, dummy data is arranged in areas other than the area corresponding to the predetermined area.

7. The information processing method according to claim 3, wherein the image sensor model has a filter model, and the consistency verification is performed by performing statistical processing for each pixel group of each filter type in the second image data.

8. The information processing method according to claim 7, wherein an average value and a variance are calculated for each pixel group by the statistical processing, and the consistency verification is performed by comparing the average value and the variance with respective thresholds.

9. The information processing method according to claim 7, wherein the filter model is a model that employs at least one of a Bayer array, an RCCB array, an RGBIR array, and an array using complementary color filters.

10. An information processing method comprising: as preprocessing for verifying consistency between a first imaging device as an in-vehicle camera and a camera model as a simulated camera, performing calibration by adjusting a yaw direction and a pitch direction of the first imaging device so that centers of first targets installed at at least two locations in front of the first imaging device and at the same height as the first imaging device coincide with the center of the angle of view.

11. The information processing method according to claim 10, wherein the calibration is performed by adjusting a roll direction of the first imaging device so that the center of a second target, installed at a position offset horizontally from the optical axis of the first imaging device and at the same height as the first imaging device, coincides with an intermediate position in the vertical direction of the angle of view.

12. The information processing method according to claim 11, wherein a depression angle adjustment as the calibration is performed by adjusting the pitch direction so that the center of a third target installed in front of the first imaging device coincides with the intermediate position in the vertical direction.

13. The information processing method according to claim 10, wherein the optical axis direction of the camera model is adjusted based on the optical axis direction of the first imaging device adjusted by the calibration.

14. An information processing device comprising an arithmetic processing unit configured to verify consistency between a first imaging device and a camera model by comparing characteristics of the first imaging device with characteristics of the camera model, using first image data output from the first imaging device that has imaged a specific subject and second image data output from the camera model to which two-dimensional input data based on a measurement result of measuring light from the specific subject is input.

15. An information processing device comprising an arithmetic processing unit configured to, as preprocessing for verifying consistency between a first imaging device as an in-vehicle camera and a camera model as a simulated camera, perform calibration by adjusting a yaw direction and a pitch direction of the first imaging device so that centers of first targets installed at at least two locations in front of the first imaging device and at the same height as the first imaging device coincide with the center of the angle of view.
PCT/JP2022/001336 2021-03-23 2022-01-17 Information processing method and information processing device WO2022201776A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023508671A JPWO2022201776A1 (en) 2021-03-23 2022-01-17

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021048977 2021-03-23
JP2021-048977 2021-03-23

Publications (1)

Publication Number Publication Date
WO2022201776A1 true WO2022201776A1 (en) 2022-09-29

Family

ID=83396789

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/001336 WO2022201776A1 (en) 2021-03-23 2022-01-17 Information processing method and information processing device

Country Status (2)

Country Link
JP (1) JPWO2022201776A1 (en)
WO (1) WO2022201776A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019051786A (en) * 2017-09-14 2019-04-04 トヨタ自動車株式会社 Positioning method for target
JP2020095484A (en) * 2018-12-12 2020-06-18 凸版印刷株式会社 Texture adjustment supporting system and texture adjustment supporting method

Also Published As

Publication number Publication date
JPWO2022201776A1 (en) 2022-09-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22774580

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023508671

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22774580

Country of ref document: EP

Kind code of ref document: A1