WO2023017635A1 - Computing device, monitoring system and parallax calculation method - Google Patents

Computing device, monitoring system and parallax calculation method

Info

Publication number
WO2023017635A1
Authority
WO
WIPO (PCT)
Prior art keywords
similarity
information
correction
parallax
unit
Prior art date
Application number
PCT/JP2022/011116
Other languages
French (fr)
Japanese (ja)
Inventor
圭介 稲田
雅士 高田
進一 野中
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社
Priority to DE112022001667.1T (published as DE112022001667T5)
Publication of WO2023017635A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to an arithmetic device, a monitoring system, and a parallax calculation method.
  • A known method uses a stereo camera to calculate the distance between the camera and an observed object and then performs recognition processing on that object.
  • Patent Literature 1 discloses an information processing apparatus in which, of the left and right images captured by a stereo camera photographing the same space from left and right viewpoints, one image is divided into a plurality of regions, each serving as a reference block. The apparatus comprises: a similarity data generation unit that, for each reference block, sets a search range in the other image and generates similarity data by calculating, for each position within the search range, the similarity between the image there and the reference block; a similarity correction unit that smooths the similarity data in the spatial direction based on the similarity data generated for a predetermined number of reference blocks surrounding the corresponding reference block; a result evaluation unit that detects the position of maximum similarity in each smoothed set of similarity data; a depth image generation unit that obtains the parallax of each reference block from the detection results, calculates the subject's position in the depth direction from it, and generates a depth image by mapping that position onto the image plane; and an output information generation unit that uses the depth image to perform predetermined information processing based on the subject's position in three-dimensional space and outputs the result.
  • An arithmetic device according to a first aspect is connected to a first imaging unit and a second imaging unit, and generates parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks. The device comprises: a similarity generation unit that generates, for each matching block, a first similarity between a matching block in the first image and matching blocks within a search range of the second image; a similarity correction unit that generates a second similarity by correcting the first similarity using at least one of first correction information, generated based on the first similarity of any adjacent matching block within the search range of the second image, and second correction information, generated based on the parallax information; and a similarity determination unit that generates the parallax information based on the second similarity.
  • A monitoring system according to a second aspect includes the arithmetic device described above, the first imaging unit, and the second imaging unit.
  • A parallax calculation method according to a third aspect is executed by an arithmetic device that is connected to a first imaging unit and a second imaging unit and generates parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks. The method includes: generating, for each matching block, a first similarity between a matching block in the first image and matching blocks within a search range of the second image; generating a second similarity by correcting the first similarity using at least one of first correction information, generated based on the first similarity of any adjacent matching block within the search range of the second image, and second correction information, generated based on the parallax information; and generating the parallax information based on the second similarity.
  • FIG. 1: Configuration diagram of a monitoring system including an arithmetic device
  • FIG. 2: Configuration diagram of the similarity correction unit
  • FIG. 3: Diagram explaining the first similarity information and the first correction information
  • FIG. 4: Diagram showing a first example of parallax information used to generate the second correction information
  • FIG. 5: Diagram showing a second example of parallax information used to generate the second correction information
  • FIG. 6: Diagram explaining the correction coefficient
  • FIG. 7: Diagram explaining specific examples of the correction coefficient generation method
  • FIG. 8: Configuration diagram of the correction execution unit in the second embodiment
  • FIG. 9: Configuration diagram of the similarity correction unit in the third embodiment
  • A first embodiment of the monitoring system will be described below with reference to FIGS. 1 to 7.
  • FIG. 1 is a configuration diagram of a monitoring system S including an arithmetic device 1.
  • The vehicle surroundings monitoring system S includes an arithmetic device 1, a first imaging unit 2, a second imaging unit 3, a recognition processing unit 4, and a vehicle control unit 5.
  • The vehicle surroundings monitoring system S is mounted on a vehicle C.
  • The arithmetic device 1 is connected to the first imaging unit 2, the second imaging unit 3, and the recognition processing unit 4.
  • The recognition processing unit 4 is connected to the arithmetic device 1 and the vehicle control unit 5.
  • Each of the arithmetic device 1, the recognition processing unit 4, and the vehicle control unit 5 is an ECU (Electronic Control Unit) mounted on the vehicle C.
  • However, the arithmetic device 1, the recognition processing unit 4, and the vehicle control unit 5 need not be separate ECUs; two or more of them may be realized by the same ECU.
  • Both the first imaging unit 2 and the second imaging unit 3 are cameras, and each outputs its captured image to the arithmetic device 1.
  • The captured image output by the first imaging unit 2 is called the first captured image 100, and the captured image output by the second imaging unit 3 is called the second captured image 101.
  • The first imaging unit 2 and the second imaging unit 3 are mounted on the vehicle C facing the same direction and are separated laterally by a predetermined baseline length. The lateral direction in which the first imaging unit 2 and the second imaging unit 3 are arranged is hereinafter also referred to as the "baseline direction".
  • Based on the first captured image 100 and the second captured image 101, the computing device 1 generates parallax information 106 by a method described later and outputs it to the recognition processing unit 4.
  • The computing device 1 includes an input unit 10, a similarity generation unit 11, a similarity correction unit 12, and a similarity determination unit 13.
  • The input unit 10 is implemented by a hardware interface to the first imaging unit 2 and the second imaging unit 3 together with an arithmetic processing unit; the similarity generation unit 11, the similarity correction unit 12, and the similarity determination unit 13 are realized by the arithmetic processing unit.
  • The hardware interface is, for example, RJ45 or Camera Link.
  • The arithmetic processing unit includes, for example, a CPU (a central processing unit), a ROM (a read-only storage device), and a RAM (a readable and writable storage device); the CPU expands a program stored in the ROM into the RAM and executes it, thereby performing the calculations described later.
  • The arithmetic processing unit may be realized by an FPGA (Field Programmable Gate Array), a rewritable logic circuit, or by an ASIC (Application Specific Integrated Circuit) instead of the combination of CPU, ROM, and RAM.
  • The arithmetic processing unit may also be realized by a different combination, for example a CPU, ROM, RAM, and FPGA, instead of the combination of CPU, ROM, and RAM.
  • The input unit 10 supplies the similarity generation unit 11 with a first image 102, obtained by performing image processing on the first captured image 100, and a second image 103, obtained by performing image processing on the second captured image 101.
  • This image processing uses, for example, the intrinsic and extrinsic parameters of the cameras to remove factors that would adversely affect the subsequent processing. Examples include correction of lens distortion and correction of errors in mounting position and mounting orientation.
  • Since the processing in the input unit 10 is not essential in the present embodiment, the first captured image 100 and the first image 102 may be regarded as the same, and the second captured image 101 and the second image 103 may be regarded as the same.
  • The similarity generation unit 11 receives the first image 102 and the second image 103 supplied from the input unit 10, generates first similarity information 104, and supplies it to the similarity correction unit 12.
  • The first similarity information 104 is information representing the similarity between matching blocks in the first image 102 and matching blocks in the second image 103.
  • The first similarity information 104 is, for example, the amount of luminance variation between each pixel in a matching block of the first image 102 and the corresponding pixel in a matching block of the second image 103.
  • Known methods such as SAD (Sum of Absolute Differences), ZSAD (Zero-mean Sum of Absolute Differences), SSD (Sum of Squared Differences), NCC (Normalized Cross-Correlation), and ZNCC (Zero-mean Normalized Cross-Correlation) can be used to calculate the luminance variation.
  • The first similarity information 104 may also be the luminance variation amount, luminance variation direction, or luminance variation pattern between each pixel in a matching block of the first image 102 or the second image 103 and its surrounding pixels.
  • An example of the luminance variation pattern is the increase/decrease pattern, read from the left, of three horizontal pixels consisting of the target pixel and the pixels adjacent to it on the left and right. For example, when the left adjacent pixel value is 10, the target pixel value is 25, and the right adjacent pixel value is 80, the increase/decrease pattern is "increase → increase".
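  • As an illustration only (not part of the patent text), the following sketch computes SAD-style first similarity information for one matching block over a horizontal search range, so that a lower value means a higher similarity, matching the similarity curves described later. The block size, search width, and image layout are assumptions made for the example.

```python
import numpy as np

def sad_similarities(first_img, second_img, top, left, block=8, search_width=64):
    """Return the SAD cost at each horizontal search position.

    first_img, second_img: 2-D grayscale arrays of a rectified stereo pair.
    (top, left): upper-left corner of the matching block in the first image.
    A lower cost means a higher similarity.
    """
    ref = first_img[top:top + block, left:left + block].astype(np.float64)
    costs = []
    for d in range(search_width):               # search along the baseline
        x = left - d                            # candidate position in image 2
        if x < 0:
            break
        cand = second_img[top:top + block, x:x + block].astype(np.float64)
        costs.append(np.abs(ref - cand).sum())  # SAD over the block
    return np.asarray(costs)
```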
  • The similarity correction unit 12 generates second similarity information 105 by correcting the first similarity information 104 based on the first similarity information 104 or the parallax information 106. Based on the second similarity information 105, the similarity determination unit 13 detects the position with the highest similarity within the search range and generates the parallax information 106 from this position information. The similarity determination unit 13 outputs the generated parallax information 106 to the recognition processing unit 4 and the similarity correction unit 12.
  • The recognition processing unit 4 receives the parallax information 106 and performs various recognition processes.
  • An example of recognition processing in the recognition processing unit 4 is three-dimensional object detection using the parallax information 106.
  • Examples of information recognized by the recognition processing unit 4 include subject position information, type information, motion information, and danger information.
  • An example of position information is the direction and distance from the own vehicle.
  • Examples of type information include pedestrians, adults, children, elderly people, animals, falling rocks, bicycles, surrounding vehicles, surrounding structures, and curbs.
  • Examples of motion information include swaying, jumping out, crossing, moving direction, moving speed, and moving trajectory of a pedestrian or a bicycle.
  • Examples of danger information include a pedestrian running out, falling rocks, and abnormal behavior of surrounding vehicles such as sudden stops, sudden deceleration, and sudden steering.
  • The recognition information 107 generated by the recognition processing unit 4 is supplied to the vehicle control unit 5.
  • The vehicle control unit 5 performs various vehicle controls on the vehicle C based on the recognition information 107.
  • Examples of vehicle control performed by the vehicle control unit 5 include brake control, steering control, accelerator control, in-vehicle lamp control, warning sound generation, in-vehicle camera control, and output of information about observed objects around the imaging units to surrounding vehicles or remote center equipment connected via a network.
  • A specific example is speed and brake control according to the parallax information 106 of an obstacle present in front of the vehicle.
  • Distance information may be generated based on the parallax information and used in place of the parallax information 106.
  • The vehicle control unit 5 may also perform subject detection processing based on image processing results using the first image 102 and the second image 103.
  • The vehicle control unit 5 may display, on a display device connected to it, an image obtained through the first imaging unit 2 or the second imaging unit 3 together with an indication for the viewer to recognize. Further, the vehicle control unit 5 may supply information on observed objects detected from the image processing results to an information device that handles traffic information such as map information.
  • FIG. 2 is a configuration diagram of the similarity correction unit 12.
  • The similarity correction unit 12 includes a first correction information generation unit 20, a second correction information generation unit 21, and a correction execution unit 22.
  • The first correction information generation unit 20 generates first correction information 200 based on the first similarity information 104.
  • The second correction information generation unit 21 generates second correction information 201 based on the parallax information 106.
  • The correction execution unit 22 corrects the first similarity information 104 based on the first correction information 200 and the second correction information 201 to generate the second similarity information 105.
  • Next, the configuration of the first correction information generation unit 20 will be explained.
  • The first correction information generation unit 20 includes a correction determination unit 40 and a first generation unit 41.
  • Based on the first similarity information 104, the correction determination unit 40 generates correction validity information 400 indicating whether or not the first correction information 200 should be generated.
  • The correction determination unit 40 determines that generation of the first correction information 200 is necessary when the following first determination condition is satisfied, and that it is unnecessary when the condition is not satisfied.
  • Here, the similarity A(i) is the similarity at the search position A(i) to be judged within the search range, the similarity A(i-1) is the similarity at the search position A(i-1) adjacent to the left of the search position A(i), and the similarity A(i+1) is the similarity at the search position A(i+1) adjacent to the right of the search position A(i). TH0 and TH1 are predetermined threshold values.
  • The magnitude comparisons in the first determination condition may be changed so that the condition is also satisfied when the values are equal.
  • The first determination condition may also be evaluated using five similarities instead of three; in that case the predetermined threshold values TH2 to TH5 are used.
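  • The inequalities of the first determination condition are not reproduced in this text. As an illustration only, the following sketch shows one plausible three-similarity form consistent with the example of FIG. 3, where a lower cost value means a higher similarity and a position is flagged when its similarity exceeds that of both neighbors by the thresholds; the exact comparisons are an assumption.

```python
def first_condition(A, i, TH0, TH1):
    """A: similarity (cost) per search position; i: position to judge.

    Flags position i when its similarity is higher than both neighbors by
    more than the thresholds (lower cost = higher similarity); this is one
    plausible reading of the first determination condition.
    """
    return (A[i - 1] - A[i] > TH0) and (A[i + 1] - A[i] > TH1)
```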
  • The first generation unit 41 receives the first similarity information 104 and the correction validity information 400 and generates the first correction information 200 based on the correction validity information 400.
  • The first correction information 200 is, for example, a combination of a search position to be corrected and corrected similarity information, or a combination of a newly generated search position and its similarity information.
  • The correction execution unit 22 refers to the first correction information 200 and corrects the similarity. The operation of the first correction information generation unit 20 will be described with reference to FIG. 3.
  • FIG. 3 is a diagram for explaining the first similarity information 104 and the first correction information 200.
  • The upper part of FIG. 3 shows the entire similarity curve 1010, and the lower part of FIG. 3 shows the details of a local portion of it: the similarity curve 1012 shown in the lower part corresponds to the range indicated by reference numeral 1011 in the upper part.
  • In both parts, the horizontal axis indicates the search position within the horizontal search range and the vertical axis indicates the similarity; the lower a point is on the vertical axis, the higher the similarity.
  • The black circles on the similarity curve 1012 shown at the bottom of FIG. 3 indicate the similarity at each search position. Since the similarity generation unit 11 performs the search in units of a predetermined number of pixels, the similarities on the similarity curve 1012 appear at pixel-unit search intervals.
  • The pixel unit of the search can be set arbitrarily; the smallest unit is one pixel.
  • The search positions in the central part of the drawing are, from the left, 1003, 1001, 1000, 1002, and 1004.
  • The similarity S0 at the search position 1000 on the similarity curve 1012 is higher than the similarities at the search positions to its left and right. Therefore, the search position 1000 satisfies the first determination condition described above.
  • As a method of correcting the similarity curve 1012, there is a method of replacing the similarity S0 at the search position 1000 with the similarity S1.
  • There is also a method of generating a new similarity S1 at a search position around the search position 1000, for example the search position 1003.
  • The search position 1003 having the similarity S1 on the similarity curve 1012 is an example of the first correction information 200.
  • As a method of generating the similarity S1, there is a method of calculating it by polynomial interpolation, such as linear approximation or SSD parabola fitting, based on a plurality of neighboring similarities.
  • An example of the plurality of neighboring similarities is the three similarities at the search positions 1000, 1001, and 1002.
  • The search position 1003 is a decimal-precision search position located between the search position 1000 and the search position 1002.
  • That is, the first correction information generation unit 20 scans in the baseline direction to find an inflection point where the difference in similarity switches between positive and negative, calculates a new similarity using the similarities around the inflection point, and outputs the combination of the position of the inflection point and the new similarity as the first correction information 200.
  • There are several ways to reflect this correction. A first method rewrites the information of the search position 1000 and the similarity S0 with the information of the search position 1003 and the similarity S1.
  • A second method leaves the information of the search position 1000 and the similarity S0 as it is and adds the information of the search position 1003 and the similarity S1.
  • A third method rewrites the value of the similarity S0 to the value of S1 without changing the search position 1000.
  • In any of these, the information of the search position 1003 may be calculated by three-point approximation or the like; a sketch follows below.
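  • As an illustration only, the following sketch shows standard three-point parabola fitting through the similarities at positions (i-1, i, i+1), which yields a decimal-precision position such as the search position 1003 and an interpolated similarity S1. Treating this exact formula as the patent's method is an assumption; the text names SSD parabola fitting and linear approximation as options.

```python
def parabola_subpixel(A, i):
    """Fit a parabola through (i-1, A[i-1]), (i, A[i]), (i+1, A[i+1]).

    Returns (offset, s1): the sub-pixel offset of the vertex relative to
    position i, in (-1, 1), and the interpolated similarity at the vertex.
    """
    s_l, s_c, s_r = A[i - 1], A[i], A[i + 1]
    denom = s_l - 2.0 * s_c + s_r
    if denom == 0.0:                    # degenerate (flat) neighborhood
        return 0.0, s_c
    offset = 0.5 * (s_l - s_r) / denom
    s1 = s_c - 0.125 * (s_l - s_r) ** 2 / denom
    return offset, s1
```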
  • The second correction information generation unit 21 includes a peripheral parallax continuity determination unit 50 and a second generation unit 51, and generates the second correction information 301 based on the parallax information 106 or the parallax boundary information 300.
  • FIGS. 4 and 5 are diagrams showing examples of the parallax information 106 used by the second correction information generation unit 21 to generate the second correction information 201.
  • The parallax information 106 is set corresponding to each position, for example for each block of pixels.
  • The region of parallaxes referred to when generating the second correction information 201 is called the "reference peripheral parallax region".
  • FIG. 4 shows a first example of the parallax information 106 used for generating the second correction information 201, that is, a first example of determining the reference peripheral parallax region.
  • FIG. 4 illustrates the parallax calculation process for the parallax generation target block 801 in the current frame 800, which is the latest captured image.
  • Each cell of the grid in FIG. 4 is a unit block for calculating parallax.
  • The block size may be 1 pixel, 4 pixels (2 pixels wide by 2 pixels high), 16 pixels (4 pixels wide by 4 pixels high), a square with 5 or more pixels on each side, or a rectangle composed of any number of pixels.
  • The parallax of the upper-left block is calculated first, and the processing target then moves to the right as indicated by the arrow; at the end of a row, processing continues in the next row, again from left to right. Therefore, when the parallax of the block indicated by reference numeral 801 is calculated, all parallaxes in the first to third rows in the drawing have already been calculated, as have the parallaxes to the left of the parallax generation target block 801 in the fourth row.
  • The shaded reference peripheral parallax 802 is used for generating the second correction information 201. That is, in the first example shown in FIG. 4, the reference peripheral parallax region is the reference peripheral parallax 802; the sketch below illustrates the idea.
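  • As an illustration only, the following sketch enumerates the blocks whose parallaxes are already available in raster order when the block at (row, col) is processed; a causal neighborhood of this kind is what the reference peripheral parallax 802 is drawn from. The neighborhood radius is an assumption made for the example.

```python
def reference_peripheral_indices(row, col, radius=1):
    """Blocks near (row, col) whose parallax is already computed in raster order."""
    indices = []
    for r in range(row - radius, row + 1):
        for c in range(col - radius, col + radius + 1):
            if r < 0 or c < 0:
                continue
            if r < row or c < col:      # strictly above, or left in this row
                indices.append((r, c))
    return indices
```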
  • FIG. 5 shows a second example of the parallax information 106 used for generating the second correction information 201, that is, a second example of determining the reference peripheral parallax region.
  • FIG. 5 shows three images captured at different times: the current frame 800, which is the latest frame; the first past frame 901, acquired in the immediately preceding processing cycle; and the second past frame 902, acquired two processing cycles earlier.
  • The block 801 of the current frame 800 is the parallax generation target block, and the corresponding blocks in the past frames are the blocks 903 and 904.
  • The parallaxes in the shaded area of the first past frame 901 are called the "first past reference peripheral parallax 905", and the parallaxes in the shaded area of the second past frame 902 are called the "second past reference peripheral parallax 906".
  • The first past reference peripheral parallax 905 and the second past reference peripheral parallax 906 are a second example of the parallax information 106 used to generate the second correction information 201. Since these parallaxes belong to past frames, they have already been calculated by the time the parallax of the parallax generation target block 801 is generated; they are the parallax information around the blocks 903 and 904 of the past frames corresponding to the parallax generation target block 801.
  • In the second example, the reference peripheral parallax region is either the first past reference peripheral parallax 905 alone or both the first past reference peripheral parallax 905 and the second past reference peripheral parallax 906.
  • Although two past frames are referred to in the example of FIG. 5, only one past frame or three or more past frames may be referred to.
  • Based on the parallax information 106, the peripheral parallax continuity determination unit 50 generates second correction necessity information 500 indicating whether or not the second correction information 301 needs to be generated.
  • The second correction necessity information 500 includes, for example, peripheral parallax information and correction necessity information.
  • An example of the peripheral parallax information is the average parallax value in the reference peripheral parallax region.
  • Examples include the average parallax value of the reference peripheral parallax 802 shown in FIG. 4, the average parallax value of the first past reference peripheral parallax 905 shown in FIG. 5, and the average parallax value of the second past reference peripheral parallax 906.
  • To generate the peripheral parallax information, only the parallax values in the horizontal direction or only the parallax values in the vertical direction may be referred to.
  • Isolated parallaxes within the reference peripheral parallax region may be excluded from the referenced parallaxes.
  • As a method of detecting an isolated parallax, a parallax may be judged to be isolated when the difference between its value and the values of the parallaxes around it is greater than a predetermined value.
  • The correction necessity information in the second correction necessity information 500 indicates whether or not the second generation unit 51 should generate the second correction information 301.
  • When the correction necessity information indicates "generation: required", the second correction information 301 is generated and the correction execution unit 22 corrects the similarity using it.
  • When the correction necessity information indicates "generation: not required", the correction execution unit 22 does not correct the similarity using the second correction information 301.
  • For example, the peripheral parallax continuity determination unit 50 calculates the degree of variation of the parallaxes within the reference peripheral parallax region, and if the degree of variation is large, sets the second correction necessity information 500 to "generation: not required".
  • Examples of the degree of variation include the sum of the differences between the average parallax value in the reference peripheral parallax region and each parallax value, and the variance of the values in the region.
  • To generate the degree of variation, only the parallax values in the horizontal direction or only the parallax values in the vertical direction may be referred to.
  • Alternatively, the degree of continuity of the parallaxes in the reference peripheral parallax region may be calculated, and if the degree of continuity is small, the correction necessity information may be set to "no correction".
  • An example of the degree of continuity is based on the average or variance of the differences between the parallax values in the reference peripheral parallax region. To generate the degree of continuity, only the parallax values in the horizontal direction or only the parallax values in the vertical direction may be referred to.
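  • As an illustration only, the following sketch computes the quantities named above: an isolated-parallax filter, the degree of variation, and the adjacent-difference statistic from which a degree of continuity can be derived. The neighborhood definition and the choice of the sum-of-differences metric are assumptions made for the example.

```python
import numpy as np

def exclude_isolated(parallaxes, neighbor_indices, iso_thresh):
    """Drop parallaxes that differ too much from the mean of their neighbors.

    parallaxes: 1-D array of parallax values in the region.
    neighbor_indices: neighbor_indices[i] lists the indices around i.
    """
    parallaxes = np.asarray(parallaxes, dtype=float)
    kept = [p for i, p in enumerate(parallaxes)
            if abs(p - np.mean(parallaxes[neighbor_indices[i]])) <= iso_thresh]
    return np.asarray(kept)

def degree_of_variation(parallaxes):
    # sum of absolute differences from the regional average
    # (the variance of the region is an alternative named in the text)
    parallaxes = np.asarray(parallaxes, dtype=float)
    return np.abs(parallaxes - parallaxes.mean()).sum()

def adjacent_differences(parallaxes):
    # average absolute difference between adjacent parallax values, one of
    # the quantities from which a degree of continuity can be derived
    return np.abs(np.diff(np.asarray(parallaxes, dtype=float))).mean()
```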
  • The second generation unit 51 generates the second correction information 301 based on the second correction necessity information 500.
  • An example of the second correction information 301 is a correction coefficient K corresponding to each search position. The role of this correction coefficient K will be described with reference to FIG. 6; a method of calculating it will be described later.
  • FIG. 6 is a diagram for explaining the correction coefficient K.
  • The graph 1110 shown in the upper part of FIG. 6 is an example of the second correction information 301 and is the set of correction coefficients K for each search position.
  • The graph 1111 shown in the lower part of FIG. 6 indicates the similarity at each search position and shows the same kind of information as FIG. 3. In FIG. 6, however, the solid line indicates the similarity before correction and the dashed line indicates the similarity after correction.
  • The upper part and the lower part of FIG. 6 will be described in order.
  • The horizontal axis of the graph 1110 indicates the search position within the search range, and the vertical axis indicates the correction coefficient K applied to the first similarity information 104.
  • The correction coefficient K0 is the value at which the first similarity information 104 is not corrected.
  • When the correction coefficient is smaller than the correction coefficient K0, the first similarity information 104 is corrected so that the similarity becomes higher.
  • When the correction coefficient is larger than the correction coefficient K0, the first similarity information 104 is corrected so that the similarity becomes lower.
  • When the correction coefficient K is used by multiplying it with the first similarity information 104 before correction, for example, the correction coefficient K0 is "1", the correction coefficient K1 is a real number smaller than "1" and larger than "0", and the correction coefficient K2 is a real number larger than "1".
  • When the correction coefficient K is used by adding it to the first similarity information 104 before correction, for example, the correction coefficient K0 is "0", the correction coefficient K1 is a negative real number, and the correction coefficient K2 is a positive real number, so that K1 < K0 < K2 holds as in the multiplicative case.
  • The horizontal axis of the graph 1111 indicates the search position within the search range, and the vertical axis indicates the similarity.
  • The first similarity curve 1104, indicated by the solid line, is the similarity curve created using the first similarity information 104 as it is.
  • The second similarity curve 1105, indicated by the dashed line, is the similarity curve created using the second similarity information 105 generated from the second correction information 1100 and the first similarity information 104.
  • The correction coefficient K at the search position A1 is "K0", as indicated by the element 1101 of the second correction information 301. Therefore, as indicated by the element 1106 on the second similarity curve 1105, the value of the first similarity curve 1104 and the value of the second similarity curve 1105 are the same at the search position A1.
  • The correction coefficient K at the search position A2 is "K1", a value smaller than K0, as indicated by the element 1102 of the second correction information 301. Therefore, as indicated by the element 1107 on the second similarity curve 1105, the second similarity is higher than the first similarity at the search position A2.
  • The correction coefficient at the search position A3 is "K2", a value larger than K0, as indicated by the element 1103 of the second correction information 301. Therefore, as indicated by the element 1108 on the second similarity curve 1105, the second similarity is lower than the first similarity at the search position A3.
  • Correction of the first similarity information 104 using the correction coefficient K is performed, for example, as Sa(i) = K(i) × Sb(i).
  • Here, Sb(i) is the similarity at the search position (i) on the first similarity curve 1104, Sa(i) is the similarity at the search position (i) on the second similarity curve 1105, and K(i) is the correction coefficient K of the second correction information 1100 at the search position (i).
  • For example, the similarity at the search position A2 in FIG. 6 satisfies Sa(A2) = K1 × Sb(A2).
  • The correction execution unit 22 reads the correction coefficient K for each search position by referring to the second correction information 301 and corrects the first similarity by multiplying the similarity at that position, as in the above equation. Note that the correction execution unit 22 performs the similarity correction using the second correction information 301 after the similarity correction using the first correction information 200 has been performed.
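  • As an illustration only, the following sketch applies the multiplicative correction Sa(i) = K(i) × Sb(i) to a whole search range at once; the element-wise array form is an assumption made for the example.

```python
import numpy as np

def apply_second_correction(Sb, K):
    """Second correction: Sa(i) = K(i) * Sb(i) for every search position.

    With cost-style similarities (a lower value means a higher similarity),
    K < K0 = 1 raises the similarity at a position (as at A2 in FIG. 6)
    and K > K0 = 1 lowers it (as at A3).
    """
    return np.asarray(K, dtype=float) * np.asarray(Sb, dtype=float)
```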
  • FIG. 7(a) is a diagram showing a first example of calculating the correction coefficient K.
  • In FIG. 7(a), the first row shows the parallax position, the second row the parallax value, the third row the continuity, the fourth row the function, and the fifth row the correction coefficient K.
  • The function "0" means no correction, "1" means weak correction, and "2" means strong correction.
  • For continuity "0" and "1" the function is set to "0", for continuity "2" the function is set to "1", and for continuity "3" the function is set to "2".
  • The correction coefficient K is indicated by a numerical value for convenience of drawing: "5" corresponds to "K0" in FIG. 6, "4" corresponds to a value between "K1" and "K0" in FIG. 6, and "3" corresponds to "K1".
  • The correction coefficient is set according to the value of the function: when the function is "0", the correction coefficient K is set to "5"; when the function is "1", to "4"; and when the function is "2", to "3".
  • The parallax positions A to G have a continuous parallax value of "8". Therefore, the continuity increases by 1 from the initial value of "0" and then stays at the maximum value of "3". Since the parallax value switches to "2" from the parallax position H, the difference from "8" exceeds the threshold and the continuity returns to "0". Since the parallax value "2" continues after that, the continuity increases again and then stays at the maximum value "3". Because the function is linked to the continuity and the correction coefficient K is linked to the function, the values of the function and of the correction coefficient K in FIG. 7(a) vary with the continuity.
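  • As an illustration only, the following sketch reproduces the mapping of FIG. 7(a): a continuity counter that resets when the parallax value jumps, a function keyed to the continuity, and the numeric coefficients 5/4/3 standing for K0, a value between K0 and K1, and K1. The reset threshold of 2 follows the walk-through above; treating it as a general setting is an assumption.

```python
FUNC_FROM_CONTINUITY = {0: 0, 1: 0, 2: 1, 3: 2}   # continuity -> function
K_FROM_FUNCTION = {0: 5, 1: 4, 2: 3}              # function -> coefficient K

def correction_coefficients(parallax_values, jump_thresh=2, max_cont=3):
    """Per-position correction coefficients for a row of parallax values."""
    coeffs, continuity, prev = [], 0, None
    for p in parallax_values:
        if prev is not None:
            if abs(p - prev) > jump_thresh:
                continuity = 0                    # value jumped: reset
            else:
                continuity = min(continuity + 1, max_cont)
        coeffs.append(K_FROM_FUNCTION[FUNC_FROM_CONTINUITY[continuity]])
        prev = p
    return coeffs

# [8]*7 + [2]*6 yields 5, 5, 4, 3, 3, 3, 3 and then 5, 5, 4, 3, 3, 3,
# matching the walk-through of parallax positions A to M above
print(correction_coefficients([8] * 7 + [2] * 6))
```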
  • FIG. 7(b) is a diagram showing a second example of calculating the correction coefficient K.
  • In FIG. 7(b), the first row shows the parallax position, the second row the parallax value, the third row the continuity A, the fourth row the continuity B, the fifth row the maximum continuity, the sixth row the function, and the seventh row the correction coefficient K.
  • The continuity A and the continuity B count the continuation of different parallax values.
  • The maximum continuity is the larger of the continuity A and the continuity B.
  • In this example, the function is set in conjunction with the maximum continuity.
  • The parallax positions A to G have a continuous parallax value of "8", the parallax positions H to M have a continuous parallax value of "2", and the parallax positions N to Q have a continuous parallax value of "5". Therefore, at the parallax positions A to G, the continuity A increases in the same way as the continuity in FIG. 7(a) and then maintains the maximum value of "3".
  • Up to that point, the continuity B is marked with an asterisk indicating no value, because only one kind of parallax value has appeared and the continuity B is not yet used.
  • The continuity B takes the initial value "0" at the parallax position H and thereafter, because the parallax value "2" continues, increases to the maximum value "3" by the parallax position K. After that, when the parallax value changes to "5" at the parallax position N, the continuity B decreases and the continuity A increases in its place.
  • Since the maximum continuity is the larger of the continuity A and the continuity B as described above, it is the same as the continuity A up to the parallax position G, where the continuity B has no value, and from the parallax position J to N the larger value, that of the continuity B, becomes the maximum continuity.
  • The value of the function is determined according to the maximum continuity, and the value of the correction coefficient K is determined according to the value of the function.
  • In FIG. 7(a), the correction coefficient K takes only values corresponding to K0 to K1; the mapping can be changed so that K also takes the value corresponding to K2. For example, when the function is "0", the correction coefficient K is set to "7", that is, K2; when the function is "1", it is set to "5", that is, K0; and when the function is "2", it is set to "3", that is, K1.
  • According to the first embodiment described above, the computing device 1 is connected to the first imaging unit 2 and the second imaging unit 3 and generates parallax information between the first image 100 captured by the first imaging unit 2 and the second image 101 captured by the second imaging unit 3 by searching in units of predetermined matching blocks.
  • The computing device 1 includes: the similarity generation unit 11, which generates, for each matching block, a first similarity between a matching block in the first image 100 and matching blocks within the search range of the second image 101; the similarity correction unit 12, which corrects the first similarity information 104 to generate the second similarity information 105 using at least one of the first correction information 200, generated based on the first similarity of any adjacent matching block within the search range of the second image, and the second correction information 301, generated based on the parallax information 106; and the similarity determination unit 13, which generates the parallax information 106 based on the second similarity information 105. Therefore, by correcting the similarity using the first correction information 200 and the second correction information 301, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
  • The first imaging unit 2 and the second imaging unit 3 generate the first image 100 and the second image 101 in each predetermined processing cycle.
  • The arithmetic device 1 includes the second correction information generation unit 21, which generates the second correction information, a correction coefficient K by which the similarity is multiplied, based on the parallax information 106 in the peripheral region of the matching block in the latest first image 100 and second image 101, or in the first image 100 and the second image 101 from a predetermined period earlier.
  • Therefore, when the latest captured image is used, the correction coefficient K can be calculated based on parallax information with no positional deviation, and when a past captured image is used, the correction coefficient can be calculated using the parallax information of the entire frame.
  • The second correction information 301 includes a correction coefficient that corrects the first similarity corresponding to the parallax information of the peripheral region so as to increase it when the continuity of the parallax information in the peripheral region is equal to or greater than a threshold, as at the search position A2 in FIG. 6. Therefore, by increasing the similarity of a matching block whose parallax is expected to be similar to its surroundings, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
  • The second correction information 301 also includes a correction coefficient that corrects the first similarity corresponding to the parallax information of the peripheral region so as to decrease it when the continuity of the parallax information in the peripheral region is equal to or less than a threshold, as at the search position A3 in FIG. 6. Therefore, by lowering the similarity of a matching block whose parallax is not similar to its surroundings and thus making it harder to match, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
  • After correcting the first similarity information 104 based on the first correction information 200, the correction execution unit 22 of the similarity correction unit 12 further corrects it based on the second correction information 301 to generate the second similarity. Therefore, the calculation device 1 can use the result of the first correction, which corrects the similarity locally, in the second correction, which corrects the similarity globally.
  • The first imaging unit 2 and the second imaging unit 3 are arranged along the baseline direction. The arithmetic device 1 includes the first correction information generation unit 20, which scans in the baseline direction to find an inflection point where the difference in similarity switches between positive and negative, calculates a new similarity using the similarities around the inflection point, and outputs the combination of the position of the inflection point and the new similarity as the first correction information 200.
  • Therefore, local similarity correction can be performed at a finer granularity than the unit of similarity calculation. For example, if the similarity is calculated for each pixel, the search position 1003 has a decimal coordinate value, so the similarity can be calculated with sub-pixel precision.
  • The monitoring system S includes the arithmetic device 1, the first imaging unit 2, and the second imaging unit 3. Therefore, the monitoring system S suffers little erroneous matching and has good parallax calculation accuracy.
  • In the embodiment described above, the similarity correction unit 12 of the arithmetic device 1 includes both the first correction information generation unit 20 and the second correction information generation unit 21.
  • However, the similarity correction unit 12 may include only one of the first correction information generation unit 20 and the second correction information generation unit 21.
  • In the embodiment described above, the baseline direction was horizontal, and the first generation unit 41 scanned the similarities horizontally.
  • However, the first imaging unit 2 and the second imaging unit 3 may be arranged vertically or diagonally.
  • In general, the first generation unit 41 may scan the similarities along the baseline direction, that is, along the epipolar line.
  • (Second Embodiment) A second embodiment of the monitoring system will be described with reference to FIG. 8.
  • In the following description, the same components as those in the first embodiment are given the same reference numerals, and mainly the differences are described. Points not specifically described are the same as in the first embodiment.
  • This embodiment differs from the first embodiment mainly in that it includes a correction information invalidation unit.
  • FIG. 8 is a configuration diagram of the correction execution unit 22A in the second embodiment.
  • The correction execution unit 22A includes a correction information invalidation unit 60 and a correction unit 61.
  • The correction unit 61 includes a first correction unit 70 and a second correction unit 71.
  • In the first embodiment, the first similarity information 104, the first correction information 200, and the second correction information 301 are input to the correction execution unit 22.
  • In the present embodiment, an invalidation instruction 602 is input in addition to these three.
  • The correction execution unit 22A performs correction processing of the first similarity information 104 based on the invalidation instruction 602, the first correction information 200, and the second correction information 301, and generates the second similarity information 105.
  • The correction information invalidation unit 60 can invalidate at least one of the first correction information 200 and the second correction information 301 based on the invalidation instruction 602.
  • That is, the correction information invalidation unit 60 may invalidate both the first correction information 200 and the second correction information 301, may invalidate only one of them, or may invalidate neither.
  • Examples of the invalidation instruction 602 include information indicating the validity or invalidity of the first correction information 200 and information indicating the validity or invalidity of the second correction information 301. The correction information invalidation unit 60 receives the first correction information 200 output by the first correction information generation unit 20 and the second correction information 301 output by the second correction information generation unit 21.
  • When the correction information invalidation unit 60 receives an invalidation instruction 602 indicating, for example, that the first correction information 200 is valid and the second correction information 301 is invalid, it performs the following processing: it outputs the first correction information 200 as the first effective correction information 600 and outputs, as the second effective correction information 601, information in which the second correction information 301 has been invalidated.
  • The correction unit 61 corrects the first similarity information 104 based on the first effective correction information 600 and the second effective correction information 601 to generate the second similarity information 105.
  • An example of correction processing using the first effective correction information 600 is to replace the similarity at the search position to be corrected, or to insert a similarity at a new search position, to obtain the second similarity information.
  • An example of correction processing using the second effective correction information 601 is the method of generating the second similarity curve 1105 described above with reference to FIG. 6. When the first correction information 200 is invalidated in the first effective correction information 600, correction processing using the first correction information 200 is not performed; likewise, when the second correction information 301 is invalidated in the second effective correction information 601, correction processing using the second correction information 301 is not performed.
  • Specifically, the first correction unit 70 performs correction processing of the first similarity information 104 based on the first effective correction information 600 and generates intermediate similarity information 700.
  • The second correction unit 71 performs correction processing of the intermediate similarity information 700 based on the second effective correction information 601 to generate the second similarity information 105.
  • The correction processing in the first correction unit 70 and the correction processing in the second correction unit 71 are as described with reference to FIGS. 3 and 6, so their details are omitted; a sketch of the overall flow follows below.
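  • As an illustration only, the following sketch shows the control flow of the correction execution unit 22A of FIG. 8: the invalidation instruction gates each piece of correction information before the two-stage correction (first correction, then second correction) runs. The data types and the form of the instruction are assumptions made for the example.

```python
def correction_execution_22a(first_sim, corr1, corr2, corr1_valid, corr2_valid,
                             apply_first, apply_second):
    """apply_first / apply_second: the correction procedures of FIGS. 3 and 6.

    corr1_valid / corr2_valid: booleans taken from the invalidation
    instruction 602.
    """
    effective1 = corr1 if corr1_valid else None   # first effective correction 600
    effective2 = corr2 if corr2_valid else None   # second effective correction 601
    intermediate = (apply_first(first_sim, effective1)
                    if effective1 is not None else first_sim)
    return (apply_second(intermediate, effective2)
            if effective2 is not None else intermediate)
```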
  • (Third Embodiment) A third embodiment of the monitoring system will be described with reference to FIG. 9.
  • In the following description, the same components as those in the first embodiment are given the same reference numerals, and mainly the differences are described. Points not specifically described are the same as in the first embodiment.
  • The present embodiment differs from the first embodiment mainly in that a parallax boundary is detected and, when one is present, the second correction information is not generated.
  • FIG. 9 is a configuration diagram of the similarity correction unit 12B in the third embodiment.
  • The similarity correction unit 12B includes a second correction information generation unit 21B in place of the second correction information generation unit 21, and newly includes a parallax boundary detection unit 30.
  • The parallax boundary detection unit 30 refers to the already calculated parallax information 106 and, upon detecting a parallax boundary within a matching block, outputs parallax boundary information 300 to the second correction information generation unit 21B.
  • For example, one of the following three methods can be used to determine whether a parallax boundary exists within a matching block.
  • The first method individually checks the parallax of each block within the matching block in the current frame. Specifically, the parallax of each block in the matching block is compared with the parallaxes of the adjacent blocks on its left, right, top, and bottom; if a difference is equal to or greater than a predetermined threshold, it is determined that a parallax boundary exists. A block is judged to have no parallax boundary if its parallax differs from those of the adjacent blocks by less than the threshold. Since this first method uses the current frame, the latest information is used, but differences with blocks whose parallax has not yet been calculated cannot be evaluated.
  • The second method individually checks the parallax of each block within the matching block in the past frame acquired immediately before.
  • The difference from the first method is that the target is a past frame: since the parallax has already been calculated for all blocks of a past frame, the parallax difference can be computed for every block.
  • This second method assumes that the position of the subject does not change significantly within the time of one frame, and embodies the idea that the benefit of being able to compute the parallax difference in all blocks outweighs the loss caused by changes in the subject's position.
  • The third method detects parallax boundaries by a hierarchical search. This method may be applied to the current frame, as in the first method, or to a past frame, as in the second method.
  • First, the parallax is calculated using a block larger than the matching block, for example a block twice as large both vertically and horizontally. This block is hereinafter referred to as a large-size block.
  • Next, the parallax of each block within the matching block is determined based on the parallaxes of the large-size blocks. Specifically, three parallaxes in total, namely the parallax of the large-size block containing the block whose parallax is to be determined and the parallaxes of the large-size blocks adjacent to it on the left and right, are taken as parallax candidates; a predetermined evaluation is performed and the most probable candidate is set as the parallax of the block.
  • It is then determined whether the difference between the parallax calculated by this hierarchical search and the parallaxes of the adjacent blocks on the left, right, top, and bottom is equal to or greater than a predetermined threshold, that is, whether a parallax boundary exists.
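  • As an illustration only, the following sketch implements the neighbor comparison shared by the three methods: a block is flagged as lying on a parallax boundary when its parallax differs from that of any available four-neighbor by at least the threshold. Representing not-yet-calculated parallaxes as NaN is an assumption made for the example.

```python
import numpy as np

def has_parallax_boundary(disparities, row, col, thresh):
    """disparities: 2-D array of per-block parallaxes (NaN = not yet computed)."""
    height, width = disparities.shape
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < height and 0 <= c < width and not np.isnan(disparities[r, c]):
            if abs(disparities[row, col] - disparities[r, c]) >= thresh:
                return True             # a neighbor differs by at least thresh
    return False
```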
  • When a parallax boundary exists within the matching block, the peripheral parallax continuity determination unit 50 creates second correction necessity information 500 indicating that the second correction information 301 need not be generated, and transmits it to the second generation unit 51.
  • Since the second generation unit 51 that has received this second correction necessity information 500 does not generate the second correction information 301, the correction execution unit 22 does not correct the first similarity information 104 based on the second correction information 301.
  • According to the third embodiment described above, the computing device 1 includes the parallax boundary detection unit 30, which generates the parallax boundary information 300 based on luminance information or parallax information of the first image 100 or the second image 101 in the peripheral region.
  • The second correction information generation unit 21B disables generation of the second correction information 201 based on the parallax boundary information 300. Therefore, when a parallax boundary exists in the surroundings, the similarity is not corrected by the second correction information 301, and adverse effects of similarity correction by the second correction information 301 can be prevented.
  • In the third embodiment described above, the parallax boundary detection unit 30 uses the parallax information 106 to determine whether a parallax boundary exists. However, the parallax boundary detection unit 30 may instead use the first image 102 or the second image 103 and determine the presence of a boundary simply from the presence or absence of a luminance change. In that case, the parallax boundary detection unit 30 refers to the first image 102 or the second image 103, compares the luminance of each block in the matching block with the luminances of the adjacent blocks on its left, right, top, and bottom, and determines that a parallax boundary exists if a difference is equal to or greater than a predetermined threshold. A block is judged to have no luminance boundary if its luminance differs from those of the adjacent blocks by less than the threshold.
  • In each of the embodiments and modifications described above, the configuration of the functional blocks is merely an example. Several functional configurations shown as separate functional blocks may be integrated, and a configuration represented by one functional block diagram may be divided into two or more functions. Part of the functions of each functional block may also be provided in another functional block.
  • In each of the embodiments and modifications described above, the program is stored in a ROM (not shown), but the program may instead be stored in a non-volatile storage device (not shown) such as a flash memory.
  • The arithmetic device may also have an input/output interface (not shown), and the program may be read from another device when necessary via the input/output interface and a medium that the arithmetic device can use.
  • Here, the medium refers to, for example, a storage medium attachable to and detachable from the input/output interface, or a communication medium, that is, a wired, wireless, or optical network, or a carrier wave or digital signal propagating through the network.
  • Part or all of the functions realized by the program may be realized by a hardware circuit or an FPGA.

Abstract

This computing device is connected to a first imaging unit and a second imaging unit, and generates parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in prescribed matching block units, said computing device being equipped with: a similarity generation unit for generating a first degree of similarity of a matching block within a search range of a second image relative to a matching block in the first image in matching block units; a similarity correction unit for generating a second degree of similarity by correcting the first degree of similarity by using first correction information which is generated on the basis of the first degree of similarity in a desired adjacent matching block within the search range of the second image, and/or second correction information generated on the basis of parallax information; and a similarity determination unit for generating parallax information on the basis of the second degree of similarity.

Description

Arithmetic device, monitoring system, and parallax calculation method
 The present invention relates to an arithmetic device, a monitoring system, and a parallax calculation method.
 A known method uses a stereo camera to calculate the distance between the camera and an observed object and performs recognition processing on the object. Patent Literature 1 discloses an information processing apparatus in which, of the left and right images captured by a stereo camera photographing the same space from left and right viewpoints, one image is divided into a plurality of regions, each serving as a reference block, the apparatus comprising: a similarity data generation unit that, for each reference block, sets a search range in the other image and generates similarity data by calculating, for each position within the search range, the similarity between the image there and the reference block; a similarity correction unit that smooths the similarity data in the spatial direction based on the similarity data generated for a predetermined number of reference blocks surrounding the corresponding reference block; a result evaluation unit that detects the position of maximum similarity in each smoothed set of similarity data; a depth image generation unit that obtains the parallax of each reference block from the detection results, calculates the subject's position in the depth direction from it, and generates a depth image by mapping that position onto the image plane; and an output information generation unit that uses the depth image to perform predetermined information processing based on the subject's position in three-dimensional space and outputs the result.
Japanese Patent Application Laid-Open No. 2016-039618
The invention described in Patent Literature 1 leaves room for improvement in the calculation of parallax.
An arithmetic device according to a first aspect of the present invention is connected to a first imaging unit and a second imaging unit, and generates parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks. The arithmetic device comprises: a similarity generation unit that generates, in units of matching blocks, a first similarity between a matching block in the first image and a matching block within a search range of the second image; a similarity correction unit that generates a second similarity by correcting the first similarity using at least one of first correction information, generated based on the first similarity of an arbitrary adjacent matching block within the search range of the second image, and second correction information, generated based on the parallax information; and a similarity determination unit that generates the parallax information based on the second similarity.
A monitoring system according to a second aspect of the present invention comprises the above arithmetic device, the first imaging unit, and the second imaging unit.
A parallax calculation method according to a third aspect of the present invention is executed by an arithmetic device that is connected to a first imaging unit and a second imaging unit and that generates parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks. The method includes: generating, in units of matching blocks, a first similarity between a matching block in the first image and a matching block within a search range of the second image; generating a second similarity by correcting the first similarity using at least one of first correction information, generated based on the first similarity of an arbitrary adjacent matching block within the search range of the second image, and second correction information, generated based on the parallax information; and generating the parallax information based on the second similarity.
According to the present invention, erroneous matching can be reduced and the accuracy of parallax calculation can be improved.
FIG. 1 Configuration diagram of a monitoring system including an arithmetic device
FIG. 2 Configuration diagram of the similarity correction unit
FIG. 3 Diagram explaining the first similarity information and the first correction information
FIG. 4 Diagram showing a first example of the parallax information used to generate the second correction information
FIG. 5 Diagram showing a second example of the parallax information used to generate the second correction information
FIG. 6 Diagram explaining the correction coefficient
FIG. 7 Diagram explaining a specific example of a method of generating the correction coefficient
FIG. 8 Configuration diagram of the correction execution unit in the second embodiment
FIG. 9 Configuration diagram of the similarity correction unit in the third embodiment
-First Embodiment-
A first embodiment of the monitoring system will be described below with reference to FIGS. 1 to 7.
FIG. 1 is a configuration diagram of the monitoring system S including the arithmetic device 1. The vehicle surroundings monitoring system S includes the arithmetic device 1, a first imaging unit 2, a second imaging unit 3, a recognition processing unit 4, and a vehicle control unit 5. The vehicle surroundings monitoring system S is mounted on a vehicle C. The arithmetic device 1 is connected to the first imaging unit 2, the second imaging unit 3, and the recognition processing unit 4. The recognition processing unit 4 is connected to the arithmetic device 1 and the vehicle control unit 5.
Each of the arithmetic device 1, the recognition processing unit 4, and the vehicle control unit 5 is an ECU (Electronic Control Unit) mounted on the vehicle C. However, the arithmetic device 1, the recognition processing unit 4, and the vehicle control unit 5 need not be separate ECUs; two or more of them may be realized by the same ECU.
The first imaging unit 2 and the second imaging unit 3 are both cameras, and each outputs a captured image to the arithmetic device 1. The captured image output by the first imaging unit 2 is called a first captured image 100, and the captured image output by the second imaging unit 3 is called a second captured image 101. The first imaging unit 2 and the second imaging unit 3 are mounted on the vehicle C facing the same direction and spaced laterally apart by a predetermined baseline length. In the following, the lateral direction along which the first imaging unit 2 and the second imaging unit 3 are arranged is also referred to as the "baseline direction". Based on the first captured image 100 and the second captured image 101, the arithmetic device 1 generates parallax information 106 by the method described later and outputs it to the recognition processing unit 4.
The arithmetic device 1 includes an input unit 10, a similarity generation unit 11, a similarity correction unit 12, and a similarity determination unit 13. The input unit 10 is realized by a hardware interface to the first imaging unit 2 and the second imaging unit 3 together with an arithmetic processing unit. The similarity generation unit 11, the similarity correction unit 12, and the similarity determination unit 13 are realized by the arithmetic processing unit. The hardware interface is, for example, RJ45 or Camera Link.
The arithmetic processing unit includes, for example, a CPU as a central processing unit, a ROM as a read-only storage device, and a RAM as a readable and writable storage device; the CPU loads a program stored in the ROM into the RAM and executes it to perform the operations described later. Instead of the combination of CPU, ROM, and RAM, the arithmetic processing unit may be realized by an FPGA (Field Programmable Gate Array), which is a rewritable logic circuit, or by an ASIC (Application Specific Integrated Circuit), which is an application-specific integrated circuit. The arithmetic processing unit may also be realized by a different combination of components, for example a combination of a CPU, ROM, RAM, and an FPGA.
The input unit 10 supplies the similarity generation unit 11 with a first image 102 obtained by applying image processing to the first captured image 100 and a second image 103 obtained by applying image processing to the second captured image 101. This image processing is performed using, for example, the intrinsic and extrinsic parameters of the cameras, and removes effects that would adversely affect the subsequent image processing. Examples of this image processing include correction of lens distortion and correction of errors in mounting position and mounting orientation. However, since the processing in the input unit 10 is not essential to the present embodiment, the first captured image 100 and the first image 102 may be regarded as identical, and the second captured image 101 and the second image 103 may be regarded as identical.
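As an illustrative sketch of this kind of preprocessing (not taken from the patent text; OpenCV is used only as one possible realization, and all calibration values below are hypothetical placeholders), the distortion and mounting-error correction could be performed by standard stereo rectification:

```python
import cv2
import numpy as np

# Hypothetical calibration of the first and second imaging units.
K1 = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
K2 = K1.copy()
D1 = np.zeros(5)                 # distortion coefficients, camera 1
D2 = np.zeros(5)                 # distortion coefficients, camera 2
R = np.eye(3)                    # relative rotation between the cameras
T = np.array([0.12, 0.0, 0.0])   # translation: assumed baseline of 0.12 m
size = (1280, 960)               # image size (width, height)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

def preprocess(captured1, captured2):
    # Produces the first image 102 and the second image 103 from the
    # first captured image 100 and the second captured image 101.
    img1 = cv2.remap(captured1, m1x, m1y, cv2.INTER_LINEAR)
    img2 = cv2.remap(captured2, m2x, m2y, cv2.INTER_LINEAR)
    return img1, img2
```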
The similarity generation unit 11 receives the first image 102 and the second image 103 supplied from the input unit 10, generates first similarity information 104, and supplies it to the similarity correction unit 12. The first similarity information 104 is information representing the similarity between a matching block in the first image 102 and a matching block in the second image 103. The first similarity information 104 is, for example, the amount of luminance variation between each pixel in a matching block of the first image 102 and the corresponding pixel in a matching block of the second image 103. The amount of luminance variation can be calculated by methods such as SAD (Sum of Absolute Differences), ZSAD (Zero-mean Sum of Absolute Differences), SSD (Sum of Squared Differences), NCC (Normalized Cross-Correlation), or ZNCC (Zero-mean Normalized Cross-Correlation).
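As a minimal sketch of one such calculation (SAD, with the data layout and search direction chosen here as assumptions; lower values mean higher similarity), the first similarity information for one matching block could be produced as follows:

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of Absolute Differences: a smaller value means higher similarity.
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def similarity_over_search_range(img1, img2, y, x, block=8, max_disp=64):
    # First similarity information for the matching block at (y, x) of the
    # first image: one SAD score per candidate position in the search range
    # of the second image. (y, x) is assumed to lie fully inside both images.
    ref = img1[y:y + block, x:x + block]
    scores = []
    for d in range(max_disp):
        if x - d < 0:
            break
        cand = img2[y:y + block, x - d:x - d + block]
        scores.append(sad(ref, cand))
    return scores  # index = disparity candidate d
```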
The first similarity information 104 may instead be the amount, direction, or pattern of luminance variation between each pixel in a matching block of the first image 102 or the second image 103 and its surrounding pixels. One example of a luminance variation pattern is the increase/decrease pattern, read from the left, over three horizontal pixels consisting of the target pixel and its left and right neighbors. For example, when the left neighbor value is 10, the target pixel value is 25, and the right neighbor value is 80, the pattern is "increase → increase".
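A minimal sketch of this increase/decrease pattern, reproducing the worked example above:

```python
def luminance_pattern(left, center, right):
    # Increase/decrease pattern over three horizontal pixels, read from the left.
    first = "increase" if center > left else "decrease"
    second = "increase" if right > center else "decrease"
    return f"{first} -> {second}"

print(luminance_pattern(10, 25, 80))  # increase -> increase
```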
Based on the first similarity information 104 or the parallax information 106, the similarity correction unit 12 generates second similarity information 105 by correcting the first similarity information 104. Based on the second similarity information 105, the similarity determination unit 13 detects the position with the highest similarity within the search range and generates the parallax information 106 from this position information. The similarity determination unit 13 outputs the generated parallax information 106 to the recognition processing unit 4 and the similarity correction unit 12.
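Under the SAD convention of the sketch above (a lower score means higher similarity), the determination step for one matching block could be as simple as:

```python
def determine_disparity(scores):
    # scores: second similarity information for one matching block,
    # one value per disparity candidate (lower = more similar for SAD).
    return min(range(len(scores)), key=lambda d: scores[d])
```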
The recognition processing unit 4 receives the parallax information 106 and performs various recognition processes. One example of recognition processing in the recognition processing unit 4 is three-dimensional object detection using the parallax information 106. Examples of what the recognition processing unit 4 recognizes include position information, type information, motion information, and danger information of a subject. An example of position information is the direction and distance from the own vehicle. Examples of type information include pedestrians, adults, children, elderly people, animals, falling rocks, bicycles, surrounding vehicles, surrounding structures, and curbs. Examples of motion information include the swaying, running out, crossing, moving direction, moving speed, and moving trajectory of a pedestrian or bicycle. Examples of danger information include a pedestrian running out, falling rocks, and abnormal behavior of surrounding vehicles such as sudden stops, sudden deceleration, and sudden steering. The recognition information 107 generated by the recognition processing unit 4 is supplied to the vehicle control unit 5.
The vehicle control unit 5 performs various kinds of vehicle control of the vehicle C based on the recognition information 107. Examples of vehicle control performed by the vehicle control unit 5 include brake control, steering control, accelerator control, on-board lamp control, warning sound generation, on-board camera control, and output of information about observed objects around the imaging units to surrounding vehicles or remote center equipment connected via a network. A specific example is speed and brake control according to the parallax information 106 of an obstacle present in front of the vehicle.
Note that distance information may be generated based on the parallax information and used instead of the parallax information 106. Although not described in this embodiment, the vehicle control unit 5 may also perform subject detection processing based on the results of image processing using the first image 102 and the second image 103. The vehicle control unit 5 may also cause a display device connected to it to show images obtained through the first imaging unit 2 or the second imaging unit 3, or a display intended to be recognized by the viewer. Furthermore, the vehicle control unit 5 may supply information on observed objects detected based on the image processing results to information equipment that processes map information and traffic information such as congestion information.
FIG. 2 is a configuration diagram of the similarity correction unit 12. The similarity correction unit 12 includes a first correction information generation unit 20, a second correction information generation unit 21, and a correction execution unit 22. The first correction information generation unit 20 generates first correction information 200 based on the first similarity information 104. The second correction information generation unit 21 generates second correction information 201 based on the parallax information 106. The correction execution unit 22 corrects the first similarity information 104 based on the first correction information 200 and the second correction information 201 to generate the second similarity information 105.
The configuration of the first correction information generation unit 20 will now be described. The first correction information generation unit 20 includes a correction determination unit 40 and a first generation unit 41. Based on the first similarity information 104, the correction determination unit 40 generates correction validity information 400 indicating whether the first correction information 200 needs to be generated. For example, the correction determination unit 40 determines that generation of the first correction information 200 is necessary when the first determination condition shown below is satisfied, and unnecessary when the first determination condition is not satisfied.
 (similarity S(i-1) - similarity S(i)) > TH0, and
 (similarity S(i+1) - similarity S(i)) > TH1
Here, similarity S(i) is the similarity at the search position A(i) being judged within the search range, similarity S(i-1) is the similarity at the search position A(i-1) adjacent to the left of the search position A(i), and similarity S(i+1) is the similarity at the search position A(i+1) adjacent to the right of the search position A(i). TH0 and TH1 are predetermined thresholds. The magnitude comparisons in the first determination condition may also be changed so that the condition holds when the values are equal.
The reason the first determination condition refers to the left and right neighbors rather than the upper and lower ones, that is, the reason for scanning in the horizontal direction, is that the baseline direction is horizontal. The first determination condition may also be judged using the similarities at five points instead of three, as follows, where TH2 to TH5 are predetermined thresholds.
 (similarity S(i-2) - similarity S(i-1)) > TH2, and
 (similarity S(i-1) - similarity S(i)) > TH3, and
 (similarity S(i+1) - similarity S(i)) > TH4, and
 (similarity S(i+2) - similarity S(i+1)) > TH5
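A minimal sketch of both forms of the first determination condition (the threshold values are placeholder assumptions; S is a list of similarity scores in which a lower value means higher similarity):

```python
def needs_first_correction(S, i, TH0=2, TH1=2):
    # Three-point form: the score at search position i is lower (i.e. more
    # similar) than both neighbors by at least the given margins.
    return (S[i - 1] - S[i] > TH0) and (S[i + 1] - S[i] > TH1)

def needs_first_correction_5pt(S, i, TH2=1, TH3=2, TH4=2, TH5=1):
    # Five-point form of the same condition.
    return ((S[i - 2] - S[i - 1] > TH2) and (S[i - 1] - S[i] > TH3) and
            (S[i + 1] - S[i] > TH4) and (S[i + 2] - S[i + 1] > TH5))
```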
The first generation unit 41 receives the first similarity information 104 and the correction validity information 400, and generates the first correction information 200 based on the correction validity information 400. The first correction information 200 is, for example, a combination of a search position to be corrected and corrected similarity information, or a combination of a newly generated search position and its similarity information. The correction execution unit 22 corrects the similarity with reference to the first correction information 200. The operation of the first correction information generation unit 20 will be described with reference to FIG. 3.
FIG. 3 is a diagram explaining the first similarity information 104 and the first correction information 200. The upper part of FIG. 3 shows the entire similarity curve 1010, and the lower part of FIG. 3 shows part of the similarity curve 1010 in detail. Specifically, the similarity curve 1012 shown in the lower part of FIG. 3 corresponds to the range indicated by reference numeral 1011 in the upper part of FIG. 3. In the similarity curves 1010 and 1012, the horizontal axis indicates the horizontal search position within the search range and the vertical axis indicates the similarity; the lower a point is on the vertical axis, the higher the similarity.
The black dots on the similarity curve 1012 shown in the lower part of FIG. 3 indicate the similarity at each search position. Since the similarity generation unit 11 searches in predetermined pixel steps, the similarities on the similarity curve 1012 are shown at the interval of those search steps. The pixel step of the search can be set arbitrarily, but at its finest it is one pixel. The search positions in the central part of the figure are, from the left, 1003, 1001, 1000, 1002, and 1004.
The similarity S0 at the search position 1000 on the similarity curve 1012 is higher than the similarities at the search positions to its left and right. The search position 1000 therefore satisfies the first determination condition described above. One example of a method of correcting the similarity curve 1012 is to replace the similarity S0 at the search position 1000 with a similarity S1. Another example is to generate a new similarity S1 at a search position near the search position 1000, for example at the search position 1003.
The search position 1003 with the similarity S1 on the similarity curve 1012 is an example of the first correction information 200. One example of a method of generating the similarity S1 is to calculate it by polynomial interpolation, such as linear approximation or SSD parabola fitting, based on a plurality of nearby similarities. One example of the plurality of nearby similarities is the three similarities at the search positions 1000, 1001, and 1002. The search position 1003 is a search position with fractional (sub-pixel) precision located between the search position 1000 and the search position 1002.
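A minimal sketch of such three-point interpolation (standard parabola fitting over equally spaced scores; not quoted from the patent text): fitting a parabola through the extremum and its two neighbors yields both a fractional offset, which corresponds to a position such as 1003, and an interpolated score, which corresponds to a similarity such as S1.

```python
def parabola_fit(s_left, s_center, s_right):
    # Fit a parabola through three equally spaced scores. Returns the
    # fractional offset of the extremum (in search-step units, relative to
    # the center position) and the interpolated score at that offset.
    denom = s_left - 2.0 * s_center + s_right
    if denom == 0.0:
        return 0.0, s_center
    offset = 0.5 * (s_left - s_right) / denom
    value = s_center - 0.25 * (s_left - s_right) * offset
    return offset, value
```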
In this way, the first correction information generation unit 20 scans in the baseline direction to search for an inflection point where the sign of the similarity difference switches, calculates a new similarity using the similarities around the inflection point, and outputs the combination of the position of the inflection point and the new similarity as the first correction information 200.
When the first correction information 200 contains the search position 1003 and the similarity S1, there are at least the following three methods by which the correction execution unit 22 can correct the similarity using the first correction information 200. The first method rewrites the information of the search position 1000 and the similarity S0 with the information of the search position 1003 and the similarity S1. The second method leaves the information of the search position 1000 and the similarity S0 as they are and adds the information of the search position 1003 and the similarity S1. The third method keeps the search position 1000 unchanged and rewrites the value of the similarity S0 with the value of S1. When the third method is adopted, the information for the search position 1003 may be calculated by three-point approximation or the like.
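Using the interpolation sketch above, the second method could look like the following (the list-of-samples representation is our own assumption):

```python
def apply_first_correction(positions, scores, i):
    # Second method: keep the sample at search position i (e.g. position
    # 1000 with similarity S0) and add a new fractional sample (e.g.
    # position 1003 with similarity S1).
    offset, s1 = parabola_fit(scores[i - 1], scores[i], scores[i + 1])
    positions.append(positions[i] + offset)
    scores.append(s1)
    return positions, scores
```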
Returning to FIG. 2, the configuration of the second correction information generation unit 21 will be described. The second correction information generation unit 21 includes a peripheral parallax continuity determination unit 50 and a second generation unit 51, and generates second correction information 301 based on the parallax information 106 or parallax boundary information 300.
FIGS. 4 and 5 each show an example of the parallax information 106 used by the second correction information generation unit 21 to generate the second correction information 201. Since the parallax information 106 is set for each position, for example for each block of pixels, the region of the parallax information 106 that the second correction information generation unit 21 uses to generate the second correction information 201 is hereinafter called the "reference peripheral parallax region".
FIG. 4 is a diagram showing a first example of the parallax information 106 used to generate the second correction information 201. In other words, FIG. 4 shows a first example of determining the reference peripheral parallax region. FIG. 4 shows the process of calculating the parallax of a parallax generation target block 801 in the current frame 800, which is the latest captured image. The grid in FIG. 4 represents the blocks that are the unit of parallax calculation. A block may be one pixel, four pixels (two horizontal by two vertical), sixteen pixels (four horizontal by four vertical), a square with sides of five or more pixels, or a rectangle composed of an arbitrary number of pixels.
In the example shown in FIG. 4, the parallax of the upper-left block is calculated first, and the processing target is moved to the right in order as indicated by the arrows; when all parallaxes in the top row have been calculated, the processing target moves to the second row, and the processing target is again moved from left to right. Therefore, when the parallax of the block indicated by reference numeral 801 is calculated, the parallaxes of the first to third rows in the figure have all been calculated, and in the fourth row the parallaxes to the left of the parallax generation target block 801 have also been calculated. The hatched region in FIG. 4 is a region, five pixels on a side and centered on the parallax generation target block 801, whose parallaxes have already been calculated at the time the parallax of the parallax generation target block 801 is calculated. The parallax information 106 in this region is called the "reference peripheral parallax 802". In the example of FIG. 4, this reference peripheral parallax 802 is used to generate the second correction information 201. That is, in the first example shown in FIG. 4, the reference peripheral parallax region is the reference peripheral parallax 802.
FIG. 5 is a diagram showing a second example of the parallax information 106 used to generate the second correction information 201. In other words, FIG. 5 shows a second example of determining the reference peripheral parallax region. FIG. 5 shows three images captured at different times: the current frame 800 is the latest frame, the first past frame 901 is the frame acquired in the immediately preceding processing cycle, and the second past frame 902 is the frame acquired two processing cycles earlier. The block 801 of the current frame 800 is the parallax generation target block, and the corresponding blocks in the past frames are the blocks 903 and 904.
The parallax of the hatched region in the first past frame 901 is called the "first past reference peripheral parallax 905", and the parallax of the hatched region in the second past frame 902 is called the "second past reference peripheral parallax 906". The first past reference peripheral parallax 905 and the second past reference peripheral parallax 906 are the second example of the parallax information 106 used to generate the second correction information 201. Since these pieces of parallax information belong to past frames, they have all already been calculated by the time the parallax of the parallax generation target block 801 is generated, and they are the parallax information around the blocks 903 and 904 of the past frames corresponding to the parallax generation target block 801. That is, in the second example shown in FIG. 5, the reference peripheral parallax region is either the first past reference peripheral parallax 905 alone, or both the first past reference peripheral parallax 905 and the second past reference peripheral parallax 906. Although two past frames are referenced in the example of FIG. 5, only one past frame or three or more past frames may be referenced.
Based on the parallax information 106, the peripheral parallax continuity determination unit 50 generates second correction necessity information 500 indicating whether the second correction information 301 needs to be generated. Examples of the second correction necessity information 500 are peripheral parallax information and correction necessity information.
One example of the peripheral parallax information is the average of the parallax values in the reference peripheral parallax region. Examples of how this average parallax value can be calculated include the average parallax value of the reference peripheral parallax 802 shown in FIG. 4, the average parallax value of the first past reference peripheral parallax 905 shown in FIG. 5, and the average parallax value of the first past reference peripheral parallax 905 and the second past reference peripheral parallax 906 shown in FIG. 5. Only horizontal parallax values or only vertical parallax values may be referenced. Parallaxes that are isolated within the reference peripheral parallax region may also be excluded from the referenced parallaxes. One example of a method of detecting an isolated parallax is to judge a parallax as isolated when the difference between the parallax value under examination and the parallax values around it is larger than a predetermined value.
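A minimal sketch of this average with isolated parallaxes excluded (the isolation rule is simplified here to a deviation-from-mean test, and the threshold and NaN convention are placeholder assumptions):

```python
import numpy as np

def peripheral_average(disp, iso_thresh=3.0):
    # disp: parallax values of the reference peripheral parallax region,
    # with NaN where no parallax has been computed yet.
    vals = disp[~np.isnan(disp)]
    if vals.size == 0:
        return None
    mean = vals.mean()
    keep = np.abs(vals - mean) <= iso_thresh   # drop isolated parallaxes
    return vals[keep].mean() if keep.any() else mean
```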
The second correction necessity information 500 is information indicating whether the second generation unit 51 generates the second correction information 301. When the second correction necessity information 500 is "generation: required", the correction execution unit 22 corrects the similarity using the second correction information 301. When the second correction necessity information 500 is "generation: not required", the correction execution unit 22 does not correct the similarity using the second correction information 301.
For example, the peripheral parallax continuity determination unit 50 calculates the degree of variation of the parallaxes in the reference peripheral parallax region, and when the degree of variation is large, sets the second correction necessity information 500 to "generation: not required". Examples of the degree of variation include the sum of differences between each parallax value and the average of the parallax values in the reference peripheral parallax region, and the variance within the reference peripheral parallax region. To generate the degree of variation, only horizontal parallax values or only vertical parallax values may be referenced. As another example of how to generate the correction necessity information, the degree of continuity of the parallaxes in the reference peripheral parallax region may be calculated, and when the degree of continuity is small, the correction necessity information may be set to "no correction". An example of the degree of continuity is the average or variance of the differences between adjacent parallax values in the reference peripheral parallax region. To generate the degree of continuity, only horizontal parallax values or only vertical parallax values may be referenced.
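A minimal sketch of such a necessity decision based on the degree of variation (the variance threshold is a placeholder assumption):

```python
import numpy as np

def second_correction_needed(disp, var_thresh=4.0):
    # Large variation in the reference peripheral parallax region means
    # the second correction information should not be generated.
    vals = disp[~np.isnan(disp)]
    return vals.size > 0 and float(np.var(vals)) <= var_thresh
```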
The second generation unit 51 generates the second correction information 301 based on the second correction necessity information 500. One example of the second correction information 301 is a correction coefficient K associated with each search position. The role of this correction coefficient K will be explained with reference to FIG. 6. The method of calculating the correction coefficient K will be described later.
FIG. 6 is a diagram explaining the correction coefficient K. The graph 1110 shown in the upper part of FIG. 6 is an example of the second correction information 301 and is the set of correction coefficients K for each search position. The graph 1111 shown in the lower part of FIG. 6 shows the similarity at each search position and presents the same kind of information as FIG. 3, except that in FIG. 6 the similarity before correction is shown by a solid line and the similarity after correction by a broken line. The upper and lower parts of FIG. 6 are explained in order.
In the upper part of FIG. 6, the horizontal axis of the graph 1110 indicates the search position within the search range, and the vertical axis indicates the correction coefficient K applied to the first similarity information 104. The correction coefficient K0 is the value used when the first similarity information 104 is not corrected. When a correction coefficient is smaller than the correction coefficient K0, the first similarity information 104 is corrected so that the similarity becomes higher. When a correction coefficient is larger than the correction coefficient K0, the first similarity information 104 is corrected so that the similarity becomes lower.
When the correction coefficient K is used by multiplying it with the first similarity information 104 before correction, for example, the correction coefficient K0 is "1", the correction coefficient K1 is a real number smaller than "1" and larger than "0", and the correction coefficient K2 is a real number larger than "1". When the correction coefficient K is used by adding it to the first similarity information 104 before correction, for example, the correction coefficient K0 is "0", the correction coefficient K1 is a positive real number, and the correction coefficient K2 is a negative real number.
In the lower part of FIG. 6, the horizontal axis of the graph 1111 indicates the search position within the search range, and the vertical axis indicates the similarity. The first similarity curve 1104, shown by a solid line, is the similarity curve created using the first similarity information 104 as it is. The second similarity curve 1105, shown by a broken line, is the similarity curve created using the second similarity information 105 generated based on the second correction information 1100 and the first similarity information 104.
As shown in the graph 1110, the correction coefficient K at the search position A1 is "K0", as indicated by the element 1101 of the second correction information 301. Therefore, as indicated by the element 1106 on the second similarity curve 1105, the value of the first similarity curve 1104 and the value of the second similarity curve 1105 are the same at the search position A1.
The correction coefficient K at the search position A2 is "K1", a value smaller than K0, as indicated by the element 1102 of the second correction information 301. Therefore, as indicated by the element 1107 on the second similarity curve 1105, the second similarity is higher than the first similarity at the search position A2. The correction coefficient at the search position A3 is "K2", a value larger than K0, as indicated by the element 1103 of the second correction information 301. Therefore, as indicated by the element 1108 on the second similarity curve 1105, the second similarity is lower than the first similarity at the search position A3.
Correction of the first similarity information 104 using the correction coefficient K is performed, for example, as follows.
  Sa(A(i)) = Sb(A(i)) × K(i)
Here, Sb(A(i)) is the similarity at the search position A(i) on the first similarity curve 1104, Sa(A(i)) is the similarity at the search position A(i) on the second similarity curve 1105, and K(i) is the correction coefficient K of the second correction information 1100 at the search position A(i). In this case, the similarities at the search position A2 in FIG. 6 have the following relationship.
  S1 = S0 × K1
The correction execution unit 22 reads the correction coefficient K for each search position with reference to the second correction information 301, and corrects the first similarity by multiplying the similarity corresponding to that position by the coefficient, as in the above equation. In the correction execution unit 22, the correction of the similarity using the second correction information 301 is performed after the correction of the similarity using the first correction information 200.
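A minimal sketch of this multiplication (the array layout is our own assumption):

```python
import numpy as np

def apply_second_correction(scores, coeffs):
    # scores: similarity per search position after the first correction;
    # coeffs: correction coefficient K per search position. With SAD-style
    # scores, K < K0 = 1 lowers the score and thus raises the similarity.
    return np.asarray(scores, dtype=float) * np.asarray(coeffs, dtype=float)
```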
A specific example of how the second generation unit 51 generates the second correction information 301, specifically the correction coefficient K corresponding to each search position, will be explained with reference to FIG. 7. FIG. 7(a) is a diagram showing a first example of calculating the correction coefficient K. The first row shows the parallax position, the second row the parallax value, the third row the continuity, the fourth row the function, and the fifth row the correction coefficient K. Here, if the difference from the parallax value adjacent on the left is "2" or less, a predetermined threshold, the values are judged continuous and the continuity is increased by "1"; if the difference exceeds the threshold, the continuity is decreased by "1"; and the upper limit of the continuity is "3".
The function value "0" means no correction, "1" weak correction, and "2" strong correction. The function is set to "0" when the continuity is "0" or "1", to "1" when the continuity is "2", and to "2" when the continuity is "3". For convenience of drawing, the correction coefficient K is written as a numerical value, unlike in FIG. 6: "5" corresponds to "K0" in FIG. 6, "3" corresponds to "K1" in FIG. 6, and "4" corresponds to a value between "K1" and "K0" in FIG. 6. The correction coefficient is set according to the value of the function: when the function is "0" the correction coefficient K is set to "5", when the function is "1" it is set to "4", and when the function is "2" it is set to "3".
In the example of FIG. 7(a), the parallax value "8" continues at the parallax positions A to G. The continuity therefore increases by 1 from its initial value "0" and then stays at the maximum value "3". From the parallax position H onward the parallax value switches to "2"; since the difference from "8" is larger than the threshold "2", the continuity decreases by "1" at each of the parallax positions H, I, and J, reaching "0" at the position J. Since the parallax value "2" continues thereafter, the continuity turns to increase, now representing the continuation of "2", and then stays at the maximum value "3". The function is linked to the continuity, and the correction coefficient K is linked to the function, so in FIG. 7(a) as well the function value varies with the continuity and the correction coefficient K varies with the function value.
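A minimal sketch of one plausible reading of the FIG. 7(a) scheme (the decay-then-retrack rule and the numeric coefficient scale "3" to "5" follow the figure as described; the names and the exact update rule are our own assumptions):

```python
def continuity_coefficients(disparities, diff_thresh=2, max_cont=3):
    # One left-to-right pass: the continuity counts how long the tracked
    # disparity run has lasted; a differing value decays the counter, and
    # the new run is adopted only once the counter reaches 0.
    func_to_k = {0: 5, 1: 4, 2: 3}   # function -> coefficient ("5" = K0)

    def func(c):
        return 0 if c <= 1 else (1 if c == 2 else 2)

    run_value, cont = disparities[0], 0
    coeffs = [func_to_k[func(cont)]]
    for d in disparities[1:]:
        if abs(d - run_value) <= diff_thresh:
            cont = min(cont + 1, max_cont)
        else:
            cont = max(cont - 1, 0)
            if cont == 0:
                run_value = d
        coeffs.append(func_to_k[func(cont)])
    return coeffs

# Reproduces the FIG. 7(a) narrative: continuity 0..3 over A-G, decaying
# at H, I, J, then rising again while the value "2" continues.
print(continuity_coefficients([8] * 7 + [2] * 6))
```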
FIG. 7(b) is a diagram showing a second example of calculating the correction coefficient K. The first row shows the parallax position, the second row the parallax value, the third row the continuity A, the fourth row the continuity B, the fifth row the maximum continuity, the sixth row the function, and the seventh row the correction coefficient K. The continuity A and the continuity B count the runs of different parallax values. The maximum continuity is the larger of the continuity A and the continuity B. In this example, the function is set according to the maximum continuity.
In the example of FIG. 7(b), the parallax value "8" continues at the parallax positions A to G, the parallax value "2" continues at the parallax positions H to M, and the parallax value "5" continues at the parallax positions N to Q. Therefore, at the parallax positions A to G the continuity A increases and then stays at the maximum value "3", as in FIG. 7(a). During this time, the continuity B is marked with an asterisk indicating no value, because only one parallax value exists so far and the continuity B is not yet used.
When the parallax value switches to "2" at the parallax position H, the difference from the previous parallax value "8" is larger than the threshold "2", so the continuity A decreases, in this example down to the lower limit "0". The continuity B takes its initial value "0" at the parallax position H and then increases as the parallax value "2" continues, reaching the maximum value "3" at the parallax position K. Thereafter, when the parallax value changes to "5" at the parallax position N, the continuity B decreases and the continuity A increases in its place. Since the maximum continuity is the larger of the continuity A and the continuity B as described above, the maximum continuity equals the continuity A up to the parallax position G, where the continuity B has no value, and from the parallax positions J to N the larger value, the continuity B, is the maximum continuity. The function value is determined according to the maximum continuity, and the value of the correction coefficient K is determined according to the function value.
In the examples shown in FIG. 7, the correction coefficient K takes only values corresponding to the range K0 to K1, but by changing the relationship between the function and the correction coefficient K as follows, the correction coefficient K can be made to take values in the range K0 to K2. For example, when the function is "0" the correction coefficient K is set to "7", that is, K2; when the function is "1" it is set to "5", that is, K0; and when the function is "2" it is set to "3", that is, K1.
According to the first embodiment described above, the following operational effects are obtained.
(1) The arithmetic device 1 is connected to the first imaging unit 2 and the second imaging unit 3, and generates parallax information between the first image 100 captured by the first imaging unit 2 and the second image 101 captured by the second imaging unit 3 by searching in units of predetermined matching blocks. The arithmetic device 1 includes: the similarity generation unit 11, which generates, in units of matching blocks, a first similarity between a matching block in the first image 100 and a matching block within the search range of the second image 101; the similarity correction unit 12, which corrects the first similarity information 104 to generate the second similarity information 105 using at least one of the first correction information 200, generated based on the first similarity of an arbitrary adjacent matching block within the search range of the second image 101, and the second correction information 301, generated based on the parallax information 106; and the similarity determination unit 13, which generates the parallax information 106 based on the second similarity information 105. Therefore, by correcting the similarity using the first correction information 200 and the second correction information 301, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
(2) The first imaging unit 2 and the second imaging unit 3 generate the first image 100 and the second image 101 in every predetermined processing cycle. As shown in FIGS. 4 and 5, the arithmetic device 1 includes the second correction information generation unit 21, which generates the second correction information, namely the correction coefficient K by which the similarity is multiplied, based on the parallax information 106 in the peripheral region of the matching block in the latest first image 100 and the latest second image 101, or in the first image 100 and the second image 101 from a predetermined number of cycles earlier. Therefore, when the latest captured images are used, the correction coefficient K can be calculated from parallax information 106 with no positional displacement, and when past captured images are used, the correction coefficient can be calculated using the parallax information of the entire frame.
(3) The second correction information 301 includes a correction coefficient that corrects the first similarity corresponding to the parallax information in a peripheral region so as to increase it when the continuity of the parallax information in that peripheral region is at or above a threshold, as at the search position A2 in FIG. 6. Therefore, by increasing the similarity of matching blocks whose parallax resembles that of their surroundings and which are therefore expected to match, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
(4) The second correction information 301 includes a correction coefficient that corrects the first similarity corresponding to the parallax information in a peripheral region so as to decrease it when the continuity of the parallax information in that peripheral region is at or below a threshold, as at the search position A3 in FIG. 6. Therefore, by decreasing the similarity of matching blocks whose parallax does not resemble that of their surroundings, making them harder to match, erroneous matching can be prevented and the accuracy of parallax calculation can be improved.
(5) After correcting the first similarity information 104 based on the first correction information 200, the correction execution unit 22 of the similarity correction unit 12 further corrects it based on the second correction information 301 to generate the second similarity. Therefore, the arithmetic device 1 can make the result of the first correction, which corrects the similarity locally, available to the second correction, which corrects the similarity over a wide area.
(6) The first imaging unit 2 and the second imaging unit 3 are arranged along the baseline direction. The arithmetic device 1 includes the first correction information generation unit 20, which scans in the baseline direction to search for an inflection point where the sign of the similarity difference switches, calculates a new similarity using the similarities around the inflection point, and calculates the combination of the position of the inflection point and the new similarity as the first correction information 200. Therefore, the similarity can be corrected locally, at a granularity finer than the unit of similarity calculation. For example, if the similarity is calculated for every pixel, the search position 1003 has fractional coordinate values, so the similarity can be calculated with sub-pixel precision.
(7) The monitoring system S includes the arithmetic device 1, the first imaging unit 2, and the second imaging unit 3. Therefore, the monitoring system S suffers little erroneous matching and has good parallax calculation accuracy.
(Modification 1)
In the first embodiment described above, the similarity correction unit 12 of the arithmetic device 1 includes the first correction information generation unit 20 and the second correction information generation unit 21. However, the similarity correction unit 12 may include only one of the first correction information generation unit 20 and the second correction information generation unit 21.
(Modification 2)
In the first embodiment described above, the first imaging unit 2 and the second imaging unit 3 are arranged side by side horizontally, so the baseline direction is horizontal and the first generation unit 41 scans the similarity in the horizontal direction, as shown in FIG. 3. However, the first imaging unit 2 and the second imaging unit 3 may instead be installed vertically or diagonally. In that case the first generation unit 41 scans the similarity along the baseline direction, that is, along the epipolar line.
-Second Embodiment-
A second embodiment of the monitoring system will be described with reference to FIG. 8. In the following description, the same components as in the first embodiment are given the same reference numerals, and the differences are mainly described; points not specifically described are the same as in the first embodiment. The present embodiment differs from the first embodiment mainly in that it includes a correction information invalidation unit.
FIG. 8 is a configuration diagram of the correction execution unit 22A in the second embodiment. The correction execution unit 22A includes a correction information invalidation unit 60 and a correction unit 61. The correction unit 61 includes a first correction unit 70 and a second correction unit 71.
Whereas the correction execution unit 22 in the first embodiment received the first similarity information 104, the first correction information 200, and the second correction information 301, the correction execution unit 22A in the present embodiment additionally receives an invalidation instruction 602. Based on the invalidation instruction 602, the first correction information 200, and the second correction information 301, the correction execution unit 22A corrects the first similarity information 104 and generates the second similarity information 105.
Based on the invalidation instruction 602, the correction information invalidation unit 60 can invalidate at least one of the first correction information 200 and the second correction information 301: it may invalidate both, only one, or neither. The invalidation instruction 602 is, for example, information indicating whether the first correction information 200 is valid or invalid and whether the second correction information 301 is valid or invalid. That is, the correction information invalidation unit 60 receives the first correction information 200 output by the first correction information generation unit 20 and the second correction information 301 output by the second correction information generation unit 31, and outputs them as the first effective correction information 600 and the second effective correction information 601.
When the correction information invalidation unit 60 receives an invalidation instruction 602 indicating that the first correction information 200 is valid and the second correction information 301 is invalid, for example, it performs the following processing: it outputs the first correction information 200 as the first effective correction information 600, and outputs as the second effective correction information 601 information in which the second correction information 301 has been invalidated.
The correction unit 61 corrects the first similarity information 104 based on the first effective correction information 600 and the second effective correction information 601 to generate the second similarity information 105. One example of correction processing using the first effective correction information 600 is to generate the second similarity information 105 by using the first correction information 200 to replace the similarity at the search position to be corrected, or to insert a similarity at a new search position. One example of correction processing using the second effective correction information 601 is the method of generating the second similarity curve 1105 described above with reference to FIG. 6. If the first correction information 200 has been invalidated in the first effective correction information 600, no correction processing using the first correction information 200 is performed; likewise, if the second correction information 301 has been invalidated in the second effective correction information 601, no correction processing using the second correction information 301 is performed.
The first correction unit 70 corrects the first similarity information 104 based on the first effective correction information 600 to generate intermediate similarity information 700. The second correction unit 71 corrects the intermediate similarity information 700 based on the second effective correction information 601 to generate the second similarity information 105. The correction processing in the first correction unit 70 and in the second correction unit 71 is as described with reference to FIGS. 3 and 6, so the details are omitted.
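A minimal sketch of the data flow through the correction execution unit 22A follows, assuming the block-level corrections of FIGS. 3 and 6 are supplied by the caller as callables; the function names and signature are illustrative, not the patent's implementation.

```python
def run_correction(sim, first_info, second_info,
                   invalidate_first, invalidate_second,
                   apply_first, apply_second):
    """Hedged sketch of correction execution unit 22A.

    sim                      : first similarity information 104
    first_info / second_info : correction information 200 / 301
    invalidate_*             : booleans decoded from invalidation instruction 602
    apply_first/apply_second : stand-ins for the corrections of FIGS. 3 and 6
    """
    # Correction information invalidation unit 60: mask invalidated inputs
    valid_first = None if invalidate_first else first_info     # effective info 600
    valid_second = None if invalidate_second else second_info  # effective info 601

    # First correction unit 70: local correction -> intermediate similarity 700
    intermediate = sim if valid_first is None else apply_first(sim, valid_first)
    # Second correction unit 71: wide-area correction -> second similarity 105
    return intermediate if valid_second is None else apply_second(intermediate, valid_second)
```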
According to the second embodiment described above, the appropriate corrections can be selected and applied as the situation requires.
-Third Embodiment-
A third embodiment of the monitoring system will be described with reference to FIG. 9. In the following description, the same components as in the first embodiment are given the same reference numerals, and the differences are mainly described; points not specifically described are the same as in the first embodiment. The present embodiment differs from the first embodiment mainly in that a parallax boundary is detected and, when one is found, the second correction information is not generated.
FIG. 9 is a configuration diagram of the similarity correction unit 12B in the third embodiment. The similarity correction unit 12B includes a second correction information generation unit 21B instead of the second correction information generation unit 21, and newly includes a parallax boundary detection unit 30. The parallax boundary detection unit 30 refers to the already calculated parallax information 106 and, when it detects a parallax boundary within the matching block, outputs parallax boundary information 300 to the second correction information generation unit 21B. The presence or absence of a parallax boundary within the matching block can be determined by, for example, any of the following three methods.
The first method checks the parallax of each block within the matching block individually in the current frame. Specifically, the parallax of each block is compared with the parallaxes of the neighboring blocks to its left, right, top, and bottom; if any difference is equal to or greater than a predetermined threshold, a parallax boundary is judged to exist. If every block differs from its left, right, top, and bottom neighbors by less than the threshold, no parallax boundary is judged to exist. Because this method uses the current frame, it works with the latest information, but it cannot evaluate differences against blocks whose parallax has not yet been calculated.
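A sketch of the first method's neighbor test, assuming a two-dimensional array of per-block parallaxes in which not-yet-computed blocks are marked NaN (these cannot be evaluated, as the text notes). The second method is the same test run on the previous frame's parallax map, in which every entry is already filled in. All names are illustrative.

```python
import numpy as np

def has_parallax_boundary(disp, i, j, thresh):
    """Compare block (i, j)'s parallax with its left/right/top/bottom
    neighbours; a difference >= thresh means a parallax boundary exists.

    disp : 2-D array of per-block parallaxes, NaN = not yet computed
    """
    h, w = disp.shape
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < h and 0 <= nj < w and not np.isnan(disp[ni, nj]):
            if abs(disp[i, j] - disp[ni, nj]) >= thresh:
                return True  # a neighbour differs by the threshold or more
    return False
```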
The second method checks the parallax of each block within the matching block individually in the immediately preceding frame. The difference from the first method is the target frame: since the parallax of every block has already been calculated for a past frame, the second method can compute parallax differences even for blocks where the first method cannot. This method assumes that the subject's position does not change greatly within one frame time, embodying the idea that the benefit of computing parallax differences for all blocks outweighs the loss caused by the subject's movement.
The third method detects parallax boundaries by hierarchical search. It may be applied to the current frame, as in the first method, or to a past frame, as in the second method. In the hierarchical search, the first layer calculates parallax using blocks larger than the matching block, for example blocks twice the matching block's size both vertically and horizontally. Such a block is referred to below as a large-size block.
Next, in the second layer, the parallax of each block within the matching block is calculated based on the parallaxes of the large-size blocks. Specifically, three parallaxes are taken as candidates: the parallax of the large-size block containing the block whose parallax is being calculated, and the parallaxes of the large-size blocks adjacent to it on the left and right. The most plausible parallax within a predetermined range of these candidates is adopted as the block's parallax. Using the parallaxes calculated by this hierarchical search, whether any difference from the left, right, top, and bottom neighboring blocks is equal to or greater than a predetermined threshold determines the presence or absence of a parallax boundary.
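The second layer might look like the following sketch, assuming large-size blocks twice the matching-block size along the baseline, integer parallaxes, and a per-block matching cost supplied by the caller; "most plausible within a predetermined range" is interpreted here as the lowest cost within a small margin of each candidate, which is an assumption.

```python
def refine_block_parallax(block_idx, large_disp, block_cost, search_margin=2):
    """Hedged sketch of the second layer of the hierarchical search.

    block_idx  : index of the small block along the baseline direction
    large_disp : per-large-block parallaxes from the first layer (ints)
    block_cost : callable cost(block_idx, d) -> matching cost at parallax d
                 (lower is better); supplied by the caller, an assumption here
    """
    parent = block_idx // 2  # assumes large blocks are 2x the block size
    candidates = set()
    for p in (parent - 1, parent, parent + 1):
        if 0 <= p < len(large_disp):
            candidates.add(large_disp[p])
    best_d, best_c = None, float("inf")
    for c in candidates:
        # search a small range around each candidate parallax
        for d in range(int(c) - search_margin, int(c) + search_margin + 1):
            cost = block_cost(block_idx, d)
            if cost < best_c:
                best_d, best_c = d, cost
    return best_d
```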
Upon receiving the parallax boundary information 300 from the parallax boundary detection unit 30, the peripheral parallax continuity determination unit 50, when a parallax boundary exists within the matching block, creates second correction necessity information 500 indicating that generation of the second correction information 301 is unnecessary and transmits it to the second generation unit 51. Having received this second correction necessity information 500, the second generation unit 51 does not generate the second correction information 301, so the correction execution unit 22 performs no correction of the first similarity information 104 based on the second correction information 301 either.
According to the third embodiment described above, the following effect is obtained.
(8) The arithmetic device 1 includes the parallax boundary detection unit 30, which generates the parallax boundary information 300 based on the luminance information of the first image 100 or the second image 101, or on the parallax information, within the peripheral region. The second correction information generation unit 21B invalidates generation of the second correction information 301 based on the parallax boundary information 300. Therefore, when a parallax boundary exists in the surroundings, similarity correction by the second correction information 301 is not performed, and any adverse effect of that correction can be prevented in advance.
(Modification of the Third Embodiment)
In the third embodiment described above, the parallax boundary detection unit 30 used the parallax information 106 to judge the presence or absence of a parallax boundary. However, the parallax boundary detection unit 30 may instead make a simple judgment from the presence or absence of a luminance change, using the first image 102 or the second image 103. In that case the parallax boundary detection unit 30 refers to the first image 102 or the second image 103, compares the luminance of each block within the matching block with the luminance of the neighboring blocks to its left, right, top, and bottom, and judges that a parallax boundary exists when any difference is equal to or greater than a predetermined threshold. If every block differs in luminance from its left, right, top, and bottom neighbors by less than the threshold, it judges that there is no boundary.
In each of the embodiments and modifications described above, the configuration of the functional blocks is merely an example. Several functional configurations shown as separate functional blocks may be integrated, and a configuration shown as one functional block may be divided into two or more functions. Part of the functions of one functional block may also be provided in another functional block.
In each of the embodiments and modifications described above, the program is described as being stored in a ROM (not shown), but it may instead be stored in a non-volatile storage device (not shown) such as a flash memory. The arithmetic device may also have an input/output interface (not shown), and the program may be read from another device when necessary via the input/output interface and a medium the arithmetic device can use. Here, a medium means, for example, a storage medium attachable to and detachable from the input/output interface, or a communication medium, that is, a wired, wireless, or optical network, or a carrier wave or digital signal propagating through such a network. Part or all of the functions realized by the program may also be realized by a hardware circuit or an FPGA.
The embodiments and modifications described above may be combined with one another. Although various embodiments and modifications have been described above, the present invention is not limited to them. Other aspects conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention.
S: Monitoring system
1: Arithmetic device
10: Input unit
11: Similarity generation unit
12, 12B: Similarity correction unit
13: Similarity determination unit
20: First correction information generation unit
21, 21B: Second correction information generation unit
22, 22A: Correction execution unit
30: Parallax boundary detection unit
31: Second correction information generation unit
40: Correction determination unit
41: First generation unit
50: Peripheral parallax continuity determination unit
51: Second generation unit
60: Correction information invalidation unit
61: Correction unit
70: First correction unit
71: Second correction unit

Claims (9)

  1.  An arithmetic device connected to a first imaging unit and a second imaging unit, the arithmetic device generating parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks, the arithmetic device comprising:
     a similarity generation unit that generates, for each matching block, a first similarity between a matching block in the first image and a matching block within a search range of the second image;
     a similarity correction unit that corrects the first similarity to generate a second similarity, using at least one of first correction information generated based on the first similarities of arbitrary adjacent matching blocks within the search range of the second image and second correction information generated based on the parallax information; and
     a similarity determination unit that generates the parallax information based on the second similarity.
  2.  The arithmetic device according to claim 1, wherein
     the first imaging unit and the second imaging unit generate the first image and the second image every predetermined processing cycle, and
     the arithmetic device further comprises a second correction information generation unit that generates the second correction information, which is a correction coefficient multiplied by the first similarity, based on the parallax information within a peripheral region of the matching block in the latest first image and the latest second image, or in the first image and the second image of a predetermined cycle earlier.
  3.  The arithmetic device according to claim 2, wherein
     the second correction information includes a correction coefficient that corrects the first similarity corresponding to the parallax information within the peripheral region so as to increase it when the continuity of the parallax information within the peripheral region is equal to or greater than a threshold.
  4.  The arithmetic device according to claim 2, wherein
     the second correction information includes a correction coefficient that corrects the first similarity corresponding to the parallax information within the peripheral region so as to decrease it when the continuity of the parallax information within the peripheral region is equal to or less than a threshold.
  5.  The arithmetic device according to claim 2, further comprising
     a parallax boundary detection unit that generates parallax boundary information based on luminance information of the first image or the parallax information within the peripheral region, wherein
     the second correction information generation unit invalidates generation of the second correction information based on the parallax boundary information.
  6.  The arithmetic device according to claim 1, wherein
     the similarity correction unit generates the second similarity by correcting the first similarity based on the first correction information and then further correcting the first similarity based on the second correction information.
  7.  The arithmetic device according to claim 1, wherein
     the first imaging unit and the second imaging unit are arranged in a baseline direction, and
     the arithmetic device further comprises a first correction information generation unit that scans in the baseline direction to search for an inflection point where the sign of the similarity difference switches, calculates a new similarity using the similarities around the inflection point, and calculates the combination of the position of the inflection point and the new similarity as the first correction information.
  8.  A monitoring system comprising:
     the arithmetic device according to claim 1;
     the first imaging unit; and
     the second imaging unit.
  9.  A parallax calculation method executed by an arithmetic device connected to a first imaging unit and a second imaging unit, the arithmetic device generating parallax information between a first image captured by the first imaging unit and a second image captured by the second imaging unit by searching in units of predetermined matching blocks, the method comprising:
     generating, for each matching block, a first similarity between a matching block in the first image and a matching block within a search range of the second image;
     correcting the first similarity to generate a second similarity, using at least one of first correction information generated based on the first similarities of arbitrary adjacent matching blocks within the search range of the second image and second correction information generated based on the parallax information; and
     generating the parallax information based on the second similarity.
PCT/JP2022/011116 2021-08-11 2022-03-11 Computing device, monitoring system and parallax calculation method WO2023017635A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112022001667.1T DE112022001667T5 (en) 2021-08-11 2022-03-11 DATA PROCESSING DEVICE, MONITORING SYSTEM AND PARALLAX CALCULATION METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-131477 2021-08-11
JP2021131477A JP2023025976A (en) 2021-08-11 2021-08-11 Arithmetic unit, monitor system and parallax calculation method

Publications (1)

Publication Number Publication Date
WO2023017635A1 true WO2023017635A1 (en) 2023-02-16

Family

ID=85200127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/011116 WO2023017635A1 (en) 2021-08-11 2022-03-11 Computing device, monitoring system and parallax calculation method

Country Status (3)

Country Link
JP (1) JP2023025976A (en)
DE (1) DE112022001667T5 (en)
WO (1) WO2023017635A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07146137A (en) * 1993-11-22 1995-06-06 Matsushita Electric Ind Co Ltd Distance-between-vehicles measuring apparatus
JP2013126114A (en) * 2011-12-14 2013-06-24 Samsung Yokohama Research Institute Co Ltd Stereo image processing method and stereo image processing apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016039618A (en) 2014-08-11 2016-03-22 ソニー株式会社 Information processing apparatus and information processing method


Also Published As

Publication number Publication date
DE112022001667T5 (en) 2024-01-25
JP2023025976A (en) 2023-02-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22855701

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112022001667

Country of ref document: DE