WO2020137101A1 - Embedded object detection device and embedded object detection method - Google Patents


Info

Publication number: WO2020137101A1
Authority: WIPO (PCT)
Application number: PCT/JP2019/040656
Prior art keywords: unit, peak, buried object, data, range
Other languages: French (fr), Japanese (ja)
Inventors: 曜 岡本, 雅思 佐藤
Applicant: オムロン株式会社 (OMRON Corporation)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V3/00 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation
    • G01V3/12 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation operating with electromagnetic waves

Definitions

  • The present invention relates to a buried object detection device and a buried object detection method.
  • There is known an embedded object detection device that detects an embedded object from the reflected waves of electromagnetic waves emitted toward concrete while the device moves along the surface of the concrete (see, for example, Patent Document 1).
  • The presence or absence of an embedded object can be detected by repeatedly operating the embedded object detection device so that it reciprocates between two points along the same path while scanning.
  • Distance information is acquired by an encoder installed in the embedded object detection device, and the signal intensity can be displayed in color on a plane whose two axes are the movement direction and the depth direction.
  • The certainty of the position of the buried object can be improved based on the positions of the buried object detected a plurality of times.
  • An object of the present invention is to provide an embedded object detection device and an embedded object detection method capable of improving the certainty of the position of an embedded object.
  • An embedded object detection apparatus according to a first aspect of the present invention detects an embedded object in a target object using data on the reflected waves of electromagnetic waves emitted toward the object while the apparatus moves along its surface. To that end, it includes a receiving unit, an embedded object detection unit, and a scanning error determination unit.
  • The receiving unit receives the data on the reflected waves at each timing accompanying the movement.
  • The embedded object detection unit detects the embedded object using the data on the reflected waves obtained while the apparatus reciprocates along the same path on the surface of the object.
  • The scanning error determination unit compares the reflected-wave data in a predetermined first range of the first movement in which the embedded object was detected with the reflected-wave data in a second range of the second movement (after the first movement) corresponding to the first range, and determines whether the scanning reciprocates along the same path.
  • If the difference between the reflected-wave data in the first range and that in the second range is within a predetermined error range, similar data are considered to have been received, and it can be determined that the apparatus is reciprocating along the same path.
  • If the difference is larger than the predetermined error range, different data are considered to have been received, and it can be determined that the apparatus is not reciprocating along the same path.
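The range comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the array layout (rows = timings, columns = depth positions), the use of the mean absolute difference as the "difference", and the tolerance value are all assumptions.

```python
import numpy as np

def same_path(first_range, second_range, error_range):
    """Compare reflected-wave data from the first and second ranges.

    first_range / second_range: signal intensities sampled at the same
    relative positions of the two passes (hypothetical layout:
    rows = timings, columns = depth positions).
    Returns True when the mean absolute difference stays within the
    predetermined error range, i.e. the device appears to be
    retracing the same path.
    """
    diff = np.abs(np.asarray(first_range, float) - np.asarray(second_range, float))
    return bool(diff.mean() <= error_range)

pass1 = np.array([[120.0, -60.0], [35.0, 5.0]])
print(same_path(pass1, pass1 + 1.0, error_range=5.0))    # True: same path
print(same_path(pass1, pass1 + 500.0, error_range=5.0))  # False: scanning error
```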
  • An embedded object detection device according to another aspect is the device of the first aspect of the present invention, wherein the scanning error determination unit includes an embedded object position erasing unit.
  • When it is determined that the device is not reciprocating along the same path, the embedded object position erasing unit deletes the detected positions of the embedded object obtained from the movements along the path before that determination. As a result, the position of the buried object can be detected again from the beginning, so the accuracy of the position of the buried object can be improved.
  • An embedded object detection device according to another aspect is the device of the first aspect of the present invention, wherein the scanning error determination unit has a notification unit.
  • The notification unit notifies the user that a scanning error has occurred when it is determined that the device is not reciprocating along the same path.
  • An embedded object detection device according to another aspect is the device of the first aspect of the present invention, wherein the first range is a range that includes, or is near, the detection position where the embedded object was detected. Because the reflected-wave data changes greatly at the position of the buried object, comparing the reflected-wave data in the first range with that in the second range makes it possible to detect more accurately whether the device is reciprocating along the same path.
  • An embedded object detection device according to another aspect is the device of the first or fourth aspect of the present invention, wherein the first range and the second range lie at the same distance from the turn-back position of the reciprocating movement.
  • The distance is obtained by converting the timings accompanying the movement.
  • In ranges at the same distance from the turn-back position, following the same path yields data on similar reflected waves; therefore, by comparing the reflected-wave data in the first range and the second range, it can be detected whether the device is reciprocating along the same path.
  • An embedded object detection device according to another aspect is the device of the fourth aspect of the present invention, wherein the first range includes the detection position and lies before the detection position in the second movement. This makes it possible to determine as early as possible that the device is not following the same path.
  • An embedded object detection apparatus according to another aspect is the apparatus of the first aspect of the present invention, in which the embedded object detection unit includes a signal intensity peak detection unit and an embedded object position detection unit.
  • The signal intensity peak detection unit detects the peak of the signal intensity in the depth direction of the object at each timing.
  • The embedded object position detection unit detects the position of the embedded object based on the signal intensity peaks detected at each timing. Thereby, the position of the buried object can be detected based on the signal strength.
  • An embedded object detection apparatus according to another aspect is the apparatus of the seventh aspect of the present invention, wherein the scanning error determination unit compares the signal strength at each depth position, for each timing, in the first range with the signal strength at each depth position, for each timing, in the second range, and determines whether the scanning reciprocates along the same path. For example, if the difference between the two is within a predetermined error range, similar data are considered to have been received, and it can be determined that the device is reciprocating along the same path.
  • An embedded object detection apparatus according to another aspect is the apparatus of the eighth aspect of the present invention, wherein the first range includes a plurality of first positions each specified by a timing and a depth position.
  • The second range includes a plurality of second positions each specified by a timing and a depth position.
  • The scanning error determination unit has a difference calculation unit, a counting unit, and a determination unit.
  • The difference calculation unit calculates the difference between the signal strength at each first position and the signal strength at the second position corresponding to that first position.
  • The counting unit counts the number of differences equal to or larger than a first threshold.
  • The determination unit determines whether the count is equal to or larger than a second threshold and, when it is, determines that a scanning error has occurred. Conversely, if the difference between the signal intensities in the first range and the second range stays within the predetermined error range, it is determined that similar data were received and that the device is reciprocating along the same path.
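The difference/count/two-threshold logic above can be sketched as follows. The threshold values and the array layout are placeholders, not values from the patent.

```python
import numpy as np

def scanning_error_occurred(first_range, second_range,
                            first_threshold, second_threshold):
    """Difference calculation unit: per-position |difference| of the signal
    strengths; counting unit: number of differences >= first_threshold;
    determination unit: a scanning error when the count >= second_threshold."""
    diff = np.abs(np.asarray(first_range, float) - np.asarray(second_range, float))
    count = int((diff >= first_threshold).sum())  # counting unit
    return count >= second_threshold              # determination unit

first = np.array([[100, 200], [300, 400]])
second = np.array([[102, 205], [900, 950]])  # two positions differ strongly
print(scanning_error_occurred(first, second,
                              first_threshold=50, second_threshold=2))  # True
```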
  • An embedded object detection apparatus according to another aspect is the apparatus of the seventh aspect of the present invention, wherein the embedded object detection unit further includes a difference processing unit.
  • The difference processing unit detects the change in signal intensity at a given depth position as a difference from the signal intensity at the preceding depth position, taken in the depth direction or in the opposite, surface direction. This makes it possible to extract changes in signal intensity caused by the buried object.
  • An embedded object detection apparatus according to another aspect is the apparatus of the tenth aspect of the present invention, wherein the signal intensity peak detection unit detects, as a peak of the signal intensity, at least one of the depth position at which the difference changes from decreasing to increasing and the depth position at which the difference changes from increasing to decreasing. In this way, the peak of the signal strength can be detected using the increases and decreases of the signal strength.
  • An embedded object detection apparatus according to another aspect is the apparatus of the eleventh aspect of the present invention, in which the embedded object detection unit includes a grouping unit and a depth position peak detection unit.
  • Among the depth positions of the signal intensity peaks detected at each timing, the grouping unit sets as one group the depth positions of a plurality of peaks that are consecutive within a predetermined interval in the moving direction.
  • The depth position peak detection unit detects the peak of the group's depth positions on the plane of the moving direction and the depth direction. In this way, the position of the buried object can be detected by treating the consecutive peaks as one group and detecting the peak of the depth positions within the group.
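The grouping step can be sketched as follows, assuming peaks are given as (x, depth) pairs with x the position in the moving direction. Taking the shallowest point as the group's depth-position peak is an assumption consistent with the later aspects that prefer shallower peaks; the interval value is a placeholder.

```python
def group_peaks(peak_points, max_gap):
    """peak_points: (x, depth) pairs, x = position in the moving direction.
    A peak whose x position follows within max_gap of the previous peak
    joins the same group; otherwise it starts a new group."""
    groups = []
    for x, depth in sorted(peak_points):
        if groups and x - groups[-1][-1][0] <= max_gap:
            groups[-1].append((x, depth))
        else:
            groups.append([(x, depth)])
    return groups

def depth_peak(group):
    """Peak of the group's depth positions in the x-depth plane:
    here taken as the shallowest point of the group."""
    return min(group, key=lambda p: p[1])

groups = group_peaks([(0, 5), (1, 4), (2, 5), (10, 6)], max_gap=2)
print(len(groups), depth_peak(groups[0]))  # 2 (1, 4)
```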
  • An embedded object detection apparatus according to another aspect is the apparatus of the twelfth invention, in which the embedded object detection unit further includes an embedded object position determination unit.
  • The embedded object position determination unit sets the position of the embedded object based on the detected peak of the depth positions. Thereby, the position of the buried object can be determined.
  • An embedded object detection device according to another aspect is the device of the thirteenth invention, wherein, when only one of the depth-position peak in a group of signal-strength peaks changing from decreasing to increasing and the depth-position peak in a group of signal-strength peaks changing from increasing to decreasing is detected, and no peak at another depth position exists within a predetermined range of the detected peak, the embedded object position determination unit does not set the detected peak as the position of the buried object. As a result, the position of the buried object is not set for a single isolated peak.
  • An embedded object detection apparatus according to another aspect is the apparatus of the thirteenth invention, wherein, of the depth-position peak in a group of signal-intensity peaks changing from decreasing to increasing and the depth-position peak in a group of signal-intensity peaks changing from increasing to decreasing, the embedded object position determination unit sets the position of the shallower peak as the position of the buried object. This allows the position of the shallower peak to be used as the position of the buried object.
  • An embedded object detection apparatus according to another aspect is the apparatus of the thirteenth invention, wherein, when three or more depth-position peaks, alternating between groups of signal-intensity peaks changing from decreasing to increasing and groups changing from increasing to decreasing, are present within the predetermined range, the embedded object position determination unit sets the position of the shallowest peak as the position of the buried object. This allows the position of the shallowest peak to be used as the position of the buried object.
  • An embedded object detection device according to another aspect is the device of the thirteenth aspect, wherein the embedded object position determination unit adopts, as the position of the buried object, the peak that appears the greatest number of times at adjacent depth positions across the reciprocating movements. Thereby, the accuracy of the position of the buried object can be improved.
  • An embedded object detection device according to another aspect is the device of the twelfth invention, wherein the depth position peak detection unit determines that an embedded object exists when a group has a predetermined shape in the plane of the moving direction and the depth direction, and determines that no embedded object exists when it does not. In this way, when the peaks continue in the moving direction and form the predetermined shape, it can be determined that a buried object exists.
  • The predetermined shape may be, for example, a mountain shape.
  • An embedded object detection apparatus according to another aspect is the apparatus of the first invention, wherein the embedded object detection unit further includes an averaging processing unit.
  • During the reciprocating movement, the averaging processing unit averages the signal strength in the depth direction of the object at each timing with the signal strengths in the depth direction received at the corresponding timings before that. As a result, scanning the same path a plurality of times improves the certainty of the position of the buried object.
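The averaging across passes can be sketched as an incremental mean over one line of depth-direction intensities. The incremental-mean form is an assumption; the patent only states that the lines are averaged.

```python
def update_average(avg_line, new_line, passes_so_far):
    """Fold one newly received line of depth-direction signal strengths
    into the running average of the lines received at the same measurement
    position on earlier passes. passes_so_far is the number of passes
    already included in avg_line."""
    return [(a * passes_so_far + b) / (passes_so_far + 1)
            for a, b in zip(avg_line, new_line)]

avg = [100.0, -40.0]  # line from the first pass
avg = update_average(avg, [80.0, -20.0], passes_so_far=1)
print(avg)  # [90.0, -30.0]
```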
  • An embedded object detection device according to another aspect is the device of the first invention, further including a display unit.
  • The display unit displays the signal strength on a plane of the movement direction and the depth direction for each movement.
  • The display unit also displays the detected position of the embedded object together with that display.
  • An embedded object detection device according to another aspect is the device of the twentieth aspect of the present invention, wherein the display unit erases the detected position of the embedded object when it is determined that the device is not reciprocating along the same path. By erasing the detected position when a scanning error is detected, the embedded object is not displayed at a position where it does not exist during the movement in which the scanning error was detected, so the user is not confused.
  • A buried object detecting method according to another aspect detects a buried object in a target object using data on the reflected waves of electromagnetic waves emitted toward the object while moving along its surface. To that end, it includes a receiving step, a buried object detecting step, and a scanning error determining step.
  • The receiving step receives the data on the reflected waves at each timing accompanying the movement.
  • The scanning error determining step compares the reflected-wave data in a predetermined first range of the first movement in which the buried object was detected with the reflected-wave data in a second range of the second movement (after the first movement) corresponding to the first range, and determines whether the scanning reciprocates along the same path. In this way, it can be determined that the device has deviated from the same path; therefore, for example, by clearing the positions of the embedded object detected before the determination, the position of the embedded object can be detected again from the beginning. Alternatively, by notifying the user of the deviation from the same path, the user can erase the detected position of the buried object. Alternatively, it is possible to erase only the position of the buried object detected after the device deviated from the same path.
  • If the difference between the reflected-wave data in the first range and that in the second range is within a predetermined error range, similar data are considered to have been received, and it can be determined that the device is reciprocating along the same path.
  • If the difference is larger than the predetermined error range, different data are considered to have been received, and it can be determined that the device is not reciprocating along the same path.
  • A buried object detecting method according to the twenty-third invention is the method of the twenty-second invention, further comprising a buried object position erasing step.
  • When it is determined that the device is not reciprocating along the same path, the buried object position erasing step erases the detected positions of the buried object obtained from the movements along the path before that determination. As a result, the position of the buried object can be detected again from the beginning, so the accuracy of the position of the buried object can be improved.
  • FIG. 3 is a block diagram showing a configuration of a main control module of FIG. 2.
  • FIGS. 6A to 6C are views for explaining processing by the averaging processing unit in FIG. 5.
  • FIG. 6 is a block diagram showing a configuration of an embedded object detection unit in FIG. 5.
  • FIG. 6 is a block diagram showing the configuration of a scanning error determination unit in FIG. 5.
  • (a), (b) Views for explaining the range setting by the range setting unit.
  • FIG. 28 is a flowchart showing the waveform data averaging process of FIG. 27.
  • FIG. 28 is a flowchart showing the scanning error determination processing of FIG. 27.
  • FIG. 31 is a flowchart showing the positional shift amount acquisition processing of FIG. 30.
  • FIG. 31 is a flowchart showing the processing at the time of position shift in FIG. 30.
  • FIG. 28 is a flowchart showing the embedded object detection process of FIG. 27.
  • A flowchart showing the preprocessing of FIG. FIG. 34 is a flowchart showing the gain adjustment processing of FIG. 33.
  • A flowchart showing the difference processing of FIG. FIG. 34 is a flowchart showing the first-order differentiation processing of the difference result of FIG. 33.
  • A flowchart showing the peak detection processing of FIG. FIG. 34 is a flowchart showing the buried object determination processing of FIG. 33.
  • FIG. 40 is a flowchart showing the grouping process of FIG. 39.
  • FIG. 40 is a flowchart showing the vertex detection processing of FIG. 39.
  • FIG. 40 is a flowchart showing the buried object acquisition process of FIG. 39.
  • FIG. 1 is a perspective view showing a state in which an embedded object detection device 1 according to an embodiment of the present invention is placed on concrete 100.
  • FIG. 2 is a block diagram showing a schematic configuration of the embedded object detection device 1 according to the present embodiment.
  • The embedded object detection device 1 of the present embodiment radiates electromagnetic waves into the concrete 100 while moving along the surface 100a of an object such as the concrete 100, receives the reflected waves, and analyzes them, thereby detecting the positions of the objects 101a, 101b, 101c, 101d buried in the concrete 100.
  • The embedded object detection device 1 reciprocates along the same path (see arrows A1 and A2) to increase the certainty of the positions of the embedded objects 101a, 101b, 101c, and 101d.
  • The buried objects 101a, 101b, 101c, 101d are reinforcing bars, buried at depths of 20 cm, 15 cm, 10 cm, and 5 cm, for example, in order from the surface 100a.
  • The depth direction is indicated by arrow B, and the surface direction by arrow C.
  • The embedded object detection device 1 includes a main body 2, a handle 3, wheels 4, an impulse control module 5, a main control module 6, an encoder 7, and a display unit 8.
  • A handle 3 is provided on the upper surface of the main body 2.
  • Four wheels 4 are rotatably attached to the bottom of the main body 2.
  • The impulse control module 5 controls the timing of emitting electromagnetic waves toward the concrete 100, the timing of receiving the reflected waves of the emitted electromagnetic waves, and so on.
  • The encoder 7 is provided on a wheel 4 and, based on the rotation of the wheel 4, transmits a signal for controlling the reception timing of the reflected waves to the impulse control module 5.
  • The main control module 6 receives the reflected-wave data received by the impulse control module 5 and detects the buried object.
  • The display unit 8 is provided on the upper surface of the main body 2 and displays an image showing the positions of the embedded objects 101a, 101b, 101c, and 101d.
  • FIG. 3 is a block diagram showing the configuration of the impulse control module 5.
  • The impulse control module 5 has a control unit 10, a transmission antenna 11, a reception antenna 12, a pulse generation unit 13, a delay unit 14, and a gate unit 15.
  • The control unit 10 is configured by an MPU (Micro Processing Unit) or the like, and instructs the pulse generation unit 13 to generate a pulse, using an encoder input as a trigger.
  • The pulse generation unit 13 generates a pulse based on the command from the MPU and sends it to the transmission antenna 11.
  • The transmission antenna 11 radiates electromagnetic waves at a constant cycle based on the pulse cycle.
  • The input timing of the encoder 7 corresponds to an example of the timing.
  • The reception antenna 12 receives the reflected waves of the radiated electromagnetic waves.
  • When the gate unit 15 receives a pulse from the delay unit 14, it captures the reflected wave received by the reception antenna 12 and transmits it to the control unit 10.
  • The delay unit 14 transmits pulses to the gate unit 15 at predetermined intervals so as to capture the reflected waves.
  • The predetermined interval is, for example, a 2.5 mm pitch.
  • The impulse control module 5, triggered by the input from the encoder 7, outputs electromagnetic waves from the transmission antenna 11 multiple times. Then, by delaying the reception timing using the delay IC of the delay unit 14, the impulse control module 5 can acquire reception data for each distance from the reception antenna 12.
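The repeated-pulse, increasing-delay acquisition can be sketched as below. The hardware interface (`emit_pulse`, `sample_at`) is entirely hypothetical; only the pattern of emitting one pulse per depth sample and stepping the gate delay reflects the text.

```python
def acquire_line(emit_pulse, sample_at, num_depths, delay_step):
    """Sketch of acquiring one line of data: for each depth position,
    emit one pulse and capture the reflection with a slightly larger
    gate delay. emit_pulse and sample_at stand in for the pulse
    generation unit and the delayed gate unit; both are hypothetical."""
    line = []
    for i in range(num_depths):
        emit_pulse()                      # one radiated pulse per sample
        line.append(sample_at(i * delay_step))
    return line

pulses = []
line = acquire_line(lambda: pulses.append(1), lambda d: d,
                    num_depths=4, delay_step=10)
print(len(pulses), line)  # 4 [0, 10, 20, 30]
```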
  • FIG. 4 is a diagram showing data of reflected waves acquired by the MPU.
  • The vertical axis represents the intensity of the received signal in −4096 to +4096 gradations centered on the axis O, with the arrow direction indicating the negative side.
  • The horizontal axis indicates the distance from the reception antenna 12, and the direction of arrow B (corresponding to the depth direction) indicates that the distance from the reception antenna 12 is long. A long distance corresponds to a deep position.
  • Since the waveform W1 shown in FIG. 4 also includes reflected waves that were reflected at the antenna without being radiated into the concrete 100 (p1, etc.), the difference from a reference waveform is calculated, and the change in the reflected-wave data from inside the concrete 100 is extracted.
  • The data shown in FIG. 4 are the data from one input of the encoder 7 until the next. By gradually delaying the reception timing, reflected waves from positions farther from the reception antenna 12 are received; when there is a new input from the encoder 7, the reception timing delay is reset and the timing is gradually delayed again.
  • In this way, the reflected wave in the depth direction (the direction of arrow B) is received at each predetermined measurement position (each position where the encoder 7 produced an input) along the direction of arrow A indicating the moving direction.
  • The data of the reflected waves from one input of the encoder 7 shown in FIG. 4 until the next input are called one line of data.
  • The control unit 10 transmits the RF (Radio Frequency) data for one line to the main control module 6 every time one line of data is accumulated.
  • Note that the measurement positions are not exactly the same positions, and the direction of arrow B indicating the depth direction is not strictly perpendicular to the surface 100a of the concrete 100.
  • FIG. 5 is a block diagram showing the configuration of the main control module 6.
  • The main control module 6 includes a receiving unit 61, an RF data management unit 62, an averaging processing unit 63, an embedded object detection unit 64, a scanning error determination unit 65, and a display control unit 66.
  • The receiving unit 61 receives the RF data one line at a time, each time it is transmitted from the impulse control module 5.
  • The RF data management unit 62 stores the one-line RF data received by the receiving unit 61.
  • The averaging processing unit 63 averages the one-line waveform data when the device moves along the path two or more times. For example, when the path is traversed in the direction of arrow A1 in FIG. 1 the first time and, after turning back, in the direction of arrow A2 the second time, the averaging processing unit 63 calculates the average of the first-pass and second-pass signal intensities at the same measurement position (the same distance from the turnaround point) and the same position in the depth direction.
  • The embedded object detection unit 64 detects the peaks of the signal intensity in each averaged line of data, determines the presence or absence of the embedded object 101 using the peaks, and detects the position of the embedded object 101.
  • The scanning error determination unit 65 determines whether the scanning reciprocates along the same path.
  • The display control unit 66 controls the display unit 8 to display an image in which the signal intensity is gradation-processed by color on the plane of the direction of arrow A (the movement direction) and the direction of arrow B (the depth direction).
  • The display control unit 66 also controls the display unit 8 to display the position of the buried object 101.
  • The averaging processing unit 63 performs averaging processing of the RF data.
  • FIGS. 6A to 6C are diagrams for explaining the averaging process.
  • The third drawing from the right end in each of FIGS. 6A to 6C is a view of the wall (an example of the object) seen from the front; a reinforcing bar as the buried object 101 is buried along the vertical direction.
  • The embedded object detection device 1 is reciprocated between points E and F so as to traverse the embedded object 101.
  • While the device moves from point E to point F along arrow A1 indicating the moving direction, the receiving unit 61 receives RF data for a plurality of lines and stores them in the RF data management unit 62.
  • FIGS. 6A to 6C show the signal intensity on an XY coordinate plane in which the X axis is the position in the scanning direction, converted from the inputs of the encoder 7, and the Y axis is the position in the depth direction.
  • On the second pass, while the device moves from point F to point E along arrow A2 indicating the moving direction, the receiving unit 61 receives RF data for a plurality of lines and stores the received RF data in the RF data management unit 62.
  • The averaging processing unit 63 averages the first-pass and second-pass RF data line by line.
  • Specifically, the averaging processing unit 63 calculates, for all XY coordinate values on the XY plane from point E to point F, the average of the first-pass and second-pass signal intensities at the same XY coordinate value, and stores it in the RF data management unit 62.
  • On the third pass, while the device moves from point E to point F along arrow A1 indicating the moving direction, the receiving unit 61 receives RF data for a plurality of lines and stores the received RF data in the RF data management unit 62.
  • The averaging processing unit 63 averages the first-, second-, and third-pass RF data line by line and stores the averaged RF data in the RF data management unit 62. By repeatedly scanning the same path a plurality of times in this way, the RF data can be averaged, noise can be reduced, and the position of the buried object, described later, can be detected more accurately.
  • FIG. 7 is a block diagram showing the configuration of the embedded object detection unit 64.
  • the embedded object detection unit 64 includes a preprocessing unit 23, an embedded object determination unit 24 (an example of an embedded object position detection unit), and a determination result registration unit 25.
  • the pre-processing unit 23 detects a peak of signal intensity for each averaged line of data.
  • the embedded object determination unit 24 determines the presence or absence of an embedded object using the peak of the signal intensity for each line of RF data detected by the preprocessing unit 23. Further, the embedded object determination unit 24 detects the position of the embedded object 101.
  • the determination result registration unit 25 registers the position of the buried object detected by the buried object determination unit 24 in the RF data management unit 62.
• the preprocessing unit 23 includes a gain adjusting unit 31, a difference processing unit 32, a moving average processing unit 33, a first-order differentiation processing unit 34, and a peak detection unit 35 (an example of a signal intensity peak detection unit).
• the gain adjusting unit 31 performs gain adjustment on the RF data averaged for each line. As the distance from the transmission antenna 11 and the reception antenna 12 increases (that is, as the delay time increases), the reception sensitivity decreases, so the density of white and black decreases when the image described later is displayed. Therefore, the gain adjusting unit 31 increases the gain value (×1 to ×20) by which the signal strength is multiplied (amplified) as the depth position becomes deeper.
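A minimal sketch of this depth-dependent gain in Python. The patent only gives the ×1 to ×20 range, so the linear ramp from the surface to the maximum depth is an assumption for illustration:

```python
import numpy as np

def apply_depth_gain(line, g_min=1.0, g_max=20.0):
    """Amplify deeper samples more to compensate for falling sensitivity.

    line: 1-D array of signal intensities ordered shallow -> deep.
    The gain ramps from g_min at the surface to g_max at maximum depth
    (the linear ramp is an assumption; only the x1-x20 range is given).
    """
    gains = np.linspace(g_min, g_max, num=len(line))
    return line * gains

out = apply_depth_gain(np.array([1.0, 1.0, 1.0]))
```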
  • FIG. 8A is a diagram showing image data before the gain adjustment.
  • the horizontal axis (X axis) indicates the moving distance, and the direction of arrow A1 indicates the moving direction.
  • the vertical axis (Y axis) indicates the depth position, and the arrow direction indicates the deep side.
• the image is formed by showing the signal intensity of each line shown in FIG. 4 in black-and-white gradation in the vertical axis direction, and arranging the black-and-white gradation data of all lines in the horizontal axis direction.
  • the gradation process is performed such that the greater the intensity of the received signal is, the whiter it is, and the smaller the intensity of the received signal is, the more black it is.
• in FIG. 8A, the density of the black-and-white gradation indicates the signal strength, and the signal intensity of one line subjected to black-and-white gradation is shown surrounded by a dotted line.
• FIG. 8B is a diagram showing image data obtained by performing gain adjustment processing on the image data of FIG. 8A. As shown in FIG. 8B, the contrast becomes stronger as a result of the gain adjustment. Further, since the gain value is higher in the deeper part, the value of the RF data becomes larger there, so the image data in the lower part becomes whitish as a whole. The whitish part is shown surrounded by a dotted line.
• the difference processing unit 32 extracts the RF data of the changed portion by calculating, from the gain-adjusted RF data, the difference from a reference point.
  • FIG. 9A is a diagram showing the image data before the difference process is performed
  • FIG. 9B is a diagram showing the image data after the difference process is performed.
  • FIG. 9A shows the same image data as FIG. 8B.
• the reference point is the average value of the data acquired so far. For example, the reference point used when calculating the difference for the signal intensity at the depth n (mm) (Y coordinate value Yn) of the m-th line (X coordinate value Xm) is the average of the signal intensities at the same depth n of the lines 1 to m acquired so far.
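The reference subtraction can be sketched as follows; a minimal illustration, assuming "data acquired so far" means the running mean over lines 1..m including the current line (the excerpt does not say whether the current line is included):

```python
import numpy as np

def subtract_reference(lines):
    """Subtract from each line the running mean of all lines so far.

    lines: 2-D array (lines x depth samples). For line m, the reference
    at each depth is the mean over lines 1..m (including the current
    line here - an assumption about 'data acquired so far').
    """
    lines = np.asarray(lines, dtype=float)
    cumulative = np.cumsum(lines, axis=0)          # per-depth running sums
    counts = np.arange(1, len(lines) + 1)[:, None] # 1, 2, ..., m
    return lines - cumulative / counts

res = subtract_reference([[4.0, 0.0], [0.0, 2.0]])
```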
  • the moving average processing unit 33 performs the moving average processing for each line for the RF data subjected to the difference processing.
  • the moving average process can be performed by averaging 8 points.
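The 8-point moving average amounts to convolving each line with a uniform kernel, for example:

```python
import numpy as np

def moving_average(line, window=8):
    """Smooth one line of RF data with an 8-point moving average."""
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where the full window fits
    return np.convolve(line, kernel, mode="valid")

smoothed = moving_average(np.ones(16))
```

With `mode="valid"` the output is shorter than the input by `window - 1` samples; an implementation that needs the original length would pad the line first.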
  • FIG. 10A is a diagram showing the image data subjected to the moving average processing
  • FIG. 10B is a diagram showing the signal intensity of the RF data of the line L1 of FIG. 10A.
  • the horizontal axis of FIG. 10B indicates the depth position, which is deeper along the arrow direction.
  • the vertical axis of FIG. 10B shows the signal intensity, and the signal intensity increases along the arrow direction.
• in the image, a position with higher signal strength is rendered in whiter gray, and a position with lower signal strength is rendered in blacker gray.
• the downward peak, that is, the position where the black is darkest, and the upward peak, that is, the position where the white is densest, are detected.
• the position of a downward peak indicates the position of a reinforcing bar or the like in the concrete, and the position of an upward peak indicates the position of a cavity or resin in the concrete.
  • the primary differential processing unit 34 performs primary differential processing on the data subjected to the differential processing in order to detect the downward peak.
  • the first-order differentiation processing unit 34 calculates the difference from the signal strength at the predetermined depth position to the signal strength at the next depth position.
  • FIG. 11 is an enlarged view between P10 and P3 in FIG.
  • FIG. 12 is a diagram showing a table 150 of the signal strength of the graph of FIG. 11 and the result of the first derivative processing.
  • Sequence numbers are shown in the leftmost column of the table 150 shown in FIG. The position becomes deeper as the sequence number increases.
  • the second column from the left shows the signal strength at each sequence number.
  • the third column from the left shows the difference calculated by the primary differential processing unit 34.
• the difference for the sequence number n is a value obtained by subtracting the signal intensity of the sequence number n from the signal intensity of the sequence number n+1.
• for example, the difference for the sequence number 7 is a value (−9) obtained by subtracting the 7th signal strength (431) from the 8th signal strength (422).
  • the primary differential processing unit 34 performs the primary differential processing on all the data of one line.
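The first-order differentiation is a simple forward difference over the depth samples of one line; a minimal sketch, reproducing the worked example from the table (422 − 431 = −9):

```python
def first_difference(intensities):
    """diff[n] = intensity[n+1] - intensity[n], as in table 150."""
    return [intensities[i + 1] - intensities[i]
            for i in range(len(intensities) - 1)]

# the worked example from the table: 422 - 431 = -9
d = first_difference([431, 422])
```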
  • the peak detection unit 35 detects the peak of the RF data of one line after the primary differential processing. For example, when detecting a downward peak (black peak), the peak detection unit 35 detects, as a peak, a point at which the change after the first-order differentiation process changes from a negative change to a positive change. Specifically, as shown in Table 150 of FIG. 12, the change in sequence number 33 is a negative (-) change, and the change in sequence number 34 is a positive (+) change. The peak detection unit 35 detects that the signal intensity has a downward peak at the depth position of the sequence number 34.
• when detecting an upward peak (white peak), the peak detection unit 35 detects, as a peak, the point at which the change after the first-order differentiation process turns from a positive change to a negative change. For example, in the table 150 of FIG. 12, it is detected that the signal intensity has an upward peak at the depth position of the sequence number 5.
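The two sign-change rules above can be sketched together; a minimal illustration where a downward (black) peak is a local minimum of intensity and an upward (white) peak a local maximum:

```python
def detect_peaks(diffs):
    """Locate peaks from sign changes of the first differences.

    diffs[i] = s[i+1] - s[i]. A downward (black) peak sits where the
    change turns negative -> positive; an upward (white) peak where it
    turns positive -> negative.
    """
    down, up = [], []
    for i in range(1, len(diffs)):
        if diffs[i - 1] < 0 and diffs[i] > 0:
            down.append(i)   # local minimum of intensity
        elif diffs[i - 1] > 0 and diffs[i] < 0:
            up.append(i)     # local maximum of intensity
    return down, up

signal = [3, 2, 1, 2, 3, 2]
diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
down, up = detect_peaks(diffs)
```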
  • the buried object determination unit 24 includes a grouping unit 51, a shape determination unit 52 (an example of a depth position peak detection unit), a buried object position determination unit 53, and a buried object data integration unit 54.
  • the grouping unit 51 detects, as a group, peaks that are continuous with respect to the moving distance (X-axis coordinate) among the plurality of peaks detected by the peak detection unit 35.
  • the shape determination unit 52 determines the presence/absence of an embedded object based on whether or not the group has a mountain shape, and when it determines that the embedded object exists, determines the apex (an example of a peak at the depth position) in the group. Detect and use as the position of the buried object.
  • the grouping unit 51 groups the peak detection results by the peak detection unit 35.
• the grouping unit 51 checks for peak detection results in order from the oldest line, and, with a found result as a starting point, checks for continuous peak detections in the traveling direction.
  • FIG. 13 is a diagram showing image data after preprocessing by the preprocessing unit 23. In FIG. 13, the line L2 acquired this time is shown. 14A to 14D are diagrams for explaining the processing by the grouping unit 51.
• the grouping unit 51 uses the position QS of the first found peak as a starting point (indicated by a circle in FIG. 13), and confirms whether the next line has a peak within 5 pixels in the direction of arrow B indicating the moving direction and within 5 pixels above or below.
  • the line in which the peak position Q is found is the current line.
  • FIG. 14A shows a state where the peak position QS is found.
  • FIG. 14B shows a case where the position Q2 of the next peak exists within 5 pixels of the direction of arrow A1 indicating the moving direction of the position QS of the peak of the current line and within 5 pixels of the upper side. In FIG. 14B, the peak position is rising (moving to the shallow side) in the moving direction.
  • FIG. 14C shows the case where the peak position Q2 of the next line exists within 5 pixels and below 5 pixels in the direction of arrow A1 indicating the moving direction of the peak position QS of the current line.
  • the position of the peak is descending (moving to the deep side) in the moving direction.
  • the grouping unit 51 performs grouping of peak positions.
• a group (for example, the group G1), in which a black circle and a black square are connected by a line, is shown.
  • the grouping of the upward peak position is also performed.
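A minimal sketch of this chaining with the 5-pixel windows; a greedy illustration (not the patent's exact procedure), assuming peaks are given as (x, depth) pairs sorted along the moving direction:

```python
def group_peaks(peaks, dx=5, dy=5):
    """Chain peaks of successive lines into groups.

    peaks: list of (x, depth) sorted by x (moving direction). A peak
    joins a group when it lies within dx pixels ahead of the group's
    last peak and within dy pixels above or below it.
    """
    groups = []
    for x, depth in peaks:
        for group in groups:
            last_x, last_depth = group[-1]
            if 0 < x - last_x <= dx and abs(depth - last_depth) <= dy:
                group.append((x, depth))
                break
        else:
            groups.append([(x, depth)])  # start a new group
    return groups

groups = group_peaks([(0, 20), (1, 17), (2, 15), (3, 18), (20, 40)])
```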
  • the shape determination unit 52 determines whether the shape of the group is a predetermined mountain shape.
  • FIG. 15 is a diagram showing the positions of a plurality of grouped peaks. Further, FIG. 15 also shows conditions for determining the mountain shape. The shape determination unit 52 determines that the group has a mountain shape when the three conditions of the first condition, the second condition, and the third condition are satisfied.
• the first condition is that the peak position continuously rises by 5 pixels or more in the upward (shallow) direction along the direction of the arrow A1.
• the second condition is that the direction of change of the peak position reverses from rising to falling.
• the third condition is that the peak position then continuously descends by 5 pixels or more along the direction of the arrow A1, and that the difference in the depth direction is 10 pixels or more.
  • the shape determination unit 52 determines that the embedded object exists.
  • the shape determination unit 52 also detects an upward peak of the position of the group when performing the determinations of the first to third conditions.
  • the shape determination unit 52 sets the shallowest position as the apex of the group and stores the position.
• the shape determination unit 52 sets the point where the change in position turns from increase to decrease as the apex of the group. For example, in FIG. 16, since the 7th to 8th change is positive (+) and the 8th to 9th change is negative (−), the 8th position is determined to be the apex, as shown in FIG. 15. Similarly, the shape determination unit 52 determines the shape of a group of upward peak positions (the positions where the white is densest) and detects the shallowest position of the group as the apex position.
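The three mountain-shape conditions and the apex search can be sketched as follows; a simplified illustration (names and thresholds as read from the text, not the patent's exact procedure), taking the per-line peak depths of one group with smaller values meaning shallower:

```python
def mountain_apex(depths, min_rise=5, min_fall=5, min_span=10):
    """Check the three mountain-shape conditions and return the apex.

    depths: peak depth (pixels) per consecutive line of one group.
    The group must rise by >= min_rise toward the shallow side, reverse
    direction, fall by >= min_fall, and span >= min_span in depth.
    Returns (is_mountain, apex_index).
    """
    apex = min(range(len(depths)), key=depths.__getitem__)  # shallowest
    rise = depths[0] - depths[apex]    # climb before the apex
    fall = depths[-1] - depths[apex]   # descent after the apex
    span = max(depths) - min(depths)
    ok = rise >= min_rise and fall >= min_fall and span >= min_span
    return ok, apex

ok, apex = mountain_apex([22, 16, 10, 15, 21])
```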
  • the embedded object position determination unit 53 determines the position of the embedded object 101 based on the position and the number of peaks detected by the shape determination unit 52. The buried object position determination unit 53 determines whether or not adjacent peaks in the plurality of detected peaks are within a predetermined range.
  • the buried object position determination unit 53 determines whether or not the adjacent peaks are within ⁇ 10 pixels on the X axis and ⁇ 40 pixels on the Y axis.
• specifically, the buried object position determination unit 53 determines whether or not there is another peak within ±10 pixels on the X axis and within ±40 pixels on the Y axis from a predetermined peak; when another peak exists, it further determines whether or not there is yet another peak within ±10 pixels on the X axis and within ±40 pixels on the Y axis from that peak, and it performs this determination for the peaks in the entire range in which the RF data was acquired.
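A minimal sketch of this adjacency chaining; a greedy single-chain illustration (real data would need to handle several chains), with apexes given as (x, y) pixel pairs:

```python
def adjacent(p, q, dx=10, dy=40):
    """Two apexes are adjacent within +/-10 px on X and +/-40 px on Y."""
    return abs(p[0] - q[0]) <= dx and abs(p[1] - q[1]) <= dy

def chain_of_adjacent(apexes):
    """Greedily chain apexes, ordered shallow to deep, that are
    pairwise adjacent to the previous member of the chain."""
    apexes = sorted(apexes, key=lambda p: p[1])  # shallow -> deep
    chain = [apexes[0]]
    for p in apexes[1:]:
        if adjacent(chain[-1], p):
            chain.append(p)
    return chain

chain = chain_of_adjacent([(100, 50), (104, 80), (98, 115), (300, 60)])
```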
  • the buried object position determination unit 53 detects five patterns of peaks, as shown in Table 1 of FIG.
• the pattern A1 is a pattern in which the apex of a white (upward peak position) group, the apex of a black (downward peak position) group, and the apex of a white (upward peak position) group are detected adjacent to each other, in order from the shallowest.
• the pattern A2 is a pattern in which the apex of a black (downward peak position) group, the apex of a white (upward peak position) group, and the apex of a black (downward peak position) group are detected adjacent to each other, in order from the shallowest.
  • the pattern B1 shows a pattern in which the vertices of the white (upward peak position) group and the black (downward peak position) group vertex, which are adjacent to each other, are detected in order from the shallow side.
  • the pattern B2 is a pattern in which the vertices of the black (downward peak position) group and the vertices of the white (upward peak position) group, which are adjacent to each other, are detected in order from the shallow side.
• Pattern C is a pattern other than the above-mentioned patterns A1, A2, B1, and B2, in which only one apex of a white (upward peak position) group or a black (downward peak position) group is detected.
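The five patterns reduce to classifying the colors of a chain of adjacent apexes ordered shallow to deep; a minimal sketch using 'W' for white (upward peak) and 'B' for black (downward peak):

```python
def classify_pattern(colors):
    """Classify a chain of adjacent apexes, ordered shallow -> deep.

    colors: sequence of 'W' (white, upward peak) / 'B' (black, downward
    peak). Everything outside the four listed sequences falls back to C.
    """
    seq = "".join(colors)
    table = {"WBW": "A1", "BWB": "A2", "WB": "B1", "BW": "B2"}
    return table.get(seq, "C")

pattern = classify_pattern(["B", "W", "B"])
```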
  • FIG. 18A is a diagram showing image data in which three adjacent peaks are detected
  • FIG. 18B is a diagram showing an example of the determined position of the buried object.
• in FIG. 18A, the position P1 of the apex of the black group G1 (the downward peak position), the position P2 of the apex of the white group G2 (the upward peak position), and the position P3 of the apex of the black group G3 (the downward peak position) are detected in order from the shallow side, and each is indicated by a cross.
• the difference between the X-axis values of the vertex position P1 and the vertex position P2 is within 10 pixels, the difference between the X-axis values of the vertex position P2 and the vertex position P3 is within 10 pixels, the difference between the Y-axis values of the vertex position P1 and the vertex position P2 is within 40 pixels, and the difference between the Y-axis values of the vertex position P2 and the vertex position P3 is within 40 pixels.
• since the positions P1 (black apex), P2 (white apex), and P3 (black apex) of three adjacent apexes are detected as shown in FIG. 18A, the embedded object position determination unit 53 determines the position P1 of the shallowest apex as the position Pd of the buried object, as shown in FIG. 18B.
• that is, the embedded object position determination unit 53 determines whether the vertex position P1, the vertex position P2, and the vertex position P3 are located in the vicinity of one another; when they are, it determines that the apexes were detected from one buried object and that the buried object is arranged at the shallowest position. Also in the case of the pattern A1, the embedded object position determination unit 53 determines the shallowest apex as the position Pd of the embedded object.
  • FIG. 19A is a diagram showing image data in which two adjacent vertices are detected
  • FIG. 19B is a diagram showing an example of the determined position of the buried object.
  • FIG. 19A shows images of a white (upward peak position) group G2 and a black (downward peak position) group G3 in order from the shallow side. Then, the position P2 of the apex of the group G2 and the position P3 of the apex of the group G3 are detected, which is indicated by a cross. The difference between the X-axis values of the vertex position P2 and the vertex position P3 is within 10 pixels, and the difference between the Y-axis values of the vertex position P2 and the vertex position P3 is within 40 pixels.
  • the buried object position determination unit 53 determines the shallower peak P2 as the position of the buried object as shown in FIG. 19(b). That is, the buried object position determination unit 53 determines whether the vertex position P2 and the vertex position P3 are provided in the vicinity, respectively, and when they are provided in the vicinity, they are detected from one buried object. It is determined that the embedded object is placed at the position of the shallower apex.
  • the embedded object position determination unit 53 determines the position of the apex in the shallow direction as the position of the embedded object.
• FIG. 20A is a diagram showing an example in which one vertex is detected without any adjacent vertices, and FIG. 20B is a diagram showing a state in which the position of the buried object is not determined.
• in FIG. 20A, only the position P2 of the apex of the white (upward peak position) group G2 is detected, and it is indicated by a cross.
  • the embedded object position determination unit 53 does not determine the position of the embedded object, as shown in FIG.
• as shown in FIGS. 6A to 6C, the buried object data integration unit 54 updates the detected position of the buried object when the same path is traversed multiple times.
• the buried object data integration unit 54 changes the buried object data based on the buried object data transition table shown in FIG. FIGS. 22A to 22C are diagrams for explaining the transition of the buried object data.
  • FIG. 22A is a diagram showing the scanning direction, the detection result of the apex, and the embedded object position determination result when the embedded object detection device 1 is scanned from the point E to the point F for the first time.
  • FIG. 22B is a diagram showing the scanning direction, the vertex detection result, and the embedded object position determination result when the embedded object detection device 1 is scanned for the second time from the point E to the point F.
  • FIG. 22C is a diagram showing a scanning direction, a vertex detection result, and an embedded object position determination result when the embedded object detection device 1 is scanned for the third time from point E to point F.
  • the buried object position determination unit 53 does not determine the position of the buried object (see the image on the right end).
• as described above, the buried object position determination unit 53 determines the position of the buried object to be the position P2 of the apex (see the image at the right end).
• the buried object data integration unit 54 takes the first result (before the change) as the pattern C and the second result (after the change) as the pattern B1, and the pattern B1 is adopted in accordance with the transition table of FIG.
• as described above, the embedded object position determination unit 53 determines the position of the buried object to be the position P1 of the apex (see the image at the right end).
• the buried object data integration unit 54 takes the pattern B1 as the result before the change and the pattern A2 as the result after the change, and the pattern A2 is adopted in accordance with the transition table. Then, the position Pd of the buried object is determined to be the position P1 of the apex.
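The transition table itself is not reproduced in this excerpt, but the two examples above (C → B1 adopts B1; B1 → A2 adopts A2) are consistent with keeping the result whose pattern rests on more adjacent apexes. The following sketch is a hypothetical stand-in for the table, not the patent's actual rule; the tie-breaking toward the newer result is an assumption:

```python
# Hypothetical rank: A* patterns have three adjacent apexes, B* two, C one.
RANK = {"A1": 3, "A2": 3, "B1": 2, "B2": 2, "C": 1}

def integrate(before, after):
    """Pick which scan's result to keep after a repeated pass.

    before/after: (pattern, position) tuples. Equal ranks keep the
    newer result here, which is an assumption.
    """
    return after if RANK[after[0]] >= RANK[before[0]] else before

kept = integrate(("B1", "P2"), ("A2", "P1"))
```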
  • the determination result registration unit 25 registers the result (group, peak position, determined position of the buried object, etc.) judged by the buried object judgment unit 24 in the RF data management unit 62.
  • the scanning error determination unit 65 determines whether or not scanning is performed so as to reciprocate along the same path.
• a problem that occurs when the device does not reciprocate along the same path (also referred to as a position shift) will be described.
  • FIG. 23 is a diagram for explaining a state in which the embedded object detection device is not reciprocating along the same path.
• when the user moves the embedded object detection device 1 from the position J to the folding origin O so as to cross the embedded object 101 (for example, a reinforcing bar) and scans, the embedded object 101 can be detected at the position K.
  • the image data at this time is shown in FIG.
  • the position K is a position separated from the folding origin O by a distance L. It is assumed that when the embedded object detection device 1 is moved to the folding origin O and then folded back at the folding origin O to move the embedded object detection device 1 toward the position J, the route is shifted to the position R. In this case, the embedded object 101 cannot be detected at the position of the distance L from the folding origin O.
• the image data at this time is shown in FIG.
• FIG. 23C shows the image data acquired while turning back at the folding origin O and moving over the distance to the position K, while the remaining data is the data of FIG. 23B acquired during the previous movement from the position J.
• the position P1 (Pd) of the embedded object 101 detected during the movement from the position J to the folding origin O is shown.
  • the embedded object detection apparatus 1 of the present embodiment determines a scanning error, and when a scanning error is determined, the previously detected position P1 of the embedded object is erased. As a result, the embedded object can be displayed at a position that should be originally displayed on the shifted route.
  • FIG. 24 is a block diagram showing the configuration of the scanning error determination unit 65.
  • the scanning error determination unit 65 includes a range setting unit 71, a difference calculation unit 72, a counting unit 73, a determination unit 74, and an embedded object position erasing unit 75.
• the range setting unit 71 sets a predetermined range S (an example of a first range) in the RF data acquired when scanning along a predetermined path, and a range T (an example of a second range) in the RF data acquired when subsequently moving back along the path.
  • the range T and the range S are corresponding ranges.
  • FIG. 25A and FIG. 25B are diagrams for explaining the setting of the range by the range setting unit 71.
  • FIG. 25A is a diagram for explaining the range S when moving from the position J to the folding origin O.
  • FIG. 25B is a diagram for explaining the range T when moving from the folding origin O toward the position R.
  • the range S is set to a range of the distance L1 from the detected position K of the buried object toward the folding origin O.
  • the X coordinate value of the position K is X1
  • the X coordinate value of the position moved by the distance L1 from X1 toward the folding origin O is X2.
  • the range S indicates the range of the X coordinate values X1 to X2.
• the range T corresponding to the range S is set as the range of the distance L1 toward the folding origin O from the position P, which is at the distance L (the distance from the folding origin O to the position K) from the folding origin O in the direction of the position R.
  • the X coordinate value of the position P is X1 which is the same as the X coordinate value of the position K.
  • the X coordinate value of the position moved by the distance L1 from the coordinate value X1 toward the folding origin O is X2.
  • the range T also indicates the range of the X coordinate values X1 to X2, and is a range corresponding to the range S.
• the difference calculation unit 72 compares the signal intensities at the same distance and the same depth position from the folding origin O, as shown in FIGS. 25A and 25B. The range S is divided into points sn (an example of a first position) identified by a line (X coordinate) and a depth position (Y coordinate), and the range T into points tn (an example of a second position). For example, the difference calculation unit 72 calculates the absolute value of the difference between the signal intensity fs1 of the point s1 closest to the X coordinate value X1 in the range S and the signal intensity ft1 of the point t1 closest to the X coordinate value X1 in the range T.
  • the counting unit 73 counts the number m of points at which the absolute value Dn of the difference calculated by the difference calculation unit 72 is larger than the positional deviation comparison value D0.
• when the counted number m is larger than the positional deviation comparison number v, the determination unit 74 determines that the second path traveled in FIG. 25(b) deviates from the first path traveled in FIG. 25(a), and therefore that a scanning error has occurred. On the other hand, when the counted number m is equal to or less than the positional deviation comparison number v, it determines that the first and second paths are not misaligned and that no scanning error has occurred.
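The difference, counting, and threshold steps above can be sketched together; a minimal illustration with the ranges given as aligned lists of signal intensities and hypothetical parameter names d0 and v for the comparison value and comparison number:

```python
def scanning_error(intensities_s, intensities_t, d0, v):
    """Compare corresponding points of ranges S and T.

    Counts the points where |fs_n - ft_n| exceeds the positional
    deviation comparison value d0; a scanning error is reported when
    the count m exceeds the comparison number v.
    """
    m = sum(1 for fs, ft in zip(intensities_s, intensities_t)
            if abs(fs - ft) > d0)
    return m > v

err = scanning_error([10, 10, 10, 10], [10, 30, 40, 11], d0=5, v=1)
```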
• when a scanning error is determined, the embedded object position erasing unit 75 erases the position P1 (Pd) of the embedded object determined by the first scanning from the RF data management unit 62, and also deletes it from the display of the display unit 8. As a result, as shown in FIG. 23D, the position P1 of the buried object displayed at the position P is deleted on the display unit 8, and the position P1′ of the buried object detected by the scanning from the folding origin O toward the position R is displayed at the position Q (see FIG. 23A).
• the device reciprocates along the same route a plurality of times, so that the averaging processing unit 63 averages the RF data and the buried object data is integrated.
  • a range S of the distance L1 on the folding origin O side from the position K where the embedded object is detected is set as a range for comparison.
• the range of the distance L1 toward the folding origin O from the position P, which is at the distance L from the folding origin O (the distance from the folding origin O to the position K) when the scanning error is determined, is set as the range corresponding to the range S. In this way, the range on the side opposite to the moving direction at the time of determining the scanning error of the position of the buried object is set as the range to be compared with the range S.
  • the range S of the distance L1 from the position K where the embedded object is detected to the side opposite to the folding origin O is set as a range to be compared.
• the range of the distance L1 is set as the range corresponding to the range S. In this way, the range on the side opposite to the moving direction at the time of determining the scanning error of the position of the buried object is set as the range to be compared with the range S.
  • the scanning direction can be detected by using a plurality of encoders.
  • the display control unit 26 indicates a group, a peak position, and the like on the data image and causes the display unit 8 to display the data image. For example, like the image data on the right side of FIG. 22C, image data obtained by performing grayscale gradation on RF data and the determined position Pd (P1) of the buried object are displayed on the display unit 8.
  • FIG. 27 is a flowchart showing the processing of the embedded object detection device 1. (Overview of the whole process)
• in step S11, the embedded object detection device 1 is first initialized. In the initialization process, each piece of data is cleared.
• when the user moves the embedded object detection device 1, in step S12, the RF data is acquired using the input from the encoder 7 as a trigger.
• in step S13, the averaging processing unit 63 performs the averaging processing of the acquired waveform data (RF data).
  • step S14 the scanning error determination unit 65 performs a scanning error determination process.
• the scanning error determination is not performed during the first movement. Whether the device is being moved repeatedly along the same path can be determined, for example, from the input of the encoder.
  • step S15 the embedded object detection unit 64 performs an embedded object detection process.
  • step S12 to step S15 will be described in detail.
  • FIG. 28 is a flowchart showing the data acquisition process of step S12.
  • the input from the encoder 7 is performed in step S1, and the impulse output control is started in step S2.
• the transmission antenna 11 outputs electromagnetic wave pulses at a constant period (for example, 1 MHz).
• in step S3, the delay unit 14 sets the delay time in the delay IC.
  • the delay time can be set in units of 10 psec from 0 to 5120 psec.
  • step S4 the control unit 10 AD-converts the RF data received from the receiving antenna 12 via the gate unit 15.
  • step S5 it is determined whether or not the delay time is maximum (for example, 5120 psec), and if not, the control returns to step S3. By repeating steps S3, S4, and S5, data for one line can be acquired.
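The acquisition loop of steps S3 to S5 (set the delay, AD-convert, repeat until the maximum delay) can be sketched as follows; a minimal illustration in which `sample` is a hypothetical callable standing in for the delay-set-and-convert hardware path:

```python
def acquire_line(sample, step_ps=10, max_ps=5120):
    """Acquire one line by sweeping the sampling delay (steps S3-S5).

    sample: callable returning the AD-converted intensity at a delay.
    The delay is increased in 10 ps steps from 0 ps up to 5120 ps.
    """
    line = []
    delay = 0
    while delay <= max_ps:          # step S5: stop at the maximum delay
        line.append(sample(delay))  # steps S3-S4: set delay, AD-convert
        delay += step_ps
    return line

line = acquire_line(lambda d: d // 10)  # dummy receiver for illustration
```

With the 10 ps step and 5120 ps maximum, one line consists of 513 samples (delays 0, 10, ..., 5120 ps).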
  • step S6 the AD-converted RF data is transmitted to the main control module 6.
  • steps S1 to S6 are processes performed in the impulse control module 5.
• then, the control of steps S2 to S7 is performed again, and the data for the next one line is acquired and transmitted to the main control module 6.
  • step S7 the receiver 61 waits until it receives RF data for one line from the impulse control module 5, and when the RF data is received, the data acquisition process ends.
  • FIG. 29 is a flowchart showing the waveform data averaging process.
• the averaging processing unit 63 checks whether one-line RF data already exists at the same X coordinate value on the scanning X axis. If the one-line RF data acquired so far exists at the same X coordinate value, the averaging processing unit 63, in step S172, calculates the average of the one-line RF data acquired this time and the one-line RF data acquired so far, and records it in the RF data management unit 62.
• specifically, the averaging processing unit 63 averages the one line of data by calculating the average of the signal intensities at the same XY coordinate values in the one-line RF data acquired this time and the one-line RF data acquired so far.
  • FIG. 30 is a flowchart showing the scanning error determination processing.
• when the scanning error determination process is started, first, in step S181, the range setting unit 71 determines whether or not comparison reference data exists.
  • the reference data for comparison is data acquired by scanning a predetermined route, and is data in which an embedded object is detected. That is, it is detected whether or not the scan for acquiring the current RF data is a scan further performed to improve the certainty of the position of the buried object after the position of the buried object is detected at least once. If the reference data for comparison does not exist, that is, if the position of the buried object has not been detected yet, the scanning error determination process ends.
• the range setting unit 71 determines whether or not the X-axis coordinate value at which the detected position of the embedded object exists and the current X-axis coordinate value are within the comparison range. That is, using the example of FIG. 23, the range setting unit 71 determines whether the X coordinate value of the one line of RF data currently acquired in FIG. 23C is within the range of (L − L1) to L from the folding origin O.
• in step S183, the range setting unit 71 determines whether or not the X coordinate value of the one-line RF data acquired at the encoder input timing immediately before the present one is smaller than the X coordinate value of the detected position of the embedded object. If it is smaller, in step S184, the range setting unit 71 determines whether or not the X coordinate value of the currently acquired one-line RF data is greater than or equal to the X coordinate value of the detected position of the embedded object. This indicates, in the state of scanning in the direction of the arrow A1, that the X coordinate value of the current one-line RF data is equal to or more than the X coordinate value X1 of the detected position of the embedded object. That is, it is detected whether or not the X coordinate value X1 of the detected position of the embedded object has been reached in the scan in the direction of the arrow A1.
  • step S185 determines in step S185 whether the current X-axis coordinate value is equal to or less than the embedded object X coordinate value. This detects whether or not the current X-coordinate value of the 1-line RF data has reached the X-coordinate value x1 of the detected position of the embedded object in the state of scanning in the arrow A2 direction. In this way, in steps S183 to S185, it is possible to detect whether the X coordinate value x1 of the detected position of the embedded object has been reached in both the scanning in the arrow A1 direction and the scanning in the arrow A2 direction.
  • the range setting unit 71 sets the position shift determination range. Specifically, the range setting unit 71 sets the range S shown in FIG. 23(b) and the range T shown in FIG. 23(c). It should be noted that the range S and the range T can be appropriately changed within a range where the positional deviation can be determined, and are not particularly limited.
  • step S187 the positional deviation amount acquisition process is performed by the difference calculation unit 72 and the counting unit 73.
  • FIG. 31 is a flowchart showing the positional shift amount acquisition processing.
  • First, in step S191, the counting unit 73 clears the comparison count to zero.
  • Next, in step S192, the difference calculation unit 72 sets the position shift search counter to the position shift search start position. For example, the X coordinate value X1 shown in FIGS. 25(a) and 25(b) is substituted into the position shift search counter.
  • Next, in step S193, the difference calculation unit 72 determines whether or not the position shift search counter is smaller than the search end position.
  • The search end position can be, for example, the X coordinate value X2 shown in FIGS. 25(a) and 25(b). That is, in steps S192 and S193, it is determined whether the difference calculation has been performed over the X coordinate values X1 to X2 (range T); when the value of the position shift search counter reaches the search end position, the positional deviation amount acquisition process ends.
  • Next, the difference calculation unit 72 determines in step S194 whether or not the latest 1-line data exists at the position shift search counter position.
  • That is, the difference calculation unit 72 detects, for example, whether the latest 1-line RF data exists at the X coordinate value X1. If the latest 1-line RF data exists, the difference calculation unit 72 sets the 1-line counter to zero in step S195.
  • In step S196, the difference calculation unit 72 determines whether or not the 1-line counter is smaller than the 1-line size. Here, it is detected whether the difference calculation has been performed for all the data of one line.
  • In step S197, the difference calculation unit 72 calculates the absolute value of the difference between the signal intensity of the current 1-line RF data at the position shift search counter (X1) and 1-line counter (zero), and the signal intensity of the comparison reference data at the same position shift search counter (X1) and the same 1-line counter (zero).
  • If the absolute value of the difference exceeds the positional deviation comparison value, in step S199 the counting unit 73 increments the comparison count by +1. That is, the absolute value of the difference between the intensity signal at the point s1 and the intensity signal at the point t1 shown in FIGS. 25A and 25B is calculated, and it is determined whether or not that absolute value is within the positional deviation comparison value.
  • Next, in step S200, the difference calculation unit 72 increments the 1-line counter by +1. Control proceeds from step S200 back to step S196, and when the 1-line counter is still smaller than the 1-line size, step S197 is performed for the next depth position. That is, the difference calculation unit 72 calculates the absolute value of the difference between the intensity signal at the point s2 and the intensity signal at the point t2 shown in FIGS. 25(a) and 25(b), and determines whether that absolute value is within the positional deviation comparison value.
  • When the calculation of the absolute value of the difference and the comparison with the positional deviation comparison value have been completed for all the RF data of one line, control proceeds from step S196 to step S198, and the difference calculation unit 72 increments the position shift search counter by +1. Thereby, the calculation of the absolute value of the difference and the comparison with the positional deviation comparison value (first threshold value) are performed for the 1-line data at the X coordinate value next to the X coordinate value X1. As a result, the differences in signal intensity at all points in the range S and the range T are calculated, and the number of comparisons in which the absolute value of the difference exceeds the positional deviation comparison value is counted. Note that, in step S194, if the latest 1-line RF data does not exist at the position shift search counter position, control proceeds to step S198.
  • In step S188 of FIG. 30, the determination unit 74 determines whether or not the comparison count is larger than the positional deviation comparison number (an example of the second threshold value); if it is larger, the determination unit 74 determines that a positional deviation has occurred. When it is determined that a positional deviation has occurred, the positional deviation processing is performed in step S189.
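The loop of steps S191 to S200 together with the determination in step S188 can be sketched in Python as follows. This is a hedged illustration, not the patent's implementation: the function name `positional_deviation`, the data layout (a dict mapping each X coordinate to one line of signal intensities), and the concrete threshold values are assumptions.

```python
def positional_deviation(latest, reference, x_start, x_end,
                         first_threshold, second_threshold):
    """Compare the latest RF data (range T) with the comparison reference
    data (range S) point by point, count how many points differ by more
    than first_threshold, and report a positional deviation when that
    count exceeds second_threshold.

    latest/reference: dict mapping an X coordinate to one line of signal
    intensities (list index = depth position / sequence number)."""
    comparison_count = 0                       # steps S191, S199
    for x in range(x_start, x_end):            # steps S192-S193: sweep range
        if x not in latest:                    # step S194: no latest line here
            continue
        line_new, line_ref = latest[x], reference[x]
        for depth in range(len(line_new)):     # steps S195-S197: whole line
            diff = abs(line_new[depth] - line_ref[depth])
            if diff > first_threshold:         # positional deviation comparison value
                comparison_count += 1          # step S199
    # step S188: deviation when the count exceeds the second threshold
    return comparison_count > second_threshold
```

Here the "positional deviation comparison value" plays the role of `first_threshold` and the "positional deviation comparison number" the role of `second_threshold`.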
  • FIG. 32 is a flowchart showing the processing at the time of displacement.
  • First, in step S110, the embedded object position erasing unit 75 erases the positions of the embedded object detected so far.
  • Next, in step S111, the latest RF data is registered in the RF data management unit 62 as comparison reference data.
  • FIG. 33 is a flowchart showing the embedded object detection processing.
  • First, in step S201, the preprocessing unit 23 performs preprocessing before the buried object determination.
  • Next, in step S202, the embedded object determination unit 24 performs the embedded object determination process.
  • Then, in step S203, the result (the position of the embedded object) determined by the embedded object determination unit 24 is registered by the determination result registration unit 25 and displayed on the display unit 8 by the display control unit 66.
  • Hereinafter, the processing in each step will be described in detail.
  • FIG. 34 is a flowchart showing the preprocessing.
  • First, in step S21, the gain adjusting unit 31 adjusts the gain of the RF data for one line.
  • Next, in step S22, the difference processing unit 32 calculates the difference from the reference value, and a change in the RF data is extracted.
  • Next, in step S23, the moving average processing unit 33 performs moving average processing on the difference-processed RF data for one line. For example, the moving average processing can be performed using an 8-point average.
  • In step S24, the primary differential processing unit 34 performs primary differential processing on the moving-averaged difference result, and determines whether the difference between adjacent data in the depth direction is positive (increasing) or negative (decreasing).
  • In step S25, the peak detection unit 35 detects the peak of the signal intensity using the result of the primary differential processing.
  • FIG. 35 is a flowchart showing the gain adjustment processing.
  • First, in step S31, the gain adjustment unit 31 selects the reception data of sequence number 1 from the RF data received by the reception unit 61.
  • In step S33, it is determined whether or not the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S31, the sequence number is incremented by 1, and the reception data of sequence number 2 is selected. Then, the process of step S32 is performed on the data of sequence number 2.
  • In step S32, the signal strength data of each sequence number is multiplied by a predetermined scale factor. For example, the data of sequence No. 1, which has the shortest delay time in the 1-line RF data, is multiplied by a predetermined magnification first; the data of sequence No. 2, which has the next shortest delay time, is multiplied next; and the multiplication proceeds in sequence-number order until the maximum sequence number is reached.
  • The magnification is increased in the depth direction. For example, the magnification can be set to 1 for the data of pixels 1 to 25 from the shallow side, 2 for pixels 26 to 50, 3 for pixels 51 to 75, and so on in increasing order, up to 21 for pixels 500 to 511.
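The depth-dependent magnification schedule above (×1 for pixels 1 to 25, ×2 for pixels 26 to 50, and so on) can be sketched as follows, assuming a fixed 25-pixel band width and a cap of ×21; the function name and exact banding are illustrative assumptions.

```python
def adjust_gain(line, band=25, max_gain=21):
    """Multiply each depth sample by a magnification that grows with depth:
    pixels 1-25 (indices 0-24) -> x1, pixels 26-50 -> x2, ...,
    capped at x max_gain for the deepest samples."""
    return [value * min(index // band + 1, max_gain)
            for index, value in enumerate(line)]
```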
  • FIG. 36 is a flowchart showing the difference processing.
  • First, in step S41, the difference processing unit 32 selects the reception data of sequence number 1 from the gain-adjusted RF data. After the difference processing unit 32 performs the processing of steps S42 and S43 on the data of sequence number 1, control proceeds to step S44. In step S44, it is determined whether or not the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S41, the sequence number is incremented by one, and the reception data of sequence number 2 is selected. Then, the processes of steps S42 and S43 are performed on the data of sequence number 2.
  • In step S42, the difference processing unit 32 calculates the average value of the past gain-adjusted reception data up to this line (all reception data received in the past).
  • Next, in step S43, the difference processing unit 32 sets the calculated average value as the value of the reference point, and calculates the difference between that value and the reception data of the current line.
  • In step S44, the difference processing unit 32 determines whether or not the sequence number is the maximum value; if not, control returns to step S41 and the number is incremented by one, so that the reception data of sequence number 2 is selected. In this way, the numbers are incremented sequentially and steps S42 and S43 are repeated until the difference processing has been performed on all the reception data of one line.
  • That is, when the difference processing is performed on the m-th line, the average value of the signal intensities at the same predetermined depth position in the 1st to (m−1)-th lines is subtracted from the signal intensity at that depth position of the m-th line. Further, when the difference processing is performed on the next, (m+1)-th line, the average value of the 1st to m-th signal intensities is calculated, and the value of the reference point is updated.
  • As a result, a change in the RF data can be extracted, as in the image data shown in FIG. 9B.
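The running reference-point update described above can be sketched as follows. Assumptions: the reference for the m-th line is the per-depth average of lines 1 to m−1, the first line (which has no past data) yields a zero difference, and the class name is illustrative.

```python
class DifferenceProcessor:
    """Subtract, at each depth position, the average of all previously
    received (gain-adjusted) lines, then fold the current line into the
    running average used for the next call."""

    def __init__(self, line_size):
        self.sums = [0.0] * line_size   # per-depth running sums of past lines
        self.count = 0                  # number of lines received so far

    def process(self, line):
        if self.count == 0:
            diff = [0.0] * len(line)    # no reference yet for the first line
        else:
            diff = [v - s / self.count for v, s in zip(line, self.sums)]
        for i, v in enumerate(line):    # update the reference point
            self.sums[i] += v
        self.count += 1
        return diff
```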
  • FIG. 37 is a flowchart showing the first derivative processing of the difference result.
  • First, in step S51, the primary differential processing unit 34 selects the difference result of sequence number 1 from the difference results.
  • Next, in step S52, the primary differential processing unit 34 performs primary differential processing on the difference result.
  • The primary differential processing calculates the difference between the difference result data at a predetermined position and the difference result data at the next position in the depth direction. That is, the difference between sequence number 1 and the next sequence number 2 is calculated.
  • In step S53, it is determined whether or not the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S51, the number is incremented by one, and the data of sequence number 2 is selected. Then, the difference between sequence number 2 and sequence number 3 is calculated.
  • Step S52 is repeated until the primary differential processing has been performed on all the data for one line. That is, the primary differential processing for sequence number n is performed by subtracting the differential result data of sequence number n from the differential result data of sequence number n+1. As a result, the differences in the third column from the left in table 150 of FIG. 12 are calculated.
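The primary differential processing amounts to taking first differences along the depth direction, which can be sketched as follows (the function name is an assumption):

```python
def first_derivative(diff_line):
    """d[n] = diff_line[n+1] - diff_line[n]: positive means the value
    increases toward the next depth position, negative means it decreases."""
    return [nxt - cur for cur, nxt in zip(diff_line, diff_line[1:])]
```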
  • FIG. 38 is a flowchart showing the peak detection processing.
  • First, in step S71, the peak detection unit 35 selects sequence number 1, for which the primary differentiation processing has been performed.
  • In step S74, it is determined whether or not the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S71, the sequence number is incremented by 1, and the data of sequence number 2 is selected. Then, the processes of steps S72 and S73 are performed on the data of sequence number 2.
  • In step S72, the peak detection unit 35 determines whether or not the state of the previous sequence number n−1 after the primary differentiation is negative (−) and the state of the current sequence number n is positive (+).
  • If so, the peak detection unit 35 stores the n-th coordinate in step S73.
  • The coordinates can be indicated, in units of pixels, by a moving distance (also referred to as a line number) and a depth position. Thereby, as described above, for example, sequence number 34 in table 150 of FIG. 12 can be detected as a peak. This peak is a black (downward) peak.
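The sign-change test of steps S72 and S73 can be sketched as follows; the function name and list-based representation are assumptions.

```python
def detect_peaks(derivative):
    """Return the indices n where the first derivative changes from
    negative at n-1 to positive at n, i.e. the black (downward) peaks."""
    return [n for n in range(1, len(derivative))
            if derivative[n - 1] < 0 and derivative[n] > 0]
```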
  • FIG. 39 is a flowchart showing the embedded object determination processing.
  • First, the grouping unit 51 performs grouping processing on the peak detection results produced by the preprocessing unit 23.
  • Next, the shape determination unit 52 performs vertex detection processing.
  • Then, the buried object acquisition process is performed by the buried object position determination unit 53 or the buried object data integration unit 54.
  • FIG. 40 is a flowchart showing a grouping process of peak detection results.
  • First, in step S91, the grouping unit 51 sets the detection state to the "undetected" state.
  • All data acquired in the past are targeted. In step S92, the grouping unit 51 selects the oldest data as the processing target. Then, in step S103, the grouping unit 51 determines whether or not the processing has been performed on all the data acquired up to the currently acquired line. If the processing has not been completed, control returns to step S92 and the next oldest data becomes the processing target. In this way, the processes of steps S93 to S102 are performed sequentially, for example starting from sequence number 1 of the oldest line.
  • In step S93, the grouping unit 51 determines whether the detection state is "undetected". Since the initial state is "undetected", control proceeds to step S94. In step S94, the grouping unit 51 determines whether or not a position where a peak was detected exists within the predetermined range. When no peak is detected within the predetermined range, control proceeds to step S103.
  • The predetermined range can be set as appropriate; for example, it may be set for one line, or for the sequence numbers of one line.
  • In step S94, it is determined, in order from the oldest data, whether or not there is a position where a peak was detected. When such a position exists, in step S95 the grouping unit 51 sets the detection state to "detecting".
  • Next, in step S96, the grouping unit 51 stores the point where the peak was detected. This point is a coordinate in units of pixels, and can be indicated by, for example, a moving distance (also called a line number) and a depth position. This point corresponds to the start point in FIG. 13.
  • In the next iteration, in step S93, since the detection state is "detecting", control proceeds to step S97.
  • In step S97, the grouping unit 51 determines whether or not there is a position where a peak was detected within a predetermined range from the position stored in step S96.
  • This predetermined range can be set, for example, to within 5 pixels in the moving direction and within 5 pixels above and below, as described with reference to FIGS. 14A to 14D.
  • If such a position exists, the grouping unit 51 determines that the positions are continuous, and stores the position in step S98.
  • Next, in step S99, the grouping unit 51 compares the previous Y coordinate (depth position) with the current Y coordinate (depth position) (see FIG. 14).
  • In step S100, the grouping unit 51 stores the comparison result as positive (+) or negative (−). If the depth position is rising, positive (+) is stored; if the depth position is falling, negative (−) is stored.
  • In the following iterations as well, in step S93, since the detection state is "detecting", control proceeds to step S97.
  • In step S97, it is detected whether or not there is a peak-detected point within the predetermined range from the point previously stored in step S98; if there is, that point is stored in step S98.
  • Then, the comparison with the previous point is made in step S99, and the comparison result is stored in step S100. As a result, as shown in FIG. 13, consecutive points are sequentially grouped, and whether the change to the next point is rising or falling is also stored.
  • In step S97, when no peak is detected within the predetermined range, control proceeds to step S101.
  • In step S101, the grouping unit 51 determines that there are no more consecutive points, and stores the detection results up to that point. The last detected point corresponds to the end point in FIG. 13.
  • Next, in step S102, the grouping unit 51 sets the detection state back to the "undetected" state. Then, in step S103, when it is determined that the processing has been performed on all the data of the lines acquired in the past, the processing ends.
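The grouping of steps S91 to S103 can be sketched as follows. Assumptions: peak points arrive as (line, depth) pixel coordinates ordered from the oldest line, the continuity window is 5 pixels in the moving direction and 5 pixels above and below, '+'/'−' record whether the depth value increases or decreases to the next point, and ties are recorded as '−'.

```python
def group_peaks(points, dx=5, dy=5):
    """points: list of (line, depth) peak coordinates, ordered oldest first.
    Returns a list of groups; each group is (points, signs), where signs[i]
    is '+' if the depth rises from point i to point i+1 and '-' otherwise."""
    groups, current, signs = [], [], []
    for p in points:
        if not current:
            current = [p]                       # start point (steps S95-S96)
        elif (abs(p[0] - current[-1][0]) <= dx
              and abs(p[1] - current[-1][1]) <= dy):
            # continuous point (steps S98-S100)
            signs.append('+' if p[1] > current[-1][1] else '-')
            current.append(p)
        else:
            groups.append((current, signs))     # end point (steps S101-S102)
            current, signs = [p], []
    if current:
        groups.append((current, signs))
    return groups
```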
  • FIG. 41 is a flowchart showing the vertex detection processing.
  • The vertex detection processing is performed on all of the grouped data.
  • The shape determination unit 52 selects data sequentially from the start point side of the grouped data. For example, in step S121, the shape determination unit 52 sets the data of the change from the 1st to the 2nd point shown in FIGS. 15 and 16 as the processing target.
  • Next, in step S122, the shape determination unit 52 reads the result of the change from the 1st to the 2nd point.
  • In step S123, the shape determination unit 52 determines whether or not the result of the change is positive (+). For example, since the change from the 1st to the 2nd point shown in FIGS. 15 and 16 is positive (+), control proceeds to step S124.
  • In step S124, the shape determination unit 52 sets the − (minus) count to 0.
  • Next, in step S125, the shape determination unit 52 determines whether or not the + (plus) count is 0. Since the + (plus) count is 0, control proceeds to step S126.
  • In step S126, the shape determination unit 52 stores the 1st Y coordinate (depth position) and sets it as the start point.
  • Next, in step S127, the shape determination unit 52 adds +1 to the + (plus) count.
  • In step S138, the shape determination unit 52 determines whether or not the processing has been completed for all the grouped data; if not, control returns to step S121, and the next data (the change from the 2nd to the 3rd point) is selected as the processing target.
  • In step S122, the shape determination unit 52 reads the result of the change from the 2nd to the 3rd point.
  • In step S123, the shape determination unit 52 determines whether or not the result of the change is positive (+). Since the change from the 2nd to the 3rd point shown in FIGS. 15 and 16 is positive (+), control proceeds to step S124.
  • In step S125, the shape determination unit 52 determines whether or not the + (plus) count is 0. Since the + (plus) count is +1, control proceeds to step S127. Then, in step S127, the shape determination unit 52 adds +1 to the + (plus) count, making it +2.
  • In step S123, when the result of the change becomes negative (−), control proceeds to step S128.
  • If the + (plus) count is 5 or more, it means that condition 1, that the data rises continuously by 5 pixels or more in the Y-axis direction, is satisfied.
  • Next, in step S129, the shape determination unit 52 determines whether or not the − (minus) count is 0. Since the − (minus) count is 0, control proceeds to step S130.
  • In step S130, the shape determination unit 52 stores the previous depth position (also referred to as the Y coordinate). That is, the shape determination unit 52 stores the Y coordinate of the apex of the mountain (the point where the inclination changes from + to −). In the data of FIGS. 15 and 16, the 8th Y coordinate, which is the previous point in the change from the 8th to the 9th, is stored. Next, in step S131, the shape determination unit 52 adds +1 to the − (minus) count.
  • Next, in step S132, the shape determination unit 52 determines whether or not the − (minus) count is 5 or more.
  • If the − (minus) count is 5 or more, it means that condition 2, that the data falls continuously by 5 pixels or more in the Y-axis direction, is satisfied.
  • At this point, the − (minus) count is +1, so control proceeds to step S138, and the change from the 9th to the 10th point is selected as the processing target through step S121.
  • When the − (minus) count reaches 5 or more in step S132, the shape determination unit 52 calculates the difference between the Y coordinate of the start point and the Y coordinate of the vertex. With the data in FIGS. 15 and 16, the difference between the 1st Y coordinate and the 8th Y coordinate is calculated.
  • Next, in step S134, the shape determination unit 52 determines whether or not the calculated difference is 10 or more.
  • If the difference is 10 or more, condition 3, that the difference in the Y-axis direction is 10 pixels or more, is satisfied.
  • When conditions 1 to 3 are satisfied, the shape determination unit 52 determines that the group has a mountain shape, and determines that an embedded object exists.
  • Then, control proceeds to step S138.
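Conditions 1 to 3 applied to one grouped sequence of depth positions can be sketched as follows; the function name, the treatment of equal neighboring values as falling, and the return value are assumptions.

```python
def detect_vertex(depths, min_run=5, min_height=10):
    """depths: depth positions (pixels) of one grouped sequence of peaks.
    Returns the index of a 'mountain' vertex if the sequence rises for at
    least min_run consecutive changes (condition 1), then falls for at
    least min_run consecutive changes (condition 2), and the rise from the
    start point to the vertex is at least min_height pixels (condition 3);
    otherwise returns None."""
    plus = minus = 0
    start = vertex = None
    for i in range(1, len(depths)):
        if depths[i] > depths[i - 1]:        # rising change (+)
            if plus == 0:
                start = i - 1                # store the start point (step S126)
            plus += 1
            minus = 0                        # step S124
        else:                                # falling change (-)
            if minus == 0:
                vertex = i - 1               # store candidate vertex (step S130)
            minus += 1
            if (plus >= min_run and minus >= min_run
                    and depths[vertex] - depths[start] >= min_height):
                return vertex                # conditions 1-3 satisfied
    return None
```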
  • FIG. 42 is a flowchart showing the buried object acquisition processing.
  • First, in step S141, the buried object position determination unit 53 detects the pattern of the combination of vertices detected by the above-described vertex detection processing (see FIG. 17).
  • Next, the embedded object position determination unit 53 determines whether or not the detected pattern is other than C (that is, one of the patterns A1, A2, B1, and B2, which have two or more peaks). If the detected pattern is C, the embedded object acquisition process ends without setting the position of the embedded object. On the other hand, when the detected pattern is other than C, the buried object position determination unit 53 updates the buried object data (the pattern and the determined position of the buried object) according to the buried object data transition table shown in FIG., taking the pattern detected by the previous scan into account, and stores it in the RF data management unit 62.
  • Then, the display control unit 26 controls the display unit 8 to display a mark in the image data to indicate the position of the buried object acquired by the buried object acquisition process (see, for example, FIG. 18B). In the case of the example shown in FIG. 15, the display control unit 26 causes the display unit 8 to display an image with the mark at the position of the 8th data point, which is the vertex.
  • The present invention may also be implemented as a program that causes a computer to execute the embedded object detection method of the embedded object detection device 1, implemented according to the flowcharts shown in FIGS. 27 to 42.
  • One usage form of the program may be a mode in which the program is recorded on a recording medium, such as a ROM, readable by a computer, and operates in cooperation with the computer.
  • Another usage form of the program may be a mode in which the program is transmitted through a transmission medium such as the Internet, or through light, radio waves, or sound waves, is read by a computer, and operates in cooperation with the computer.
  • The computer described above is not limited to hardware such as a CPU (Central Processing Unit), and may include firmware, an OS, and peripheral devices.
  • The detection method described above may be realized by software or by hardware.
  • In the above embodiment, a reinforcing bar has been described as an example of the buried object, but the buried object is not limited to a reinforcing bar and may be a gas pipe, a water pipe, wood, or the like.
  • Likewise, the target object in which the buried object is provided is not limited to concrete.
  • In step S188 described above, it is determined whether or not the number of comparisons is larger than the positional deviation comparison number as a lower limit value, but an upper limit value may also be set.
  • In the above embodiment, the embedded object position erasing unit 75 erases the previously acquired embedded object data (the pattern and the determined position of the embedded object) when a positional deviation is determined, but the user may instead be notified that a positional deviation has occurred.
  • For example, the scanning error determination unit 65′ may further include a notification unit 76, and when a scanning error is detected, the user may be notified of the scanning error by a warning sound or a display.
  • In the above embodiment, the main control module 6 is provided in the main body 2 of the embedded object detection device 1, but the main control module 6 may be provided separately from the main body 2.
  • For example, the main control module 6 and the display unit 8 may be provided on a tablet or the like. Communication between the main body 2 and the tablet may be performed wirelessly or by wire.
  • In the above embodiment, the range S to be compared when determining a scanning error is set so as to include the detection position Pd of the embedded object, but the range S is not limited to this.
  • The range S does not have to include the detection position of the buried object; however, since a scanning error is easier to determine where the change in signal intensity is large, it is preferable to set the range S near the detection position Pd of the buried object.
  • The buried object detection device and the buried object detection method of the present invention have the effect of improving the accuracy of the position of the buried object, and are useful for detecting buried objects in concrete.


Abstract

An embedded object detection device (1) moves over the surface of a target object and detects embedded objects in the target object using data on reflected waves of electromagnetic waves radiated at the target object. The embedded object detection device comprises a reception part (61), an embedded object detection part (64), and a scanning error determination part (65). The reception part (61) receives the reflected-wave data at timings associated with the movement. The embedded object detection part (64) detects embedded objects using the reflected-wave data obtained during reciprocal movement along the same route over the surface of the target object. The scanning error determination part (65) compares the reflected-wave data in a range S for a first movement during which an embedded object was detected with the reflected-wave data in a range T, corresponding to range S, for a second movement after the first movement, and thereby determines whether scanning is being performed such that reciprocal movement occurs along the same route.

Description

Buried object detection device and buried object detection method
The present invention relates to a buried object detection device and a buried object detection method.
As a device for searching for an embedded object in concrete, an embedded object detection device is used that detects the embedded object from reflected waves of electromagnetic waves radiated toward the concrete while moving over the concrete surface (see, for example, Patent Document 1).
The presence or absence of an embedded object is detected by repeatedly operating the embedded object detection device so that it reciprocates between two points on the same path while scanning. By scanning between one point and the other, distance information is acquired by an encoder installed in the embedded object detection device, and the signal intensity can be displayed in color on a plane whose two axes are the path direction and the depth direction. When the position of the buried object is detected each time the scanning between these two points is repeated, the certainty of the position of the buried object can be improved based on the positions of the buried object detected over the multiple passes.
Japanese Patent No. 5789088
However, in the case of the conventional embedded object detection device, if the device deviates from the path during a further scan after the embedded object has been detected, the embedded object is detected at a position different from the position detected by the previous scan, and the certainty of the position of the embedded object is reduced.
An object of the present invention is to provide an embedded object detection device and an embedded object detection method capable of improving the certainty of the position of an embedded object.
An embedded object detection device according to a first aspect of the present invention detects an embedded object in a target object using data on reflected waves of electromagnetic waves radiated toward the target object while the device moves over the object's surface. The device includes a reception unit, an embedded object detection unit, and a scanning error determination unit. The reception unit receives the reflected-wave data at each timing associated with the movement. The embedded object detection unit detects the embedded object using the reflected-wave data obtained while reciprocating along the same path on the surface of the target object. The scanning error determination unit compares the reflected-wave data in a predetermined first range in a first movement in which the embedded object was detected with the reflected-wave data in a second range, corresponding to the first range, in a second movement after the first movement, and thereby determines whether or not scanning is being performed so as to reciprocate along the same path.
In this way, deviation from the path can be determined. For example, by clearing the position of the embedded object detected before the determination, the detection of the position of the embedded object can be redone from the beginning. Alternatively, by notifying the user that the device has deviated from the path, the user can erase the detected position of the buried object. Alternatively, only the position of the buried object detected after the device deviated from the path can be erased.
 以上のように、同じ道筋から外れたことを検出することによって、埋設物の位置の確からしさを向上することができる。
 また、例えば、第1範囲における反射波に関するデータと、第2範囲における反射波に関するデータの違いが、所定の誤差範囲内であれば、同様のデータを受信しているとして、同じ道筋を往復移動していると判断することができる。一方、第1範囲における反射波に関するデータと、第2範囲における反射波に関するデータの違いが、所定の誤差範囲より大きい場合には、異なったデータを受信しているとして、同じ道筋を往復移動していないと判断することができる。
As described above, it is possible to improve the certainty of the position of the buried object by detecting the deviation from the same path.
Further, for example, if the difference between the reflected-wave data in the first range and the reflected-wave data in the second range is within a predetermined error range, similar data is being received, and it can be judged that the device is reciprocating along the same path. Conversely, if the difference is larger than the predetermined error range, different data is being received, and it can be judged that the device is not reciprocating along the same path.
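The range comparison described above can be illustrated with a minimal Python sketch. The function name, the use of a mean absolute difference as the comparison metric, and the list representation of the reflected-wave data are assumptions for illustration only, not the disclosed implementation.

```python
def same_path(first_range, second_range, tolerance):
    """Judge whether two ranges of reflected-wave data agree within an
    error tolerance, i.e. whether the device is retracing the same path.

    first_range, second_range: equal-length sequences of signal values
    tolerance: permitted mean absolute difference (assumed metric)
    """
    assert len(first_range) == len(second_range)
    mean_abs_diff = sum(abs(a - b) for a, b in zip(first_range, second_range)) / len(first_range)
    return mean_abs_diff <= tolerance
```

For example, nearly identical data within the tolerance is judged as the same path, while a large discrepancy is judged as a scanning deviation.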
 第2の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、走査エラー判定部は、埋設物位置消去部を有する。埋設物位置消去部は、同じ道筋を往復移動していないと判定された場合に、判定された際の道筋の移動よりも前の道筋の移動によって検出された埋設物の検出位置を消去する。
 これにより、再度一から埋設物の位置の検出をやり直すことができるため、埋設物の位置の確からしさを向上することができる。
An embedded object detection device according to a second aspect of the present invention is the embedded object detection device according to the first aspect, wherein the scanning error determination unit includes an embedded object position erasing unit. When it is determined that the device is not reciprocating along the same path, the embedded object position erasing unit erases the detected positions of embedded objects found by path movements prior to that determination.
As a result, the position of the buried object can be detected again from the beginning, so that the accuracy of the position of the buried object can be improved.
　第3の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、走査エラー判定部は、報知部を有する。報知部は、同じ道筋を往復移動していないと判定された場合に、使用者に走査エラーが発生したことを報知する。 An embedded object detection device according to a third aspect of the present invention is the embedded object detection device according to the first aspect, wherein the scanning error determination unit has a notification unit. The notification unit notifies the user that a scanning error has occurred when it is determined that the device is not reciprocating along the same path.
 第4の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、第1範囲は、埋設物が検出された検出位置を含む範囲、または検出位置の近傍の範囲である。
 埋設物の位置では、反射波に関するデータに大きな変化が生じるため、第1範囲における反射波に関するデータと、第2範囲における反射波に関するデータを比較することで、同じ道筋を往復移動しているか否かを、より正確に検出することができる。
An embedded object detection device according to a fourth aspect of the present invention is the embedded object detection device according to the first aspect, wherein the first range is a range including the detection position at which the embedded object was detected, or a range in the vicinity of that detection position.
At the position of the buried object, a large change occurs in the reflected-wave data; therefore, by comparing the reflected-wave data in the first range with that in the second range, whether or not the device is reciprocating along the same path can be detected more accurately.
 第5の発明にかかる埋設物検出装置は、第1または第4の発明にかかる埋設物検出装置であって、第1範囲と第2範囲は、往復移動における折り返し位置から同じ距離である。
 ここで、距離は、移動に伴うタイミングから換算することによって求められる。折り返し位置から同じ距離の範囲では、同じ道筋をたどる場合、同じような反射波に関するデータを検出できるため、第1範囲と第2範囲の反射波に関するデータを比較することによって、同じ道筋を往復移動しているか否かを検出することができる。
An embedded object detection device according to a fifth aspect of the present invention is the embedded object detection device according to the first or fourth aspect, wherein the first range and the second range are at the same distance from the turn-back position of the reciprocating movement.
Here, the distance is obtained by conversion from the timing accompanying the movement. Within ranges at the same distance from the turn-back position, similar reflected-wave data should be observed when the same path is followed; therefore, by comparing the reflected-wave data of the first range and the second range, it can be detected whether or not the device is reciprocating along the same path.
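The correspondence between outbound and return samples at the same distance from the turn-back position can be sketched as follows. Indexing the samples by encoder count and mirroring them around a `turn_index` is an assumed concrete realization, not the disclosed implementation.

```python
def corresponding_index(turn_index, outbound_index):
    """Return the index on the return pass that lies at the same
    distance from the turn-back position as `outbound_index` does on
    the outbound pass (indices are encoder counts, an assumption)."""
    distance = turn_index - outbound_index  # counts before the turn
    return turn_index + distance            # same distance after the turn
```

With this mapping, the data at an outbound index and at its corresponding return index form the first-range/second-range pair to be compared.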
 第6の発明にかかる埋設物検出装置は、第4の発明にかかる埋設物検出装置であって、第1範囲は、検出位置を含み、第2の移動における検出位置の手前の範囲である。
 これによって、同じ道筋を通っていないことを出来るだけ速く判定することができる。
An embedded object detection device according to a sixth aspect of the present invention is the embedded object detection device according to the fourth aspect of the present invention, in which the first range includes a detection position and is a range before the detection position in the second movement.
This makes it possible to determine as quickly as possible that the device is not following the same path.
 第7の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、埋設物検出部は、信号強度ピーク検出部と、埋設物位置検出部と、を有する。信号強度ピーク検出部は、各々のタイミングにおける対象物の深さ方向の信号強度のピークを検出する。埋設物位置検出部は、各々のタイミングにおいて検出された信号強度のピークに基づいて埋設物の位置を検出する。
 これにより、信号強度に基づいて、埋設物の位置を検出することができる。
An embedded object detection apparatus according to a seventh aspect of the present invention is the embedded object detection apparatus according to the first aspect of the present invention, in which the embedded object detection unit includes a signal intensity peak detection unit and an embedded object position detection unit. The signal intensity peak detector detects the peak of the signal intensity in the depth direction of the object at each timing. The embedded object position detection unit detects the position of the embedded object based on the peak of the signal intensity detected at each timing.
Thereby, the position of the buried object can be detected based on the signal strength.
 第8の発明にかかる埋設物検出装置は、第7の発明にかかる埋設物検出装置であって、走査エラー判定部は、第1範囲におけるタイミング毎の深さ方向の深さ位置における信号強度と、第2範囲におけるタイミング毎の深さ方向の深さ位置における信号強度とを比較して、同じ道筋を往復移動するように走査されているか否かを判定する。
 例えば、第1範囲におけるタイミング毎の深さ位置における信号強度と、第2範囲におけるタイミング毎の深さ位置における信号強度の違いが、所定の誤差範囲内であれば、同様のデータを受信しているとして、同じ道筋を往復移動していると判断することができる。一方、第1範囲におけるタイミング毎の深さ位置における信号強度と、第2範囲におけるタイミング毎の深さ位置における信号強度の違いが、所定の誤差範囲より大きい場合には、異なったデータを受信しているとして、同じ道筋を往復移動していないと判断することができる。
An embedded object detection device according to an eighth aspect of the present invention is the embedded object detection device according to the seventh aspect, wherein the scanning error determination unit compares the signal strength at each depth position in the depth direction for each timing in the first range with the signal strength at each depth position in the depth direction for each timing in the second range, and determines whether or not the scanning reciprocates along the same path.
For example, if the difference between the signal strength at each depth position for each timing in the first range and that in the second range is within a predetermined error range, similar data is being received, and it can be judged that the device is reciprocating along the same path. Conversely, if the difference is larger than the predetermined error range, different data is being received, and it can be judged that the device is not reciprocating along the same path.
 第9の発明にかかる埋設物検出装置は、第8の発明にかかる埋設物検出装置であって、第1範囲は、タイミングと深さ位置で特定される複数の第1位置を含む。第2範囲は、タイミングと深さ位置で特定される複数の第2位置を含む。走査エラー判定部は、差演算部と、カウント部と、判定部と、を有する。差演算部は、各々の第1位置の信号強度と、各々の第1位置に対応する第2位置の信号強度の差を演算する。カウント部は、第1閾値以上の差の数をカウントする。判定部は、カウント数が、第2閾値以上であるか否かを判定し、第2閾値以上の場合に走査エラーと判定する。
 これにより、第1範囲における信号強度と、第2範囲における信号強度の違いが、所定の誤差範囲内であれば、同様のデータを受信しているとして、同じ道筋を往復移動していると判断することができる。
An embedded object detection device according to a ninth aspect of the present invention is the embedded object detection device according to the eighth aspect, wherein the first range includes a plurality of first positions specified by timing and depth position, and the second range includes a plurality of second positions specified by timing and depth position. The scanning error determination unit has a difference calculation unit, a counting unit, and a determination unit. The difference calculation unit calculates the difference between the signal strength at each first position and the signal strength at the second position corresponding to that first position. The counting unit counts the number of differences equal to or larger than a first threshold. The determination unit determines whether or not the count is equal to or larger than a second threshold, and determines that a scanning error has occurred when it is.
As a result, if the difference between the signal strengths in the first range and in the second range is within the predetermined error range, similar data is being received, and it can be judged that the device is reciprocating along the same path.
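The difference calculation, counting, and determination of the ninth aspect can be sketched directly. The function name, the grid-of-lists data layout, and the parameter names are assumptions for illustration.

```python
def is_scan_error(first_grid, second_grid, amp_threshold, count_threshold):
    """Declare a scanning error when the number of (timing, depth)
    cells whose signal-strength difference is at least `amp_threshold`
    (the first threshold) reaches `count_threshold` (the second
    threshold).  Grids are lists of per-timing depth profiles."""
    count = 0
    for profile1, profile2 in zip(first_grid, second_grid):
        for s1, s2 in zip(profile1, profile2):
            if abs(s1 - s2) >= amp_threshold:
                count += 1  # counting unit: differences >= first threshold
    return count >= count_threshold  # determination unit: second threshold
```

Here three of the four cells differ by at least 3, so with a second threshold of 2 a scanning error is declared, while with a second threshold of 4 it is not.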
 第10の発明にかかる埋設物検出装置は、第7の発明にかかる埋設物検出装置であって、埋設物検出部は、差分処理部を更に備える。差分処理部は、深さ方向またはその反対の表面方向において、所定の深さ位置の信号強度の、その前の深さ位置の信号強度からの変化の差分を検出する。
 これにより、埋設物の影響による信号強度の変化を抽出することができる。
An embedded object detection device according to a tenth aspect of the present invention is the embedded object detection device according to the seventh aspect, wherein the embedded object detection unit further includes a difference processing unit. The difference processing unit detects, in the depth direction or in the opposite surface direction, the difference of the signal strength at a given depth position from the signal strength at the preceding depth position.
This makes it possible to extract a change in signal intensity due to the influence of the buried object.
 第11の発明にかかる埋設物検出装置は、第10の発明にかかる埋設物検出装置であって、信号強度ピーク検出部は、差分が減少から増加に変化する深さ位置および差分が増加から減少に変化する深さ位置の少なくとも一方を信号強度のピークとして検出する。
 このように、信号強度の増減のデータを用いて信号強度のピークを検出することができる。
An embedded object detection device according to an eleventh aspect of the present invention is the embedded object detection device according to the tenth aspect, wherein the signal intensity peak detection unit detects, as a peak of the signal intensity, at least one of a depth position at which the difference changes from decreasing to increasing and a depth position at which the difference changes from increasing to decreasing.
In this way, the peak of the signal strength can be detected using the data of the increase and decrease of the signal strength.
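Detecting peaks from sign changes of the first difference can be sketched as below, operating on the output of a difference step like the one in the tenth aspect. The function name and return convention are assumptions for illustration.

```python
def find_peaks(diffs):
    """Given first differences diffs[i] = s[i+1] - s[i] of the
    depth-direction signal s, return (maxima, minima): indices i of s
    where the difference changes from positive to negative (a local
    maximum) or from negative to positive (a local minimum)."""
    maxima, minima = [], []
    for i in range(1, len(diffs)):
        if diffs[i - 1] > 0 and diffs[i] < 0:
            maxima.append(i)  # increase -> decrease
        elif diffs[i - 1] < 0 and diffs[i] > 0:
            minima.append(i)  # decrease -> increase
    return maxima, minima
```

For a signal [0, 2, 1, 3] the differences are [2, -1, 2], giving a local maximum at index 1 and a local minimum at index 2.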
 第12の発明にかかる埋設物検出装置は、第11の発明にかかる埋設物検出装置であって、埋設物検出部は、グルーピング部と、深さ位置ピーク検出部と、を有する。グルーピング部は、各々のタイミングで検出された信号強度のピークにおける深さ位置のうち、移動方向において所定間隔以内で連続している複数の信号強度のピークにおける深さ位置を、1つのグループとする。深さ位置ピーク検出部は、移動方向と深さ方向における平面において、グループの深さ位置のピークを検出する。
 このように、連続しているピークを1つのグループとし、そのグループにおける深さ位置のピークを検出することにより、埋設物の位置を検出することができる。
An embedded object detection device according to a twelfth aspect of the present invention is the embedded object detection device according to the eleventh aspect, wherein the embedded object detection unit includes a grouping unit and a depth position peak detection unit. The grouping unit groups, as one group, those depth positions of the signal-intensity peaks detected at the respective timings that are consecutive within a predetermined interval in the movement direction. The depth position peak detection unit detects the peak of the depth positions of a group in the plane defined by the movement direction and the depth direction.
In this way, the position of the buried object can be detected by defining the continuous peaks as one group and detecting the peak at the depth position in the group.
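The grouping and the depth-position peak detection can be sketched as follows. Representing each peak as a (timing, depth) pair, grouping by a maximum timing gap, and taking the shallowest point of a group as its depth-position peak (the apex of the arch traced over an embedded object) are assumptions for illustration.

```python
def group_peaks(peaks, max_gap):
    """Group (timing, depth) peak points whose timing indices are
    consecutive within `max_gap`; `peaks` must be sorted by timing."""
    groups = []
    for t, d in peaks:
        if groups and t - groups[-1][-1][0] <= max_gap:
            groups[-1].append((t, d))   # continue the current group
        else:
            groups.append([(t, d)])     # start a new group
    return groups

def group_apex(group):
    """Depth-position peak of a group: the shallowest (minimum-depth)
    point, taken here as the apex of the group (an assumption)."""
    return min(group, key=lambda p: p[1])
```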
 第13の発明にかかる埋設物検出装置は、第12の発明にかかる埋設物検出装置であって、埋設物検出部は、埋設物位置決定部を更に有する。埋設物位置決定部は、検出された深さ位置のピークに基づいて、埋設物の位置を設定する。
 これにより、埋設物の位置を決定することができる。
An embedded object detection apparatus according to a thirteenth invention is the embedded object detection apparatus according to the twelfth invention, in which the embedded object detection unit further includes an embedded object position determination unit. The buried object position determination unit sets the position of the buried object based on the detected peak of the depth position.
Thereby, the position of the buried object can be determined.
 第14の発明にかかる埋設物検出装置は、第13の発明にかかる埋設物検出装置であって、埋設物位置決定部は、所定範囲内において、減少から増加に変化する信号強度のピークのグループにおける深さ位置のピークと、増加から減少に変化する信号強度のピークのグループにおける深さ位置のピークのいずれか一方が検出され、そのピークから所定範囲内に他の深さ位置のピークが存在しない場合、検出されたピークを埋設物の位置として設定しない。
　これにより、検出された1つのピークのみを誤って埋設物の位置として設定することを防ぐことができる。
An embedded object detection device according to a fourteenth aspect of the present invention is the embedded object detection device according to the thirteenth aspect, wherein, when only one of a depth-position peak of a group of signal-intensity peaks changing from decreasing to increasing and a depth-position peak of a group of signal-intensity peaks changing from increasing to decreasing is detected, and no other depth-position peak exists within a predetermined range of that peak, the embedded object position determination unit does not set the detected peak as the position of the embedded object.
This prevents a single detected peak from being erroneously set as the position of an embedded object.
 第15の発明にかかる埋設物検出装置は、第13の発明にかかる埋設物検出装置であって、埋設物位置決定部は、減少から増加に変化する信号強度のピークのグループにおける深さ位置のピークと、増加から減少に変化する信号強度のピークのグループにおける深さ位置のピークが、所定範囲内で隣り合って存在する場合、浅いほうのピークの位置を埋設物の位置に設定する。
 これにより、浅い方のピークの位置を埋設物の位置とすることができる。
An embedded object detection device according to a fifteenth aspect of the present invention is the embedded object detection device according to the thirteenth aspect, wherein, when a depth-position peak of a group of signal-intensity peaks changing from decreasing to increasing and a depth-position peak of a group of signal-intensity peaks changing from increasing to decreasing exist adjacent to each other within a predetermined range, the embedded object position determination unit sets the position of the shallower peak as the position of the embedded object.
This allows the position of the shallower peak to be the position of the buried object.
 第16の発明にかかる埋設物検出装置は、第13の発明にかかる埋設物検出装置であって、埋設物位置決定部は、減少から増加に変化する信号強度のピークのグループにおける前記深さ位置のピークと、増加から減少に変化する信号強度のピークのグループにおける前記深さ位置のピークが、交互に3つ以上所定範囲内で隣り合って存在する場合、最も浅いピークの位置を埋設物の位置に設定する。
 これにより、最も浅いピークの位置を埋設物の位置とすることができる。
An embedded object detection device according to a sixteenth aspect of the present invention is the embedded object detection device according to the thirteenth aspect, wherein, when three or more depth-position peaks of groups of signal-intensity peaks changing from decreasing to increasing and of groups changing from increasing to decreasing exist alternately adjacent to one another within a predetermined range, the embedded object position determination unit sets the position of the shallowest peak as the position of the embedded object.
This allows the position of the shallowest peak to be the position of the buried object.
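The selection rules of the fourteenth to sixteenth aspects can be combined into one minimal sketch: an isolated peak is rejected, and among mutually adjacent peaks the shallowest is chosen. The function name, the timing-distance adjacency test, and the (timing, depth) representation are assumptions for illustration.

```python
def decide_position(apexes, neighborhood):
    """Apply the selection rules to group apexes given as
    (timing, depth) pairs: an apex with no neighbour within
    `neighborhood` timing units is ignored (isolated peak); otherwise
    the shallowest of the adjacent apexes is returned."""
    adjacent = [a for a in apexes
                if any(a is not b and abs(a[0] - b[0]) <= neighborhood
                       for b in apexes)]
    if not adjacent:
        return None                             # isolated peak: not set
    return min(adjacent, key=lambda p: p[1])    # shallowest adjacent peak
```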
 第17の発明にかかる埋設物検出装置は、第13の発明にかかる埋設物検出装置であって、埋設物位置決定部は、往復移動において、隣り合う深さ位置のピークの数が多いときの埋設物の位置を採用する。
 これにより、埋設物の位置の精度を向上することができる。
An embedded object detection device according to a seventeenth aspect of the present invention is the embedded object detection device according to the thirteenth aspect, wherein, in the reciprocating movement, the embedded object position determination unit adopts the position of the embedded object obtained when the number of adjacent depth-position peaks is larger.
Thereby, the accuracy of the position of the buried object can be improved.
 第18の発明にかかる埋設物検出装置は、第12の発明にかかる埋設物検出装置であって、深さ位置ピーク検出部は、移動方向と深さ方向における平面において、グループが所定形状となっている場合、埋設物が存在すると判定し、所定形状ではない場合、埋設物が存在しないと判定する。
 このように、移動方向に連続してピークが続き、更に、その形状が所定形状である場合に、埋設物が存在していると判定することができる。なお、所定形状は、例えば山形状が挙げられる。
An embedded object detection device according to an eighteenth aspect of the present invention is the embedded object detection device according to the twelfth aspect, wherein the depth position peak detection unit determines that an embedded object exists when a group has a predetermined shape in the plane defined by the movement direction and the depth direction, and determines that no embedded object exists when the group does not have the predetermined shape.
In this way, when the peak continues in the moving direction and the shape is a predetermined shape, it can be determined that the buried object exists. The predetermined shape may be, for example, a mountain shape.
 第19の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、埋設物検出部は、平均化処理部を更に有する。平均化処理部は、往復移動の間において、各々のタイミングにおける対象物の深さ方向の信号強度を、それより前に受信した各々のタイミングにおける対象物の深さ方向の信号強度と平均化する。
 これにより、同じ道筋を複数回走査することで、埋設物の位置の確からしさを向上することができる。
An embedded object detection device according to a nineteenth aspect of the present invention is the embedded object detection device according to the first aspect, wherein the embedded object detection unit further includes an averaging processing unit. During the reciprocating movement, the averaging processing unit averages the depth-direction signal strength of the object at each timing with the depth-direction signal strengths received earlier at the corresponding timings.
As a result, the certainty of the position of the buried object can be improved by scanning the same path a plurality of times.
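The averaging over repeated passes can be sketched as an incremental mean over depth profiles. The function name, the incremental-mean formulation, and the list representation are assumptions for illustration.

```python
def update_average(avg, new_profile, n):
    """Fold the n-th depth profile (1-based) into the running average
    of the previous n-1 profiles for the same timing position.
    `avg` is None for the first pass."""
    if avg is None:
        return list(new_profile)
    # incremental mean: avg + (sample - avg) / n, element-wise
    return [a + (s - a) / n for a, s in zip(avg, new_profile)]
```

Averaging the profiles of successive passes suppresses uncorrelated noise while reinforcing reflections that appear at the same depth on every pass.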
 第20の発明にかかる埋設物検出装置は、第1の発明にかかる埋設物検出装置であって、表示部を更に備える。表示部は、往復移動において、移動ごとに、移動方向と深さ方向の平面において信号強度を示す表示を行う。表示部は、埋設物が検出された場合には、表示とともに埋設物の検出位置を示す。
 このように、埋設物の検出位置を表示することにより、ユーザーは容易に埋設物の位置を認識することができる。
An embedded object detection device according to a twentieth aspect of the present invention is the embedded object detection device according to the first aspect, further including a display unit. In the reciprocating movement, the display unit displays, for each movement, the signal strength on the plane defined by the movement direction and the depth direction. When an embedded object is detected, the display unit shows the detected position of the embedded object together with this display.
By displaying the detected position of the embedded object in this way, the user can easily recognize the position of the embedded object.
 第21の発明にかかる埋設物検出装置は、第20の発明にかかる埋設物検出装置であって、表示部は、同じ道筋を往復移動していないと判定された場合に、埋設物の検出位置を消去する。
 走査エラーを検出した場合に埋設物の検出位置を消去することにより、走査エラーを検出したときの移動では存在しない位置に埋設物が表示されることを防ぐことができ、使用者が混乱しない。
An embedded object detection device according to a twenty-first aspect of the present invention is the embedded object detection device according to the twentieth aspect, wherein the display unit erases the detected position of the embedded object when it is determined that the device is not reciprocating along the same path.
By erasing the detected position of the embedded object when a scanning error is detected, the embedded object is prevented from being displayed at a position where it does not exist in the movement in which the scanning error was detected, so the user is not confused.
　第22の発明にかかる埋設物検出方法は、対象物の表面を移動しながら対象物に向かって放射した電磁波の反射波に関するデータを用いて対象物内の埋設物を検出する埋設物検出方法であって、受信ステップと、埋設物検出ステップと、走査エラー判定ステップと、を備える。受信ステップは、移動に伴ったタイミング毎に反射波に関するデータを受信する。埋設物検出ステップは、対象物の表面における同じ道筋を往復移動する際の反射波に関するデータを用いて埋設物を検出する。走査エラー判定ステップは、埋設物が検出された際の第1の移動における所定の第1範囲の反射波に関するデータと、第1の移動よりも後の第2の移動における第1範囲に対応する第2範囲の反射波に関するデータと、を比較して、同じ道筋を往復移動するように走査されているか否かを判定する。
　このように、同じ道筋から外れたことを判定できるため、たとえば、判定する以前に検出された埋設物の位置をクリアすることによって、再度一から埋設物の位置の検出をやり直すことができる。若しくは、同じ道筋から外れたことを使用者に報知することによって、使用者が検出された埋設物の位置を消去することができる。若しくは、同じ道筋から外れたときに検出された埋設物の位置のみ消去することができる。
A buried object detection method according to a twenty-second aspect of the present invention detects a buried object in a target object using data on reflected waves of electromagnetic waves radiated toward the object while moving along its surface, and includes a receiving step, a buried object detection step, and a scanning error determination step. In the receiving step, the reflected-wave data is received at each timing accompanying the movement. In the buried object detection step, the buried object is detected using the reflected-wave data obtained while reciprocating along the same path on the surface of the object. In the scanning error determination step, the reflected-wave data of a predetermined first range in a first movement in which the buried object was detected is compared with the reflected-wave data of a second range, corresponding to the first range, in a second movement after the first movement, to determine whether or not the scanning reciprocates along the same path.
In this way, a deviation from the same path can be determined. For example, by clearing the positions of embedded objects detected before the determination, detection of the embedded object positions can be restarted from the beginning. Alternatively, by notifying the user of the deviation from the path, the user can erase the detected positions of the embedded objects. Alternatively, only the positions of embedded objects detected when the device deviated from the path can be erased.
 以上のように、同じ道筋から外れたことを検出することによって、埋設物の位置の確からしさを向上することができる。
 また、例えば、第1範囲における反射波に関するデータと、第2範囲における反射波に関するデータの違いが、所定の誤差範囲内であれば、同様のデータを受信しているとして、同じ道筋を往復移動していると判断することができる。一方、第1範囲における反射波に関するデータと、第2範囲における反射波に関するデータの違いが、所定の誤差範囲より大きい場合には、異なったデータを受信しているとして、同じ道筋を往復移動していないと判断することができる。
As described above, it is possible to improve the certainty of the position of the buried object by detecting the deviation from the same path.
Further, for example, if the difference between the reflected-wave data in the first range and the reflected-wave data in the second range is within a predetermined error range, similar data is being received, and it can be judged that the device is reciprocating along the same path. Conversely, if the difference is larger than the predetermined error range, different data is being received, and it can be judged that the device is not reciprocating along the same path.
 第23の発明にかかる埋設物検出方法は、第22の発明にかかる埋設物検出方法であって、埋設物位置消去ステップを更に備える。埋設物位置消去ステップは、同じ道筋を往復移動していないと判定された場合に、判定された際の道筋の移動よりも前の道筋の移動によって検出された埋設物の検出位置を消去する。
 これにより、再度一から埋設物の位置の検出をやり直すことができるため、埋設物の位置の確からしさを向上することができる。
A buried object detection method according to a twenty-third aspect of the present invention is the buried object detection method according to the twenty-second aspect, further including an embedded object position erasing step. In the embedded object position erasing step, when it is determined that the device is not reciprocating along the same path, the detected positions of embedded objects found by path movements prior to that determination are erased.
As a result, the position of the buried object can be detected again from the beginning, so that the accuracy of the position of the buried object can be improved.
(発明の効果)
 本発明によれば、埋設物の位置の確からしさを向上することが可能な埋設物検出装置および埋設物検出方法を提供することができる。
(Effect of the invention)
According to the present invention, it is possible to provide a buried object detection apparatus and a buried object detection method capable of improving the certainty of the position of a buried object.
本発明にかかる実施の形態における埋設物検出装置の構成を示す斜視図。A perspective view showing the configuration of an embedded object detection device according to an embodiment of the present invention.
図1の埋設物検出装置の構成を示すブロック図。A block diagram showing the configuration of the embedded object detection device of FIG. 1.
図2のパルス制御モジュールの構成を示すブロック図。A block diagram showing the configuration of the pulse control module of FIG. 2.
図3のMPUが取得する反射波のデータを示す図。A diagram showing reflected-wave data acquired by the MPU of FIG. 3.
図2のメイン制御モジュールの構成を示すブロック図。A block diagram showing the configuration of the main control module of FIG. 2.
(a)~(c)図5の平均化処理部による処理を説明するための図。(a) to (c) are diagrams for explaining the processing by the averaging processing unit of FIG. 5.
図5の埋設物検出部の構成を示すブロック図。A block diagram showing the configuration of the embedded object detection unit of FIG. 5.
(a)ゲイン調整を行う前の画像データを示す図、(b)ゲイン調整処理後の画像データを示す図。(a) is a diagram showing image data before gain adjustment; (b) is a diagram showing image data after gain adjustment.
(a)差分処理を行う前の画像データを示す図、(b)差分処理後の画像データを示す図。(a) is a diagram showing image data before difference processing; (b) is a diagram showing image data after difference processing.
(a)移動平均処理を行った後の画像データを示す図、(b)図10(a)のラインL1のRFデータの信号強度を示す図。(a) is a diagram showing image data after moving-average processing; (b) is a diagram showing the signal strength of the RF data along line L1 of FIG. 10(a).
図10(b)のP10~P3の間の拡大図。An enlarged view between P10 and P3 of FIG. 10(b).
一次微分処理部による処理を説明するための図。A diagram for explaining the processing by the first-order differentiation processing unit.
前処理後の画像データを示す図。A diagram showing image data after preprocessing.
(a)~(d)グルーピング部による処理を説明するための図。(a) to (d) are diagrams for explaining the processing by the grouping unit.
グルーピングされた複数のピークの位置を示す図。A diagram showing the positions of a plurality of grouped peaks.
図15に示すNo.1~No.15までの隣り合うピークの位置の変化を示す図。A diagram showing the change in the positions of the adjacent peaks No. 1 to No. 15 shown in FIG. 15.
検出されたピークのパターンを示す図。A diagram showing patterns of detected peaks.
(a)3つの隣り合うピークが検出された画像データを示す図、(b)決定された埋設物の位置の例を示す図。(a) is a diagram showing image data in which three adjacent peaks were detected; (b) is a diagram showing an example of the determined position of the embedded object.
(a)2つの隣り合うピークが検出された画像データを示す図、(b)決定された埋設物の位置の例を示す図。(a) is a diagram showing image data in which two adjacent peaks were detected; (b) is a diagram showing an example of the determined position of the embedded object.
(a)隣り合うピークが検出されずピークが1つ検出された画像データを示す図、(b)埋設物の位置が決定されていない状態を示す図。(a) is a diagram showing image data in which a single peak was detected with no adjacent peak; (b) is a diagram showing a state in which the position of the embedded object has not been determined.
埋設物データ遷移表を示す図。A diagram showing an embedded object data transition table.
(a)~(c)図21の埋設物データ遷移表に従った埋設物データの遷移を示す図。(a) to (c) are diagrams showing transitions of embedded object data according to the embedded object data transition table of FIG. 21.
埋設物検出装置が同じ道筋を往復移動していない状態(位置ずれしている状態)を説明するための図。A diagram for explaining a state in which the embedded object detection device is not reciprocating along the same path (a misaligned state).
図5の走査エラー判定部の構成を示すブロック図。A block diagram showing the configuration of the scanning error determination unit of FIG. 5.
(a)、(b)図24の範囲設定部による範囲の設定を説明するための図。(a) and (b) are diagrams for explaining range setting by the range setting unit of FIG. 24.
(a)、(b)図24の範囲設定部による範囲の設定を説明するための図。(a) and (b) are diagrams for explaining range setting by the range setting unit of FIG. 24.
埋設物検出装置1の処理を示すフロー図。A flowchart showing the processing of the embedded object detection device 1.
図27のデータ取得処理を示すフロー図。A flowchart showing the data acquisition processing of FIG. 27.
図27の波形データ平均化処理を示すフロー図。A flowchart showing the waveform data averaging processing of FIG. 27.
図27の走査エラー判定処理を示すフロー図。A flowchart showing the scanning error determination processing of FIG. 27.
図30の位置ずれ量取得処理を示すフロー図。A flowchart showing the positional deviation amount acquisition processing of FIG. 30.
図30の位置ずれ時処理を示すフロー図。A flowchart showing the processing at the time of positional deviation of FIG. 30.
図27の埋設物検出処理を示すフロー図。A flowchart showing the embedded object detection processing of FIG. 27.
図33の前処理を示すフロー図。A flowchart showing the preprocessing of FIG. 33.
図33のゲイン調整処理を示すフロー図。A flowchart showing the gain adjustment processing of FIG. 33.
図33の差分処理を示すフロー図。A flowchart showing the difference processing of FIG. 33.
図33の差分結果の一次微分処理を示すフロー図。A flowchart showing the first-order differentiation processing of the difference result of FIG. 33.
図33のピーク検出処理を示すフロー図。A flowchart showing the peak detection processing of FIG. 33.
図33の埋設物判定処理を示すフロー図。A flowchart showing the embedded object determination processing of FIG. 33.
図39のグルーピング処理を示すフロー図。A flowchart showing the grouping processing of FIG. 39.
図39の頂点検出処理を示すフロー図。A flowchart showing the vertex detection processing of FIG. 39.
図39の埋設物取得処理を示すフロー図。A flowchart showing the embedded object acquisition processing of FIG. 39.
本発明の実施の形態の変形例における走査エラー判定部を示すブロック図。A block diagram showing a scanning error determination unit in a modification of the embodiment of the present invention.
 以下に、本発明の実施の形態に係る埋設物検出装置について図面に基づいて説明する。
 <構成>
 (埋設物検出装置1の概要)
 図1は、本発明に係る実施の形態における埋設物検出装置1をコンクリート100上に配置した状態を示す斜視図である。図2は、本実施の形態における埋設物検出装置1の概略構成を示すブロック図である。
An embedded object detection device according to an embodiment of the present invention will be described below with reference to the drawings.
<Structure>
(Outline of buried object detection device 1)
FIG. 1 is a perspective view showing a state in which an embedded object detection device 1 according to an embodiment of the present invention is placed on concrete 100. FIG. 2 is a block diagram showing a schematic configuration of the embedded object detection device 1 according to the present embodiment.
　本実施の形態の埋設物検出装置1は、コンクリート100等の対象物の表面100aを移動しながら電磁波をコンクリート100に放射し、その反射波を受信して解析することによって、コンクリート100内の埋設物101a、101b、101c、101dの位置を検出する。埋設物検出装置1は、同じ道筋を往復移動(矢印A1およびA2参照)することによって、埋設物101a、101b、101c、101dの位置の確からしさを高める。 The embedded object detection device 1 of the present embodiment radiates electromagnetic waves into the concrete 100 while moving along the surface 100a of a target object such as the concrete 100, and receives and analyzes the reflected waves, thereby detecting the positions of the embedded objects 101a, 101b, 101c, and 101d inside the concrete 100. The embedded object detection device 1 increases the certainty of the positions of the embedded objects 101a, 101b, 101c, and 101d by reciprocating along the same path (see arrows A1 and A2).
 図1では、埋設物101a、101b、101c、101dは、鉄筋であり、例えば、表面100aから順に20cm、15cm、10cm、5cmの深さ位置に埋設されている。深さ方向が矢印Bで示されており、表面方向が矢印Cで示されている。
 埋設物検出装置1は、本体部2と、把手3と、車輪4と、インパルス制御モジュール5と、メイン制御モジュール6と、エンコーダ7と、表示部8と、を有する。
In FIG. 1, the buried objects 101a, 101b, 101c, 101d are reinforcing bars, and are buried at depths of 20 cm, 15 cm, 10 cm, and 5 cm in order from the surface 100a, for example. The depth direction is indicated by arrow B, and the surface direction is indicated by arrow C.
The embedded object detection device 1 includes a main body 2, a handle 3, wheels 4, an impulse control module 5, a main control module 6, an encoder 7, and a display unit 8.
 本体部2の上面に把手3が設けられている。本体部2の下部に4つの車輪が回転自在に取り付けられている。作業者は、コンクリート100内部の埋設物を検出する際には、把手3を把持して車輪4を回転させながら埋設物検出装置1をコンクリート100の表面100a上で移動させる。
 インパルス制御モジュール5は、コンクリート100に向けて電磁波を放射するタイミング、および放射した電磁波の反射波を受信するタイミング等の制御を行う。
A handle 3 is provided on the upper surface of the main body 2. Four wheels are rotatably attached to the bottom of the main body 2. When detecting an embedded object inside the concrete 100, the operator moves the embedded object detection device 1 on the surface 100a of the concrete 100 while holding the handle 3 and rotating the wheels 4.
The impulse control module 5 controls the timing of emitting an electromagnetic wave toward the concrete 100, the timing of receiving a reflected wave of the emitted electromagnetic wave, and the like.
The encoder 7 is provided on a wheel 4 and, based on the rotation of the wheel 4, transmits to the impulse control module 5 a signal for controlling the reception timing of the reflected waves.
The main control module 6 receives the data on the reflected waves received by the impulse control module 5 and detects embedded objects.
The display unit 8 is provided on the upper surface of the main body 2 and displays an image showing the positions of the embedded objects 101a, 101b, 101c, and 101d.
(Impulse control module 5)
FIG. 3 is a block diagram showing the configuration of the impulse control module 5.
The impulse control module 5 includes a control unit 10, a transmitting antenna 11, a receiving antenna 12, a pulse generation unit 13, a delay unit 14, and a gate unit 15.
The control unit 10 is configured by an MPU (Micro Processing Unit) or the like and, triggered by an input from the encoder 7, instructs the pulse generation unit 13 to generate a pulse. The pulse generation unit 13 generates a pulse based on the command from the MPU and sends it to the transmitting antenna 11. The transmitting antenna 11 radiates electromagnetic waves at a constant period based on the pulse period. The timing of the input from the encoder 7 corresponds to an example of the timing.
The receiving antenna 12 receives the reflected waves of the radiated electromagnetic waves. When the gate unit 15 receives a pulse from the delay unit 14, it captures the reflected wave received by the receiving antenna 12 and sends it to the control unit 10. The delay unit 14 transmits pulses to the gate unit 15 at a predetermined interval, causing the reflected waves to be captured. The predetermined interval corresponds to, for example, a 2.5 mm pitch.
Thus, triggered by the input from the encoder 7, the impulse control module 5 outputs electromagnetic waves from the transmitting antenna 11 multiple times. By delaying the reception timing with the delay IC of the delay unit 14, the impulse control module 5 can acquire reception data for each distance from the receiving antenna 12.
FIG. 4 is a diagram showing the reflected-wave data acquired by the MPU. The vertical axis represents the intensity of the received signal in -4096 to +4096 gradations centered on the axis O, with the arrow direction indicating the negative side. The horizontal axis represents the distance from the receiving antenna 12, and the direction of arrow B (corresponding to the depth direction) indicates a longer distance from the receiving antenna 12. A longer distance corresponds to a greater depth.
As will be described in detail later, the waveform W1 shown in FIG. 4 also includes reflected waves that were reflected at the antenna without entering the concrete 100 (p1 and the like). Therefore, by calculating the difference from a reference waveform, the change in the reflected-wave data originating from inside the concrete 100 is extracted.
The data shown in FIG. 4 covers the period from one input of the encoder 7 to the next. By gradually delaying the reception timing, reflected waves from positions farther from the receiving antenna 12 are received; when an input from the encoder 7 arrives, the reception-timing delay is reset and the reception timing is again gradually delayed. That is, the reflected waves in the depth direction (the direction of arrow B) are received at a given measurement position (the position at which the input from the encoder 7 arrived) along the direction of arrow A indicating the moving direction. The reflected-wave data from one input of the encoder 7 to the next, as shown in FIG. 4, is referred to as one line of data. Each time one line of data has accumulated, the control unit 10 transmits that line of RF (Radio Frequency) data to the main control module 6.
Since the embedded object detection device 1 is being moved, the measurement positions are not exactly the same, and the direction of arrow B indicating the depth direction is not exactly perpendicular to the surface 100a of the concrete 100.
(Main control module 6)
FIG. 5 is a block diagram showing the configuration of the main control module 6. The main control module 6 includes a reception unit 61, an RF data management unit 62, an averaging processing unit 63, an embedded object detection unit 64, a scanning error determination unit 65, and a display control unit 66.
The reception unit 61 receives one line of RF data each time it is transmitted from the impulse control module 5.
The RF data management unit 62 stores the one line of RF data received by the reception unit 61.
The averaging processing unit 63 averages the one-line waveform data when the device has moved along the path two or more times. For example, when the path is traversed a first time in the direction of arrow A1 shown in FIG. 1 and then, after turning around, traversed a second time in the direction of arrow A2, the signal intensities at the same depth position of the measurement positions at the same distance from the turnaround point in the first and second passes are averaged.
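A minimal sketch of this averaging, assuming each pass is held as a grid indexed by measurement position and depth (the function and variable names are illustrative, not from the patent). The backward pass is reversed first, so that positions at the same distance from the turnaround point line up:

```python
def average_passes(forward_pass, backward_pass):
    """Average two scans of the same path; grid[x][y] = signal intensity.

    backward_pass was recorded while moving the opposite way (arrow A2),
    so its lines are reversed before averaging position by position.
    """
    aligned = backward_pass[::-1]  # align the F->E data with E->F positions
    return [
        [(a + b) / 2 for a, b in zip(line_f, line_b)]
        for line_f, line_b in zip(forward_pass, aligned)
    ]
```

Extending the same idea to a third pass (as in FIG. 6(c)) only changes the number of grids averaged per position.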
The embedded object detection unit 64 detects the peaks of the signal intensity for each averaged line of data, determines the presence or absence of an embedded object 101 using the signal-intensity peaks, and detects the position of the embedded object 101.
The scanning error determination unit 65 determines whether scanning is being performed so as to reciprocate along the same path.
The display control unit 66 performs control to display on the display unit 8 an image in which the signal intensity is gradation-processed in color on the plane defined by the direction of arrow A indicating the moving direction and the direction of arrow B indicating the depth direction. The display control unit 66 also performs control to display the position of the embedded object 101 on the display unit 8.
(Averaging processing unit 63)
The averaging processing unit 63 performs averaging of the RF data. FIGS. 6(a) to 6(c) are diagrams for explaining the averaging process.
The third drawing from the right end in each of FIGS. 6(a) to 6(c) is a view of a wall (an example of the object) seen from the front, in which a reinforcing bar as the embedded object 101 is buried along the vertical direction.
Here, assume that the embedded object detection device 1 is reciprocated between points E and F so as to cross the embedded object 101. As shown in FIG. 6(a), when the embedded object detection device 1 is moved from point E to point F the first time, the reception unit 61 receives one line of RF data at each input of the encoder 7 along arrow A1 indicating the moving direction from point E to point F, and the received lines of RF data are stored in the RF data management unit 62. The rightmost drawings of FIGS. 6(a) to 6(c) show the signal intensity in black-and-white gradation on an XY coordinate plane, with the position in the scanning direction converted from the encoder 7 inputs on the X axis and the position in the depth direction on the Y axis.
Next, as shown in FIG. 6(b), when the device is moved from point F to point E the second time, the reception unit 61 receives the lines of RF data from point F to point E along arrow A2 indicating the moving direction, and the received lines of RF data are stored in the RF data management unit 62.
The averaging processing unit 63 averages the first-pass and second-pass RF data line by line. For every XY coordinate value on the XY plane from point E to point F, the signal intensities of the first and second passes at the same XY coordinate value are averaged, and the result is stored in the RF data management unit 62.
Next, as shown in FIG. 6(c), when the device is moved from point E to point F the third time, the reception unit 61 receives the lines of RF data from point E to point F along arrow A1 indicating the moving direction, and the received lines of RF data are stored in the RF data management unit 62.
The averaging processing unit 63 averages the first-, second-, and third-pass RF data line by line and stores the averaged RF data in the RF data management unit 62.
By repeatedly scanning the same path multiple times in this way, the RF data can be averaged, noise can be reduced, and the position of the embedded object, described later, can be detected more accurately.
(Embedded object detection unit 64)
FIG. 7 is a block diagram showing the configuration of the embedded object detection unit 64.
The embedded object detection unit 64 includes a preprocessing unit 23, an embedded object determination unit 24 (an example of an embedded object position detection unit), and a determination result registration unit 25.
The preprocessing unit 23 detects the peaks of the signal intensity for each averaged line of data.
The embedded object determination unit 24 determines the presence or absence of an embedded object using the signal-intensity peaks detected by the preprocessing unit 23 for each line of RF data. The embedded object determination unit 24 also detects the position of the embedded object 101.
The determination result registration unit 25 registers the position of the embedded object detected by the embedded object determination unit 24 in the RF data management unit 62.
(Preprocessing unit 23)
As shown in FIG. 7, the preprocessing unit 23 includes a gain adjustment unit 31, a difference processing unit 32, a moving average processing unit 33, a first-order differentiation processing unit 34, and a peak detection unit 35 (an example of a signal intensity peak detection unit).
(Gain adjustment unit 31)
The gain adjustment unit 31 performs gain adjustment on the RF data averaged line by line. As the distance from the transmitting antenna 11 and the receiving antenna 12 increases (as the delay time increases), the reception sensitivity weakens, so the contrast between white and black decreases when the image described later is displayed. Therefore, the gain adjustment unit 31 increases the gain value (×1 to ×20) by which the signal intensity is multiplied (amplified) as the depth position becomes deeper.
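This depth-dependent gain can be sketched as a ramp of the multiplier from ×1 at the surface to ×20 at the maximum depth. The linear shape of the ramp is an assumption; the patent only states the ×1 to ×20 range and that the gain grows with depth:

```python
def apply_depth_gain(line, g_min=1.0, g_max=20.0):
    """Amplify one line of RF data, increasing the gain with depth.

    line: signal intensities ordered from shallow to deep. The gain is
    assumed to grow linearly from g_min at index 0 to g_max at the
    deepest index.
    """
    n = len(line)
    if n == 1:
        return [line[0] * g_min]
    return [
        v * (g_min + (g_max - g_min) * i / (n - 1))
        for i, v in enumerate(line)
    ]
```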
FIG. 8(a) is a diagram showing the image data before gain adjustment. In the detection image shown in FIG. 8(a), the horizontal axis (X axis) indicates the moving distance, with the direction of arrow A1 indicating the moving direction. The vertical axis (Y axis) indicates the depth position, with the arrow direction indicating the deeper side. FIG. 8(a) is a detection image in which the signal intensity of each line shown in FIG. 4 is rendered in black-and-white gradation along the vertical axis, and the gradated data of all lines are arranged along the horizontal axis. In the present embodiment, the gradation processing is performed so that, for example, stronger received signals appear whiter and weaker received signals appear blacker.
The shading shown in FIG. 8(a) therefore represents the signal intensity. One line of the gradated intensity signal is shown enclosed by a dotted line. FIG. 8(b) is a diagram showing the image data of FIG. 8(a) after gain adjustment. As shown in FIG. 8(b), the gain adjustment strengthens the contrast. Since the gain value is higher at deeper locations, the RF data values there become larger, and the lower part of the image data becomes whitish overall. The whitish portion is shown enclosed by a dotted line.
(Difference processing unit 32)
The difference processing unit 32 extracts the RF data of changed locations by calculating, from the gain-adjusted RF data, the difference from a reference. FIG. 9(a) is a diagram showing the image data before the difference processing, and FIG. 9(b) is a diagram showing the image data after the difference processing. FIG. 9(a) is the same image data as FIG. 8(b).
Here, the reference is the average value of the data acquired so far. For example, when calculating, in the RF data, the difference from the reference for the signal intensity at depth n (mm) (Y coordinate value Yn) of the m-th line (X coordinate value Xm), the average of the signal intensities at depth Yn (mm) of the 1st to (m-1)-th lines (X coordinate values X1 to Xm-1) is calculated, and that average is subtracted from the signal intensity at depth n (mm) of the m-th line. By performing this operation for all depth positions of all lines, the changes in the RF data signal can be made clear, as shown in FIG. 9(b).
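This background subtraction can be sketched with running per-depth sums so each line only needs the average of the lines before it (names are illustrative; how the first line, which has no predecessors, is handled is not stated in the patent and is assumed here to be left unchanged):

```python
def subtract_reference(lines):
    """Per-depth background subtraction, as in the difference processing.

    lines: list of lines, each a list of intensities by depth index.
    For line m (0-based, m >= 1), subtract the average of the same depth
    sample over all earlier lines. The first line is left unchanged
    (an assumption; it has no earlier lines to average).
    """
    out = [list(lines[0])]
    sums = list(lines[0])  # running per-depth sums of earlier lines
    for m in range(1, len(lines)):
        out.append([v - s / m for v, s in zip(lines[m], sums)])
        sums = [s + v for s, v in zip(sums, lines[m])]
    return out
```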
(Moving average processing unit 33)
The moving average processing unit 33 performs moving average processing, line by line, on the RF data after the difference processing. In the present embodiment, the moving average can be computed over, for example, 8 points.
FIG. 10(a) is a diagram showing the image data after the moving average processing, and FIG. 10(b) is a diagram showing the signal intensity of the RF data along line L1 in FIG. 10(a). The horizontal axis of FIG. 10(b) indicates the depth position, becoming deeper along the arrow direction. The vertical axis of FIG. 10(b) indicates the signal intensity, becoming stronger along the arrow direction.
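An 8-point moving average over one line might look like the following sketch; how the patent handles the window at the edges of the line is not stated, so a trailing window that shrinks near the start is assumed:

```python
def moving_average(line, window=8):
    """Smooth one line of RF data with a trailing moving average.

    line: intensities by depth index. Each output sample is the mean of
    the current sample and up to window-1 preceding samples (the edge
    handling is an assumption, not from the patent).
    """
    out = []
    for i in range(len(line)):
        lo = max(0, i - window + 1)
        chunk = line[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```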
In the present embodiment, stronger signal intensities are gradated toward white and weaker signal intensities toward black. Both downward peaks, i.e., positions where the black is darkest, and upward peaks, i.e., positions where the white is palest, are detected. For example, the position of a downward peak indicates the position of a reinforcing bar or the like in the concrete, while the position of an upward peak indicates the position of a cavity or resin in the concrete.
Since the principle of detecting downward peaks and upward peaks is the same, the following description specifically explains the detection of downward peaks (positions where the black is darkest).
The black portions on line L1 in FIG. 10(a) are indicated by P1 to P5. These P1 to P5 are also shown in FIG. 10(b). In FIG. 10(b), one of the upward peaks between P2 and P3 is shown as P10.
(First-order differentiation processing unit 34)
To detect the downward peaks, the first-order differentiation processing unit 34 performs first-order differentiation on the data after the difference processing. The first-order differentiation processing unit 34 calculates the difference between the signal intensity at a given depth position and the signal intensity at the next depth position.
FIG. 11 is an enlarged view of the region between P10 and P3 in FIG. 10(b). FIG. 12 is a diagram showing a table 150 of the signal intensities of the graph of FIG. 11 and the results of the first-order differentiation.
The leftmost column of the table 150 shown in FIG. 12 gives the sequence number; the position becomes deeper as the sequence number increases. The second column from the left gives the signal intensity at each sequence number. The third column from the left gives the difference calculated by the first-order differentiation processing unit 34.
The difference for sequence number n is the value obtained by subtracting the signal intensity of sequence number n from the signal intensity of sequence number n+1. For example, the difference for sequence number 7 is the value (-9) obtained by subtracting the 7th signal intensity (431) from the 8th signal intensity (422).
In this way, the first-order differentiation processing unit 34 performs first-order differentiation on all the data of one line.
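The first-order differentiation reduces to a forward difference along the depth axis; e.g. a 7th intensity of 431 followed by an 8th of 422 yields -9, as in table 150:

```python
def first_derivative(line):
    """Forward difference along the depth axis: d[n] = line[n+1] - line[n].

    The last sample has no successor, so the result is one sample shorter
    than the input line.
    """
    return [line[i + 1] - line[i] for i in range(len(line) - 1)]
```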
(Peak detection unit 35)
The peak detection unit 35 detects the peaks in one line of RF data after the first-order differentiation. For example, when detecting a downward peak (black peak), the peak detection unit 35 detects as a peak the point at which the change after the first-order differentiation turns from negative to positive. Specifically, as shown in the table 150 of FIG. 12, the change at sequence number 33 is negative (-) and the change at sequence number 34 is positive (+), so the peak detection unit 35 detects that the signal intensity has a downward peak at the depth position of sequence number 34.
When detecting an upward peak (a position where the white is palest), the peak detection unit 35 detects as a peak the point at which the change after the first-order differentiation turns from positive to negative. For example, in the table 150 of FIG. 12, an upward peak of the signal intensity is detected at the depth position of sequence number 5.
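Detecting the sign changes of that forward difference gives both peak kinds in one pass; this sketch follows the rule above (a downward peak where the difference turns negative to positive, an upward peak where it turns positive to negative), with zero differences treated as breaking a run, an assumption the patent does not spell out:

```python
def detect_peaks(line):
    """Return (downward, upward) peak depth indices for one line.

    A downward peak (valley of the intensity) is where the forward
    difference turns from negative to positive; an upward peak (crest)
    is where it turns from positive to negative.
    """
    diffs = [line[i + 1] - line[i] for i in range(len(line) - 1)]
    down, up = [], []
    for i in range(1, len(diffs)):
        if diffs[i - 1] < 0 and diffs[i] > 0:
            down.append(i)  # index of the valley sample
        elif diffs[i - 1] > 0 and diffs[i] < 0:
            up.append(i)    # index of the crest sample
    return down, up
```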
(Embedded object determination unit 24)
As shown in FIG. 7, the embedded object determination unit 24 includes a grouping unit 51, a shape determination unit 52 (an example of a depth position peak detection unit), an embedded object position determination unit 53, and an embedded object data integration unit 54. The grouping unit 51 detects, as a group, peaks that are continuous with respect to the moving distance (X-axis coordinate) among the peaks detected by the peak detection unit 35. The shape determination unit 52 determines the presence or absence of an embedded object based on whether the group has a mountain shape; when it determines that an embedded object exists, it detects the apex of the group (an example of a depth-position peak) and takes it as the position of the embedded object.
(Grouping unit 51)
The grouping unit 51 groups the peak detection results of the peak detection unit 35. The grouping unit 51 checks for peak detection results in order, starting from past lines. Using a found result as the starting point, it checks for continuous peak detections in the traveling direction. FIG. 13 is a diagram showing the image data after the preprocessing by the preprocessing unit 23. FIG. 13 shows the line L2 acquired this time. FIGS. 14(a) to 14(d) are diagrams for explaining the processing by the grouping unit 51.
Taking the position QS of the first found peak as the starting point (indicated by a filled circle in FIG. 13), the grouping unit 51 checks whether a peak of the next line exists within 5 pixels in the direction of arrow A1 indicating the moving direction and within 5 pixels above or below. The line in which the peak position was found is taken as the current line.
FIG. 14(a) shows the state in which the peak position QS has been found. FIG. 14(b) shows the case in which the next peak position Q2 exists within 5 pixels in the direction of arrow A1 indicating the moving direction from the current line's peak position QS and within 5 pixels above it; in FIG. 14(b), the peak position is rising (moving to the shallower side) in the moving direction. FIG. 14(c) shows the case in which the next line's peak position Q2 exists within 5 pixels in the direction of arrow A1 from the current line's peak position QS and within 5 pixels below it; in FIG. 14(c), the peak position is descending (moving to the deeper side) in the moving direction.
Next, taking the line containing the peak position Q2 as the current line, the grouping unit 51 checks whether a peak of the next line exists within 5 pixels in the direction of arrow A1 indicating the moving direction from the peak position Q2 and within 5 pixels above or below. In this way, each time a line of RF data is received, the current line is shifted in the moving direction and the continuity of the peaks is checked.
Then, as shown in FIG. 14(d), when the peak of the next line does not exist within 5 pixels in the direction of arrow A1 indicating the moving direction from the current line's peak position and within 5 pixels above or below, the peak position Qe is taken as the end point of the group (indicated by a filled square in FIG. 13).
As described above, the grouping unit 51 performs the grouping of the peak positions. FIG. 13 shows groups in which a filled circle and a filled square are connected by a line (for example, group G1).
Grouping of the upward peak positions (positions where the white is palest) is performed in the same way.
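The chaining rule can be sketched as a scan over the per-line peak lists. The patent allows a 5-pixel window along the moving direction as well; this sketch simplifies that to checking only the immediately next line, keeping the ±5-pixel depth window:

```python
def group_peaks(peaks_per_line, depth_window=5):
    """Group peaks that continue from line to line.

    peaks_per_line: for each line (measurement position), a list of peak
    depth indices. A peak of the next line joins a group when its depth
    is within depth_window pixels of the group's last peak; otherwise
    the group is closed (its last point corresponds to Qe).
    Returns groups as lists of (line_index, depth) points.
    """
    open_groups, closed = [], []
    for x, peaks in enumerate(peaks_per_line):
        unused = list(peaks)
        still_open = []
        for g in open_groups:
            last_depth = g[-1][1]
            match = next((p for p in unused
                          if abs(p - last_depth) <= depth_window), None)
            if match is None:
                closed.append(g)          # group ends (end point Qe)
            else:
                unused.remove(match)
                g.append((x, match))
                still_open.append(g)
        for p in unused:                   # new starting points (QS)
            still_open.append([(x, p)])
        open_groups = still_open
    return closed + open_groups
```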
(Shape determination unit 52)
The shape determination unit 52 determines whether the shape of a group is a predetermined mountain shape. FIG. 15 is a diagram showing the positions of a plurality of grouped peaks, together with the conditions for determining a mountain shape. The shape determination unit 52 determines that a group has a mountain shape when all three of a first condition, a second condition, and a third condition are satisfied.
As shown in FIG. 15, the first condition is that the positions rise continuously upward (toward the shallower side) by 5 pixels or more in the direction of arrow A1 indicating the moving direction; the second condition is that the positions descend continuously downward (toward the deeper side) by 5 pixels or more in the direction of arrow A1; and the third condition is that the difference in the depth direction is 10 pixels or more.
When these three conditions are satisfied, the shape determination unit 52 determines that the shape of the group is a mountain shape and that an embedded object exists.
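Under the stated three conditions, the check for one group might be sketched as follows. Depths are pixel values that decrease toward the surface; reading "rises continuously by 5 pixels or more" as the total pixels covered in one unbroken rising stretch is an interpretation, not something the patent states explicitly:

```python
def is_mountain(depths, run=5, span=10):
    """Check the three mountain-shape conditions for one group.

    depths: the group's peak depth (pixels, larger = deeper) at each
    successive line. First condition: an unbroken rise of >= run pixels
    before the apex. Second condition: an unbroken descent of >= run
    pixels after it. Third condition: overall depth span >= span.
    """
    if not depths:
        return False
    apex = depths.index(min(depths))

    def best_drop(seq):
        # largest total change over one unbroken strictly-decreasing run
        best = cur = 0
        for a, b in zip(seq, seq[1:]):
            cur = cur + (a - b) if b < a else 0
            best = max(best, cur)
        return best

    rise = best_drop(depths[:apex + 1]) >= run        # first condition
    fall = best_drop(depths[apex:][::-1]) >= run      # second condition
    tall = max(depths) - min(depths) >= span          # third condition
    return rise and fall and tall
```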
On the other hand, if even one of the first to third conditions is not satisfied, the shape determination unit 52 does not determine that an embedded object exists.
While performing the determinations of the first to third conditions, the shape determination unit 52 also detects the upward peak of the group's positions.
The shape determination unit 52 takes the shallowest position as the apex of the group and stores that position.
The shape determination unit 52 takes as the apex of the group the point at which the change in position turns from increasing to decreasing. For example, in FIG. 16, the changes from the 7th to the 8th position are positive (+) and the change from the 8th to the 9th position is negative (-), so the 8th position is determined to be the apex, as shown in FIG. 15.
In the same way as above, the shape determination unit 52 determines the shape of groups of upward peak positions (positions where the white is palest) and detects the shallowest position of such a group as the apex position.
(Embedded object position determination unit 53)
The embedded object position determination unit 53 determines the position of the embedded object 101 based on the positions and the number of the peaks detected by the shape determination unit 52.
The embedded object position determination unit 53 determines whether, among the plurality of detected peaks, adjacent peaks exist within a predetermined range.
Specifically, the embedded object position determination unit 53 determines whether adjacent peaks lie within ±10 pixels on the X axis and ±40 pixels on the Y axis of each other. It determines whether another peak exists within ±10 pixels on the X axis and ±40 pixels on the Y axis of a given peak; if another peak exists, it determines whether yet another peak exists within ±10 pixels on the X axis and ±40 pixels on the Y axis of that peak, performing this determination for the peaks over the entire range in which RF data was acquired.
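This neighbourhood test chains peaks whose X difference is within 10 pixels and whose Y difference is within 40 pixels; a sketch with illustrative names:

```python
def chain_adjacent(peaks, dx=10, dy=40):
    """Partition apexes into chains of mutually reachable neighbours.

    peaks: list of (x, y) apex positions in pixels. Two apexes are
    adjacent when |x1-x2| <= dx and |y1-y2| <= dy; a chain keeps growing
    as long as some remaining apex is adjacent to one already in it.
    """
    remaining = list(peaks)
    chains = []
    while remaining:
        chain = [remaining.pop(0)]
        grew = True
        while grew:
            grew = False
            for p in list(remaining):
                if any(abs(p[0] - q[0]) <= dx and abs(p[1] - q[1]) <= dy
                       for q in chain):
                    chain.append(p)
                    remaining.remove(p)
                    grew = True
        chains.append(chain)
    return chains
```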
As a result, the embedded object position determination unit 53 detects peaks in five patterns, as shown in Table 1 of FIG. 17.
Pattern A1 is a pattern in which, in order from the shallower side, mutually adjacent apexes are detected: the apex of a white group (upward peak position), the apex of a black group (downward peak position), and the apex of a white group (upward peak position).
Pattern A2 is a pattern in which, in order from the shallowest, a black group apex (a downward peak position), a white group apex (an upward peak position), and a black group apex (a downward peak position) are detected adjacent to one another.
Pattern B1 is a pattern in which, in order from the shallowest, a white group apex (an upward peak position) and a black group apex (a downward peak position) are detected adjacent to each other.
Pattern B2 is a pattern in which, in order from the shallowest, a black group apex (a downward peak position) and a white group apex (an upward peak position) are detected adjacent to each other.
Pattern C is any pattern other than patterns A1, A2, B1, and B2, in which only one apex is detected, either of a white group (an upward peak position) or of a black group (a downward peak position).
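The five-way classification can be summarized in a short sketch. This is a hypothetical rendering of Table 1 of FIG. 17: each apex is reduced to its polarity ("W" for a white, upward-peak group apex; "B" for a black, downward-peak group apex), listed from the shallowest, with adjacency already established.

```python
def classify_pattern(apexes):
    """Classify a depth-ordered sequence of adjacent group apexes.

    apexes: list of 'W' (white, upward peak) / 'B' (black, downward peak)
    apex markers, ordered from shallowest to deepest.
    """
    seq = "".join(apexes)
    if seq == "WBW":
        return "A1"  # white, black, white
    if seq == "BWB":
        return "A2"  # black, white, black
    if seq == "WB":
        return "B1"  # white, black
    if seq == "BW":
        return "B2"  # black, white
    # Anything else, e.g. a single isolated apex, is pattern C.
    return "C"
```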
FIG. 18(a) is a diagram showing image data in which three adjacent peaks are detected, and FIG. 18(b) is a diagram showing an example of the determined position of the embedded object.
In FIG. 18(a), the apex position P1 of the black group G1 (a downward peak position), the apex position P2 of the white group G2 (an upward peak position), and the apex position P3 of the black group G3 (a downward peak position) are detected in order from the shallowest, each indicated by a cross (×).
Here, the difference between the X-axis values of apex positions P1 and P2 is within 10 pixels, and the difference between the X-axis values of apex positions P2 and P3 is within 10 pixels; furthermore, the difference between the Y-axis values of apex positions P1 and P2 is within 40 pixels, and the difference between the Y-axis values of apex positions P2 and P3 is within 40 pixels.
In the case of pattern A2, among the three adjacent apex positions P1 (black apex), P2 (white apex), and P3 (black apex), the embedded object position determination unit 53 determines the shallowest apex position P1 to be the position Pd of the embedded object, as shown in FIG. 18(b).
That is, the embedded object position determination unit 53 determines whether apex positions P1, P2, and P3 are each located in the vicinity of one another; if they are, it judges that the apexes were detected from a single embedded object and that the embedded object is located at the shallowest position.
In the case of pattern A1 as well, the embedded object position determination unit 53 determines the shallowest apex to be the position Pd of the embedded object.
The detected apex positions P2 and P3 are merely stored in the RF data management unit 62 and are not displayed; only the apex position P1 determined to be the position Pd of the embedded object is displayed on the display unit 8 by the display control unit 66, as shown in FIG. 18(b).
FIG. 19(a) is a diagram showing image data in which two adjacent apexes are detected, and FIG. 19(b) is a diagram showing an example of the determined position of the embedded object.
FIG. 19(a) shows, in order from the shallowest, the images of the white group G2 (an upward peak position) and the black group G3 (a downward peak position). The apex position P2 of group G2 and the apex position P3 of group G3 are detected, each indicated by a cross (×).
The difference between the X-axis values of apex positions P2 and P3 is within 10 pixels, and the difference between their Y-axis values is within 40 pixels.
In the case of pattern B1, the embedded object position determination unit 53 determines the shallower peak P2 to be the position of the embedded object, as shown in FIG. 19(b). That is, it determines whether apex positions P2 and P3 are located in the vicinity of each other; if they are, it judges that the apexes were detected from a single embedded object and that the embedded object is located at the shallower apex position.
In the case of pattern B2 as well, the embedded object position determination unit 53 determines the shallower apex position to be the position of the embedded object.
FIG. 20(a) is a diagram showing an example in which no adjacent apexes are detected and only one apex is detected, and FIG. 20(b) is a diagram showing a state in which the position of the embedded object is not determined.
In FIG. 20(a), the apex position P2 of the white group G2 (an upward peak position) is detected and indicated by a cross (×).
No other peak exists within ±10 pixels on the X axis and ±40 pixels on the Y axis of apex position P2. In the case of such a pattern C, the embedded object position determination unit 53 does not determine a position for the embedded object, as shown in FIG. 20(b).
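The position-determination rule for all five patterns — the shallowest apex for A1/A2, the shallower apex for B1/B2, and no position for C — can be sketched as follows. This is an illustrative reconstruction; the function and argument names are assumptions, not the patent's implementation.

```python
def determine_position(pattern, apexes):
    """Return the embedded-object position Pd, or None for pattern C.

    apexes: list of (x, y) apex positions ordered from shallowest (smallest y)
    to deepest, as detected by the shape determination unit.
    """
    if pattern in ("A1", "A2", "B1", "B2"):
        # For three adjacent apexes (A1/A2) and for two (B1/B2),
        # the shallowest apex is taken as the position of the embedded object.
        return apexes[0]
    # Pattern C: a single isolated apex does not determine a position.
    return None
```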
(Embedded object data integration unit 54)
As described with reference to FIGS. 6(a) to 6(c), the embedded object data integration unit 54 updates the detected position of the embedded object when the device is moved along the same path multiple times.
The embedded object data integration unit 54 changes the embedded object data based on the embedded object data transition table shown in FIG. 21. FIGS. 22(a) to 22(c) are diagrams for explaining the transitions of the embedded object data.
FIG. 22(a) shows the scanning direction, the apex detection result, and the embedded object position determination result when the embedded object detection device 1 is scanned from point E to point F for the first time. FIG. 22(b) shows the same for the second scan, and FIG. 22(c) for the third scan from point E to point F.
As shown in FIG. 22(a), when pattern C is detected during the first scan of the embedded object detection device 1 from point E to point F (see the second image from the right end), the embedded object position determination unit 53 does not determine a position for the embedded object, as described above (see the image at the right end).
Next, as shown in FIG. 22(b), when pattern B1 is detected during the second scan, from point F back to point E, the embedded object position determination unit 53 determines the position of the embedded object to be the apex position P2 (see the image at the right end), as described above.
At this time, since the result of the first scan (before the change) is pattern C and the result of the second scan (after the change) is pattern B1, the embedded object data integration unit 54 adopts B2 in accordance with the transition table of FIG. 21.
Next, as shown in FIG. 22(c), when pattern A2 is detected during the third scan from point E to point F, the embedded object position determination unit 53 determines the position of the embedded object to be the apex position P1 (see the image at the right end), as described above.
At this time, since the result of the second scan (before the change) is pattern B1 and the result of the third scan (after the change) is pattern A2, the embedded object data integration unit 54 adopts A2 in accordance with the transition table of FIG. 21, and the position Pd of the embedded object is determined to be the apex position P1.
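The update step amounts to a lookup in a (previous result, new result) → adopted result table. The full transition table of FIG. 21 is not reproduced in the text, so the sketch below is hypothetical: it fills in only the two transitions actually described and assumes, as a placeholder, that any other transition simply keeps the new result.

```python
# Hypothetical sketch of the transition-table update. FIG. 21 is not
# reproduced in the text; only the two described transitions are filled in.
TRANSITIONS = {
    ("C", "B1"): "B2",   # first scan gave C, second gave B1 -> B2 is adopted
    ("B1", "A2"): "A2",  # second scan gave B1, third gave A2 -> A2 is adopted
}

def update_pattern(before, after):
    """Return the pattern adopted after a repeated scan over the same path.
    Defaulting to the new result is an assumption, not from the patent."""
    return TRANSITIONS.get((before, after), after)
```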
(Determination result registration unit 25)
The determination result registration unit 25 registers the results determined by the embedded object determination unit 24 (groups, peak positions, the determined position of the embedded object, and so on) in the RF data management unit 62.
(Scanning error determination unit 65)
The scanning error determination unit 65 determines whether or not the device is being scanned so as to reciprocate along the same path.
Here, the problem that arises when the device does not reciprocate along the same path (also referred to as positional deviation) will be described in detail. FIG. 23 is a diagram for explaining a state in which the embedded object detection device is not reciprocating along the same path.
For example, when the user scans by moving the embedded object detection device 1 from position J to the turn-back origin O so as to cross the embedded object 101 (for example, a reinforcing bar), the device can detect that the embedded object 101 is buried at position K. The image data at this time is shown in FIG. 23(b). Position K is located at a distance L from the turn-back origin O.
Suppose that, after the embedded object detection device 1 is moved to the turn-back origin O and then turned back toward position J, the path deviates and heads toward position R instead. In this case, the embedded object 101 cannot be detected at the distance L from the turn-back origin O. The image data at this time is shown in FIG. 23(c). In FIG. 23(c), the image data from the turn-back origin O to position K shows the data from the turned-back movement; the remaining data is that of FIG. 23(b), showing the position P1 (Pd) of the embedded object 101 from the previous movement from position J to the turn-back origin O.
In this way, when the device deviates from the same path during scanning, the previously detected position P1 of the embedded object may be displayed where no embedded object exists.
Therefore, the embedded object detection device 1 of the present embodiment determines whether a scanning error has occurred and, when it determines that one has, erases the previously detected position P1 of the embedded object. This makes it possible to display the embedded object at the position where it should properly be displayed on the deviated path.
FIG. 24 is a block diagram showing the configuration of the scanning error determination unit 65. As shown in FIG. 24, the scanning error determination unit 65 includes a range setting unit 71, a difference calculation unit 72, a counting unit 73, a determination unit 74, and an embedded object position erasing unit 75.
The range setting unit 71 sets a predetermined range S (an example of a first range) in the RF data acquired when scanning along a predetermined path, and a range T (an example of a second range) in the RF data acquired during the subsequent turned-back movement. The range T corresponds to the range S.
FIGS. 25(a) and 25(b) are diagrams for explaining the setting of the ranges by the range setting unit 71. FIG. 25(a) is a diagram for explaining the range S when moving from position J to the turn-back origin O. FIG. 25(b) is a diagram for explaining the range T when moving from the turn-back origin O toward position R.
The range S is set to the range extending the distance L1 from the position K where the embedded object was detected toward the turn-back origin O. Here, let X1 be the X coordinate value of position K, and let X2 be the X coordinate value of the position reached by moving the distance L1 from X1 toward the turn-back origin O. The range S thus spans the X coordinate values X1 to X2.
The range T corresponding to the range S is set to the range extending the distance L1 toward the turn-back origin O from the position P, which is reached by moving, in the direction of position R, the distance L from the turn-back origin O (the distance from the turn-back origin O to position K). Since the distance from the turn-back origin O is the same, the X coordinate value of position P is X1, the same as that of position K. The X coordinate value of the position reached by moving the distance L1 from X1 toward the turn-back origin O is likewise X2. The range T thus also spans the X coordinate values X1 to X2 and corresponds to the range S.
By setting the ranges S and T from the X coordinate X1 of the position K where the embedded object was detected in this way, the comparison is performed over a range in which the change in signal intensity is large, which makes it easier to determine whether the path has deviated.
As shown in FIGS. 25(a) and 25(b), the difference calculation unit 72 compares signal intensities at the same distance from the turn-back origin O and at the same depth. The range S is divided into points sn (examples of first positions) by line (X coordinate) and depth position (Y coordinate), and the range T is divided into points tn (examples of second positions) in the same way. For example, the difference calculation unit 72 calculates the absolute value |fs1 − ft1| of the difference between the signal intensity fs1 at the point s1 in the range S that is closest to the X coordinate value X1 and shallowest, and the signal intensity ft1 at the point t1 in the range T that is closest to the X coordinate value X1 and shallowest.
In this way, the difference calculation unit 72 calculates the absolute value Dn (= |fsn − ftn|) of the difference between the signal intensity fsn at each point sn in the range S and the signal intensity ftn at the corresponding point tn in the range T, thereby obtaining the absolute difference for every pair of corresponding points.
The counting unit 73 counts the number m of points at which the absolute difference Dn calculated by the difference calculation unit 72 exceeds a positional deviation comparison value D0.
When the counted number m is larger than a positional deviation comparison number v, the determination unit 74 determines that a scanning error has occurred because the second path, traveled in FIG. 25(b), has deviated from the first path, traveled in FIG. 25(a). Conversely, when the counted number m is equal to or less than the positional deviation comparison number v, the determination unit 74 determines that no scanning error has occurred because the first and second paths have not deviated from each other.
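Taken together, the difference calculation, the count, and the threshold test amount to the following sketch. It is a hypothetical reconstruction of the described procedure; the comparison value D0 and the comparison number v are device parameters whose values the text does not specify.

```python
def scanning_error(range_s, range_t, d0, v):
    """Decide whether the current pass has deviated from the earlier path.

    range_s, range_t: signal intensities fsn / ftn at corresponding points
    (same distance from the turn-back origin O, same depth).
    d0: positional deviation comparison value; v: comparison number.
    """
    # Dn = |fsn - ftn| for every pair of corresponding points.
    diffs = [abs(fs - ft) for fs, ft in zip(range_s, range_t)]
    # m = number of points whose difference exceeds D0.
    m = sum(1 for d in diffs if d > d0)
    # A scanning error has occurred when m exceeds v.
    return m > v
```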
When the determination unit 74 determines that a scanning error has occurred, the embedded object position erasing unit 75 deletes the position P1 (Pd) of the embedded object determined by the first scan from the RF data management unit 62 and also removes it from the display on the display unit 8.
As a result, as shown in FIG. 23(d), the position P1 of the embedded object that had been displayed at position P is deleted from the display unit 8, and the position P1′ of the embedded object is displayed at the position Q (see FIG. 23(a)) detected by the scan from the turn-back origin O toward position R.
In general, when the embedded object 101 is detected using the embedded object detection device 1, reciprocating along the same path multiple times allows the averaging processing unit 63 to average the RF data and the embedded object data integration unit 54 to update the position of the embedded object, improving the certainty of the detected position. The embedded object detection device 1 of the present embodiment deletes the embedded object position acquired by the preceding scans when the device leaves the path during this repetition. This prevents the embedded object from being displayed at its previous position after the device has left the path.
As shown in FIGS. 25(a) and 25(b), when determining, for example, deviation from the path of the initial movement in the direction of arrow A1, and the scanning error is to be determined for the movement in the direction of arrow A2, the range S extending the distance L1 from the position K where the embedded object was detected toward the turn-back origin O is set as the range for comparison. Also, as shown in FIG. 25(b), the range T extending the distance L1 toward the turn-back origin O from the position P, located at the distance L from the turn-back origin O (the distance from the turn-back origin O to position K), is set as the range corresponding to the range S. In this way, the range on the side opposite to the direction of movement at the time the scanning error is determined is set as the comparison range S.
On the other hand, as shown in FIGS. 26(a) and 26(b), when the scanning error is to be determined for the movement in the direction of arrow A1, the range S extending the distance L1 from the position K where the embedded object was detected away from the turn-back origin O is set as the range for comparison. Also, as shown in FIG. 26(b), the range T extending the distance L1 away from the turn-back origin O from the position P, located at the distance L from the turn-back origin O (the distance from the turn-back origin O to position K), is set as the range corresponding to the range S. Again, the range on the side opposite to the direction of movement at the time the scanning error is determined is set as the comparison range S.
The scanning direction can be detected by using a plurality of encoders.
(Display control unit 26)
The display control unit 26 indicates the groups, the peak positions, and the like on the data image and causes the display unit 8 to display it. For example, as in the image data at the right end of FIG. 22(c), image data obtained by rendering the RF data in grayscale and the determined position Pd (P1) of the embedded object are displayed on the display unit 8.
<Operation>
Next, the operation of the embedded object detection device 1 according to the embodiment of the present invention will be described.
FIG. 27 is a flowchart showing the processing of the embedded object detection device 1.
(Overview of the overall processing)
In step S11, the embedded object detection device 1 first performs initialization processing. In the initialization processing, each piece of data is cleared, and so on.
Next, when the user moves the embedded object detection device 1, RF data is acquired in step S12, with the input from the encoder 7 serving as the trigger.
Next, in step S13, the averaging processing unit 63 performs averaging processing on the acquired waveform data (RF data).
Next, in step S14, the scanning error determination unit 65 performs scanning error determination processing. Note that the scanning error determination is not performed during the first movement. Whether the device is being moved repeatedly in succession can be judged, for example, from the encoder input.
Next, in step S15, the embedded object detection unit 64 performs embedded object detection processing.
Steps S12 to S15 will now be described in detail.
(Data acquisition processing)
FIG. 28 is a flowchart showing the data acquisition processing of step S12.
When the data acquisition processing starts and an input is received from the encoder 7 in step S1, impulse output control starts in step S2, and pulses of electromagnetic waves are output from the transmitting antenna 11 at a fixed period (for example, 1 MHz) based on the pulses from the pulse generation unit 13.
Next, in step S3, the delay unit 14 sets the delay time in the delay IC. For example, the delay time can be set in 10 psec increments from 0 to 5120 psec.
Next, in step S4, the control unit 10 AD-converts the RF data received from the receiving antenna 12 via the gate unit 15.
Next, in step S5, it is determined whether or not the delay time has reached its maximum (for example, 5120 psec); if not, control returns to step S3. By repeating steps S3, S4, and S5, data for one line can be acquired.
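The loop over steps S3 to S5 is an equivalent-time sampling sweep: one sample is AD-converted per delay setting, and stepping the delay from 0 to 5120 psec in 10 psec increments builds up one line of data. A minimal sketch, in which the hardware accesses are hypothetical placeholders:

```python
DELAY_STEP_PS = 10      # delay is set in 10 psec units (step S3)
DELAY_MAX_PS = 5120     # maximum delay checked in step S5

def acquire_one_line(set_delay, ad_convert):
    """Sweep the delay IC and AD-convert one sample per delay setting.

    set_delay(ps) and ad_convert() stand in for the hardware accesses of
    steps S3 and S4; they are hypothetical placeholders, not real APIs.
    """
    line = []
    delay = 0
    while True:
        set_delay(delay)            # step S3: set the delay time in the delay IC
        line.append(ad_convert())   # step S4: AD-convert the received RF data
        if delay >= DELAY_MAX_PS:   # step S5: stop once the delay is maximum
            break
        delay += DELAY_STEP_PS
    return line
```

One sweep yields 513 samples (delays 0, 10, ..., 5120 psec), i.e. one depth profile at the current antenna position.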
Next, in step S6, the AD-converted RF data is transmitted to the main control module 6. Steps S1 to S6 are the processing performed in the impulse control module 5.
Next, when the operator moves the embedded object detection device 1 in the direction of arrow A indicating the movement direction, an input is received from the encoder 7, the control of steps S2 to S7 is performed, and the data for the next line is acquired and transmitted to the main control module 6.
Next, in step S7, the receiving unit 61 waits until it receives one line of RF data from the impulse control module 5; when the RF data is received, the data acquisition processing ends.
(Waveform data averaging processing)
Next, the waveform data averaging processing of step S13 will be described. FIG. 29 is a flowchart showing the waveform data averaging processing.
When the waveform data averaging processing starts, in step S171, the averaging processing unit 63 detects whether one-line RF data already exists at the same X coordinate value on the scanning X axis.
If one-line RF data acquired so far exists at the same X coordinate value, then in step S172, the averaging processing unit 63 calculates the average of the one-line RF data acquired this time and the one-line RF data acquired so far, and records it in the RF data management unit 62. More specifically, with the depth direction as the Y axis and the depth position as the Y coordinate value, the averaging processing unit 63 averages the data for one line by calculating the average of the signal intensities at the same XY coordinate values of the one-line RF data acquired this time and the one-line RF data acquired so far.
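The per-coordinate averaging can be sketched as a running mean over passes. This is an illustrative reconstruction: the text does not specify the weighting, so a simple cumulative mean over all passes at the same X coordinate is assumed.

```python
def average_line(stored_line, new_line, pass_count):
    """Running average of one line of RF data at the same X coordinate.

    stored_line: averaged intensities from the previous pass_count passes,
    indexed by depth (Y coordinate). new_line: intensities acquired this time.
    A simple cumulative mean over all passes is assumed.
    """
    return [(avg * pass_count + new) / (pass_count + 1)
            for avg, new in zip(stored_line, new_line)]
```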
(Scanning error determination processing)
Next, the scanning error determination processing will be described. FIG. 30 is a flowchart showing the scanning error determination processing.
When the scanning error determination processing starts, first in step S181, the range setting unit 71 determines whether or not comparison reference data exists. Here, the comparison reference data is data acquired by scanning a predetermined path in which an embedded object was detected. That is, it is detected whether the current RF data acquisition scan is an additional scan performed to improve the certainty of the position of the embedded object after that position has been detected at least once. If no comparison reference data exists, that is, if the position of the embedded object has not yet been detected, the scanning error determination processing ends.
Next, in step S182, the range setting unit 71 determines whether or not the X-axis coordinate value at which the detected position of the embedded object exists and the current X-axis coordinate value are within the comparison range. In the example of FIG. 23, the range setting unit 71 determines whether the X coordinate value of the one-line RF data currently acquired in FIG. 23(c) is within the range of distances (L − L1) to L from the turn-back origin O.
Next, in step S183, the range setting unit 71 determines whether the X coordinate value of the 1-line RF data acquired at the encoder 7 input timing immediately before the present one is smaller than the X coordinate value of the detected position of the buried object.
If it is smaller, in step S184, the range setting unit 71 determines whether the X coordinate value of the currently acquired 1-line RF data is greater than or equal to the X coordinate value of the detected position of the buried object. This indicates that, while scanning in the direction of arrow A1, the X coordinate value of the current 1-line RF data has reached the X coordinate value X1 of the detected position of the buried object. That is, it is detected whether the scan in the direction of arrow A1 has reached the X coordinate value X1 of the detected position of the buried object.
On the other hand, if it is not smaller in step S183, the range setting unit 71 determines in step S185 whether the current X-axis coordinate value is less than or equal to the X coordinate value of the buried object. This detects whether, while scanning in the direction of arrow A2, the X coordinate value of the current 1-line RF data has reached the X coordinate value X1 of the detected position of the buried object.
In this way, in steps S183 to S185, it can be detected whether the X coordinate value X1 of the detected position of the buried object has been reached in both the scan in the arrow A1 direction and the scan in the arrow A2 direction.
When the X coordinate value of the current 1-line RF data has thus reached the X coordinate value X1 of the detected position of the buried object, the range setting unit 71 sets the positional deviation determination range. Specifically, the range setting unit 71 sets the range S shown in FIG. 23(b) and the range T shown in FIG. 23(c). Note that the range S and the range T may be changed as appropriate within a range in which positional deviation can be determined, and are not particularly limited.
Next, in step S187, the positional deviation amount acquisition process is performed by the difference calculation unit 72 and the counting unit 73.
FIG. 31 is a flowchart showing the positional deviation amount acquisition process.
When the positional deviation amount acquisition process is started, in step S191, the counting unit 73 clears the exceedance count (the number of comparisons exceeding the positional deviation comparison value) to zero.
Next, in step S192, the difference calculation unit 72 substitutes the positional deviation search start position into the positional deviation search counter. For example, the X coordinate value X1 shown in FIGS. 25(a) and 25(b) is substituted into the positional deviation search counter.
Next, in step S193, the difference calculation unit 72 determines whether the positional deviation search counter is smaller than the search end position. Here, the search end position can be the X coordinate value X2 shown in FIGS. 25(a) and 25(b). That is, steps S192 and S193 determine whether the difference calculation has been performed over the X coordinate values X1 to X2 (range T); when the value of the positional deviation search counter reaches the search end position, the positional deviation amount acquisition process ends.
If the positional deviation search counter has not reached the search end position in step S193, the difference calculation unit 72 determines, in step S194, whether the latest 1-line data exists at the positional deviation search counter position. For example, the difference calculation unit 72 detects whether the latest 1-line RF data exists at the X coordinate value X1.
If the latest 1-line RF data exists, in step S195, the difference calculation unit 72 sets the 1-line counter to zero.
Next, in step S196, the difference calculation unit 72 determines whether the 1-line counter is smaller than the 1-line size. Here, it is detected whether the difference calculation has been performed for all the data of one line.
Next, in step S197, the difference calculation unit 72 calculates the absolute value of the difference between the signal intensity of the current 1-line RF data at the positional deviation search counter (X1) and 1-line counter (zero), and the signal intensity of the comparison reference data at the same positional deviation search counter (X1) and the same 1-line counter (zero).
Then, if the absolute value of the difference is not within the positional deviation comparison value, in step S199, the counting unit 73 increments the exceedance count by 1. This corresponds to calculating the absolute value of the difference between the intensity signal at the point s1 and the intensity signal at the point t1 shown in FIGS. 25(a) and 25(b), and determining whether that absolute value is within the positional deviation comparison value.
Next, in step S200, the difference calculation unit 72 increments the 1-line counter by 1. Control proceeds from step S200 to step S196, and if the 1-line counter is smaller than the 1-line size, in step S197 the difference calculation unit 72 calculates the absolute value of the difference between the signal intensity of the current 1-line RF data at the positional deviation search counter (X1) and 1-line counter (1), and the signal intensity of the comparison reference data at the same positional deviation search counter (X1) and the same 1-line counter (1). Here, a 1-line counter value of 1 means that the signal intensities at points one step deeper than the previous comparison are compared. That is, the difference calculation unit 72 calculates the absolute value of the difference between the intensity signal at the point s2 and the intensity signal at the point t2 shown in FIGS. 25(a) and 25(b), and determines whether that absolute value is within the positional deviation comparison value.
When the calculation of the absolute value of the difference and the comparison with the positional deviation comparison value have thus been completed for all the RF data of one line, control proceeds from step S196 to step S198, and the difference calculation unit 72 increments the positional deviation search counter by 1. As a result, the calculation of the absolute value of the difference and the comparison with the positional deviation comparison value (first threshold) are performed for the one line of data at the X coordinate value next to X1. In this way, the differences in signal intensity at all points in the range S and the range T are calculated, and the exceedance count, that is, the number of comparisons in which the absolute value of the difference exceeds the positional deviation comparison value, is counted. Note that if, in step S194, the latest 1-line RF data does not exist at the positional deviation search counter position, control also proceeds to step S198.
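By way of non-limiting illustration (not part of the patent text; all identifiers are hypothetical, and the nested counters of FIG. 31 are condensed into two loops), the positional deviation amount acquisition and the threshold check of step S188 can be sketched as follows:

```python
def exceedance_count(current, reference, threshold):
    """Count points where |current - reference| exceeds the positional
    deviation comparison value (first threshold).

    `current` and `reference` are dicts mapping an X coordinate to a
    1-line list of signal intensities (index = depth position).
    X coordinates missing from `current` are skipped, as in step S194.
    """
    count = 0
    for x, ref_line in reference.items():
        cur_line = current.get(x)
        if cur_line is None:
            continue  # no latest 1-line data at this X: move to next X
        for cur_v, ref_v in zip(cur_line, ref_line):
            if abs(cur_v - ref_v) > threshold:
                count += 1
    return count


def positional_deviation(current, reference, threshold, comparison_number):
    """Step S188: a deviation has occurred if the exceedance count is
    larger than the positional deviation comparison number (second threshold)."""
    return exceedance_count(current, reference, threshold) > comparison_number
```

Here the dict keys stand in for the X coordinate values of the ranges S and T; the reference data would be the comparison reference data registered in the RF data management unit 62.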
Next, in step S188 of FIG. 30, the determination unit 74 determines whether the exceedance count is larger than the positional deviation comparison number (an example of the second threshold); if it is larger, the determination unit 74 determines that a positional deviation has occurred.
If it is determined that a positional deviation has occurred, the positional deviation process is performed in step S189.
FIG. 32 is a flowchart showing the positional deviation process.
When the positional deviation process is started, in step S110, the buried object position erasing unit 75 erases the positions of the buried objects detected so far.
Next, in step S111, the latest RF data is registered in the RF data management unit 62 as the comparison reference data.
(Buried object detection process)
FIG. 33 is a flowchart showing the buried object detection process.
When the buried object detection process is started, in step S201, the preprocessing unit 23 performs preprocessing prior to the determination of a buried object.
Next, in step S202, the buried object determination unit 24 performs the buried object determination process.
Next, in step S203, the result determined by the buried object determination unit 24 (the position of the buried object) is registered by the determination result registration unit 25 and displayed on the display unit 8 by the display control unit 66.
Next, the processing in each step will be described in detail.
(Preprocessing)
FIG. 34 is a flowchart showing the preprocessing.
First, in step S21, the gain adjustment unit 31 performs gain adjustment on one line of RF data.
Next, in step S22, the difference processing unit 32 calculates the difference from a reference value, and changes in the RF data are extracted.
Next, in step S23, the moving average processing unit 33 performs moving average processing on the one line of difference-processed RF data. For example, the moving average processing can be performed using an 8-point average.
Next, in step S24, the first derivative processing unit 34 performs first derivative processing on the moving-averaged difference result, and determines whether the difference between adjacent data in the depth direction is positive (increasing) or negative (decreasing).
Finally, in step S25, the peak detection unit 35 detects peaks in signal intensity using the result of the first derivative processing.
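By way of non-limiting illustration (not part of the patent text; names are hypothetical), the moving average of step S23 can be sketched as a causal window mean over the one line of difference results; the 8-point average mentioned above corresponds to `window=8`:

```python
def moving_average(line, window=8):
    """Smooth a 1-line intensity profile with a simple moving average.

    Each output sample is the mean of up to `window` samples ending at
    the current index (the window is clipped at the start of the line).
    """
    out = []
    for i in range(len(line)):
        lo = max(0, i - window + 1)
        segment = line[lo:i + 1]
        out.append(sum(segment) / len(segment))
    return out
```

Whether the patent's window is causal or centered is not stated in this excerpt; a centered window would merely shift the slice bounds.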
(Gain adjustment process)
Next, the gain adjustment process of step S21 of FIG. 34 will be described. FIG. 35 is a flowchart showing the gain adjustment process.
When the gain adjustment process is started, in step S31, the gain adjustment unit 31 selects the reception data of sequence number 1 from the RF data received by the reception unit 61.
Then, after the gain adjustment unit 31 performs the process of step S32 on the signal intensity data of sequence number 1, control proceeds to step S33.
In step S33, it is determined whether the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S31, the sequence number is incremented by one, and the reception data of sequence number 2 is selected. The process of step S32 is then performed on the data of sequence number 2.
In this way, the process is repeated until step S32 has been performed on all the data of one line.
In step S32, the signal intensity data of each sequence number is multiplied by a predetermined factor.
For example, the data of sequence No. 1, which has the shortest delay time in the one line of RF data (and can be regarded as the data at the shallowest position), is multiplied by a predetermined factor; then the data of sequence No. 2, which has the next shortest delay time, is multiplied by a predetermined factor; and so on in sequence number order until the maximum sequence number is reached. Specifically, the factor is increased in the depth direction: for example, the factor can be set to 1 for the data of pixels 1 to 25 from the shallow side, 2 for the data of pixels 26 to 50, 3 for the data of pixels 51 to 75, and so on in increasing order, up to 21 for the data of pixels 500 to 511.
This gain adjustment process makes the contrast clear, as in the image data shown in FIG. 8(b).
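By way of non-limiting illustration (not part of the patent text; names are hypothetical, and the exact band boundaries near the deepest pixels are approximated from the example values above), the depth-dependent gain of step S32 can be sketched as a banded multiplier:

```python
def depth_gain(pixel_index):
    """Gain factor for a 1-based depth pixel index, counted from the
    shallow side, following the example banding: pixels 1-25 -> x1,
    26-50 -> x2, 51-75 -> x3, ..., capped at x21 for the deepest band.
    """
    return min((pixel_index - 1) // 25 + 1, 21)


def apply_gain(line):
    """Multiply each depth sample of a 1-line RF profile by its gain."""
    return [v * depth_gain(i + 1) for i, v in enumerate(line)]
```

Increasing the factor with depth compensates for the attenuation of the reflected radio wave, which is what makes deep reflections visible in FIG. 8(b).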
(Difference processing)
Next, the difference processing of step S22 of FIG. 34 will be described. FIG. 36 is a flowchart showing the difference processing.
When the difference processing is started, in step S41, the difference processing unit 32 selects the reception data of sequence number 1 from the gain-adjusted RF data.
Then, after the difference processing unit 32 performs the processes of steps S42 and S43 on the data of sequence number 1, control proceeds to step S44.
In step S44, it is determined whether the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S41, the sequence number is incremented by one, and the reception data of sequence number 2 is selected. The processes of steps S42 and S43 are then performed on the data of sequence number 2.
In this way, the flow is repeated until the processes of steps S42 and S43 have been performed on all the data of one line.
In step S42, the difference processing unit 32 calculates the average value of the gain-adjusted reception data of all lines received before the current line.
Next, in step S43, the difference processing unit 32 uses the calculated average value as the reference point value, and calculates the difference between that value and the reception data of the current line.
Next, in step S44, the difference processing unit 32 determines whether the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S41, the number is incremented by one, and the reception data of sequence number 2 is selected.
In this way, the number is incremented in turn, and steps S42 and S43 are repeated until the difference processing has been performed on all the reception data of one line.
When the difference processing is performed on the data of the m-th line, the average value of the signal intensities at a given depth position of the 1st to (m−1)-th lines is subtracted from the signal intensity at that depth position of the m-th line. When the difference processing is then performed on the next (m+1)-th line, the average value of the signal intensities of the 1st to m-th lines is calculated, and the reference point value is updated.
This difference processing makes it possible to extract changes in the RF data, as in the image data shown in FIG. 9(b).
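By way of non-limiting illustration (not part of the patent text; names are hypothetical, and the handling of the very first line, for which no history exists, is an assumption of this sketch), the running-average baseline subtraction of steps S42 and S43 can be expressed as:

```python
def difference_process(lines):
    """For the m-th line, subtract at each depth position the average
    of lines 1..m-1 at that position (the reference is updated per line).

    `lines` is a list of equal-length intensity lists. The first line
    has no history, so it is emitted unchanged in this sketch.
    """
    results = []
    sums = [0.0] * len(lines[0])  # running per-depth sums of past lines
    for m, line in enumerate(lines):
        if m == 0:
            results.append(list(line))
        else:
            results.append([v - s / m for v, s in zip(line, sums)])
        sums = [s + v for s, v in zip(sums, line)]
    return results
```

Keeping only running sums avoids re-reading all past lines each time; the reference value for line m+1 is the updated average over lines 1 to m, as the text describes.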
(First derivative processing of the difference result)
Next, the first derivative processing of the difference result in step S24 of FIG. 34 will be described. FIG. 37 is a flowchart showing the first derivative processing of the difference result.
When the first derivative processing of the difference result is started, in step S51, the first derivative processing unit 34 selects the difference result of sequence number 1 from the difference results.
Next, in step S52, the first derivative processing unit 34 performs first derivative processing on the difference result. Here, the first derivative processing calculates, in the depth direction, the difference between the difference-result data at a given position and the difference-result data at the next position. That is, the difference between sequence number 1 and the next sequence number 2 is calculated.
Next, in step S53, it is determined whether the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S51, the number is incremented by one, and the data of sequence number 2 is selected. The difference between sequence number 2 and sequence number 3 is then calculated.
In this way, the number is incremented in turn and step S52 is repeated until the first derivative processing has been performed on all the data of one line.
That is, the first derivative of sequence number n is obtained by subtracting the difference-result data of sequence number n from the difference-result data of sequence number n+1.
As a result, the differences in the third column from the left in table 150 of FIG. 12 are calculated.
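By way of non-limiting illustration (not part of the patent text; the name is hypothetical), the first derivative just described is simply the adjacent difference along the depth direction, producing one fewer value than the input:

```python
def first_derivative(diff_results):
    """Depth-direction first derivative: for each sequence number n,
    the value at n+1 minus the value at n (output is one shorter)."""
    return [diff_results[n + 1] - diff_results[n]
            for n in range(len(diff_results) - 1)]
```

Only the sign of each output value matters for the subsequent peak detection: positive means the intensity is increasing with depth, negative means it is decreasing.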
(Peak detection process)
Next, the peak detection process of step S25 of FIG. 34 will be described. FIG. 38 is a flowchart showing the peak detection process.
When the peak detection process is started, in step S71, the peak detection unit 35 selects sequence number 1 of the first-derivative-processed data.
Then, after the peak detection unit 35 performs the processes of steps S72 and S73 on the data of sequence number 1, control proceeds to step S74.
In step S74, it is determined whether the sequence number is the maximum value. If the sequence number is not the maximum value, control returns to step S71, the sequence number is incremented by one, and the data of sequence number 2 is selected. The processes of steps S72 and S73 are then performed on the data of sequence number 2.
In this way, the number is incremented in turn and the processes of steps S72 and S73 are repeated until all the data of one line have been processed.
Here, steps S72 and S73 will be described assuming the peak detection process is performed on the n-th data.
In step S72, the peak detection unit 35 determines whether the state of the previous sequence number n−1 after the first derivative is negative (−) and the state of the current sequence number n is positive (+).
If the state of the previous sequence number n−1 is negative (−) and the state of the current sequence number n is positive (+), the peak detection unit 35 stores the n-th coordinate in step S73. The coordinate can be expressed, for example, in pixel units by the movement distance (which can also be regarded as the line number) and the depth position.
As described above, this allows, for example, sequence number 34 in table 150 of FIG. 12 to be detected as a peak. This peak is a black (downward) peak.
On the other hand, to detect a white (upward) peak, the current coordinate is stored when the state after the previous first derivative is positive (+) and the current state is negative (−). This allows sequence number 5 in table 150 of FIG. 12 to be detected as an upward peak.
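By way of non-limiting illustration (not part of the patent text; the names are hypothetical), the sign-change tests of steps S72 and S73 for both peak polarities can be sketched as:

```python
def detect_peaks(derivative):
    """Find peaks from the sign pattern of the first derivative.

    A black (downward) peak is where the previous derivative value is
    negative and the current one is positive; a white (upward) peak is
    the opposite. Returns (black_indices, white_indices), where each
    index is that of the current sample n within the derivative list.
    """
    black, white = [], []
    for n in range(1, len(derivative)):
        prev, cur = derivative[n - 1], derivative[n]
        if prev < 0 and cur > 0:
            black.append(n)
        elif prev > 0 and cur < 0:
            white.append(n)
    return black, white
```

In the full device, each stored index would be converted to a coordinate (movement distance, depth position) in pixel units, as described above.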
(Buried object determination process)
Next, the buried object determination process shown in step S202 of FIG. 33 will be described. FIG. 39 is a flowchart showing the buried object determination process.
When the buried object determination process is started, first, in step S81, the grouping unit 51 performs grouping processing on the peak detection results obtained by the preprocessing unit 23.
Next, in step S82, the shape determination unit 52 performs vertex detection processing.
Next, in step S83, buried object acquisition processing is performed by the buried object position determination unit 53 or the buried object data integration unit 54.
(Grouping of peak detection results)
The grouping of the peak detection results in step S81 of FIG. 39 will be described. FIG. 40 is a flowchart showing the grouping of the peak detection results.
First, in step S91, the grouping unit 51 sets the detection state to "undetected".
All the data acquired in the past are processed. In step S92, the grouping unit 51 selects the oldest past data as the processing target. Then, in step S103, the grouping unit 51 determines whether all the data acquired in the past, up to the line acquired this time, have been processed. If not, control returns to step S92, and the next oldest data is selected as the processing target. In this way, the processes of steps S93 to S102 are performed in order, for example starting from sequence number 1 of the oldest line.
In step S93, the grouping unit 51 determines whether the state is "undetected". Since the initial state is "undetected", control proceeds to step S94.
In step S94, the grouping unit 51 determines whether there is a position within a predetermined range at which a peak was detected. If no peak is detected within the predetermined range, control proceeds to step S103. The predetermined range can be set as appropriate; for example, it may be set to one line, or by the sequence numbers of one line.
In this way, in step S94, it is determined, in order from the oldest data, whether there is a position at which a peak was detected. If there is such a position, in step S95, the grouping unit 51 sets the detection state to "detecting".
Next, in step S96, the grouping unit 51 stores the point at which the peak was detected. This point is a coordinate in pixel units and can be expressed, for example, by the movement distance (which can also be regarded as the line number) and the depth position. This point corresponds to the start point (●) in FIG. 13.
Next, via steps S103 and S92, the next data becomes the processing target.
Then, in step S93, since the detection state is "detecting", control proceeds to step S97.
In step S97, the grouping unit 51 determines whether there is a position at which a peak was detected within a predetermined range from the position stored in step S96. This predetermined range can be set, for example, to within 5 pixels in the movement direction and within 5 pixels above and below, as described with reference to FIGS. 14(a) to 14(d). If the position at which the peak was detected is within the predetermined range, in step S98, the grouping unit 51 regards it as a continuous position and stores that position.
Next, in step S99, the grouping unit 51 compares the previous Y coordinate (depth position) with the current Y coordinate (depth position) (see FIG. 14).
Next, in step S100, the grouping unit 51 stores the comparison result as positive (+) or negative (−). Here, if the current depth position is shallower than the previous depth position, the depth position is regarded as rising and positive (+) is stored. If the current depth position is deeper than the previous depth position, the depth position is regarded as falling and negative (−) is stored.
Next, via steps S103 and S92, the next data becomes the processing target.
Then, in step S93, since the detection state is "detecting", control proceeds to step S97.
In this step S97, it is detected whether there is a peak-detected point within a predetermined range from the point previously stored in step S98; if there is, that point is stored in step S98, and then, in steps S99 and S100, the comparison with the previous point is performed and the result is stored. As a result, as shown in FIG. 13, the continuous points are grouped in order, and whether the change to the next point is rising or falling is also stored.
Then, if no peak is detected within the predetermined range in step S97, control proceeds to step S101.
In step S101, the grouping unit 51 determines that there are no further continuous points, and saves the detection results obtained so far. The last detected point corresponds to the end point (■) in FIG. 13.
Next, in step S102, the grouping unit 51 sets the detection state to "undetected".
Then, when it is determined in step S103 that all the data of the lines acquired in the past have been processed, the process ends.
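By way of non-limiting illustration (not part of the patent text; names are hypothetical, and the flowchart's state machine is condensed into a single pass), the grouping of steps S91 to S103 can be sketched as chaining peak points and recording the rise/fall of each link:

```python
def group_peaks(peaks, dx=5, dy=5):
    """Chain peak points (x, y) into groups, linking a point to the
    previous one when it lies within +-dx in the movement direction
    and +-dy in depth. Per link, records whether the depth rose
    ('+': shallower, smaller y) or fell ('-': deeper).

    `peaks` must be ordered from the oldest line to the newest.
    Returns a list of (points, signs) tuples, one per group.
    """
    groups = []
    current, signs = [], []
    for p in peaks:
        if not current:
            current = [p]          # state "undetected": store the start point
        else:
            px, py = current[-1]
            x, y = p
            if abs(x - px) <= dx and abs(y - py) <= dy:
                signs.append('+' if y < py else '-')  # continuous point
                current.append(p)
            else:
                groups.append((current, signs))  # no continuation: close group
                current, signs = [p], []
    if current:
        groups.append((current, signs))
    return groups
```

The first point of each returned group corresponds to the start point (●) and the last to the end point (■) of FIG. 13.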
(Vertex detection process)
The vertex detection process of step S82 of FIG. 39 will be described. FIG. 41 is a flowchart showing the vertex detection process.
The vertex detection process is performed on all the data resulting from the grouping.
When the vertex detection process is started, in step S121, the shape determination unit 52 selects data in order from the start point side of the grouped data. For example, in step S121, the shape determination unit 52 takes the data of the change from the first to the second point shown in FIGS. 15 and 16 as the processing target.
In step S122, the shape determination unit 52 reads the result of the change from the first point to the second point.
In step S123, the shape determination unit 52 determines whether the result of the change is positive (+). For example, since the change from the first point to the second point shown in FIGS. 15 and 16 is positive (+), control proceeds to step S124.
In step S124, the shape determination unit 52 sets the minus (-) count to 0.
Next, in step S125, the shape determination unit 52 determines whether the plus (+) count is 0. Since the plus (+) count is 0, control proceeds to step S126.
In step S126, the shape determination unit 52 stores the first Y coordinate (depth position) and sets it as the start point.
Next, in step S127, the shape determination unit 52 sets the plus (+) count to +1.
Next, in step S138, the shape determination unit 52 determines whether all of the grouped data has been processed; if not, control returns to step S121, and the next data item (the change from the second point to the third point) is selected as the processing target.
Then, in step S122, the shape determination unit 52 reads the result of the change from the second point to the third point.
Next, in step S123, the shape determination unit 52 determines whether the change result after the grouping process is positive (+). For example, since the result for the change from the second point to the third point shown in FIGS. 15 and 16 is positive (+), control proceeds to step S124.
Next, in step S125, the shape determination unit 52 determines whether the plus (+) count is 0. Since the plus (+) count is +1, control proceeds to step S127.
Then, in step S127, the shape determination unit 52 adds +1 to the plus (+) count, setting it to +2.
In this way, steps S121 to S127 and step S138 are repeated. Then, when the result of a change becomes negative (-) in step S123, control proceeds to step S128.
In step S128, the shape determination unit 52 determines whether the plus count is 5 or more. If the count is 5 or more, condition 1 — that the depth position rises continuously by 5 pixels or more in the Y-axis direction — is satisfied.
In the data shown in FIGS. 15 and 16, for example, the change from the 8th point to the 9th point is negative (-), and seven positive (+) changes have occurred by that time, so the plus count is 7. Therefore, control proceeds to step S129.
In step S129, the shape determination unit 52 determines whether the minus (-) count is 0. Since the minus (-) count is 0, control proceeds to step S130.
In step S130, the shape determination unit 52 stores the immediately preceding depth position (also referred to as the Y coordinate). That is, the shape determination unit 52 stores the Y coordinate of the apex of the mountain shape (the point where the slope changes from + to -). In the data of FIGS. 15 and 16, the 8th Y coordinate, the point before the change from the 8th point to the 9th point, is stored.
Next, in step S131, the shape determination unit 52 adds +1 to the minus (-) count.
Next, in step S132, the shape determination unit 52 determines whether the minus (-) count is 5 or more. If the count is 5 or more, condition 2 — that the depth position falls continuously by 5 pixels or more in the Y-axis direction — is satisfied. For the change from the 8th point to the 9th point, the minus count is +1, so control proceeds to step S138, and via step S121 the change from the 9th point to the 10th point is selected as the processing target.
In this way, changes are selected in sequence; when the change from the 12th point to the 13th point is selected as the processing target, the minus (-) count in step S132 reaches 5 or more, so control proceeds to step S133.
In step S133, the shape determination unit 52 calculates the difference between the Y coordinate of the start point and the Y coordinate of the apex. For the data of FIGS. 15 and 16, the difference between the 1st Y coordinate and the 8th Y coordinate is calculated.
Next, in step S134, the shape determination unit 52 determines whether the calculated difference is 10 or more. If it is 10 or more, condition 3 — that the difference in the Y-axis direction is 10 pixels or more — is satisfied.
If the calculated difference is 10 or more, in step S135 the shape determination unit 52 determines that the group has a mountain shape, and determines that a buried object exists.
On the other hand, if the calculated difference is less than 10, control proceeds to step S138.
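The three conditions evaluated in steps S123 to S135 — a continuous rise of 5 pixels or more (condition 1), a continuous fall of 5 pixels or more (condition 2), and a start-to-apex difference of 10 pixels or more (condition 3) — can be sketched as follows. This is an illustrative reconstruction under assumed data structures (a plain list of Y coordinates, larger value = higher), not the patented implementation; `min_run` and `min_height` stand in for the fixed thresholds 5 and 10.

```python
def detect_vertex(depths, min_run=5, min_height=10):
    """Scan one group's sequence of depth positions (Y coordinates) and
    return the apex index if the group forms a mountain shape, else None."""
    plus = minus = 0
    start_y = apex_y = None
    apex_i = None
    for i in range(1, len(depths)):
        rising = depths[i] > depths[i - 1]   # change result, steps S122-S123
        if rising:
            minus = 0                        # step S124
            if plus == 0:
                start_y = depths[i - 1]      # step S126: remember the start point
            plus += 1                        # step S127
        else:
            if plus >= min_run:              # step S128: condition 1 (rise >= 5 px)
                if minus == 0:
                    apex_y = depths[i - 1]   # step S130: apex = point where slope flips
                    apex_i = i - 1
                minus += 1                   # step S131
                if minus >= min_run:         # step S132: condition 2 (fall >= 5 px)
                    if apex_y - start_y >= min_height:   # steps S133-S134: condition 3
                        return apex_i        # step S135: mountain shape -> buried object
    return None
```

For the FIG. 15/16 example, seven rises followed by five falls satisfy all three conditions and the 8th point (index 7) is returned as the apex.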
(Buried object acquisition process)
Next, the buried object acquisition process of step S83 in FIG. 39 will be described. FIG. 42 is a flowchart of the buried object acquisition process.
When the buried object acquisition process starts, in step S141 the buried object position determination unit 53 detects the pattern of the combination of vertices detected by the vertex detection process described above (see FIG. 17).
Next, in step S142, the buried object position determination unit 53 determines whether the detected pattern is other than C (that is, one of patterns A1, A2, B1, and B2, which have two or more peaks). If the detected pattern is C, the buried object acquisition process ends without setting the position of the buried object.
On the other hand, if the detected pattern is other than C and a pattern was detected in a previous scan, the buried object data (the pattern and the determined position of the buried object) is updated in accordance with the buried object data transition table shown in FIG. 21 and stored in the RF data management unit 62.
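The step S142 check reduces to: reject the single-peak pattern C, otherwise record the pattern and position. A minimal sketch; the pattern labels are taken from FIG. 17, but the dictionary layout and the simple replace-on-update rule are stand-ins for the transition table of FIG. 21, which is not reproduced in this excerpt.

```python
def update_buried_object(detected_pattern, position, stored):
    """Step S142 sketch: do not set a position for pattern C; otherwise
    store the detected pattern and position (simplified stand-in for the
    FIG. 21 buried object data transition table)."""
    if detected_pattern == 'C':
        return stored               # pattern C: leave previous data unchanged
    stored = dict(stored or {})     # copy (or create) the buried object data
    stored['pattern'] = detected_pattern
    stored['position'] = position
    return stored
```

In the actual device the updated data would be kept by the RF data management unit 62 across scans.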
(Determination result display process)
In the determination result display process shown in step S203 of FIG. 33, in order to indicate the position of the buried object acquired by the buried object acquisition process, the display control unit 26 controls the display unit 8 to place a ● mark on the image data (see, for example, FIG. 18(b)).
In the example shown in FIG. 15, the display control unit 26 causes the display unit 8 to display the image with a ● mark at the position of the 8th data point, which is the apex.
[Other Embodiments]
One embodiment of the present invention has been described above, but the present invention is not limited to the above embodiment, and various modifications can be made without departing from the gist of the invention.
(A)
In the above embodiment, the control methods of the buried object detection device 1 and the main control module 6 (an example of a data processing device) were described using an example executed in accordance with the flowcharts shown in FIGS. 27 to 42, but the present invention is not limited to this.
For example, the present invention may be implemented as a program that causes a computer to execute the processing of the buried object detection device 1 and the buried object detection method carried out in accordance with the flowcharts shown in FIGS. 27 to 42.
One form of use of the program may be a mode in which the program is recorded in a computer-readable recording medium such as a ROM and operates in cooperation with the computer.
Another form of use of the program may be a mode in which the program is transmitted through a transmission medium such as the Internet, or through light, radio waves, sound waves, or the like, is read by a computer, and operates in cooperation with the computer.
The computer described above is not limited to hardware such as a CPU (Central Processing Unit), and may include firmware, an OS, and peripheral devices.
As described above, the control method may be realized in software or in hardware.
(B)
In the above embodiment, a reinforcing bar was described as an example of the buried object, but the buried object is not limited to a reinforcing bar and may be a gas pipe, a water pipe, wood, or the like; likewise, the object in which the buried object is provided is not limited to concrete.
(C)
In the above embodiment, gradation processing was set so that black indicates a reinforcing bar, but the setting is not limited to this, and white may indicate a reinforcing bar instead.
(D)
In the above embodiment, in step S88 it is determined whether the comparison count is larger than the positional-deviation comparison count serving as a lower limit, but an upper limit may also be set.
(E)
In the above embodiment, the buried object position erasing unit 75 erases the previously acquired buried object data (the pattern and the determined position of the buried object); however, instead of erasing, or together with erasing, the occurrence of a scanning error due to positional deviation may be reported. For example, as shown in FIG. 43, a scanning error determination unit 65' may further include a notification unit 76, and when a scanning error is detected, the user may be notified of the occurrence of the scanning error by a warning sound, a display, or the like.
(F)
In the above embodiment, the main control module 6 is provided in the main body 2 of the buried object detection device 1, but the main control module 6 may be provided separately from the main body 2. In this case, the main control module 6 and the display unit 8 may be provided in a tablet or the like. Communication between the main body 2 and the tablet may be performed wirelessly or by wire.
(G)
In the above embodiment, the range S compared when determining a scanning error is set so as to include the detection position Pd of the buried object, but the range S is not limited to this. The range S need not include the detection position of the buried object; however, since a larger change in signal intensity makes a scanning error easier to determine, the range S is preferably set in the vicinity of the detection position Pd of the buried object.
The buried object detection device and buried object detection method of the present invention have the effect of improving the certainty of the position of a buried object, and are useful for detecting buried objects in concrete.
1: Buried object detection device
2: Main body
3: Grip
4: Wheel
5: Impulse control module
6: Main control module
7: Encoder
8: Display unit
10: Control unit
11: Transmitting antenna
12: Receiving antenna
13: Pulse generation unit
14: Delay unit
15: Gate unit
23: Preprocessing unit
24: Buried object determination unit (an example of a buried object position detection unit)
25: Determination result registration unit
26: Display control unit
31: Gain adjustment unit
32: Difference processing unit
33: Moving average processing unit
34: First derivative processing unit
35: Peak detection unit (an example of a signal intensity peak detection unit)
51: Grouping unit
52: Shape determination unit (an example of a depth position peak detection unit)
53: Buried object position determination unit
54: Buried object data integration unit
61: Receiving unit
62: RF data management unit
63: Averaging processing unit
64: Buried object detection unit
65: Scanning error determination unit
65': Scanning error determination unit
66: Display control unit
71: Range setting unit
72: Difference calculation unit
73: Counting unit
74: Determination unit
75: Buried object position erasing unit
76: Notification unit
100: Concrete
100a: Surface
101: Buried object
101a: Buried object
101b: Buried object
101c: Buried object
101d: Buried object

Claims (23)

  1.  A buried object detection device that detects a buried object in an object using data relating to reflected waves of electromagnetic waves radiated toward the object while moving along the surface of the object, the device comprising:
     a receiving unit that receives the data relating to the reflected waves at each timing accompanying the movement;
     a buried object detection unit that detects the buried object using the data relating to the reflected waves obtained while reciprocating along the same path on the surface of the object; and
     a scanning error determination unit that compares data relating to the reflected waves in a predetermined first range in a first movement in which the buried object was detected with data relating to the reflected waves in a second range, corresponding to the first range, in a second movement after the first movement, and determines whether scanning is being performed so as to reciprocate along the same path.
  2.  The buried object detection device according to claim 1, wherein the scanning error determination unit includes a buried object position erasing unit that, when it is determined that the same path is not being reciprocated, erases the detection position of the buried object detected by a movement along the path prior to the movement at which the determination was made.
  3.  The buried object detection device according to claim 1, wherein the scanning error determination unit includes a notification unit that notifies a user that a scanning error has occurred when it is determined that the same path is not being reciprocated.
  4.  The buried object detection device according to claim 1, wherein the first range is a range including the detection position at which the buried object was detected, or a range in the vicinity of the detection position.
  5.  The buried object detection device according to claim 1 or 4, wherein the first range and the second range are at the same distance from a turnaround position of the reciprocating movement.
  6.  The buried object detection device according to claim 4, wherein the first range includes the detection position and is a range before the detection position in the second movement.
  7.  The buried object detection device according to claim 1, wherein the buried object detection unit includes:
     a signal intensity peak detection unit that detects a peak of the signal intensity in the depth direction of the object at each of the timings; and
     a buried object position detection unit that detects the position of the buried object based on the signal intensity peaks detected at each of the timings.
  8.  The buried object detection device according to claim 7, wherein the scanning error determination unit compares the signal intensity at each depth position in the depth direction at each timing in the first range with the signal intensity at each depth position in the depth direction at each timing in the second range, and determines whether scanning is being performed so as to reciprocate along the same path.
  9.  The buried object detection device according to claim 8, wherein
     the first range includes a plurality of first positions specified by the timing and the depth position,
     the second range includes a plurality of second positions specified by the timing and the depth position, and
     the scanning error determination unit includes:
     a difference calculation unit that calculates the difference between the signal intensity at each first position and the signal intensity at the second position corresponding to that first position;
     a counting unit that counts the number of the differences equal to or greater than a first threshold; and
     a determination unit that determines whether the count is equal to or greater than a second threshold, and determines a scanning error when the count is equal to or greater than the second threshold.
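The determination described in claim 9 can be summarized as: compute a per-position intensity difference between the two passes, count how many differences reach a first threshold, and flag a scanning error when that count reaches a second threshold. A sketch under assumed array shapes (rows = timings, columns = depth positions); the threshold values 20 and 50 are placeholders, not values taken from the patent.

```python
def is_scanning_error(first_range, second_range, diff_threshold=20, count_threshold=50):
    """Compare signal intensities at corresponding (timing, depth) positions of
    the first and second ranges; declare a scanning error when the number of
    large differences reaches count_threshold (claim 9 sketch)."""
    count = 0
    for row1, row2 in zip(first_range, second_range):        # corresponding timings
        for a, b in zip(row1, row2):                         # corresponding depth positions
            if abs(a - b) >= diff_threshold:                 # difference calculation unit
                count += 1                                   # counting unit
    return count >= count_threshold                          # determination unit
```

A mismatch between passes (e.g. the device was returned along a different path) produces many large differences, which pushes the count over the second threshold.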
  10.  The buried object detection device according to claim 7, wherein the buried object detection unit further includes a difference processing unit that detects, in the depth direction or in the opposite, surface direction, the difference in change of the signal intensity at a given depth position from the signal intensity at the preceding depth position.
  11.  The buried object detection device according to claim 10, wherein the signal intensity peak detection unit detects, as a peak of the signal intensity, at least one of a depth position at which the difference changes from decreasing to increasing and a depth position at which the difference changes from increasing to decreasing.
  12.  The buried object detection device according to claim 11, wherein the buried object detection unit includes:
     a grouping unit that groups, among the depth positions of the signal intensity peaks detected at each of the timings, the depth positions of a plurality of signal intensity peaks that are consecutive within a predetermined interval in the movement direction; and
     a depth position peak detection unit that detects a peak of the depth positions of the group in the plane defined by the movement direction and the depth direction.
  13.  The buried object detection device according to claim 12, wherein the buried object detection unit further includes a buried object position determination unit that sets the position of the buried object based on the detected peak of the depth positions.
  14.  The buried object detection device according to claim 13, wherein, when either one of the depth position peak of a group of signal intensity peaks changing from decreasing to increasing and the depth position peak of a group of signal intensity peaks changing from increasing to decreasing is detected in a predetermined range, and no other depth position peak exists within a predetermined range of that peak, the buried object position determination unit does not set the detected peak as the position of the buried object.
  15.  The buried object detection device according to claim 13, wherein, when the depth position peak of a group of signal intensity peaks changing from decreasing to increasing and the depth position peak of a group of signal intensity peaks changing from increasing to decreasing are adjacent to each other within a predetermined range, the buried object position determination unit sets the position of the shallower peak as the position of the buried object.
  16.  The buried object detection device according to claim 13, wherein, when three or more depth position peaks of groups of signal intensity peaks changing from decreasing to increasing and depth position peaks of groups of signal intensity peaks changing from increasing to decreasing are alternately adjacent within a predetermined range, the buried object position determination unit sets the position of the shallowest peak as the position of the buried object.
  17.  The buried object detection device according to claim 13, wherein, in the reciprocating movement, the buried object position determination unit adopts the position of the buried object obtained when the number of adjacent depth position peaks is larger.
  18.  The buried object detection device according to claim 12, wherein the depth position peak detection unit determines that the buried object exists when the group has a predetermined shape in the plane defined by the movement direction and the depth direction, and determines that the buried object does not exist when the group does not have the predetermined shape.
  19.  The buried object detection device according to claim 1, wherein the buried object detection unit further includes an averaging processing unit that, during the reciprocating movement, averages the signal intensity in the depth direction of the object at each of the timings with the signal intensity in the depth direction of the object at each previously received timing.
  20.  The buried object detection device according to claim 1, further comprising a display unit that, for each movement of the reciprocating movement, displays the signal intensity in the plane defined by the movement direction and the depth direction, wherein, when the buried object is detected, the display unit indicates the detection position of the buried object together with the display.
  21.  The buried object detection device according to claim 20, wherein the display unit erases the detection position of the buried object when it is determined that the same path is not being reciprocated.
  22.  A buried object detection method for detecting a buried object in an object using data relating to reflected waves of electromagnetic waves radiated toward the object while moving along the surface of the object, the method comprising:
     a receiving step of receiving the data relating to the reflected waves at each timing accompanying the movement;
     a buried object detection step of detecting the buried object using the data relating to the reflected waves obtained while reciprocating along the same path on the surface of the object; and
     a scanning error determination step of comparing data relating to the reflected waves in a predetermined first range in a first movement in which the buried object was detected with data relating to the reflected waves in a second range, corresponding to the first range, in a second movement after the first movement, and determining whether scanning is being performed so as to reciprocate along the same path.
  23.  The buried object detection method according to claim 22, further comprising a buried object position erasing step of, when it is determined that the same path is not being reciprocated, erasing the detection position of the buried object detected by a movement along the path prior to the movement at which the determination was made.
PCT/JP2019/040656 2018-12-28 2019-10-16 Embedded object detection device and embedded object detection method WO2020137101A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018247889A JP6984582B2 (en) 2018-12-28 2018-12-28 Buried object detection device and buried object detection method
JP2018-247889 2018-12-28

Publications (1)

Publication Number Publication Date
WO2020137101A1 true WO2020137101A1 (en) 2020-07-02

Family

ID=71128871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/040656 WO2020137101A1 (en) 2018-12-28 2019-10-16 Embedded object detection device and embedded object detection method

Country Status (2)

Country Link
JP (1) JP6984582B2 (en)
WO (1) WO2020137101A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7467269B2 (en) * 2020-07-31 2024-04-15 株式会社東芝 Signal processing device, cutting device, and signal processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000329848A (en) * 1999-05-24 2000-11-30 Osaka Gas Co Ltd Probing method and device thereof
US20050156776A1 (en) * 2003-11-25 2005-07-21 Waite James W. Centerline and depth locating method for non-metallic buried utility lines
JP2007163271A (en) * 2005-12-13 2007-06-28 Kddi Corp Underground radar image processing method
JP2013024873A (en) * 2011-07-15 2013-02-04 Hilti Ag Detector for detecting material in base, and method
JP2013250107A (en) * 2012-05-31 2013-12-12 Japan Radio Co Ltd Buried object exploration apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02205701A (en) * 1989-02-04 1990-08-15 Hazama Gumi Ltd Method for measuring position of reinforcing bar in concrete structure
JP2866885B2 (en) * 1990-01-31 1999-03-08 日本電信電話株式会社 Method and apparatus for measuring depth of object in buried medium and relative permittivity of buried medium
JP4168040B2 (en) * 2005-04-05 2008-10-22 株式会社きんでん Embedded object search processing method and apparatus, embedded object search processing program, and recording medium recording the program
JP4858117B2 (en) * 2006-11-24 2012-01-18 パナソニック電工株式会社 Object detection device
JP5062921B1 (en) * 2012-05-01 2012-10-31 株式会社ウオールナット Cavity thickness estimation method and apparatus
JP6478578B2 (en) * 2014-11-18 2019-03-06 大阪瓦斯株式会社 Exploration equipment

Also Published As

Publication number Publication date
JP2020106491A (en) 2020-07-09
JP6984582B2 (en) 2021-12-22

Similar Documents

Publication Publication Date Title
US7643159B2 (en) Three-dimensional shape measuring system, and three-dimensional shape measuring method
US7657099B2 (en) Method and apparatus for processing line pattern using convolution kernel
WO2020137101A1 (en) Embedded object detection device and embedded object detection method
US9115983B2 (en) Position measurement apparatus and position measuring method
JP3685970B2 (en) Object detection device
US10893846B2 (en) Ultrasound diagnosis apparatus
JP2022168956A (en) Laser measuring device, and measurement method thereof
CN113099120A (en) Depth information acquisition method and device, readable storage medium and depth camera
JP6589619B2 (en) Ultrasonic diagnostic equipment
JP6717881B2 (en) Distance measuring device with polarization filter
US20110304586A1 (en) Infrared type handwriting input apparatus and scanning method
CN113671513B (en) Ranging system and calibration method of ranging sensor
KR102141051B1 (en) Method for detecting target and equipment for detecting target
JP7378203B2 (en) Data processing equipment and buried object detection equipment
US11259779B2 (en) Ultrasound body tissue detecting device, ultrasound body tissue detecting method, and ultrasound body tissue detecting program
JP7371370B2 (en) Buried object detection device and buried object detection method
WO2020195347A1 (en) Embedded object detection device and embedded object detection method
JP2020041984A (en) Data processing device and buried material detection device
US20210181155A1 (en) Method for evaluating corroded part
US20240046660A1 (en) Radar device
KR102141052B1 (en) Method for detecting target and equipment for detecting target
JP2008180646A (en) Shape measuring device and shape measuring technique
KR102141050B1 (en) Method for detecting target and equipment for detecting target
WO2020195342A1 (en) Embedded object detection device and embedded object detection method
CN118196162A (en) Acquiring depth maps

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19902587

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19902587

Country of ref document: EP

Kind code of ref document: A1