WO2014073571A1 - Image processing device for self-propelled industrial machine and image processing method for self-propelled industrial machine - Google Patents

Image processing device for self-propelled industrial machine and image processing method for self-propelled industrial machine

Info

Publication number
WO2014073571A1
WO2014073571A1 (Application PCT/JP2013/080022)
Authority
WO
WIPO (PCT)
Prior art keywords
range
image data
image
alert
self
Prior art date
Application number
PCT/JP2013/080022
Other languages
English (en)
Japanese (ja)
Inventor
守飛 太田
Original Assignee
Hitachi Construction Machinery Co., Ltd. (日立建機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Construction Machinery Co., Ltd. (日立建機株式会社)
Priority to JP2014545734A (granted as JP6072064B2)
Publication of WO2014073571A1

Links

Images

Classifications

    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F 9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F 3/00 - E02F 7/00
    • E02F 9/24 Safety devices, e.g. for preventing overload
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60P VEHICLES ADAPTED FOR LOAD TRANSPORTATION OR TO TRANSPORT, TO CARRY, OR TO COMPRISE SPECIAL LOADS OR OBJECTS
    • B60P 1/00 Vehicles predominantly for transporting loads and modified to facilitate loading, consolidating the load, or unloading
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F 9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F 3/00 - E02F 7/00
    • E02F 9/26 Indicating devices
    • E02F 9/261 Surveying the work-site to be treated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R 2300/8093 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning

Definitions

  • the present invention relates to an image processing apparatus of a self-propelled industrial machine that performs predetermined image processing on image data captured by a photographing unit provided in a self-propelled industrial machine and an image processing method thereof.
  • the self-propelled industrial machines include transport machines such as dump trucks and working machines such as hydraulic shovels.
  • the dump truck is provided with a vessel which can be undulated on the frame of the vehicle body.
  • An excavated object excavated by a hydraulic shovel or the like is loaded onto the vessel of the dump truck. After a predetermined amount of excavated material is loaded in the vessel, the dump truck travels to a predetermined accumulation site and discharges the loaded excavated material at that site.
  • the dump truck is a large transport vehicle, and there are many blind spots that cannot be seen by the operator in the cab.
  • a camera is mounted on the dump truck, and an image captured by the camera is displayed on the monitor of the driver's cab.
  • the camera takes an image of the area that is the operator's blind spot. By displaying the image captured by the camera on the monitor, the operator obtains a supplementary view of the blind spot and can thereby recognize the situation around the dump truck in detail.
  • Patent Document 1 discloses a technique for performing predetermined image processing on an image acquired by a camera of a hydraulic shovel to recognize an obstacle. In the technology of Patent Document 1, it is detected by image processing whether or not an intruding moving body is in the working range of the hydraulic shovel. By performing this image processing, the distance of the intruding moving body and the degree of certainty as a human body are evaluated.
  • a predetermined visual field range exists in the camera, and an image of this visual field range is displayed on the screen of the monitor as image data. Then, as in the technique of Patent Document 1 described above, by performing predetermined image processing on image data, an obstacle such as a worker, a service car, or another work machine is detected. As a result, the operator who gets in the driver's cab can know that when an obstacle is detected.
  • a range close to a self-propelled industrial machine such as a dump truck needs to be watched with caution, and this range is regarded as the required alert range.
  • This alert range can be set as a range in which the dump truck and the obstacle may come in contact with each other. Therefore, all of the alert range needs to be included in the camera view range. That is, the alert range is a partial range of the field of view of the camera. For this reason, a range other than the alert range is generated in the visual field range of the camera.
  • the range outside the alert range is an alert-unnecessary range; even if an obstacle exists in this range, no problem arises, and the operator does not need to be notified of it.
  • if obstacles in such a range are nevertheless reported, the operator is notified of information that does not need to be recognized, which confuses the operator and reduces work efficiency.
  • the alert-unnecessary range is a distant part of the camera's visual field range, and the pixels imaging this range become coarse. As a result, even non-obstacles may be falsely detected as obstacles, and genuine obstacles may fail to be detected.
  • if image processing is also performed on the alert-unnecessary range, the amount of information to be processed increases, and the time required for processing also increases.
  • the object of the present invention is to automatically set the alert range for detecting an obstacle within the camera's field of view, and to ensure that the alert range does not deviate even if the camera's field of view subsequently shifts.
  • in order to achieve this object, the image processing apparatus for a self-propelled industrial machine comprises: an imaging unit that is provided in the self-propelled industrial machine and images a peripheral region such that a part of a structure constituting the self-propelled industrial machine is included in its field of view; a structure detection unit that detects the structure in the image data based on the image data captured by the imaging unit; and an alert range setting unit that sets, in the image data and based on the structure detected by the structure detection unit, a predetermined range in which an obstacle located in the peripheral region is to be detected as the required alert range.
  • the image processing apparatus may further include an obstacle detection unit configured to detect the obstacle only in the alert range in the image data.
  • the image processing apparatus may further include a shift determination unit that determines whether or not the view range of the imaging unit has shifted beyond an allowable range, based on whether the number of differing pixels is equal to or greater than a threshold.
  • the self-propelled industrial machine may include a traveling speed detection unit that detects a traveling speed, and the alert range may be changed based on the traveling speed detected by the traveling speed detection unit.
  • the image picked up by the photographing unit is displayed on the image display means.
  • the image of the imaging unit may be subjected to viewpoint conversion by signal processing so as to form a bird's-eye view from an upper viewpoint.
  • in the image processing method for a self-propelled industrial machine, a photographing unit provided in the self-propelled industrial machine photographs a surrounding area including a part of a structure of the self-propelled industrial machine; the structure in the image data is detected based on the image data captured by the imaging unit; a predetermined range in which an obstacle located in the peripheral area is to be detected is set as the required alert range in the image data based on the detected structure; and processing to detect the obstacle is performed only on the required alert range in the image data.
  • the imaging unit performs imaging so that the structure of the self-propelled industrial machine is included in the field of view, detects the structure edge, and sets the alert range based on the structure edge.
  • FIG. 6A is a diagram showing an example of the alert range and the alert-unnecessary range, in which (a) is image data in which the alert range is set based on edge line segment data and (b) is the image data of (a) after distortion restoration processing. Further figures show an example of the display image of the first embodiment and a block diagram of the configuration of the image processing device of the second embodiment.
  • Another figure shows an example of specifying the front structure when a shift has occurred in the visual field range, in which (a) is image data from which edge pixels were extracted and (b) is the image data after the conversion processing applied to (a); a further figure shows (a) image data in which the alert range is set based on edge line segment data and (b) the image data of (a) after distortion restoration processing.
  • Further figures show a block diagram of the image processing apparatus of the third embodiment, an example of the image data of the fourth embodiment, an example of an image obtained by binarizing that image data, an example of the display image of the fourth embodiment, an explanatory drawing of the principle of bird's-eye-view conversion of a camera image, and an example in which the display image of the second embodiment is displayed as a bird's-eye view.
  • in the present embodiment, a dump truck as a transport vehicle is used as the self-propelled industrial machine.
  • however, any other self-propelled industrial machine, such as a hydraulic shovel having a lower traveling body, may also be applied.
  • the present embodiment can be applied to any self-propelled industrial machine that performs a predetermined operation (such as transportation or digging).
  • “left” is the left side when viewed from the driver's cab
  • “right” is the right side when viewed from the driver's cab.
  • FIG. 1 shows an example of the dump truck 1 of the present embodiment.
  • the dump truck 1 includes a vehicle body frame 2, front wheels 3 (3L and 3R), rear wheels 4 (4L and 4R), a vessel 5, a camera 6, a cab 7, an image processing device 8, and a monitor 9.
  • the vehicle body frame 2 forms the main body of the dump truck 1, and front wheels 3 and rear wheels 4 are provided on the vehicle body frame 2.
  • the front wheel 3R is the right front wheel 3 and the front wheel 3L is the left front wheel 3.
  • the rear wheel 4R is a right rear wheel 4 and the rear wheel 4L is a left rear wheel 4.
  • a vessel 5 is a loading platform for loading soil, minerals, and the like. The vessel 5 is configured to be able to be undulated.
  • a camera 6 as an imaging unit can be installed at an arbitrary position on the dump truck 1.
  • FIG. 2 shows a side view of the dump truck 1, and the camera 6 is mounted in front of the cab 7.
  • the camera 6 performs imaging of the peripheral area of the dump truck 1 in a field of view range 6A (the range of the broken line in the drawing) which looks diagonally downward and forward of the dump truck 1.
  • in the visual field range 6A, a worker M is present as an obstacle with which contact must be avoided.
  • Other obstacles include work machines and service cars, which may be included in the visual field range 6A.
  • the field of view range 6A of the camera 6 is obliquely downward in front of the dump truck 1 so that part of the front structure 10 is included in the field of view range 6A.
  • the visual field range 6A includes a part of the front structure 10.
  • the visual field range 6A which is the peripheral area photographed by the camera 6 may be a direction other than the front of the dump truck 1, or any direction such as the rear, left or right of the dump truck 1 may be a visual field.
  • by setting the visual field range 6A to the rear of the dump truck 1, it is possible to display an image of the situation behind the vehicle when moving the dump truck 1 backward. In this case, the visual field range 6A includes a part of the rear structure.
  • the operator's cab 7 is provided with various operation means for the operator to ride and operate the dump truck 1. For example, a shift lever or the like for moving the dump truck 1 forward or backward is provided as the operating means.
  • An image processing device 8 and a monitor 9 are provided in the driver's cab 7, and the image processing device 8 performs predetermined image processing on image data generated by the camera 6 taking a picture.
  • the image data subjected to the image processing is displayed on the monitor 9.
  • the monitor 9 is a display device, and basically, an image taken by the camera 6 is displayed on the monitor 9.
  • the image processing device 8 is an image processing controller that performs image processing on image data of a video captured by the camera 6. Then, the image data subjected to the image processing is displayed on the monitor 9 as a display image. As shown in FIG. 3, the image processing apparatus 8 includes a distortion correction unit 21, a smoothing processing unit 22, an edge extraction unit 23, a structure edge determination unit 24, an edge line segment data determination unit 25, and a caution range setting unit 26. And a distortion restoration unit 27 and a display image generation unit 28.
  • the image captured by the camera 6 is input to the distortion correction unit 21 as image data.
  • the distortion correction unit 21 corrects distortion of image data using lens parameters of the camera 6 (focal length, optical axis center of image, lens distortion coefficient, etc.).
  • the image reflected by the lens of the camera 6 is not distorted at the center, but is distorted as it goes away from the center. Therefore, the image data obtained by photographing by the camera 6 is distorted as a whole.
  • image data with high linearity can be obtained by performing correction using the above-mentioned parameters.
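  • As an illustrative aside (not part of the original disclosure), the sketch below shows this kind of distortion correction by inverse mapping, assuming a single-coefficient radial model; the text only names the parameter categories (focal length, optical axis center of the image, lens distortion coefficient), so the model, the function name undistort, and the parameter names are assumptions.

```python
import numpy as np

def undistort(image, fx, fy, cx, cy, k1):
    """Correct radial lens distortion by inverse mapping (nearest-neighbour).

    A minimal single-coefficient radial model is assumed: for every pixel of
    the corrected output image, the corresponding position in the raw
    (distorted) image is computed and sampled.
    """
    h, w = image.shape[:2]
    # Pixel grid of the corrected (output) image.
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalised coordinates relative to the optical axis centre (cx, cy).
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Radial distortion factor: where each corrected pixel lies in the raw image.
    factor = 1.0 + k1 * r2
    u_src = np.clip(np.rint(x * factor * fx + cx), 0, w - 1).astype(int)
    v_src = np.clip(np.rint(y * factor * fy + cy), 0, h - 1).astype(int)
    return image[v_src, u_src]
```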
  • the smoothing processing unit 22 receives the image data corrected by the distortion correction unit 21 and performs smoothing processing.
  • the smoothing process is a process of reducing a change in pixel value between each pixel of image data to generate a smooth image. By performing the smoothing process on the image data, it is possible to smooth the change of the pixel value, and it is possible to make the image data smoothed as a whole.
  • any method can be used for the smoothing process, for example, a moving average filter, a median filter, a Gaussian filter or the like can be used.
  • the edge extraction unit 23 performs edge extraction processing on the image data that has been subjected to the smoothing processing by the smoothing processing unit 22. For this reason, pixels having a large change in pixel value between adjacent pixels are extracted as edge pixels, and a shape in which the edge pixels are continuously formed is set as an edge. Thus, an edge having a large change in pixel value is extracted from the image data.
  • for the edge extraction, a method such as a Sobel filter or Canny's algorithm can be used.
  • the structure edge determination unit 24, which serves as the structure detection unit, determines an edge whose shape is close to the boundary of the front structure 10 in the image data to be the edge of the front structure 10.
  • the structure edge determination unit 24 straightens the edges in the image data using a linearization method such as Hough transformation. Then, a straightened edge is determined as an edge line segment, and an edge line segment having high connectivity is determined as a boundary (edge) of the front structure 10.
  • an edge line segment can be expressed using coordinates in the image data; here, the coordinates of the intersections between edge line segments, and the coordinates of the intersections between the edge line segments and the lower end of the image data P (the end closest to the dump truck 1), are stored as edge line segment data.
  • the edge line segment data determination unit 25 compares an arbitrary number of edge line segment data, including the latest edge line segment data. Since the camera 6 has a predetermined imaging cycle and image data is periodically acquired, edge line segment data is also periodically acquired. The edge line segment data determination unit 25 stores the past edge line segment data, and calculates an average value with the latest edge line segment data. Then, the average value of the calculated edge line segment data is determined as the determined edge line segment data.
  • Each part from the distortion correction unit 21 to the edge line segment data determination unit 25 is a structure detection unit for detecting the front structure 10.
  • the caution range setting unit 26 sets a caution range based on the edge line segment data input from the edge line segment data determination unit 25.
  • the relative positional relationship between the alert range and the front structure 10 of the dump truck 1 is constant, and this relationship can be treated as known based on the mounting height and depression angle of the camera 6, its number of pixels, its angle of view, and so on. Therefore, the alert range setting unit 26 sets a predetermined range as the alert range based on the boundary of the front structure 10 obtained from the edge line segment data.
  • the distortion restoration unit 27 restores the distortion using the same camera lens parameters that the distortion correction unit 21 used for correction.
  • the image data up to the distortion restoration unit 27 is data in which distortion of the camera lens of the camera 6 is corrected, and distortion occurs in the image of the original camera 6. Therefore, when displaying the image of the camera 6 inherently, the distortion restoration unit 27 performs processing to restore distortion to the image data. Due to the restoration of this distortion, distortion also occurs in the required alert range in the image data.
  • the display image generation unit 28 generates a display image to be displayed on the monitor 9.
  • the display image at this time is image data in which distortion is restored, and a caution range is set in this.
  • the display image creation unit 28 outputs the display image in which the alert range is set to the monitor 9 and displays the display image on the monitor 9. As a result, the operator boarding the cab 7 can recognize the image of the camera 6 by visually recognizing the monitor 9.
  • the camera 6 installed on the dump truck 1 captures the visual field range 6A looking obliquely downward, and performs shooting at a predetermined shooting cycle (expressed, for example, in frames per second). As a result, the image data of one frame obtained for each shooting cycle is periodically output to the image processing device 8 (step S1).
  • the camera 6 is installed so that the front structure 10 is included in the visual field range 6A, and the front structure 10 is included in the image data.
  • an example of the image data obtained by the camera 6 photographing the visual field range 6A is shown in the figure.
  • the camera 6 has a field of view obliquely downward, and as shown in the figure, the ground in the shooting direction of the camera 6 is projected on the image data P as a background.
  • the site where the dump truck 1 is operated is under a severe environment, the ground is not leveled, and there are many irregularities.
  • in the image data P, these are projected as uneven portions R.
  • the field of view of the camera 6 is set so that the front structure 10 of the dump truck 1 is projected in the field of view range 6A. Therefore, the front structure 10 is also included in the image data P.
  • the image data P described above is output to the distortion correction unit 21 of the image processing device 8 and is also output to the display image generation unit 28.
  • the image data P input to the distortion correction unit 21 is subjected to distortion correction based on the lens parameters of the camera 6 (step S2).
  • by performing distortion correction on the image data P using the lens parameters of the camera 6, the distortion in the portions away from the center of the image data P is corrected.
  • as a result, the image data P is corrected to an image having high linearity, as shown in the figure.
  • the image data P subjected to distortion correction processing is input to the smoothing processing unit 22.
  • the smoothing processing unit 22 performs a smoothing process on the image data P (step S3).
  • the smoothing processing unit 22 generates smooth image data P by reducing the difference in pixel value between adjacent pixels for each pixel of the image data P.
  • the edge extraction unit 23 performs edge extraction processing.
  • the pixel values of the image data P vary locally due to unevenness of the ground, differences in illuminance, and the like, and these variations may be detected as edges.
  • these are noise components, and the smoothing processing unit 22 performs smoothing processing in order to remove noise components in advance.
  • by the smoothing process, these locally varying pixel values are smoothed and the noise components are removed.
  • the smoothing processing is performed using the smoothing filter shown in FIG. 7, but any method can be used as long as the difference in pixel value between adjacent pixels can be reduced.
  • FIG. 7 shows an example in which a Gaussian filter is applied as the smoothing filter.
  • the Gaussian filter is a filter in which the filter coefficients of pixels close to one pixel (central pixel) in the image data P are large and the filter coefficients of pixels far from the central pixel are small in consideration of the spatial arrangement of pixels.
  • the Gaussian filter then takes a weighted average of each filtered pixel.
  • the following Gaussian function (Expression 1) can be used as the filter coefficient.
  • (sigma) in Formula 1 is a standard deviation.
  • a Gaussian filter of 3 pixels × 3 pixels, nine pixels in total, is used.
  • the Gaussian filter is not limited to nine pixels; it may be 5 pixels × 5 pixels, 25 pixels in total, or larger.
  • the Gaussian filter using the Gaussian function of Equation 1 described above is configured of a central pixel and peripheral pixels (eight pixels).
  • the filter coefficient K of the central pixel is the largest, the filter coefficients L of the four pixels above, below, and to the left and right of the central pixel are the second largest, and the filter coefficients M of the remaining four diagonal pixels are the smallest; that is, "K > L > M".
  • the filter coefficients are weighted, and the sum of the filter coefficients is "1".
  • the smoothing processing unit 22 performs the smoothing processing using the above-described Gaussian filter on all the pixels of the image data P. Specifically, the pixel value of the central pixel is multiplied by the filter coefficient K, the pixel values of the four pixels above, below, left, and right of the central pixel are multiplied by the filter coefficient L, and the pixel values of the other four pixels are multiplied by the filter coefficient M. Then, the weighted pixel values of the surrounding eight pixels are added to the weighted pixel value of the central pixel, and the resulting sum becomes the smoothed value of the central pixel. This process is performed on all the pixels of the image data P.
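  • As an illustration of the weighted-average smoothing just described, the following sketch (not from the disclosure) builds a 3 × 3 kernel from the standard two-dimensional Gaussian, which is assumed here since Expression 1 itself is not reproduced in this text, normalizes the coefficients so that they sum to 1 (giving K > L > M), and applies the weighted average to every pixel of a grayscale image. Function and parameter names are illustrative.

```python
import numpy as np

def gaussian_kernel_3x3(sigma):
    """Build a 3x3 kernel from the standard 2D Gaussian and normalise it so
    that the coefficients sum to 1 (centre K, edge L, corner M with K > L > M)."""
    offsets = np.arange(-1, 2)
    xx, yy = np.meshgrid(offsets, offsets)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

def smooth(image, sigma=0.8):
    """Weighted average of each pixel and its 8 neighbours (grayscale image;
    border pixels are handled by replicating the edge)."""
    kernel = gaussian_kernel_3x3(sigma)
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1]]
    return out
```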
  • in the image data P, the uneven portions R on the ground and the front structure 10 are shown.
  • a pixel of an uneven portion R differs in pixel value from its surrounding pixels. Therefore, when the smoothing processing unit 22 performs the smoothing process on the image data P, the difference in pixel value between the uneven portion R and the surrounding pixels is reduced, and the image data P is smoothed. The values of the filter coefficients K, L, and M are set such that the boundary of the front structure 10 in the image data P is detected as an edge, but the uneven portions R and the like are not. As a result, as shown in FIG. 8, the uneven portions R in the image data P can be significantly reduced. Since the values of the filter coefficients K, L, and M are determined by the standard deviation σ of the Gaussian function, the standard deviation σ is set such that the uneven portions R are not detected.
  • the difference between the pixel of the uneven portion R and the surrounding pixels may become large depending on the size, the illumination condition, etc. in the uneven portion R.
  • in such cases the uneven portions R cannot be completely removed from the image data P and some of them remain; even so, their number can be significantly reduced.
  • the boundary between the background and the front structure 10 is clearly distinguishable.
  • the dump truck 1 is often colored in a color that can be easily distinguished from the ground or the surrounding environment from the viewpoint of grasping the surrounding situation.
  • the front structure 10 also has a color clearly different from the background such as the ground and the surrounding environment. Therefore, at the boundary between the background and the front structure 10, the difference between the pixel value of the front structure 10 and the pixel value of the background is large. Since this difference is very large, most of the boundary remains even after the smoothing process.
  • the edge extracting unit 23 extracts, as an edge pixel, a pixel having a difference in pixel value between adjacent pixels equal to or larger than a predetermined threshold (a pixel having a large difference) from the image data P subjected to the smoothing process (step S4) .
  • a Canny algorithm is used as edge extraction processing for extracting edge pixels.
  • the Canny algorithm uses two types of Sobel filters, a Sobel filter in the horizontal direction (horizontal direction: X direction) and a Sobel filter in the vertical direction (longitudinal direction: Y direction) as shown in FIG.
  • the Sobel filter in the X direction emphasizes the contour in the X direction of the image data P
  • the Sobel filter in the Y direction emphasizes the contour in the Y direction.
  • Edge pixels are extracted for all pixels of the image data P using these two Sobel filters.
  • the edge extraction unit 23 compares the edge intensity g of every pixel of the Sobel-filtered image data P against a first threshold T1 and a second threshold T2 (where T1 > T2). Focusing on one pixel of the image data P, if the edge strength g of the pixel is equal to or greater than the first threshold T1, the pixel is detected as an edge pixel; if the edge strength g is less than the second threshold T2, the pixel is detected as not being an edge pixel.
  • when the edge strength of the focused pixel is equal to or greater than the second threshold T2 and less than the first threshold T1, the pixel is detected as an edge pixel only if an edge pixel exists in its vicinity (in particular, adjacent to it); otherwise it is not detected as an edge pixel.
  • image data P in which edge pixels are extracted as shown in FIG. 10A is generated. Then, a shape in which edge pixels are continuous is extracted as an edge E.
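  • A minimal sketch (not from the disclosure) of the gradient-and-two-threshold classification described above is given below; it uses the horizontal and vertical Sobel filters and the thresholds T1 > T2, and checks the 8-neighbourhood in a single pass. The full Canny algorithm additionally performs non-maximum suppression and iterative hysteresis, which are omitted here; names and defaults are illustrative.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def convolve3(img, k):
    """Apply a 3x3 filter to a grayscale image with replicated borders."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def edge_pixels(img, t1, t2):
    """Classify pixels by edge strength g: g >= T1 -> edge, g < T2 -> not edge,
    T2 <= g < T1 -> edge only if an 8-neighbour is already a strong edge."""
    assert t1 > t2
    g = np.hypot(convolve3(img, SOBEL_X), convolve3(img, SOBEL_Y))
    strong = g >= t1
    weak = (g >= t2) & (g < t1)
    # A weak pixel is kept when at least one of its 8 neighbours is strong.
    sp = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            if dy == 1 and dx == 1:
                continue
            near_strong |= sp[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return strong | (weak & near_strong)
```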
  • the front structure 10 is determined from the image data P by performing the structure edge determination process on the image data P from which the edge E is extracted (step S5). In the edge extraction process, pixels having a large difference between adjacent pixels are extracted as edge pixels. As a result, since the pixels that originally projected the uneven portion R are hardly detected as edge pixels, the accuracy of detecting the front structure 10 is dramatically improved.
  • the boundary of the front structure 10 is likely to be detected as an edge pixel because the difference in pixel value is large.
  • the edge E may not exhibit linearity and may exhibit irregular changes.
  • the boundary of the front structure 10 which originally has linearity may lose linearity, and part of the boundary may not be detected.
  • the structure edge determination unit 24 extracts a straight line from the edge pixel of the image data P.
  • Hough transform is used as a known method here.
  • straight lines can be extracted from edge pixels, methods other than Hough transform may be used.
  • by applying the Hough transform, the edges showing irregular changes become straight lines L and L1 to L4, as shown in FIG. 10(b).
  • attention is paid to straight lines L1 and L2 extending in the X direction and straight lines L3 and L4 extending in the Y direction.
  • the structure edge determination unit 24 selects the longest straight line L1 among the straight lines L1 and L2, and selects two straight lines L3 and L4.
  • the structure edge determination unit 24 extends straight lines L1, L3, and L4 in the image data P.
  • the straight line L1 is an extending straight line in the X direction
  • the straight lines L3 and L4 are straight lines extending in the Y direction.
  • the straight line L1 intersects with the straight line L3 and the straight line L4.
  • the intersection of the straight line L1 and the straight line L3 is C1
  • the intersection of the straight line L1 and the straight line L4 is C2.
  • an intersection between the straight line L3 and the lower end of the image data P is C3
  • an intersection between the straight line L4 and the lower end of the image data P is C4.
  • Intersection points C1 to C4 indicate the coordinates of pixels in the two-dimensional image data P.
  • the information of the intersections C1 to C4 is data for specifying the edge line segment of the front structure 10, and this is used as edge line segment data.
  • An area surrounded by the edge line segment data (coordinate information of the intersections C1 to C4) is the front structure 10.
  • the structure edge determination unit 24 determines the edge of the front structure 10 based on the edge line segment data.
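  • The following sketch (an illustration only) shows the straight-line extraction and intersection step with a simple (theta, rho) Hough accumulator; the actual unit may use a different parameterization or peak selection, and in practice neighbouring accumulator peaks belonging to the same physical line would also be suppressed. Function names are illustrative.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, top_k=4):
    """Vote edge pixels into a (theta, rho) accumulator and return the
    top_k strongest lines as (theta, rho) pairs."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for t_idx, theta in enumerate(thetas):
        rho = np.rint(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc[t_idx], rho, 1)            # one vote per edge pixel
    flat = np.argsort(acc, axis=None)[-top_k:]   # strongest accumulator bins
    t_idx, r_idx = np.unravel_index(flat, acc.shape)
    return [(thetas[t], r - diag) for t, r in zip(t_idx, r_idx)]

def intersection(line_a, line_b):
    """Intersection of two lines x*cos(theta) + y*sin(theta) = rho, e.g. the
    corner points C1..C4 of the structure boundary (raises for parallel lines)."""
    (ta, ra), (tb, rb) = line_a, line_b
    a = np.array([[np.cos(ta), np.sin(ta)], [np.cos(tb), np.sin(tb)]])
    return np.linalg.solve(a, np.array([ra, rb], dtype=float))  # (x, y)
```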
  • although the front structure 10 may be specified based on the edge line segment data obtained from a single image data P, in order to improve the accuracy of detecting the front structure 10 in the image data P, the edge line segment data determination unit 25 compares edge line segment data (step S6).
  • the image data P is input from the camera 6 to the image processing device 8 at a predetermined cycle, and a plurality of pieces of image data P can be obtained.
  • the edge line segment data determination unit 25 stores edge line segment data based on past image data P, and calculates an average value of edge line segment data based on the latest image data P and past edge line segment data. .
  • the edge line segment data determination unit 25 sets the calculated edge line segment data (average value) as the determined edge line segment data. That is, the front structure 10 in the image data P is identified by the determined edge line segment data.
  • since the edge line segment data determination unit 25 calculates the average value of the edge line segment data of a plurality of image data P, the determination of the edge line segment data is not performed until edge line segment data for a predetermined number of image data P have been stored.
  • as the number of stored edge line segment data increases, the accuracy of the edge line segment data improves, but the processing time becomes longer. Therefore, the number of edge line segment data used in the calculation is determined in consideration of the balance between accuracy and processing time.
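  • A minimal sketch of this store-and-average step (not from the disclosure) could look as follows, keeping the intersection coordinates C1 to C4 of the last n_frames images and returning their mean only once enough frames have been accumulated; the class name and the default of 10 frames are illustrative assumptions.

```python
from collections import deque
import numpy as np

class EdgeLineSegmentAverager:
    """Holds the intersection coordinates (C1..C4) obtained from the last
    n_frames images and returns their mean once enough frames are stored.
    n_frames trades accuracy against the delay before the first decision."""

    def __init__(self, n_frames=10):
        self.history = deque(maxlen=n_frames)

    def update(self, corners):
        # corners: array of shape (4, 2) holding the C1..C4 pixel coordinates.
        self.history.append(np.asarray(corners, dtype=np.float64))
        if len(self.history) < self.history.maxlen:
            return None          # not enough data yet: no determination
        return np.mean(self.history, axis=0)
```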
  • the edge line segment data determined by the edge line segment data determination unit 25 is output to the caution range setting unit 26.
  • the caution range setting unit 26 sets a caution range based on edge line segment data (step S7).
  • the alert range is a range for detecting an obstacle and is a predetermined range from the dump truck 1. As described above, the relative positional relationship between the front structure 10 of the dump truck 1 and the alert range can be treated as known. Therefore, if the front structure 10 in the image data P can be identified, the alert range can be set automatically.
  • FIG. 11A shows a state in which the caution range 31 is set based on edge line segment data.
  • the determined edge line segment data is the information of the intersections C1 to C4, and the front structure 10 is identified by connecting the intersections C1 to C4; the area surrounded by the intersections C1 to C4 in FIG. 11A is therefore the front structure 10.
  • the alert range 31 is set in the image data P based on the relative positional relationship between the front structure 10 and the alert range 31.
  • an area other than the alert range 31 becomes an alert-unnecessary range 32 that does not require alerting. Therefore, when an obstacle is detected from the image data P, the obstacle is detected only in the alert range 31, and the obstacle is not detected in the alert unnecessary range 32.
  • any method can be used for obstacle detection. When detecting an obstacle, the alert-unnecessary range 32 is treated as a mask region, and by masking that region, the detection can be narrowed down to the alert range 31.
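  • As an illustration of deriving the alert range from the detected structure and masking the remainder, the sketch below (not from the disclosure) builds a rectangular alert range at a fixed offset from the structure corners and returns a boolean mask; the rectangular shape, the offsets, and the names are assumptions, since the text only states that the relative positional relationship is known.

```python
import numpy as np

def set_alert_range_mask(image_shape, structure_corners, offsets):
    """Derive a rectangular alert range from the detected structure corners
    C1..C4 and return a boolean mask (True inside the alert range).

    offsets = (left, right, top, bottom) margins in pixels, expressing the
    known relative position of the alert range with respect to the structure.
    Both the rectangular shape and the offsets are illustrative assumptions."""
    h, w = image_shape
    corners = np.asarray(structure_corners, dtype=float)
    xs, ys = corners[:, 0], corners[:, 1]
    left, right, top, bottom = offsets
    x0 = int(np.clip(xs.min() - left, 0, w))
    x1 = int(np.clip(xs.max() + right, 0, w))
    y0 = int(np.clip(ys.min() - top, 0, h))
    y1 = int(np.clip(ys.max() + bottom, 0, h))
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y1, x0:x1] = True   # alert range 31
    return mask                 # False pixels correspond to the range 32
```

  • Obstacle detection would then read only the pixels where this mask is True, or zero out the complement, in the spirit of the masking described above.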
  • the caution range 31 is set based on the image data P subjected to distortion correction by the distortion correction unit 21.
  • the display image displayed on the monitor 9 is image data P output from the camera 6 and is an image not subjected to distortion correction. Therefore, the distortion restoration unit 27 restores the distortion correction for the image data P for which the caution range 31 has been set (step S8).
  • the distortion restoration unit 27 restores the distortion correction of the image data P using the lens parameters of the camera 6 used by the distortion correction unit 21.
  • by this restoration, the alert range 31 is deformed as shown in the figure. Information on the alert range 31 subjected to the distortion restoration processing is output to the display image creation unit 28.
  • the display image generation unit 28 generates an image (display image) to be displayed on the monitor 9 (step S9).
  • the display image generation unit 28 periodically inputs image data P from the camera 6.
  • the image data P is an image that has not been subjected to distortion correction.
  • the display image creation unit 28 inputs the information of the alert range 31 of which the distortion restoration processing has been performed from the distortion restoration unit 27.
  • the display image creation unit 28 superimposes the caution range 31 on the image data P input from the camera 6.
  • this state is shown in the figure: in the image data P, the alert range 31 is the area surrounded by the virtual lines.
  • the alert range 31 is a range for detecting an obstacle, and it does not necessarily have to be displayed explicitly as shown in the figure.
  • the display image creation unit 28 outputs the image data P input from the camera 6 to the monitor 9.
  • An operator who gets in the cab 7 can obtain an auxiliary visual field by visually recognizing the image data P displayed on the monitor 9.
  • the front structure 10 of the dump truck 1 is specified based on the image data P, and the alert range 31 is set from the front structure 10.
  • the alerting range 31 for detecting an obstacle can be automatically set without manual operation, and the time and effort for setting can be largely omitted.
  • even if the dump truck 1 vibrates violently and a shift occurs in the visual field range 6A of the camera 6, no shift occurs in the alert range 31. That is, the relative positional relationship between the front structure 10 and the alert range 31 in the image data P remains constant even when the visual field range 6A shifts, so by setting the alert range 31 based on the front structure 10, a shift of the alert range 31 is avoided.
  • the structure edge determination unit 24 determines the edge line segment of the front structure 10 by detecting the linearity of the edge pixels; however, as long as the boundary between the front structure 10 and the background can be detected, a method other than detecting the edge line segment of the front structure 10 may be used.
  • for example, the front structure 10 can be detected based on differences in pixel value. In this case, it is not necessary to perform distortion correction by the distortion correction unit 21, nor distortion restoration by the distortion restoration unit 27.
  • however, since the front structure 10 has a characteristic linearity, the accuracy of detecting the boundary between the front structure 10 and the background is improved by adopting the method of detecting the edge line segment of the front structure 10.
  • the smoothing processing unit 22 performs smoothing processing on the image data P, thereby reducing the uneven portion R in the image data P.
  • if there are few uneven portions R in the image data P, the edge E of the front structure 10 can also be extracted without performing the smoothing process.
  • the edge line segment data determination unit 25 improves the accuracy of the edge line segment data; when the edge line segment can be clearly identified from a single image data P, the processing by the edge line segment data determination unit 25 may be omitted.
  • however, by determining the edge line segment data as the average of the edge line segment data of a plurality of image data P, the accuracy of the edge line segment data is improved. For this reason, it is desirable that the processing by the edge line segment data determination unit 25 be performed.
  • FIG. 13 shows the configuration of the image processing apparatus 8 of the second embodiment.
  • in the second embodiment, an alert range image generation unit 41, an obstacle detection unit 42, a mask image generation unit 43, and a deviation determination unit 44 are newly added to the first embodiment; since the rest of the configuration is the same as in the first embodiment, its description is omitted.
  • Image data P is periodically output to the image processing apparatus 8 from the camera 6 which is photographing the visual field range 6A.
  • the image data P is output not only to the distortion correction unit 21 and the display image generation unit 28 but also to the caution range image generation unit 41.
  • the alert range image generation unit 41 generates an image of only the alert range 31 in the image data P, as shown in the figure.
  • the alert range 31 in which the distortion has been restored is input from the distortion restoration unit 27, and the image data P is input from the camera 6. Then, only the image within the alert range 31 of the image data P is generated as the alert range image.
  • the caution range image is input to the obstacle detection unit 42.
  • the obstacle detection unit 42 detects an obstacle in the alert range image. Workers, service cars and the like are mainly assumed as obstacles for which contact must be avoided, but, for example, other work machines and the like also become obstacles. Arbitrary methods can be used for the method of detecting an obstacle.
  • the obstacles include a stationary state and a moving state, and a stationary obstacle is a stationary body, and a moving obstacle is a moving body.
  • An arbitrary method can be used as a method of detecting a stationary body, for example, a method of detecting a region of a stationary body by compositely judging an outline, luminance, color or the like in an image can be used.
  • when a certain region has a contour of a predetermined shape, a luminance value within a predetermined range, a specific color, or the like, that region can be detected as a stationary body.
  • a pattern matching method can also be used, in which an obstacle is determined by matching against shape information learned in advance from the contour information of the background and the stationary body.
  • any method can also be used to detect a moving body; for example, by comparing temporally different image data P, a moving body can be detected when a certain region has moved to a different position.
  • a block matching method can be applied to the detection of a moving object. The block matching method will be described later.
  • it is also possible to use a gradient method that uses the relationship between the spatial gradient and the temporal gradient of the brightness at each pixel of the image data P, a background subtraction method that detects the difference between the latest image data and image data in which no moving-object region exists, or a frame subtraction method that generates two difference images from three pieces of image data captured at different times to detect the region of a moving object.
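  • A minimal sketch (not from the disclosure) of the frame subtraction idea named here, combining two difference images obtained from three frames taken at different times, is shown below; the threshold value and names are illustrative.

```python
import numpy as np

def frame_subtraction(prev_img, curr_img, next_img, threshold=20):
    """Detect the moving-object region from three frames taken at different
    times: form two difference images and keep pixels that changed in both.
    The threshold value is an illustrative assumption."""
    d1 = np.abs(curr_img.astype(np.int32) - prev_img.astype(np.int32)) > threshold
    d2 = np.abs(next_img.astype(np.int32) - curr_img.astype(np.int32)) > threshold
    return d1 & d2   # True where the current frame differs from both neighbours
```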
  • the mask image generation unit 43 generates a mask image.
  • the alert range setting unit 26 sets the alert range 31 in the image data P, and sets the other area as the alert unnecessary range 32.
  • the mask image generation unit 43 generates a mask image with the alert unnecessary area 32 as a mask area.
  • the mask image is an image of the same size as the image data P, and the number of pixels is also equal.
  • the mask area (the area 32 not requiring alarming) and the other area (the area 31 requiring alarming) are clearly distinguished.
  • for example, all pixels of the mask region in the mask image can be set to black (pixel value zero), and all pixels of the alert range 31 to white (pixel value 256).
  • the shift determination unit 44 receives the mask image from the mask image generation unit 43 and determines the shift.
  • the shift determination unit 44 holds a past mask image and compares the newly input mask image with the past mask image. The difference between the new mask image and the past mask image is calculated for all pixels, and when there are a predetermined number or more of pixels whose pixel-value difference is equal to or greater than a predetermined value, it is determined that the camera 6 has shifted; otherwise, it is determined that no shift has occurred.
  • the determination result is output to the display image creation unit 28.
  • in step S8, the alert range 31 in which the distortion has been restored is generated.
  • the information of the alerting range 31 is input to the alerting range image generation unit 41.
  • the alert range image generation unit 41 receives the image data P from the camera 6 and deletes the information of the image data P in the area other than the alert range 31, that is, in the alert-unnecessary range 32, turning it into an insensitive region 33.
  • the insensitive region 33 has the same function as the mask region described above.
  • the insensitive regions 33 are all composed of black pixels (zero pixel value) (note that black pixels are shaded in FIG. 15 for convenience of the drawing).
  • the alert range image P1 as shown in FIG. 15 is generated (step S10).
  • the worker M is included as an obstacle in the alert range image P1.
  • the alert range image P1 has the same number of pixels as the image data P in the X and Y directions; the image data P is displayed in the alert range 31 portion, while only black pixels are displayed in the insensitive region 33. In other words, the black pixels of the insensitive region 33 form a mask, and the unmasked area of the image data P is the alert range 31. As shown in the figure, a large number of uneven portions R exist in the alert range 31, and a worker M exists as an obstacle. This alert range image P1 is output to the obstacle detection unit 42.
  • the obstacle detection unit 42 detects an obstacle from the alert range image P1 (step S11). Although any method can be used to detect an obstacle, the case where a block matching method is used will be described here.
  • FIG. 16 shows an example of a moving object detection method using the block matching method.
  • the obstacle detection unit 42 stores the past alert range image P1 and compares the latest alert range image P1 with the past alert range image P1.
  • the alert range image P1 one cycle before the imaging cycle of the camera 6 is taken as the past alert range image P1.
  • a total of 16 pixels of 4 pixels in the X direction and 4 pixels in the Y direction is taken as a region of interest A1.
  • the attention area A1 may adopt the number of pixels other than 16 pixels.
  • a total of 64 pixels of 8 pixels in the X direction and 8 pixels in the Y direction are set as the search range A2 with the attention area A1 as the center.
  • the number of pixels in the search range A2 can also be set arbitrarily.
  • the obstacle detection unit 42 sets the search range A2 to a predetermined position of the image data P, for example, an end in the X direction and the Y direction. Then, the search range A2 is searched.
  • the search method in the search range A2 will be described with reference to FIG.
  • the comparison area A3 is set in the search range A2.
  • the comparison area A3 has four pixels in the X direction and four pixels in the Y direction, as in the case of the attention area A1, and has a total of 16 pixels. Then, the comparison area A3 is set to the end in the X direction and the Y direction.
  • the obstacle detection unit 42 calculates the difference in pixel value between corresponding pixels of the attention area A1 in the past caution range image P1 and the comparison area A3 in the latest caution range image P1.
  • the attention area A1 and the comparison area A3 are both composed of 4 pixels in the X direction and 4 pixels in the Y direction, and each pixel of the attention area A1 corresponds 1:1 to a pixel of the comparison area A3. Therefore, the difference in pixel value is calculated between corresponding pixels, which yields difference values for 16 pixels. Each of the 16 difference values is squared, and the squared values are added together; the value obtained by this calculation is taken as the addition value.
  • the obstacle detection unit 42 determines whether the added value is equal to or less than a predetermined threshold.
  • the threshold value at this time is a value capable of determining whether or not the attention area A1 and the comparison area A3 can be regarded as identical.
  • if the addition value is equal to or less than the threshold, the obstacle detection unit 42 determines that the past attention area A1 matches the latest comparison area A3; otherwise, it determines that they do not match.
  • the comparison area A3 is shifted by one pixel in the X direction for comparison. Let this be a search (scan) in the X direction. When scanning in the X direction is completed, next, the comparison region A3 is shifted by one pixel in the Y direction, and scanning in the X direction is performed again. Performing scanning in the X direction while shifting one pixel in the Y direction is referred to as scanning in the Y direction. Therefore, scanning is performed in the X direction and the Y direction.
  • FIG. 18 shows the method of this scan three-dimensionally.
  • the scanning in the X and Y directions is performed until it is detected that the past attention area A1 matches the latest comparison area A3; if a match is detected, the scan ends at that point. On the other hand, if all the scans in the X and Y directions are completed without detecting a match between the past attention area A1 and the latest comparison area A3, it is recognized that there is no moving body (obstacle: worker M) there.
  • the search range A2 is determined on the basis of the limit distance that the worker M can move at a normal walking speed between the acquisition of the past alert range image P1 and the acquisition of the latest alert range image P1. As described above, the past alert range image P1 is the image one cycle before the latest alert range image P1; therefore, the search range A2 is set based on the distance the worker M can walk during one shooting cycle of the camera 6.
  • the shooting cycle of the camera 6 may be fast or slow. When the shooting cycle is fast, the worker M appears to move very little between frames even while moving; when it is slow, even a slowly moving worker M appears to move quickly. Therefore, when the shooting cycle of the camera 6 is fast, an image from several cycles earlier is used as the past alert range image P1, whereas when the shooting cycle is slow, a relatively recent image (for example, one cycle earlier) is used as the past alert range image P1.
  • the obstacle detection unit 42 performs the search within the search range A2 over all the pixels of the alert range image P1. The search range A2 is first placed at the end of the image in the X and Y directions; when the search there is completed, the search range A2 is shifted in the X direction by its width (8 pixels) and the same search is performed. When the search has been completed for all pixels in the X direction of the alert range image P1, the search range A2 is shifted in the Y direction by its height (8 pixels) and the search is repeated, until the search within the search range A2 has been performed over all the pixels of the alert range image P1.
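  • The block-matching search described above can be sketched as follows (an illustration, not the disclosed implementation): a 4 × 4 attention block of the past image is compared, by the sum of squared pixel differences, against every 4 × 4 comparison block inside the 8 × 8 search range of the latest image, and a match is reported when the sum falls below a threshold. The threshold value and names are illustrative.

```python
import numpy as np

def block_matches(past, latest, y, x, block=4, search=8, threshold=500.0):
    """Search the (search x search) window of the latest image centred on the
    (block x block) attention block at (y, x) of the past image; return True
    when a comparison block with a sum of squared differences (SSD) at or
    below the threshold is found, i.e. the block is considered matched."""
    ref = past[y:y + block, x:x + block].astype(np.float64)
    y0 = max(0, y - (search - block) // 2)
    x0 = max(0, x - (search - block) // 2)
    for dy in range(search - block + 1):
        for dx in range(search - block + 1):
            yy, xx = y0 + dy, x0 + dx
            cand = latest[yy:yy + block, xx:xx + block].astype(np.float64)
            if cand.shape != ref.shape:
                continue  # comparison block clipped at the image border
            ssd = np.sum((cand - ref) ** 2)   # squared differences of 16 pixels
            if ssd <= threshold:
                return True
    return False
```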
  • the alert range image P1 is divided into the alert range 31 for detecting obstacles and the insensitive region 33.
  • the obstacle detection unit 42 recognizes the insensitive area 33 in the alert range image P1. That is, the pixels included in the insensitive area 33 are recognized. Therefore, the obstacle detection unit 42 excludes the dead area 33 when performing a search based on the search range A2. As a result, the worker M as an obstacle can be detected only in the alert range 31. Then, the obstacle detection unit 42 outputs the detected information on the worker M as the obstacle to the display image generation unit 28.
  • the display image creation unit 28 receives the image data P from the camera 6, and receives the information of the worker M as an obstacle from the obstacle detection unit 42. Therefore, the display image creation unit 28 recognizes the worker M in the alert range 31 of the image data P.
  • the display image creation unit 28 superimposes an obstacle marking display M1 on the periphery of the worker M in order to explicitly display the obstacle.
  • the obstacle marking indication M1 is a circumscribed rectangle formed around the worker M, but may of course not be a rectangle.
  • the image data P on which the obstacle marking indication M1 is superimposed is shown in FIG.
  • the image data P becomes a display image and is output to the monitor 9.
  • since the obstacle marking display M1 is superimposed around the worker M as an obstacle, the operator can recognize the presence of the worker M from the obstacle marking display M1.
  • the alert range 31 is shown with a virtual line in FIG. 19 to indicate that obstacle detection is performed only within this range.
  • although the alert range 31 is basically not displayed on the monitor 9, it may be displayed.
  • the image data P is a through image, and an image obtained by viewpoint conversion of the through-image data to an upper viewpoint is a bird's-eye image; therefore, to distinguish it from the bird's-eye image, the acquired image data is hereafter called the through image P.
  • the camera image (through image) acquired by the camera 6 is displayed, but it is also possible to perform signal processing from this camera image to form a bird's-eye view image and display it on the monitor 9. That is, as shown in FIG. 30A, viewpoint conversion processing is performed in which the through image P is an upper viewpoint.
  • as shown in FIG. 30(b), cameras are installed on the dump truck 1 at four locations, namely the front position, the left and right positions, and the rear position, so that the periphery of the dump truck 1 is covered; the bird's-eye view images of the front, left, right, and rear cameras in the respective image areas SPF, SPL, SPR, and SPB can be acquired and displayed on the monitor 9.
  • FIG. 30A shows an image area SPF of the front camera.
  • an icon image Q having the dump truck 1 in a planar shape is disposed at the center of the screen of the monitor 9, and image areas SPF, SPL, SPR, SPB are respectively provided in four directions around the icon image Q. Display the corresponding overhead image.
  • This monitoring image can display the obstacle marking display M1 indicating the worker M in the same manner as the through image P of FIG. 30(a); moreover, because it is a bird's-eye view image, the position of the obstacle relative to the dump truck 1 can also be grasped.
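The viewpoint conversion itself can be sketched as a perspective (homography) warp, assuming OpenCV and assuming that four ground-plane correspondences per camera are known from calibration; the point lists and output size below are placeholders, not values from the embodiment.

```python
import cv2
import numpy as np

def to_birds_eye(through_image, src_pts, dst_pts, out_size):
    """Warp a through image to an upper-viewpoint (bird's-eye) image.
    src_pts: four points on the ground plane in the through image.
    dst_pts: the positions of those points in the bird's-eye image.
    out_size: (width, height) of the bird's-eye image area, e.g. SPF."""
    homography = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(through_image, homography, out_size)
```

In a four-camera arrangement, one such warp would be applied per camera and the resulting areas SPF, SPL, SPR and SPB composited around the icon image Q.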
  • the vibration of the dump truck 1 may cause a shift in the visual field range 6A of the camera 6. Therefore, it is determined whether or not a shift occurs in the visual field range 6A.
  • An allowable range is defined for the deviation of the visual field range 6A: a shift is judged to have occurred when the amount of deviation exceeds the allowable range, and not to have occurred when it is within the allowable range.
  • the information on the alert range 31 is input from the alert range setting unit 26 to the mask image generation unit 43.
  • The mask image generation unit 43 generates a mask image based on the image data P and the alert range 31 (step S12).
  • In the mask image, the alert range 31 of the image data P is formed of white pixels (pixel value 256), and the caution-unneeded range 32 is formed of black pixels (pixel value zero).
  • The mask image generation unit 43 holds the past image data P (for example, from one cycle before) and the corresponding alert range 31, and therefore holds the past mask image. When the latest image data P and alert range 31 are input, the latest mask image is generated. The mask image generation unit 43 outputs the past mask image and the latest mask image to the shift determination unit 44.
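A mask image of the kind described can be sketched as follows, assuming the alert range 31 is available as a polygon; 255 is used for white here because the sketch assumes an 8-bit image.

```python
import cv2
import numpy as np

def make_mask(image_shape, alert_range_polygon):
    """Return a mask image: white inside the alert range 31,
    black in the caution-unneeded range 32."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)          # caution-unneeded range 32
    cv2.fillPoly(mask, [np.int32(alert_range_polygon)], 255)  # alert range 31
    return mask
```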
  • The shift determination unit 44 calculates the amount of shift from the input past mask image and latest mask image, and determines from that amount whether the shift of the visual field range 6A is within the allowable range (step S13). For this purpose, the shift determination unit 44 calculates the pixel-by-pixel difference between the past mask image and the latest mask image.
  • Where a pixel of the alert range 31 in the past mask image overlaps a pixel of the alert range 31 in the latest mask image, the difference is zero, that is, the pixel becomes black.
  • Likewise, where pixels of the caution-unneeded range 32 overlap in the past mask image and the latest mask image, the difference is zero.
  • Where a pixel of the alert range 31 in one mask image overlaps a pixel of the caution-unneeded range 32 in the other, however, the absolute value of the difference is 256, that is, the pixel becomes white.
  • the difference image P2 in FIG. 20 is an image resulting from the calculation of the difference.
  • A predetermined number of pixels is set as a threshold in the shift determination unit 44, and the number of white pixels in the difference image P2 is compared with this threshold.
  • This threshold represents the allowable range of deviation of the visual field range 6A.
  • The allowable range can be set arbitrarily; for example, it can be set to the amount of deviation that begins to affect the operator's visibility. If the number of white pixels is less than the threshold, the shift determination unit 44 determines that the visual field range 6A has not shifted; if it is equal to or greater than the threshold, it determines that the visual field range 6A has shifted.
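The shift determination can be sketched as a per-pixel difference of the two masks followed by a count of the non-zero (white) pixels; the threshold value stands in for the allowable range and is an assumed tuning parameter.

```python
import numpy as np

def field_of_view_shifted(past_mask, latest_mask, threshold_pixels):
    """Return True if the visual field range 6A is judged to have shifted.
    Where the two masks agree the difference is zero (black); where the alert
    range of one overlaps the caution-unneeded range of the other it is white."""
    diff = np.abs(past_mask.astype(np.int16) - latest_mask.astype(np.int16))
    white_count = int(np.count_nonzero(diff))
    return white_count >= threshold_pixels
```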
  • the shift determination unit 44 outputs information on whether or not there is a shift to the display image creation unit 28.
  • When the information that a shift has occurred is input from the shift determination unit 44, the display image creation unit 28 superimposes the shift information G on the image data P.
  • FIG. 21 shows an example of the image data P on which the shift information G is superimposed.
  • The shift information G is displayed at a corner of the image data P and is a mark for visually indicating that the camera 6 has shifted.
  • The shift information G is a notification means for making the operator aware that the camera 6 has deviated beyond the allowable range; based on the shift information G displayed on the monitor 9, the operator can recognize the deviation.
  • the operator who recognizes this can return the camera 6 to a normal position, and can eliminate the shift of the visual field range 6A.
  • Alternatively, the operator can contact maintenance personnel or the like, who can then return the camera 6 to its normal position.
  • The edge extraction unit 23 performs edge extraction processing on the image data P, yielding image data P in which the edge pixels have been extracted. The structure edge determination unit 24 then extracts straight lines from the image data P using a method such as the Hough transform.
  • FIG. 22B shows a state in which a straight line is extracted from the image data P.
  • When no shift occurs in the visual field range 6A, the structure edge determination unit 24 detects straight lines along the X direction and the Y direction.
  • In the present embodiment, because a shift in the inclination direction has occurred in the visual field range 6A, the front structure 10 in the image data P is also inclined. The structure edge determination unit 24 of the present embodiment therefore also extracts straight lines inclined by a predetermined angle from the X direction and the Y direction. That is, taking the predetermined angle as the inclination angle θ, straight lines within the range of the inclination angle θ are extracted from among the detected straight lines.
  • Among the inclined straight lines extracted with respect to the X direction, the longest one is determined to be the straight line of the front structure 10.
  • The straight line L5 is the longest straight line.
  • Similarly, straight lines within the range of the inclination angle θ from the Y direction are extracted.
  • straight lines L7 and L8 are extracted.
  • straight lines L7 and L8 are extended.
  • As shown in FIG. 22(b), the four intersections C5 to C8 of the straight line L5 and the straight lines L7 and L8 are detected. These intersections C5 to C8 become the edge line segment data.
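A hedged sketch of the inclined-line extraction: edges are assumed to have been extracted already (for example with a Canny detector), the probabilistic Hough transform stands in for the Hough method named in the text, and the Hough parameters and tilt tolerance are illustrative values only.

```python
import cv2
import numpy as np

def extract_inclined_lines(edge_image, max_tilt_deg=10.0):
    """Extract straight lines and keep those inclined from the X or Y direction
    by no more than max_tilt_deg (standing in for the inclination angle θ)."""
    lines = cv2.HoughLinesP(edge_image, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    near_x, near_y = [], []
    if lines is None:
        return near_x, near_y
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))   # 0..180 degrees
        if min(angle, 180 - angle) <= max_tilt_deg:             # near the X direction
            near_x.append((x1, y1, x2, y2))
        elif abs(angle - 90) <= max_tilt_deg:                   # near the Y direction
            near_y.append((x1, y1, x2, y2))
    return near_x, near_y

def longest_segment(segments):
    """Pick the longest segment, e.g. as the candidate edge of the front structure 10."""
    return max(segments, key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]), default=None)
```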
  • FIG. 23(a) shows a state in which the alert range 31 has been set based on the edge line segment data. Since a shift in the tilt direction has occurred in the visual field range 6A, the alert range 31 is also tilted, as is the caution-unneeded range 32. However, the alert range 31 is contained entirely within the image data P. The image obtained by applying the distortion restoration processing to the image data P of FIG. 23(a) is the image data P shown in FIG. 23(b).
  • The image data P shown in FIG. 23(b) is output to the alert range image generation unit 41, and an alert range image as shown in FIG. 24 is generated.
  • The alert range image is output to the obstacle detection unit 42, which detects the worker M as an obstacle.
  • The alert range 31 is changed according to the traveling speed of the dump truck 1.
  • The range over which obstacles should be watched for also changes depending on whether the dump truck 1 is stopped, traveling at low speed, or traveling at high speed.
  • When the dump truck 1 is stopped, the alert range 31 need not be very wide, but when the truck is traveling at some speed, a wider alert range 31 is necessary.
  • a traveling speed detection unit 45 is added to the second embodiment.
  • The dump truck 1 is provided with means for detecting the traveling speed, and the detected traveling speed is output to the alert range setting unit 26.
  • The alert range setting unit 26 changes the size of the alert range 31 based on the input traveling speed information.
  • The size of the alert range 31 is switched among the alert range 311 for the stopped state, the alert range 312 for the low-speed range, and the alert range 313 for the high-speed range.
  • The low-speed range is a speed at which the dump truck 1 stops at roughly its current position when the travel stop operation is performed. Note that the alert range described above for the operation of the dump truck 1 is a range requiring alertness, not an image display range on the monitor 9.
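The speed-dependent switching of the alert range can be sketched as a simple selection by speed band; the numeric thresholds below are assumptions, while the identifiers 311, 312 and 313 follow the text.

```python
def select_alert_range(speed_kmh, stop_threshold=0.5, low_speed_threshold=10.0):
    """Choose the alert range according to the traveling speed of the dump truck 1."""
    if speed_kmh <= stop_threshold:
        return 311   # alert range 311 for the stopped state
    if speed_kmh <= low_speed_threshold:
        return 312   # alert range 312 for the low-speed range
    return 313       # alert range 313 for the high-speed range
```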
  • The dump truck 1 has been applied as the self-propelled industrial machine so far, but in the fourth embodiment, instead of the dump truck 1, a hydraulic shovel is applied as the self-propelled industrial machine.
  • The case will be described in which the camera 6 is installed on the hydraulic shovel, the rear structure 50 of the hydraulic shovel is extracted, and the alert range 31 is set based on the rear structure 50.
  • FIG. 26 shows image data P subjected to distortion correction processing by the distortion correction unit 21.
  • The rear structure 50 of the hydraulic shovel is often rounded, so the boundary between the rear structure 50 and the background in the image data P is a curve with a certain radius of curvature.
  • The rear structure 50 of the hydraulic shovel is painted in a color distinct from the background, so the luminance values of the rear structure 50 differ greatly from those of the background. In this embodiment, therefore, pixels whose luminance value is equal to or greater than a predetermined luminance value (threshold) are extracted from among all the pixels of the image data P.
  • The threshold at this point is set to a luminance value at which the color applied to the rear structure 50 can be extracted.
  • the image data P is binarized with the threshold as a boundary, and as shown in FIG. 27, the pixels constituting the rear structure 50 are extracted.
  • Besides the rear structure 50, there may be pixels in the uneven portion R whose luminance value is equal to or greater than the threshold. Therefore, among the extracted pixel regions, the region in contact with the lower end of the screen is determined to be the rear structure 50. The rear structure 50 is thereby identified in the image data P.
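The binarization and bottom-contact test can be sketched as below, assuming OpenCV; the luminance threshold of 200 is an illustrative value, and connected-component labelling is used here as one way of isolating the region that touches the lower end of the screen.

```python
import cv2
import numpy as np

def find_rear_structure(gray_image, luminance_threshold=200):
    """Binarize with a luminance threshold and return a mask of the connected
    region touching the bottom row, taken to be the rear structure 50."""
    _, binary = cv2.threshold(gray_image, luminance_threshold, 255, cv2.THRESH_BINARY)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    bottom_labels = set(labels[-1, :]) - {0}   # labels present on the lowest image row
    if not bottom_labels:
        return None
    # if several regions touch the bottom, keep the largest one
    best = max(bottom_labels, key=lambda lab: stats[lab, cv2.CC_STAT_AREA])
    return (labels == best).astype(np.uint8) * 255
```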
  • FIG. 28 shows an example of a display image displayed on the monitor 9.
  • The relative positional relationship between the rear structure 50 and the alert range 51 is constant, so by identifying the rear structure 50 in the image data P, the alert range 51 indicated by the virtual line is also set. The worker M can then be detected as an obstacle, limited to the alert range 51.
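Because the relative positional relationship between the rear structure 50 and the alert range 51 is fixed, the alert range can be derived from the detected structure with constant offsets; the function and offset values below are purely illustrative assumptions.

```python
def alert_range_from_structure(structure_box, side_margin=40, depth=120):
    """Place the alert range 51 at a fixed offset relative to the bounding box
    of the rear structure 50 (x, y, w, h in image coordinates)."""
    x, y, w, h = structure_box
    # a band of fixed depth just beyond (above, in image coordinates) the structure
    return (x - side_margin, max(0, y - depth), w + 2 * side_margin, depth)
```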
  • FIG. 6 shows the case where the field of view range is tilted, so that the entire screen is displayed tilted. Then, the shift information G is superimposed on the image data P.
  • When the rear structure 50 is identified by binarization in this way, the processing by the smoothing processing unit 22, the edge extraction unit 23, the structure edge determination unit 24 and the edge line segment data determination unit 25 described above is unnecessary.
  • Alternatively, the rear structure 50 may be identified by the processing of each of those units.
  • Or the binarization method may be used, as in the present embodiment.
  • the image displayed on the monitor 9 may be a camera image itself, or may be displayed as a bird's-eye view image by performing viewpoint conversion processing based on the principle of FIG.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Structural Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Civil Engineering (AREA)
  • Mining & Mineral Resources (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Emergency Alarm Devices (AREA)
  • Alarm Systems (AREA)
  • Component Parts Of Construction Machinery (AREA)

Abstract

The object of the present invention is to automatically set an alert range in which an obstacle is detected within the field of view of a camera and, even if the field of view of the camera is subsequently shifted, to prevent the alert range from shifting. The present invention comprises: a camera (6) which is mounted on a dump truck (1) and captures an image of a region in which a front structure (10) of the dump truck (1) is included in its field of view; a structure detection unit which, based on the image data (P) captured by the camera (6), detects the front structure (10) in the image data (P); and an alert range setting unit (26) which, taking a range in which detection of an obstacle (M) within the image data (P) is required as the alert range (31), sets the alert range (31) within the image data (P) based on the front structure (10) detected by the structure detection unit.
PCT/JP2013/080022 2012-11-08 2013-11-06 Dispositif de traitement d'image pour machine industrielle auto-propulsée et procédé de traitement d'image pour machine industrielle auto-propulsée WO2014073571A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014545734A JP6072064B2 (ja) 2012-11-08 2013-11-06 自走式産業機械の画像処理装置および自走式産業機械の画像処理方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012246190 2012-11-08
JP2012-246190 2012-11-08

Publications (1)

Publication Number Publication Date
WO2014073571A1 true WO2014073571A1 (fr) 2014-05-15

Family

ID=50684673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/080022 WO2014073571A1 (fr) 2012-11-08 2013-11-06 Dispositif de traitement d'image pour machine industrielle auto-propulsée et procédé de traitement d'image pour machine industrielle auto-propulsée

Country Status (2)

Country Link
JP (1) JP6072064B2 (fr)
WO (1) WO2014073571A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016115259A (ja) * 2014-12-17 2016-06-23 日立建機株式会社 作業機械およびその周囲監視装置
JP2016144110A (ja) * 2015-02-04 2016-08-08 日立建機株式会社 車体外部移動物体検知システム
JP2017030688A (ja) * 2015-08-06 2017-02-09 日立建機株式会社 作業機械の周囲監視装置
CN106462962A (zh) * 2014-06-03 2017-02-22 住友重机械工业株式会社 施工机械用人体检测系统
CN106462961A (zh) * 2014-06-03 2017-02-22 住友重机械工业株式会社 施工机械用人体检测系统
JP2017155491A (ja) * 2016-03-02 2017-09-07 株式会社神戸製鋼所 建設機械の干渉防止装置
JP2018064191A (ja) * 2016-10-13 2018-04-19 日立建機株式会社 障害物検知システム及び建設機械
WO2018164273A1 (fr) * 2017-03-10 2018-09-13 株式会社タダノ Système de détection humaine pour véhicule de travail, et véhicule de travail équipé de celui-ci
JP2021185366A (ja) * 2018-03-29 2021-12-09 ヤンマーパワーテクノロジー株式会社 障害物検知システム
JP7396987B2 (ja) 2018-07-31 2023-12-12 住友建機株式会社 ショベル

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06301782A (ja) * 1993-04-16 1994-10-28 N T T Data Tsushin Kk 監視装置
JP2002327470A (ja) * 2001-05-02 2002-11-15 Komatsu Ltd 作業機械の表示装置
JP2008163719A (ja) * 2007-01-05 2008-07-17 Hitachi Constr Mach Co Ltd 作業機械の周囲監視装置
JP2008248613A (ja) * 2007-03-30 2008-10-16 Hitachi Constr Mach Co Ltd 作業機械周辺監視装置
JP2009193494A (ja) * 2008-02-18 2009-08-27 Shimizu Corp 警報システム
JP2010198519A (ja) * 2009-02-27 2010-09-09 Hitachi Constr Mach Co Ltd 周囲監視装置
JP2010218246A (ja) * 2009-03-17 2010-09-30 Toyota Motor Corp 物体識別装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008027309A (ja) * 2006-07-24 2008-02-07 Sumitomo Electric Ind Ltd 衝突判定システム、及び衝突判定方法
JP2008108135A (ja) * 2006-10-26 2008-05-08 Sumitomo Electric Ind Ltd 障害物検出システム、及び障害物検出方法
JP2009086788A (ja) * 2007-09-28 2009-04-23 Hitachi Ltd 車両周辺監視装置
JP5208790B2 (ja) * 2009-02-04 2013-06-12 アイシン精機株式会社 画像処理装置、画像処理方法ならびにプログラム
JP5853206B2 (ja) * 2011-03-31 2016-02-09 パナソニックIpマネジメント株式会社 車載用表示装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06301782A (ja) * 1993-04-16 1994-10-28 N T T Data Tsushin Kk 監視装置
JP2002327470A (ja) * 2001-05-02 2002-11-15 Komatsu Ltd 作業機械の表示装置
JP2008163719A (ja) * 2007-01-05 2008-07-17 Hitachi Constr Mach Co Ltd 作業機械の周囲監視装置
JP2008248613A (ja) * 2007-03-30 2008-10-16 Hitachi Constr Mach Co Ltd 作業機械周辺監視装置
JP2009193494A (ja) * 2008-02-18 2009-08-27 Shimizu Corp 警報システム
JP2010198519A (ja) * 2009-02-27 2010-09-09 Hitachi Constr Mach Co Ltd 周囲監視装置
JP2010218246A (ja) * 2009-03-17 2010-09-30 Toyota Motor Corp 物体識別装置

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10465362B2 (en) 2014-06-03 2019-11-05 Sumitomo Heavy Industries, Ltd. Human detection system for construction machine
EP3154024A4 (fr) * 2014-06-03 2017-11-15 Sumitomo Heavy Industries, Ltd. Système de détection de personne pour engin de chantier
JP6997066B2 (ja) 2014-06-03 2022-01-17 住友重機械工業株式会社 人検知システム
US10824853B2 (en) 2014-06-03 2020-11-03 Sumitomo Heavy Industries, Ltd. Human detection system for construction machine
CN106462962B (zh) * 2014-06-03 2020-08-04 住友重机械工业株式会社 施工机械用人检测系统以及挖土机
CN106462962A (zh) * 2014-06-03 2017-02-22 住友重机械工业株式会社 施工机械用人体检测系统
CN106462961A (zh) * 2014-06-03 2017-02-22 住友重机械工业株式会社 施工机械用人体检测系统
US20170073934A1 (en) * 2014-06-03 2017-03-16 Sumitomo Heavy Industries, Ltd. Human detection system for construction machine
EP3154025A4 (fr) * 2014-06-03 2017-07-12 Sumitomo Heavy Industries, Ltd. Système de détection de personne pour engin de chantier
JP2019053753A (ja) * 2014-06-03 2019-04-04 住友重機械工業株式会社 人検知システム及び作業機械
JP2016115259A (ja) * 2014-12-17 2016-06-23 日立建機株式会社 作業機械およびその周囲監視装置
WO2016125332A1 (fr) * 2015-02-04 2016-08-11 日立建機株式会社 Système de détection d'objet mobile à l'extérieur d'une carrosserie de véhicule
JP2016144110A (ja) * 2015-02-04 2016-08-08 日立建機株式会社 車体外部移動物体検知システム
US9990543B2 (en) 2015-02-04 2018-06-05 Hitachi Construction Machinery Co., Ltd. Vehicle exterior moving object detection system
CN106797450A (zh) * 2015-02-04 2017-05-31 日立建机株式会社 车身外部移动物体探测系统
CN106797450B (zh) * 2015-02-04 2019-12-20 日立建机株式会社 车身外部移动物体探测装置
JP2017030688A (ja) * 2015-08-06 2017-02-09 日立建機株式会社 作業機械の周囲監視装置
WO2017022262A1 (fr) * 2015-08-06 2017-02-09 日立建機株式会社 Dispositif de surveillance de l'environnement de machines d'exploitation
US11111654B2 (en) * 2016-03-02 2021-09-07 Kabushiki Kaisha Kobe Seiko Sho (Kobe Steel, Ltd.) Interference prevention device for construction machinery
JP2017155491A (ja) * 2016-03-02 2017-09-07 株式会社神戸製鋼所 建設機械の干渉防止装置
JP2018064191A (ja) * 2016-10-13 2018-04-19 日立建機株式会社 障害物検知システム及び建設機械
JP2018152685A (ja) * 2017-03-10 2018-09-27 株式会社タダノ 作業車両用の人物検知システムおよびこれを備える作業車両
WO2018164273A1 (fr) * 2017-03-10 2018-09-13 株式会社タダノ Système de détection humaine pour véhicule de travail, et véhicule de travail équipé de celui-ci
CN110383831A (zh) * 2017-03-10 2019-10-25 株式会社多田野 作业车辆用的人员检测系统和具备该系统的作业车辆
JP2021185366A (ja) * 2018-03-29 2021-12-09 ヤンマーパワーテクノロジー株式会社 障害物検知システム
JP7396987B2 (ja) 2018-07-31 2023-12-12 住友建機株式会社 ショベル

Also Published As

Publication number Publication date
JP6072064B2 (ja) 2017-02-01
JPWO2014073571A1 (ja) 2016-09-08

Similar Documents

Publication Publication Date Title
WO2014073571A1 (fr) Dispositif de traitement d'image pour machine industrielle auto-propulsée et procédé de traitement d'image pour machine industrielle auto-propulsée
KR101448411B1 (ko) 입체물 검출 장치 및 입체물 검출 방법
JP5995899B2 (ja) 自走式産業機械の画像処理装置
EP2879370B1 (fr) Dispositif de reconnaissance d'image monté a l'intérieur un véhicule
JP6267972B2 (ja) 作業機械の周囲監視装置
WO2018225446A1 (fr) Dispositif de détection de points de changement de carte
US9773177B2 (en) Surrounding environment recognition device
JP3780922B2 (ja) 道路白線認識装置
JP4930046B2 (ja) 路面判別方法および路面判別装置
JP6542539B2 (ja) 車両用進入可否判定装置
KR101464489B1 (ko) 영상 인식 기반의 차량 접근 장애물 감지 방법 및 시스템
JP5902990B2 (ja) 自走式産業機械の画像処理装置
CN110659552B (zh) 有轨电车障碍物检测及报警方法
JPWO2016140016A1 (ja) 車両の周囲監視装置
JP2019003606A (ja) 地図変化点検出装置
JP6878221B2 (ja) 作業機械の障害物検知システム
JP4967758B2 (ja) 物体移動の検出方法及び検出装置
JP7280852B2 (ja) 人物検知システム、人物検知プログラム、学習済みモデル生成プログラム及び学習済みモデル
JP3532896B2 (ja) スミア検出方法及びこのスミア検出方法を用いた画像処理装置
KR101531313B1 (ko) 차량 하부의 물체 탐지장치 및 방법
JP2018005441A (ja) 車間距離警報及び衝突警報装置
JP2005217482A (ja) 車両周辺監視方法および装置
US20190102902A1 (en) System and method for object detection
JP6868996B2 (ja) 障害物検知システム及び建設機械
JP2020042716A (ja) 異常検出装置および異常検出方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13852710

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014545734

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13852710

Country of ref document: EP

Kind code of ref document: A1