CN104982030A - Periphery monitoring device for work machine - Google Patents

Periphery monitoring device for work machine

Info

Publication number
CN104982030A
CN104982030A (application CN201480007706.9A)
Authority
CN
China
Prior art keywords
image
output
coordinates
camera
alarm
Prior art date
Legal status
Pending
Application number
CN201480007706.9A
Other languages
Chinese (zh)
Inventor
清田芳永
Current Assignee
Sumitomo Heavy Industries Ltd
Original Assignee
Sumitomo Heavy Industries Ltd
Priority date
Filing date
Publication date
Application filed by Sumitomo Heavy Industries Ltd filed Critical Sumitomo Heavy Industries Ltd
Priority to CN201910680197.0A (published as CN110318441A)
Publication of CN104982030A


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/24 Safety devices, e.g. for preventing overload
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/26 Indicating devices
    • E02F9/261 Surveying the work-site to be treated
    • E02F9/262 Surveying the work-site to be treated with follow-up actions to control the work tool, e.g. controller
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Mining & Mineral Resources (AREA)
  • Civil Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Component Parts Of Construction Machinery (AREA)

Abstract

A periphery monitoring device (100A) is provided with: a person's presence determination means (12) for determining whether a person is present in each of a plurality of monitored spaces in the periphery of a shovel (60); and a warning control means (13) for controlling a plurality of warning output units (7) which are disposed inside a cab (64) of the shovel (60) and which output warnings to an operator. The warning control means (13) causes a warning to be output from a right-side warning output unit (7R) when it is determined that a person is present in a right-side monitored space (ZR), from a left-side warning output unit (7L) when it is determined that a person is present in a left-side monitored space (ZL), and from a rear warning output unit (7B) when it is determined that a person is present in a rear monitored space (ZB).

Description

Periphery monitoring device for working machine
Technical Field
The present invention relates to a work machine periphery monitoring device having a function of determining the presence or absence of a person around a work machine.
Background
There is a known periphery monitoring device that emits an alarm sound when a worker is detected within the monitoring range of an obstacle detector mounted on an excavator (see, for example, Patent Document 1). There is also a known alarm system that decides whether to output an alarm sound by determining, from the light emission pattern of an LED attached to a worker's helmet, whether a worker who has entered a work area set around a shovel is a co-worker (see, for example, Patent Document 2). Further, there is a known safety device that communicates between a forklift and workers in its vicinity (surroundings) and controls the output of an alarm sound based on that communication (see, for example, Patent Document 3).
Prior patent documents
Patent document
Patent Document 1: Japanese Patent Laid-Open No. 2008-179940
Patent Document 2: Japanese Laid-Open Patent Publication No. 2009-
Patent Document 3: Japanese Patent Laid-Open Publication No. 2007-310587
Disclosure of Invention
Technical problem to be solved by the invention
However, in the techniques of Patent Documents 1 to 3, an alarm sound is output from the same buzzer or speaker regardless of the direction in which a person who has entered the predetermined range around the excavator or the like is present. Therefore, even if the operator of the excavator or the like hears the alarm sound, the operator cannot intuitively grasp in which direction that person is present.
In view of the above, it is desirable to provide a work machine periphery monitoring device that enables an operator of a work machine to intuitively grasp the positions of people present around the work machine.
A periphery monitoring device for a work machine according to an embodiment of the present invention includes: a human presence/absence determination means configured to determine the presence or absence of a person in each of a plurality of monitored spaces around the work machine; and an alarm control means that controls a plurality of alarm output units which are provided in a cab of the work machine and output an alarm to an operator. The plurality of monitored spaces include a 1st monitored space and a 2nd monitored space, and the plurality of alarm output units include a 1st alarm output unit corresponding to the 1st monitored space and a 2nd alarm output unit corresponding to the 2nd monitored space. The alarm control means outputs an alarm from the 1st alarm output unit when it is determined that a person is present in the 1st monitored space, and outputs an alarm from the 2nd alarm output unit when it is determined that a person is present in the 2nd monitored space.
Advantageous Effects of Invention
With the above configuration, it is possible to provide a work machine periphery monitoring device that enables an operator of a work machine to intuitively grasp the positions of people present around the work machine.
Drawings
Fig. 1 is a block diagram schematically showing a configuration example of an image generating apparatus according to an embodiment of the present invention.
Fig. 2 is a diagram showing a configuration example of a shovel equipped with an image generating device.
Fig. 3 is a diagram showing an example of a spatial model of a projected input image.
Fig. 4 is a diagram showing an example of the relationship between the spatial model and the processing target image plane.
Fig. 5 is a diagram for explaining the correspondence establishment of the coordinates on the input image plane and the coordinates on the spatial model.
Fig. 6 is a diagram for explaining the association between coordinates by the coordinate association establishing means.
Fig. 7 is a diagram for explaining the action of the parallel line group.
Fig. 8 is a diagram for explaining the operation of the auxiliary line group.
Fig. 9 is a flowchart showing a flow of the processing target image generation processing and the output image generation processing.
Fig. 10 shows an example of an output image.
Fig. 11 is an example of a plan view of a shovel on which the image generating device is mounted.
Fig. 12 is an example of input images of 3 cameras mounted on a shovel and an output image generated using the input images.
Fig. 13 is a diagram for explaining image disappearance prevention processing for preventing disappearance of an object at an overlapping portion of the respective imaging spaces of the two cameras.
Fig. 14 is a comparison diagram showing a difference between the output image shown in fig. 12 and an output image obtained by applying the image disappearance prevention process to the output image of fig. 12.
Fig. 15 is another example of a diagram showing input images of 3 cameras mounted on a shovel and an output image generated using the input images.
Fig. 16 is an example of a correspondence table showing a correspondence relationship between a determination result by the human presence determination means and an input image used for generating an output image.
Fig. 17 shows another example of an output image.
Fig. 18 is another example of a plan view of a shovel on which the image generating device is mounted.
Fig. 19 is another example of a correspondence table showing a correspondence relationship between a determination result by the human presence determination means and an input image used for generating an output image.
Fig. 20 shows still another example of an output image.
Fig. 21 shows another example of an output image.
Fig. 22 is a flowchart showing the flow of the alarm control process.
Fig. 23 is a diagram showing an example of transition of an output image displayed in the alarm control process.
Fig. 24 is a block diagram schematically showing another configuration example of the periphery monitoring device.
Detailed Description
Hereinafter, preferred embodiments for carrying out the present invention will be described with reference to the drawings.
Fig. 1 is a block diagram schematically showing a configuration example of an image generating apparatus 100 according to an embodiment of the present invention.
The image generation apparatus 100 is an example of a work machine periphery monitoring device that monitors the periphery of a work machine, and is configured from a control unit 1, a camera 2, an input unit 3, a storage unit 4, a display unit 5, a human detection sensor 6, and an alarm output unit 7. Specifically, the image generation apparatus 100 generates an output image based on an input image captured by the camera 2 mounted on the work machine and presents the output image to the operator. Further, the image generation apparatus 100 switches the content of the presented output image based on the output of the human detection sensor 6.
Fig. 2 is a diagram showing a configuration example of a shovel 60 as a work machine on which the image generating apparatus 100 is mounted. In the shovel 60, an upper turning body 63 is mounted on a crawler-type lower traveling body 61 via a turning mechanism 62 so as to be rotatable about a turning shaft PV.
The upper revolving structure 63 further includes a cab 64 in its front left portion, an excavation attachment E in its front center portion, and the cameras 2 (the right-side camera 2R and the rear camera 2B) and the human detection sensors 6 (the right-side human detection sensor 6R and the rear human detection sensor 6B) on its right side surface and rear surface. A display unit 5 is provided at a position in the cab 64 where the operator can easily see it. Further, alarm output units 7 (a right-side alarm output unit 7R and a rear alarm output unit 7B) are provided on the right inner wall and the rear inner wall of the cab 64.
Next, each component of the image generating apparatus 100 will be described.
The control unit 1 is a computer including a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), an NVRAM (Non-Volatile Random Access Memory), and the like. In the present embodiment, the control unit 1 stores programs corresponding to the coordinate association establishing means 10, the image generating means 11, the human presence determining means 12, and the alarm control means 13, which will be described later, in, for example, the ROM or the NVRAM, and causes the CPU to execute the processing corresponding to each means while using the RAM as a temporary storage area.
The camera 2 is a device for acquiring an input image of the surroundings of the shovel 60. In the present embodiment, the cameras 2 are, for example, a right-side camera 2R and a rear camera 2B (see fig. 2) which are attached to the right side surface and the rear surface of the upper revolving structure 63 so as to be able to photograph regions that are blind spots for the operator in the cab 64. The camera 2 includes an imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). The camera 2 may be attached to a position other than the right side surface and the rear surface of the upper revolving unit 63 (for example, the front surface and the left side surface), and may be fitted with a wide-angle lens or a fisheye lens so as to be able to capture a wide range.
The camera 2 acquires an input image in response to a control signal from the control unit 1 and outputs the acquired input image to the control unit 1. When the camera 2 acquires an input image using a fisheye lens or a wide-angle lens, it outputs to the control unit 1 a corrected input image in which the apparent distortion and tilt caused by using such a lens have been corrected. The camera 2 may instead output the uncorrected input image to the control unit 1 as it is; in that case, the control unit 1 corrects the apparent distortion and tilt.
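For reference, a minimal sketch of this kind of lens-distortion correction is shown below in Python, assuming OpenCV-style intrinsic parameters; the matrix and coefficient values are illustrative placeholders and are not values from this embodiment.

    import cv2
    import numpy as np

    def correct_input_image(raw_image, camera_matrix, dist_coeffs):
        """Correct the apparent distortion of a wide-angle input image before it
        is used for output-image generation (hypothetical helper, not part of
        the patented device)."""
        return cv2.undistort(raw_image, camera_matrix, dist_coeffs)

    # Illustrative intrinsic matrix and distortion coefficients for a wide-angle camera.
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

    raw = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an input image from camera 2
    corrected = correct_input_image(raw, K, D)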
The input unit 3 is a device for allowing an operator to input various information to the image generating apparatus 100, and is, for example, a touch panel, a button switch, a pointing device, a keyboard, or the like.
The storage unit 4 is a device for storing various information, and is, for example, a hard disk, an optical disk, a semiconductor memory, or the like.
The display unit 5 is a device for displaying image information, such as a liquid crystal display or a projector provided in a cab 64 (see fig. 2) of the shovel 60, and displays various images output by the control unit 1.
The human detection sensor 6 is a device for detecting a human existing around the shovel 60. In the present embodiment, person detection sensors 6 are attached to, for example, the right side surface and the rear surface of upper revolving unit 63 so as to be able to detect a person present in a region that is a blind spot of an operator in cab 64 (see fig. 2).
The human detection sensor 6 is a sensor that detects a person while distinguishing the person from other objects, and it does so by detecting a change in energy in the corresponding monitored space; examples include a pyroelectric infrared sensor, a bolometer-type infrared sensor, and a moving-body detection sensor that uses the output signal of an infrared camera or the like. In the present embodiment, the human detection sensor 6 is a pyroelectric infrared sensor and detects a moving body (a moving heat source) as a person. The monitored space of the right-side human detection sensor 6R is included in the imaging space of the right-side camera 2R, and the monitored space of the rear human detection sensor 6B is included in the imaging space of the rear camera 2B.
Similarly to the camera 2, the human detection sensor 6 may be attached to a position other than the right side surface and the rear surface of the upper revolving unit 63 (for example, the front surface and the left side surface), may be attached to any one of the front surface, the left side surface, the right side surface, and the rear surface of the upper revolving unit 63, or may be attached to all of these surfaces.
The alarm output unit 7 is a device for outputting an alarm to the operator of the shovel 60. For example, the alarm output unit 7 is an alarm device that outputs at least one of sound and light, and includes a sound output device such as a buzzer or a speaker, and a light emitting device such as an LED or a strobe. In the present embodiment, the alarm output unit 7 is a buzzer that outputs an alarm sound, and is composed of a right alarm output unit 7R attached to the right inner wall of the cab 64 and a rear alarm output unit 7B attached to the rear inner wall of the cab 64 (see fig. 2).
The image generation apparatus 100 may generate a processing target image based on the input image, apply image conversion processing to the processing target image to generate an output image from which the positional relationship with and the distance to surrounding objects can be intuitively grasped, and present that output image to the operator.
The "processing target image" is an image that is generated based on an input image and is a target of image conversion processing (for example, scaling conversion processing, affine conversion processing, distortion conversion processing, viewpoint conversion processing, and the like). Specifically, the "processing target image" is an image that is generated from an input image including an image in the horizontal direction (for example, an aerial part) at a wide image angle based on, for example, an input image obtained by a camera that captures an image of the earth surface from above, and is suitable for image conversion processing. More specifically, the input image is projected onto a predetermined spatial model so that the image in the horizontal direction is not displayed in an unnatural manner (for example, the part in the air may not be treated as the part on the ground surface), and then the projected image projected onto the spatial model is re-projected onto another two-dimensional plane, thereby generating the input image. The processing target image may be used as an output image without performing image conversion processing.
The "spatial model" is the projection object of the input image. Specifically, the "spatial model" is constituted by one or more planes or curved surfaces including at least a plane or a curved surface other than the processing target image plane as a plane on which the processing target image is located. The plane or curved surface on which the processing target image is located, i.e., the plane other than the processing target image plane, is, for example, a plane parallel to the processing target image plane or a plane or curved surface forming an angle with the processing target image plane.
The image generation apparatus 100 may generate an output image by performing image conversion processing on the projection image projected on the spatial model without generating the processing target image. The projection image may be used as it is as an output image without performing image conversion processing.
Fig. 3 is a diagram showing an example of a space model MD onto which an input image is projected. The left side of fig. 3 shows the relationship between the shovel 60 and the space model MD when the shovel 60 is viewed from the side, and the right side of fig. 3 shows the relationship between the shovel 60 and the space model MD when the shovel 60 is viewed from above.
As shown in fig. 3, the space model MD has a semi-cylindrical shape, and has a planar region R1 on the inner side of the bottom surface thereof and a curved region R2 on the inner side of the side surface thereof.
Fig. 4 is a diagram showing an example of the relationship between the spatial model MD and the processing target image plane, and the processing target image plane R3 is a plane including the plane region R1 of the spatial model MD, for example. In fig. 4, the spatial model MD is shown not in a semi-cylindrical shape as shown in fig. 3 but in a cylindrical shape for clarity, but the spatial model MD may be in any of a semi-cylindrical shape and a cylindrical shape. The same applies to the following figures. As described above, the processing target image plane R3 may be a circular region including the plane region R1 of the spatial model MD, or may be an annular region not including the plane region R1 of the spatial model MD.
Next, various mechanisms included in the control unit 1 will be described.
The coordinate association establishing means 10 is a means for associating the coordinates on the input image plane where the input image captured by the camera 2 is located, the coordinates on the spatial model MD, and the coordinates on the processing target image plane R3. In the present embodiment, the coordinate association establishing means 10 associates the coordinates on the input image plane, the coordinates on the spatial model MD, and the coordinates on the processing target image plane R3, for example, based on various parameters regarding the camera 2 that are set in advance or input via the input unit 3, and the predetermined positional relationship among the input image plane, the spatial model MD, and the processing target image plane R3. The various parameters of the camera 2 include, for example, the optical center, focal distance, CCD size, optical axis direction vector, camera horizontal direction vector, and projection system of the camera 2. The coordinate correspondence establishing means 10 stores these correspondence relationships in the input image-spatial model correspondence mapping table 40 and the spatial model-processing target image correspondence mapping table 41 of the storage unit 4.
When the processing target image is not generated, the coordinate association establishing means 10 omits both the association between the coordinates on the spatial model MD and the coordinates on the processing target image plane R3 and the storage of that correspondence relationship in the spatial model-processing target image correspondence map 41.
The image generating means 11 is means for generating an output image. In the present embodiment, the image generation mechanism 11 associates the coordinates on the processing target image plane R3 with the coordinates on the output image plane on which the output image is located, for example, by applying scaling transformation, affine transformation, or distortion transformation to the processing target image. Then, the image generation means 11 stores the correspondence relationship in the processing target image-output image correspondence map 42 of the storage unit 4. The image generating means 11 refers to the input image-spatial model correspondence map 40 and the spatial model-processing target image correspondence map 41, and associates the value of each pixel in the output image with the value of each pixel in the input image to generate the output image. The value of each pixel is, for example, a luminance value, a hue value, a chroma value, or the like.
The image generating means 11 associates the coordinates on the processing target image plane R3 with the coordinates on the output image plane on which the output image is located, based on various parameters for the virtual camera that are set in advance or input via the input unit 3. The various parameters of the virtual camera include, for example, the optical center of the virtual camera, the focal distance, the CCD size, the optical axis direction vector, the camera horizontal direction vector, and the projection system. Then, the image generation means 11 stores the correspondence relationship in the processing target image-output image correspondence map 42 of the storage unit 4. The image generating means 11 refers to the input image-spatial model correspondence mapping table 40 and the spatial model-processing target image correspondence mapping table 41, and associates the value of each pixel in the output image with the value of each pixel in the input image to generate the output image.
The image generating means 11 may generate an output image by changing the scaling of the processing target image without using the concept of a virtual camera.
Further, the image generating means 11 associates the coordinates on the spatial model MD with the coordinates on the output image plane according to the image conversion processing to be performed, without generating the processing target image. The image generating means 11 refers to the input image-space model correspondence map 40, and associates the value of each pixel in the output image with the value of each pixel in the input image to generate the output image. In this case, the image generation means 11 omits the association between the coordinates on the processing target image plane R3 and the coordinates on the output image plane and the storage of the correspondence relationship in the processing target image-output image correspondence map 42.
Further, the image generating means 11 switches the content of the output image based on the determination result of the human presence/absence determining means 12. Specifically, the image generation unit 11 switches the input image used for generating the output image based on the determination result of the human presence determination unit 12, for example. The switching of the input images used for generating the output images and the output images generated based on the switched input images will be described in detail later.
The human presence/absence determination means 12 is a means for determining the presence/absence of a human in each of a plurality of monitoring spaces set around the working machine. In the present embodiment, the human presence determination means 12 determines the presence or absence of a human around the shovel 60 based on the output of the human detection sensor 6.
Further, the human presence/absence determination unit 12 may determine the presence/absence of a human in each of a plurality of monitoring spaces set around the work machine based on an input image captured by the camera 2. Specifically, the human presence/absence determination unit 12 may determine the presence/absence of a human around the work machine by using an image processing technique such as optical flow or pattern matching. Further, the human presence/absence determination means 12 may determine the presence/absence of a human around the work machine based on an output of an image sensor different from the camera 2.
Alternatively, the human presence/absence determination unit 12 may determine the presence/absence of a human in each of the plurality of monitoring spaces set around the work machine based on the output of the human detection sensor 6 and the output of the image sensor such as the camera 2.
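As one concrete, simplified illustration of such image-based determination, the sketch below flags motion when the inter-frame difference of a camera image exceeds a threshold; the thresholds are hypothetical, and a practical implementation would use the optical-flow or pattern-matching techniques mentioned above.

    import numpy as np

    def detect_moving_body(prev_frame, curr_frame, pixel_thresh=25, ratio_thresh=0.01):
        """Very simple moving-body detection by inter-frame differencing
        (a stand-in for optical flow or pattern matching)."""
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        changed_ratio = (diff > pixel_thresh).mean()   # fraction of changed pixels
        return changed_ratio > ratio_thresh

    # Illustrative grayscale frames from, for example, the rear camera 2B.
    prev = np.zeros((240, 320), dtype=np.uint8)
    curr = prev.copy()
    curr[100:140, 150:180] = 200                        # a bright moving object appears
    print(detect_moving_body(prev, curr))               # -> True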
The alarm control means 13 controls the alarm output unit 7. In the present embodiment, the alarm control means 13 controls the alarm output section 7 based on the determination result of the human presence/absence determination means 12. The control of the alarm output unit 7 by the alarm control means 13 will be described in detail later.
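To make the relationship between the determination results and the alarm outputs concrete, the following is a minimal sketch of the kind of control described above; the names (MonitoredSpace, Buzzer, the sensor threshold, and so on) are hypothetical and are not taken from the embodiment.

    from enum import Enum

    class MonitoredSpace(Enum):
        RIGHT = "ZR"
        LEFT = "ZL"
        BACK = "ZB"

    class Buzzer:
        """Stand-in for one alarm output unit (e.g. 7R, 7L, or 7B) in the cab."""
        def __init__(self, name):
            self.name = name
        def output_alarm(self, active):
            # In the real device this would drive the buzzer; here we just print.
            print(f"{self.name}: {'ALARM' if active else 'silent'}")

    # One alarm output unit per monitored space, as in the configuration above.
    alarm_outputs = {
        MonitoredSpace.RIGHT: Buzzer("right alarm output 7R"),
        MonitoredSpace.LEFT: Buzzer("left alarm output 7L"),
        MonitoredSpace.BACK: Buzzer("rear alarm output 7B"),
    }

    def determine_presence(sensor_outputs, threshold=0.5):
        """Hypothetical presence determination: a space counts as occupied when
        the normalized output of its human detection sensor exceeds a threshold."""
        return {space: value > threshold for space, value in sensor_outputs.items()}

    def control_alarms(presence):
        """Output an alarm only from the unit corresponding to an occupied space."""
        for space, buzzer in alarm_outputs.items():
            buzzer.output_alarm(presence.get(space, False))

    # Example: a person is detected only in the right-side monitored space ZR.
    control_alarms(determine_presence({MonitoredSpace.RIGHT: 0.9,
                                       MonitoredSpace.LEFT: 0.1,
                                       MonitoredSpace.BACK: 0.2}))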
Next, an example of specific processing performed by the coordinate association establishing means 10 and the image generating means 11 will be described.
The coordinate association establishing means 10 may, for example, use Hamilton quaternions to associate the coordinates on the input image plane with the coordinates on the spatial model.
Fig. 5 is a diagram for explaining the correspondence establishment between the coordinates on the input image plane and the coordinates on the spatial model. The input image plane of the camera 2 is represented as a plane in a UVW orthogonal coordinate system with the optical center C of the camera 2 as the origin. The spatial model is represented as three-dimensional surfaces in an XYZ orthogonal coordinate system.
First, the coordinate association establishing means 10 translates the origin of the XYZ coordinate system to the optical center C (the origin of the UVW coordinate system), and then rotates the XYZ coordinate system so that the X axis coincides with the U axis, the Y axis with the V axis, and the Z axis with the -W axis. This converts coordinates on the spatial model (coordinates in the XYZ coordinate system) into coordinates on the input image plane (coordinates in the UVW coordinate system). The sign "-" in "-W axis" means that the Z axis points in the direction opposite to the W axis; this is because the UVW coordinate system takes the front of the camera as the +W direction, whereas the XYZ coordinate system takes the vertically downward direction as the -Z direction.
In addition, when there are a plurality of cameras 2, since each camera 2 has a separate UVW coordinate system, the coordinate association establishing means 10 moves and rotates the XYZ coordinate system in parallel with respect to each of the plurality of UVW coordinate systems.
Specifically, the conversion is performed by translating the XYZ coordinate system so that the optical center C of the camera 2 becomes its origin, rotating it so that the Z axis coincides with the -W axis, and then further rotating it so that the X axis coincides with the U axis. By describing this transformation with a Hamilton quaternion, the coordinate association establishing means 10 can combine the two rotations into a single rotation calculation.
Here, a rotation that aligns a certain vector A with another vector B corresponds to rotating by the angle formed by the vector A and the vector B about the normal of the plane spanned by the vector A and the vector B. If this angle is θ, then θ is expressed, using the inner product of the vector A and the vector B, as
[Numerical formula 1]
θ = cos⁻¹( (A·B) / (|A||B|) )
The unit vector N of the normal of the plane spanned by the vector A and the vector B is expressed, using the outer product of the vector A and the vector B, as
[Numerical formula 2]
N = (A × B) / (|A||B|·sin θ)
In addition, when i, j, and k denote the imaginary units of a quaternion, they satisfy
[Numerical formula 3]
i·i = j·j = k·k = i·j·k = -1
In the present embodiment, a quaternion Q is written with a real component t and pure imaginary components a, b, and c as
[Numerical formula 4]
Q = (t; a, b, c) = t + ai + bj + ck
and the conjugate quaternion of the quaternion Q is expressed as
[Numerical formula 5]
Q* = (t; -a, -b, -c) = t - ai - bj - ck
The quaternion Q may represent three-dimensional vectors (a, b, c) by pure imaginary components a, b, c while setting the real component t to 0 (zero), or may represent a rotational motion about an arbitrary vector as an axis by the components t, a, b, c.
Further, a quaternion Q can combine a plurality of consecutive rotations and express them as a single rotation. Specifically, the point D(ex, ey, ez) obtained by rotating an arbitrary point S(sx, sy, sz) by an angle θ about an arbitrary unit vector C(l, m, n) as the axis is expressed as
[Numerical formula 6]
D = (0; ex, ey, ez) = Q S Q*
where
S = (0; sx, sy, sz), Q = (cos(θ/2); l·sin(θ/2), m·sin(θ/2), n·sin(θ/2))
Here, in the present embodiment, if Qz denotes the quaternion representing the rotation that aligns the Z axis with the -W axis, the point X on the X axis of the XYZ coordinate system is moved to a point X', so the point X' is expressed as
[Numerical formula 7]
X' = Qz X Qz*
Further, in the present embodiment, if Qx denotes the quaternion representing the rotation that aligns the line connecting the point X' on the X axis and the origin with the U axis, the quaternion R representing "the rotation that aligns the Z axis with the -W axis and further aligns the X axis with the U axis" is expressed as
[Numerical formula 8]
R = Qx Qz
Accordingly, an arbitrary coordinate P on the spatial model (XYZ coordinate system), when expressed as a coordinate P' on the input image plane (UVW coordinate system), is given by
[Numerical formula 9]
P' = R P R*
Since the quaternion R is constant for each camera 2, the coordinate association establishing means 10 can thereafter transform coordinates on the spatial model (XYZ coordinate system) into coordinates on the input image plane (UVW coordinate system) merely by performing this operation.
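The quaternion operations above can be summarized in code. The sketch below, based only on numerical formulas 3 to 9, builds a rotation quaternion from an axis and an angle and applies P' = R P R*; the axes and angles used for Qz and Qx in the example are illustrative and are not the values actually computed by the coordinate association establishing means 10.

    import math

    class Quaternion:
        """Minimal Hamilton quaternion Q = (t; a, b, c) = t + ai + bj + ck."""
        def __init__(self, t, a, b, c):
            self.t, self.a, self.b, self.c = t, a, b, c

        def __mul__(self, o):
            # Hamilton product, using i*i = j*j = k*k = i*j*k = -1.
            return Quaternion(
                self.t*o.t - self.a*o.a - self.b*o.b - self.c*o.c,
                self.t*o.a + self.a*o.t + self.b*o.c - self.c*o.b,
                self.t*o.b - self.a*o.c + self.b*o.t + self.c*o.a,
                self.t*o.c + self.a*o.b - self.b*o.a + self.c*o.t,
            )

        def conjugate(self):
            return Quaternion(self.t, -self.a, -self.b, -self.c)

    def rotation_quaternion(axis, theta):
        """Q = (cos(theta/2); l*sin(theta/2), m*sin(theta/2), n*sin(theta/2))
        for a unit axis (l, m, n), as in numerical formula 6."""
        l, m, n = axis
        s = math.sin(theta / 2.0)
        return Quaternion(math.cos(theta / 2.0), l * s, m * s, n * s)

    def rotate_point(point, q):
        """D = Q S Q* with S = (0; sx, sy, sz)."""
        s = Quaternion(0.0, *point)
        d = q * s * q.conjugate()
        return (d.a, d.b, d.c)

    # Example: R = Qx * Qz maps spatial-model coordinates into camera coordinates.
    Qz = rotation_quaternion((0.0, 0.0, 1.0), math.pi)        # illustrative axis and angle only
    Qx = rotation_quaternion((1.0, 0.0, 0.0), math.pi / 2.0)  # illustrative axis and angle only
    R = Qx * Qz
    print(rotate_point((1.0, 2.0, 3.0), R))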
After converting the coordinates on the spatial model (XYZ coordinate system) into coordinates on the input image plane (UVW coordinate system), the coordinate correspondence creating means 10 calculates an incident angle α formed by the line segment CP' and the optical axis G of the camera 2. The line segment CP 'is a line segment connecting the optical center C of the camera 2 (coordinates on the UVW coordinate system) and coordinates P' on the spatial model, which are expressed by the UVW coordinate system.
Further, the coordinate correspondence establishing means 10 calculates the off angle Φ and the length of the line segment EP 'in the plane H parallel to the input image plane R4 (e.g., CCD plane) of the camera 2 and including the coordinate P'. The line segment EP 'is a line segment connecting the coordinate P' and the intersection E of the plane H and the optical axis G, and the off angle Φ is an angle formed by the U 'axis and the line segment EP' in the plane H.
The optical system of a camera ordinarily has an image height h that is a function of the incident angle α and the focal length f. Therefore, the coordinate association establishing means 10 calculates the image height h by selecting an appropriate projection method such as normal projection (h = f·tan α), orthographic projection (h = f·sin α), stereoscopic projection (h = 2f·tan(α/2)), equal solid angle projection (h = 2f·sin(α/2)), or equidistant projection (h = f·α).
Then, the coordinate correspondence establishing means 10 decomposes the calculated image height h into a U component and a V component on the UV coordinate system by the deflection angle Φ, and divides the U component and the V component by a value corresponding to the pixel size of each pixel of the input image plane R4. Thus, the coordinate association establishing means 10 associates the coordinates P (P') on the spatial model MD with the coordinates on the input image plane R4.
If the pixel size per pixel in the U-axis direction of the input image plane R4 is a_U and the pixel size per pixel in the V-axis direction of the input image plane R4 is a_V, the coordinates (u, v) on the input image plane R4 corresponding to the coordinate P (P') on the spatial model MD are expressed as
[Numerical formula 10]
u = h·cos φ / a_U
[Numerical formula 11]
v = h·sin φ / a_V
In this way, the coordinate association establishing means 10 associates the coordinates on the spatial model MD with the coordinates on one or more input image planes R4 existing for each camera, associates the coordinates on the spatial model MD, the camera identification code, and the coordinates on the input image plane R4, and stores the coordinates in the input image-spatial model association map 40.
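As a worked illustration of this mapping, the sketch below computes the image height h for a chosen projection model and decomposes it into (u, v) with the off angle φ; the projection formulas are the ones listed above, while the focal length and pixel sizes are placeholder values.

    import math

    # Image-height functions h(alpha) for the projection models mentioned above.
    PROJECTIONS = {
        "normal":            lambda f, a: f * math.tan(a),
        "orthographic":      lambda f, a: f * math.sin(a),
        "stereoscopic":      lambda f, a: 2 * f * math.tan(a / 2),
        "equal_solid_angle": lambda f, a: 2 * f * math.sin(a / 2),
        "equidistant":       lambda f, a: f * a,
    }

    def model_to_pixel(alpha, phi, f, a_u, a_v, projection="normal"):
        """Map an incident angle alpha and off angle phi to input-image
        coordinates (u, v) = (h*cos(phi)/a_U, h*sin(phi)/a_V)."""
        h = PROJECTIONS[projection](f, alpha)
        return h * math.cos(phi) / a_u, h * math.sin(phi) / a_v

    # Illustrative values only: focal length 4 mm, 6 um square pixels.
    u, v = model_to_pixel(alpha=math.radians(30), phi=math.radians(45),
                          f=4.0e-3, a_u=6.0e-6, a_v=6.0e-6)
    print(u, v)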
Further, since the coordinate association establishing means 10 computes the coordinate transformation using quaternions, it has the advantage that gimbal lock does not occur, unlike a computation using Euler angles. However, the coordinate association establishing means 10 is not limited to computing the coordinate transformation with quaternions, and may compute it with Euler angles.
When the coordinate P (P') on the spatial model MD can be associated with coordinates on a plurality of input image planes R4, the coordinate association establishing means 10 may associate it with the coordinate on the input image plane R4 of the camera with the smallest incident angle α, or with a coordinate on an input image plane R4 selected by the operator.
Next, a process of re-projecting the coordinates on the curved surface region R2 (coordinates having a component in the Z-axis direction) among the coordinates on the spatial model MD onto the processing target image plane R3 on the XY plane will be described.
Fig. 6 is a diagram for explaining the association between coordinates by the coordinate association establishing means 10. F6A shows, as an example, the correspondence between coordinates on the input image plane R4 of the camera 2, which uses normal projection (h = f·tan α), and coordinates on the spatial model MD. The coordinate association establishing means 10 associates a coordinate on the input image plane R4 of the camera 2 with the corresponding coordinate on the spatial model MD so that the line segment connecting the two coordinates passes through the optical center C of the camera 2.
In the example of F6A, the coordinate association means 10 associates the coordinate K1 on the input image plane R4 of the camera 2 with the coordinate L1 on the plane region R1 of the spatial model MD, and associates the coordinate K2 on the input image plane R4 of the camera 2 with the coordinate L2 on the curved surface region R2 of the spatial model MD. At this time, the line segments K1-L1 and K2-L2 both pass through the optical center C of the camera 2.
When the camera 2 employs a projection method other than normal projection (for example, orthographic projection, stereoscopic projection, equal solid angle projection, equidistant projection, or the like), the coordinate association establishing means 10 associates the coordinates K1 and K2 on the input image plane R4 of the camera 2 with the coordinates L1 and L2 on the spatial model MD, based on the respective projection methods.
Specifically, the coordinate association establishing means 10 associates the coordinates on the input image plane with the coordinates on the spatial model MD based on a predetermined function (for example, orthographic projection (h = f·sin α), stereoscopic projection (h = 2f·tan(α/2)), equal solid angle projection (h = 2f·sin(α/2)), equidistant projection (h = f·α), or the like). In this case, the line segments K1-L1 and K2-L2 do not pass through the optical center C of the camera 2.
F6B is a diagram showing the correspondence between the coordinates on the curved surface region R2 of the spatial model MD and the coordinates on the processing target image plane R3. The coordinate association establishing means 10 introduces a parallel line group PL that lies on the XZ plane and forms an angle β with the processing target image plane R3. The coordinate association establishing means 10 then associates a coordinate on the curved surface region R2 of the spatial model MD with the corresponding coordinate on the processing target image plane R3 by placing both coordinates on one of the parallel lines of the group PL.
In the example of F6B, the coordinate correspondence creating means 10 associates the coordinates L2 on the curved surface region R2 of the spatial model MD with the coordinates M2 on the processing target image plane R3 by placing the two coordinates on a common parallel line.
The coordinate association establishing means 10 may also use the parallel line group PL to associate the coordinates on the plane region R1 of the spatial model MD, as well as those on the curved surface region R2, with the coordinates on the processing target image plane R3. However, in the example of F6B, the plane region R1 and the processing target image plane R3 lie in a common plane, so the coordinate L1 on the plane region R1 of the spatial model MD and the coordinate M1 on the processing target image plane R3 have the same coordinate value.
In this way, the coordinate correspondence creating means 10 associates the coordinates on the spatial model MD with the coordinates on the processing target image plane R3, and stores the coordinates on the spatial model MD and the coordinates on the processing target image plane R3 in the spatial model-processing target image correspondence mapping table 41 in association with each other.
F6C shows, as an example, the correspondence between the coordinates on the processing target image plane R3 and the coordinates on the output image plane R5 of a virtual camera 2V that uses normal projection (h = f·tan α). The image generation means 11 associates a coordinate on the output image plane R5 of the virtual camera 2V with the corresponding coordinate on the processing target image plane R3 so that the line segment connecting the two coordinates passes through the optical center CV of the virtual camera 2V.
In the example of F6C, the image generating unit 11 associates the coordinate N1 on the output image plane R5 of the virtual camera 2V with the coordinate M1 on the processing target image plane R3 (the plane region R1 of the spatial model MD), and associates the coordinate N2 on the output image plane R5 of the virtual camera 2V with the coordinate M2 on the processing target image plane R3. At this time, the line segments M1-N1 and M2-N2 both pass through the optical center CV of the virtual camera 2V.
When the virtual camera 2V employs a projection method other than normal projection (for example, orthographic projection, stereoscopic projection, equal solid angle projection, equidistant projection, or the like), the image generating means 11 associates the coordinates N1 and N2 on the output image plane R5 of the virtual camera 2V with the coordinates M1 and M2 on the processing target image plane R3, in accordance with the respective projection methods.
Specifically, the image generation unit 11 associates the coordinates on the output image plane R5 with the coordinates on the processing target image plane R3 based on a predetermined function (for example, orthographic projection (h = f·sin α), stereoscopic projection (h = 2f·tan(α/2)), equal solid angle projection (h = 2f·sin(α/2)), equidistant projection (h = f·α), or the like). In this case, the line segments M1-N1 and M2-N2 do not pass through the optical center CV of the virtual camera 2V.
In this way, the image generating unit 11 associates the coordinates on the output image plane R5 with the coordinates on the processing target image plane R3, and stores the coordinates on the output image plane R5 and the coordinates on the processing target image plane R3 in association with each other in the processing target image-output image correspondence map 42. The image generating means 11 refers to the input image-spatial model correspondence map 40 and the spatial model-processing target image correspondence map 41, and associates the value of each pixel in the output image with the value of each pixel in the input image to generate the output image.
F6D is a diagram combining F6A to F6C, and shows the positional relationship among the camera 2, the virtual camera 2V, the plane region R1 and the curved surface region R2 of the spatial model MD, and the processing target image plane R3.
Next, the operation of the parallel line group PL introduced by the coordinate association establishing means 10 to associate the coordinates on the spatial model MD with the coordinates on the processing target image plane R3 will be described with reference to fig. 7.
The left diagram of fig. 7 shows the case where the parallel line group PL located on the XZ plane forms an angle β with the processing target image plane R3, while the right diagram of fig. 7 shows the case where it forms an angle β1 (β1 > β) with the processing target image plane R3. The coordinates La to Ld on the curved surface region R2 of the spatial model MD in the left and right diagrams of fig. 7 correspond to the coordinates Ma to Md on the processing target image plane R3, respectively. The intervals between the coordinates La to Ld in the left diagram of fig. 7 are equal to those in the right diagram of fig. 7. The parallel line group PL is assumed to lie on the XZ plane for the purpose of explanation, but actually extends radially from all points on the Z axis toward the processing target image plane R3. The Z axis in this case is referred to as the "reprojection axis".
As shown in the left and right diagrams of fig. 7, the intervals between the coordinates Ma to Md on the processing target image plane R3 decrease linearly as the angle between the parallel line group PL and the processing target image plane R3 increases. That is, the intervals decrease uniformly, irrespective of the distance between the curved surface region R2 of the spatial model MD and each of the coordinates Ma to Md. On the other hand, in the example of fig. 7, the coordinate group on the plane region R1 of the spatial model MD is not converted into the coordinate group on the processing target image plane R3, so the intervals of that coordinate group do not change.
The change in the intervals of these coordinate groups means that, of the image portions on the output image plane R5 (see fig. 6), only the image portion corresponding to the image projected onto the curved surface region R2 of the spatial model MD is linearly enlarged or reduced.
Next, an alternative example of the parallel line group PL introduced by the coordinate association establishing means 10 in order to associate the coordinates on the spatial model MD with the coordinates on the processing target image plane R3 will be described with reference to fig. 8.
The left diagram of fig. 8 shows the case where all the auxiliary lines of the auxiliary line group AL located on the XZ plane extend from a starting point T1 on the Z axis toward the processing target image plane R3, while the right diagram of fig. 8 shows the case where they all extend from a starting point T2 (T2 > T1) on the Z axis toward the processing target image plane R3. The coordinates La to Ld on the curved surface region R2 of the spatial model MD in the left and right diagrams of fig. 8 correspond to the coordinates Ma to Md on the processing target image plane R3, respectively. In the example of the left diagram of fig. 8, the coordinates Mc and Md fall outside the region of the processing target image plane R3 and are therefore not shown. The intervals between the coordinates La to Ld in the left diagram of fig. 8 are equal to those in the right diagram of fig. 8. The auxiliary line group AL is assumed to lie on the XZ plane for the purpose of explanation, but actually extends radially from an arbitrary point on the Z axis toward the processing target image plane R3. As in fig. 7, the Z axis in this case is referred to as the "reprojection axis".
As shown in the left and right diagrams of fig. 8, the intervals between the coordinates Ma to Md on the processing target image plane R3 decrease non-linearly as the distance (height) between the starting point of the auxiliary line group AL and the origin O increases. That is, the larger the distance between the curved surface region R2 of the spatial model MD and each of the coordinates Ma to Md, the larger the reduction in each interval. On the other hand, in the example of fig. 8, the coordinate group on the plane region R1 of the spatial model MD is not converted into the coordinate group on the processing target image plane R3, so the intervals of that coordinate group do not change.
The change in the intervals between these coordinate groups means that, as in the case of the parallel line group PL, of the image portions on the output image plane R5 (see fig. 6), only the image portion corresponding to the image projected on the curved surface region R2 of the spatial model MD is nonlinearly enlarged or reduced.
In this way, the image generating apparatus 100 can linearly or nonlinearly enlarge or reduce the image portion (for example, a horizontal image) of the output image corresponding to the image projected in the curved surface region R2 of the spatial model MD without affecting the image portion (for example, a road surface image) of the output image corresponding to the image projected in the planar region R1 of the spatial model MD. Therefore, the image generating apparatus 100 can quickly and flexibly enlarge or reduce the objects located around the shovel 60 (objects in the image when the surroundings are viewed from the shovel 60 in the horizontal direction) without affecting the road surface image (virtual image when the shovel 60 is viewed from directly above) in the vicinity of the shovel 60, and can improve the visibility of the blind spot region of the shovel 60.
Next, a process of generating a processing target image (hereinafter referred to as a "processing target image generation process") and a process of generating an output image using the generated processing target image (hereinafter referred to as an "output image generation process") by the image generation apparatus 100 will be described with reference to fig. 9. Fig. 9 is a flowchart showing the flow of the processing target image generation processing (step S1 to step S3) and the output image generation processing (step S4 to step S6). The arrangement of the camera 2 (input image plane R4), the spatial model (plane region R1 and curved surface region R2), and the processing target image plane R3 are determined in advance.
First, the control unit 1 associates the coordinates on the processing target image plane R3 with the coordinates on the spatial model MD by the coordinate association means 10 (step S1).
Specifically, the coordinate association establishing means 10 acquires an angle formed between the parallel line group PL and the processing target image plane R3. The coordinate association establishing means 10 calculates a point at which one of the parallel line groups PL extending from one coordinate on the processing target image plane R3 intersects the curved surface region R2 of the spatial model MD. The coordinate correspondence creating means 10 derives the coordinates on the curved surface region R2 corresponding to the calculated point as one coordinate on the curved surface region R2 corresponding to the one coordinate on the processing target image plane R3, and stores the correspondence relationship in the spatial model-processing target image correspondence map 41. The angle formed between the parallel line group PL and the processing target image plane R3 may be a value stored in advance in the storage unit 4 or the like, or may be a value dynamically input by the operator via the input unit 3.
When one coordinate on the processing target image plane R3 coincides with one coordinate on the plane region R1 of the spatial model MD, the coordinate correspondence creating means 10 derives the one coordinate on the plane region R1 as one coordinate corresponding to the one coordinate on the processing target image plane R3, and stores the correspondence in the spatial model-processing target image correspondence map 41.
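The intersection calculated in step S1 can be illustrated with simple geometry. If one assumes that the curved surface region R2 is the side wall of a cylinder of radius r centered on the reprojection (Z) axis and that the parallel line group PL rises toward that axis at the angle β, a parallel line through a coordinate at distance d (> r) from the axis meets R2 at height (d - r)·tan β. The function below is such a geometric sketch, not the device's actual routine.

    import math

    def r3_to_r2(x, y, r, beta):
        """Map a processing-target-plane coordinate (x, y) to a point on the
        curved surface region R2, assuming R2 is a cylinder of radius r centered
        on the reprojection (Z) axis and the parallel line group PL forms the
        angle beta with the processing target image plane R3."""
        d = math.hypot(x, y)           # distance from the reprojection axis
        if d <= r:
            return (x, y, 0.0)         # inside the plane region R1: same coordinate
        z = (d - r) * math.tan(beta)   # height at which the parallel line meets R2
        scale = r / d                  # pull the point back onto the cylinder wall
        return (x * scale, y * scale, z)

    # Example: a coordinate 8 m from the axis, cylinder radius 5 m, beta = 45 degrees.
    print(r3_to_r2(8.0, 0.0, 5.0, math.radians(45)))   # -> (5.0, 0.0, 3.0...)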
Then, the control unit 1 associates the coordinates on the spatial model MD derived by the above-described processing with the coordinates on the input image plane R4 by the coordinate association means 10 (step S2).
Specifically, when the camera 2 uses normal projection (h = f·tan α), the coordinate association establishing means 10 obtains the coordinates of the optical center C of the camera 2. The coordinate association establishing means 10 then calculates the point at which a line segment extending from one coordinate on the spatial model MD and passing through the optical center C intersects the input image plane R4. The coordinate association establishing means 10 derives the coordinate on the input image plane R4 corresponding to the calculated point as the coordinate on the input image plane R4 corresponding to that one coordinate on the spatial model MD, and stores the correspondence in the input image-spatial model correspondence map 40.
Then, the control unit 1 determines whether or not all the coordinates on the processing target image plane R3 are associated with the coordinates on the spatial model MD and the coordinates on the input image plane R4 (step S3). When the control unit 1 determines that all the coordinates have not been associated with each other (NO in step S3), the process of step S1 and step S2 are repeated.
On the other hand, when determining that all the coordinates are associated with each other (YES in step S3), the control unit 1 starts the output image generation process after the process target image generation process is finished. Then, the control unit 1 associates the coordinates on the processing target image plane R3 with the coordinates on the output image plane R5 by the image generation means 11 (step S4).
Specifically, the image generation unit 11 generates an output image by applying scaling, affine transformation, or distortion transformation to the processing target image. The image generation unit 11 stores the correspondence relationship between the coordinates on the processing target image plane R3 and the coordinates on the output image plane R5, which is determined by the content of the applied scaling transform, affine transform, or distortion transform, in the processing target image-output image correspondence map 42.
Alternatively, when the output image is generated using the virtual camera 2V, the image generation unit 11 may calculate coordinates on the output image plane R5 from coordinates on the processing target image plane R3 in accordance with the projection method to be used, and store the correspondence in the processing target image-output image correspondence map 42.
Alternatively, when the output image is generated using a virtual camera 2V that employs normal projection (h = f·tanα), the image generation unit 11 acquires the coordinates of the optical center CV of the virtual camera 2V. The image generating means 11 then calculates the point at which a line segment extending from one coordinate on the output image plane R5 and passing through the optical center CV intersects the processing target image plane R3. The image generation unit 11 then derives the coordinate on the processing target image plane R3 corresponding to the calculated point as the coordinate on the processing target image plane R3 corresponding to that coordinate on the output image plane R5. In this way, the image generation unit 11 may store the correspondence relationship in the processing target image-output image correspondence map 42.
Then, the image generation means 11 of the control unit 1 refers to the input image-spatial model correspondence map 40, the spatial model-processing target image correspondence map 41, and the processing target image-output image correspondence map 42. The image generation unit 11 traces back the correspondence between the coordinates on the input image plane R4 and the coordinates on the spatial model MD, the correspondence between the coordinates on the spatial model MD and the coordinates on the processing target image plane R3, and the correspondence between the coordinates on the processing target image plane R3 and the coordinates on the output image plane R5. The image generating means 11 then acquires values (for example, a luminance value, a hue value, a chroma value, and the like) of the coordinates on the input image plane R4 corresponding to the respective coordinates on the output image plane R5, and adopts the acquired values as the values of the respective coordinates on the corresponding output image plane R5 (step S5). In addition, when a plurality of coordinates on the plurality of input image planes R4 correspond to one coordinate on the output image plane R5, the image generation unit 11 may derive a statistical value based on the value of each of the plurality of coordinates on the plurality of input image planes R4, and use the statistical value as the value of the one coordinate on the output image plane R5. The statistical values are, for example, average values, maximum values, minimum values, median values, etc.
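The following sketch illustrates step S5 under the assumption that the three correspondence maps are simple dictionaries; the data layout (dictionaries keyed by coordinate tuples, camera identifiers, one luminance value per pixel) is an assumption for illustration and not the actual map format.

```python
def fill_output_image(map40, map41, map42, input_images):
    """Step S5 sketch: trace each output-image coordinate back through the
    three correspondence maps and copy (or average) the input pixel values.
    map42: output coordinate        -> processing-target-image coordinate
    map41: target-image coordinate  -> spatial-model coordinate
    map40: spatial-model coordinate -> list of (camera_id, input coordinate)
    input_images: camera_id -> dict of input coordinate -> pixel value."""
    output = {}
    for out_xy, tgt_xy in map42.items():
        model_xyz = map41[tgt_xy]
        values = [input_images[cam][in_xy]
                  for cam, in_xy in map40.get(model_xyz, [])
                  if in_xy in input_images.get(cam, {})]
        if not values:
            continue                    # no corresponding input pixel value
        # When several input-plane coordinates correspond to one output
        # coordinate, a statistical value (here the mean) is adopted.
        output[out_xy] = sum(values) / len(values)
    return output
```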
Then, the control section 1 determines whether or not the values of all the coordinates on the output image plane R5 have been associated with values of coordinates on the input image plane R4 (step S6). When the control unit 1 determines that not all the coordinate values have been associated (NO in step S6), the processes of steps S4 and S5 are repeated.
On the other hand, when determining that all the coordinate values are associated with each other (YES in step S6), the control unit 1 generates an output image and ends the series of processing.
In addition, when the image generation device 100 does not generate the processing target image, the processing target image generation processing is omitted. In this case, the "coordinates on the processing target image plane" of step S4 in the output image generation process may instead be referred to as "coordinates on the spatial model".
With the above configuration, the image generating apparatus 100 can generate the processing target image and the output image that enable the operator to intuitively grasp the positional relationship between the excavator 60 and the objects around the excavator 60.
Further, the image generation apparatus 100 establishes the correspondence of coordinates in the direction from the processing target image plane R3 back through the spatial model MD to the input image plane R4. Thus, the image generating apparatus 100 can reliably associate each coordinate on the processing target image plane R3 with one or more coordinates on the input image plane R4. Therefore, the image generating apparatus 100 can generate a better quality processing target image more quickly than when the correspondence of coordinates is established in the order from the input image plane R4 through the spatial model MD to the processing target image plane R3. In the latter case, each coordinate on the input image plane R4 can be reliably associated with one or more coordinates on the processing target image plane R3, but some coordinates on the processing target image plane R3 may not correspond to any coordinate on the input image plane R4, and interpolation processing or the like then becomes necessary for those coordinates.
In addition, when only the image corresponding to the curved surface region R2 of the spatial model MD is to be enlarged or reduced, the image generating apparatus 100 can achieve the desired enlargement or reduction without rewriting the contents of the input image-spatial model correspondence map 40, simply by changing the angle formed between the parallel line group PL and the processing target image plane R3 and rewriting only the portion of the spatial model-processing target image correspondence map 41 associated with the curved surface region R2.
Further, when changing the viewing mode of the output image, the image generating apparatus 100 can generate a desired output image (a scale-converted image, an affine-converted image, or a distortion-converted image) without rewriting the contents of the input image-spatial model correspondence map 40 and the spatial model-processing target image correspondence map 41, simply by changing the values of the various parameters relating to scale conversion, affine conversion, or distortion conversion and rewriting the processing target image-output image correspondence map 42.
Similarly, when changing the viewpoint of the output image, the image generating apparatus 100 can generate an output image (viewpoint-converted image) viewed from a desired viewpoint without rewriting the contents of the input image-spatial model correspondence map 40 and the spatial model-processing target image correspondence map 41, simply by changing the values of the various parameters of the virtual camera 2V and rewriting the processing target image-output image correspondence map 42.
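For example, reusing the fill_output_image sketch above, a viewpoint change could be handled as follows; build_map42 and virtual_cam are hypothetical helpers standing in for the parameter change and the rewrite of map 42, and are not part of the original document.

```python
def rerender_with_new_viewpoint(map40, map41, input_images,
                                build_map42, virtual_cam):
    """Viewpoint-change sketch: only the processing target image-output
    image correspondence map 42 is rebuilt from the new virtual camera 2V
    parameters; maps 40 and 41 are reused unchanged."""
    map42 = build_map42(virtual_cam)    # rewrite map 42 only
    return fill_output_image(map40, map41, map42, input_images)
```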
Fig. 10 is a display example when an output image generated using input images of two cameras 2 (a right side camera 2R and a rear side camera 2B) mounted on the shovel 60 is displayed on the display unit 5.
The image generating apparatus 100 projects the respective input images of the two cameras 2 onto the plane region R1 and the curved surface region R2 of the spatial model MD, and then re-projects the input images onto the processing target image plane R3 to generate a processing target image. The image generation apparatus 100 generates an output image by applying image conversion processing (for example, scaling conversion, affine conversion, distortion conversion, viewpoint conversion processing, and the like) to the generated processing target image. In this way, the image generation device 100 generates an output image that simultaneously displays an image (image in the plane area R1) that overlooks the vicinity of the shovel 60 from above and an image (image in the processing target image plane R3) of the surroundings viewed in the horizontal direction from the shovel 60. Hereinafter, such an output image is referred to as a virtual viewpoint image for peripheral monitoring.
When the image generation device 100 does not generate the processing target image, the virtual viewpoint image for periphery monitoring is generated by performing image conversion processing (for example, viewpoint conversion processing) on the image projected on the spatial model MD.
The virtual viewpoint image for periphery monitoring is clipped into a circle so that the image can be displayed without a sense of incongruity while the shovel 60 is turning, and is generated so that the center CTR of the circle lies on the cylinder center axis of the spatial model MD and on the turning axis PV of the shovel 60. Therefore, the virtual viewpoint image for periphery monitoring is displayed so as to rotate about the center CTR in accordance with the turning operation of the shovel 60. In this case, the cylinder center axis of the spatial model MD may or may not coincide with the reprojection axis.
In addition, the radius of the spatial model MD is, for example, 5 meters. The angle formed between the parallel line group PL and the processing target image plane R3, or the height of the starting point of the auxiliary line group AL, may be set so that an object (for example, a worker) is displayed sufficiently large (for example, 7 mm or more) on the display unit 5 when the object is present at a position separated from the turning center of the shovel 60 by the maximum reach distance (for example, 12 m) of the excavation attachment E.
Further, in the virtual viewpoint image for periphery monitoring, a CG image of the shovel 60 may be arranged so that the front of the shovel 60 coincides with the upper side of the screen of the display unit 5 and its turning center coincides with the center CTR. This is to make the positional relationship between the shovel 60 and an object appearing in the output image easier to understand. A frame image containing various information such as orientation may be arranged around the virtual viewpoint image for periphery monitoring.
Next, details of the virtual viewpoint image for monitoring the surroundings, which is generated by the image generation apparatus 100, will be described with reference to fig. 11 to 14.
Fig. 11 is a plan view of the shovel 60 on which the image generating device 100 is mounted. In the embodiment shown in fig. 11, the shovel 60 includes 3 cameras 2 (a left side camera 2L, a right side camera 2R, and a rear camera 2B) and 3 human detection sensors 6 (a left side human detection sensor 6L, a right side human detection sensor 6R, and a rear human detection sensor 6B). Regions CL, CR, and CB indicated by the one-dot chain lines in fig. 11 indicate imaging spaces of the left side camera 2L, the right side camera 2R, and the rear camera 2B, respectively. Further, regions ZL, ZR, and ZB indicated by dotted lines in fig. 11 respectively indicate monitoring spaces of the left-side person detection sensor 6L, the right-side person detection sensor 6R, and the rear person detection sensor 6B. Further, the shovel 60 includes the display unit 5 and 3 warning output units 7 (a left side warning output unit 7L, a right side warning output unit 7R, and a rear warning output unit 7B) in the cab 64.
In the present embodiment, the monitoring space of each human detection sensor 6 is narrower than the imaging space of the corresponding camera 2, but the monitoring space of the human detection sensor 6 may be the same as or larger than the imaging space of the camera 2. The monitoring space of the human detection sensor 6 is located near the shovel 60 within the imaging space of the camera 2, but may also be located in a region farther from the shovel 60. Further, the monitoring spaces of the human detection sensors 6 overlap where the imaging spaces of the cameras 2 overlap. For example, in the overlapping portion between the imaging space CR of the right side camera 2R and the imaging space CB of the rear camera 2B, the monitoring space ZR of the right side human detection sensor 6R overlaps the monitoring space ZB of the rear human detection sensor 6B. However, the monitoring spaces of the human detection sensors 6 may also be configured not to overlap.
Fig. 12 is a diagram showing input images of the 3 cameras 2 mounted on the shovel 60 and output images generated using the input images.
The image generating apparatus 100 projects the input images of the 3 cameras 2 onto the plane region R1 and the curved surface region R2 of the spatial model MD, and then re-projects the input images onto the processing target image plane R3 to generate a processing target image. The image generation apparatus 100 generates an output image by performing image conversion processing (for example, scaling conversion, affine conversion, distortion conversion, viewpoint conversion processing, and the like) on the generated processing target image. As a result, the image generation device 100 generates a virtual viewpoint image for monitoring the surroundings, which simultaneously displays an image (image in the plane area R1) overlooking the vicinity of the shovel 60 from above and an image (image in the processing target image plane R3) of the surroundings viewed in the horizontal direction from the shovel 60. The image displayed at the center of the virtual viewpoint image for monitoring the periphery is a CG image 60CG of the shovel 60.
In fig. 12, the input image of the right side camera 2R and the input image of the rear camera 2B capture a person in an overlapping portion of the imaging space of the right side camera 2R and the imaging space of the rear camera 2B, respectively (see an area R10 surrounded by a two-dot chain line in the input image of the right side camera 2R and an area R11 surrounded by a two-dot chain line in the input image of the rear camera 2B).
However, if the coordinates on the output image plane are associated with the coordinates on the input image plane of the camera with the smallest incident angle, the person in the overlapping portion disappears from the output image (refer to the region R12 surrounded by a one-dot chain line in the output image).
Therefore, the image generating apparatus 100 mixes the region corresponding to the coordinates on the input image plane of the rear camera 2B and the region corresponding to the coordinates on the input image plane of the right side camera 2R in the output image portion corresponding to the overlapping portion, preventing the object in the overlapping portion from disappearing.
Fig. 13 is a diagram for explaining a stripe pattern process as an example of an image disappearance prevention process for preventing disappearance of an object in an overlapping portion of the imaging spaces of the two cameras.
F13A is a diagram showing an output image portion corresponding to an overlapping portion of the imaging space of the right side camera 2R and the imaging space of the rear camera 2B, and corresponds to a rectangular region R13 shown by a dotted line in fig. 12.
Further, in F13A, a region PR1 filled with gray is an image region where the input image portion of the rear camera 2B is arranged, and coordinates on the input image plane of the rear camera 2B are associated with respective coordinates on the output image plane corresponding to the region PR 1.
On the other hand, the region PR2 filled with white is an image region where the input image portion of the right side camera 2R is arranged, and coordinates on the input image plane of the right side camera 2R are associated with respective coordinates on the output image plane corresponding to the region PR 2.
In the present embodiment, the regions PR1 and PR2 are arranged to form a stripe pattern (stripe pattern processing), and the boundary line of the portions where the regions PR1 and PR2 are alternately arranged in a stripe shape is determined by concentric circles on a horizontal plane centered on the rotation center of the shovel 60.
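A minimal sketch of this stripe pattern selection, assuming the output coordinate (x, y) is expressed on the road surface relative to the turning center of the shovel 60; the band width is an illustrative value, not one taken from the document.

```python
import math

def stripe_source(x, y, band_width=0.5):
    """Stripe pattern sketch for the overlapping portion: the source camera
    alternates between the rear camera 2B (region PR1) and the right side
    camera 2R (region PR2) in concentric bands centered on the turning
    center of the shovel 60."""
    r = math.hypot(x, y)                # distance from the turning center
    band = int(r // band_width)         # index of the concentric band
    return 'rear_2B' if band % 2 == 0 else 'right_2R'
```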
F13B is a plan view showing the state of the space region diagonally to the right and rearward of the shovel 60, and shows the current state of the space region imaged by both the rear camera 2B and the right side camera 2R. F13B indicates a case where a rod-shaped solid object OB is present diagonally to the right and rearward of the shovel 60.
F13C indicates a part of an output image generated based on an input image obtained by actually capturing the spatial region indicated by F13B by the rear camera 2B and the right side camera 2R.
Specifically, the image OB1 represents an image in which the image of the three-dimensional object OB in the input image of the rear camera 2B is extended in the extending direction of the line connecting the rear camera 2B and the three-dimensional object OB by the viewpoint conversion for generating the road surface image. That is, the image OB1 is a part of the image of the three-dimensional object OB displayed in the case of generating the road surface image in the output image portion using the input image of the rear camera 2B.
The image OB2 is an image in which the image of the solid object OB in the input image of the right camera 2R is extended in the direction of extension of the line connecting the right camera 2R and the solid object OB by the viewpoint conversion for generating the road surface image. That is, the image OB2 is a part of the image of the three-dimensional object OB displayed in the case of generating the road surface image in the output image portion using the input image of the right side camera 2R.
In this way, the image generation apparatus 100 mixes the region PR1 corresponding to the coordinates on the input image plane of the rear camera 2B and the region PR2 corresponding to the coordinates on the input image plane of the right side camera 2R in the overlapping portion. As a result, the image generating apparatus 100 displays both the two images OB1 and OB2 for 1 three-dimensional object OB on the output image, and prevents the three-dimensional object OB from disappearing from the output image.
Fig. 14 is a comparison diagram showing a difference between the output image of fig. 12 and an output image obtained by applying the image loss prevention process (stripe pattern process) to the output image of fig. 12, in which the upper diagram of fig. 14 shows the output image of fig. 12, and the lower diagram of fig. 14 shows the output image after applying the image loss prevention process (stripe pattern process). While a person disappears in the region R12 surrounded by a one-dot chain line in the upper diagram of fig. 14, a person is displayed without disappearing in the region R14 surrounded by a one-dot chain line in the lower diagram of fig. 14.
In addition, the image generation apparatus 100 may apply mesh pattern processing, averaging processing, or the like instead of stripe pattern processing to prevent the disappearance of an object in the overlapping portion. Specifically, in the averaging process, the image generation apparatus 100 uses, as the value of each pixel of the output image portion corresponding to the overlapping portion, the average value of the values (for example, luminance values) of the corresponding pixels in the respective input images of the two cameras. Alternatively, in the mesh pattern processing, the image generating apparatus 100 arranges, in the output image portion corresponding to the overlapping portion, regions corresponding to the values of pixels in the input image of one camera and regions corresponding to the values of pixels in the input image of the other camera so as to form a mesh (checkerboard) pattern. In this way, the image generation apparatus 100 prevents the object in the overlapping portion from disappearing.
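The averaging and mesh pattern alternatives could be sketched as follows; the cell size of the mesh is an illustrative value and the pixel values are assumed to be luminance values.

```python
def blend_overlap(value_2b, value_2r, x, y, mode='average', cell=0.25):
    """Alternatives to the stripe pattern for the overlapping portion:
    'average' mixes the two corresponding pixel values, while 'mesh'
    alternates the source camera on a checkerboard of cell-sized squares."""
    if mode == 'average':
        return (value_2b + value_2r) / 2
    ix, iy = int(x // cell), int(y // cell)
    return value_2b if (ix + iy) % 2 == 0 else value_2r
```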
Next, with reference to fig. 15 to 17, a process (hereinafter referred to as "1 st input image determination process") in which the image generation unit 11 determines an input image to be used for generating an output image from a plurality of input images based on the determination result of the human presence determination unit 12 will be described. Fig. 15 is a diagram showing input images of the 3 cameras 2 mounted on the shovel 60 and output images generated using the input images, and corresponds to fig. 12. Fig. 16 is a correspondence table showing the correspondence relationship between the determination result of the presence/absence determination means 12 and the input image used for generating the output image. Fig. 17 is a display example of an output image generated based on the input image determined by the 1 st input image determination process.
As shown in fig. 15, the image generating apparatus 100 projects the input images of the 3 cameras 2 onto the plane region R1 and the curved region R2 of the spatial model MD, and then re-projects the input images onto the processing target image plane R3 to generate a processing target image. The image generation apparatus 100 generates an output image by performing image conversion processing (for example, scaling conversion, affine conversion, distortion conversion, viewpoint conversion processing, and the like) on the generated processing target image. As a result, the image generating apparatus 100 generates a virtual viewpoint image for monitoring the surroundings, which simultaneously displays an image that overlooks the vicinity of the shovel 60 from above and an image that shows the surroundings in the horizontal direction from the shovel 60.
In fig. 15, the input images of the left side camera 2L, the rear camera 2B, and the right side camera 2R each show a state in which 3 workers are captured. The output image shows that 9 workers are present around the shovel 60.
Here, the correspondence relationship between the determination result of the human presence/absence determining means 12 and the input image used for generating the output image will be described with reference to the correspondence table of fig. 16. Note that the "o" mark indicates that the human presence determination means 12 determines that a human is present, and the "x" mark indicates that it determines that no human is present.
Pattern 1 represents: when it is determined that a person is present only in the left monitoring space ZL and that no person is present in the rear monitoring space ZB and the right monitoring space ZR, an output image is generated using the input image of the left side camera 2L. This pattern 1 is employed, for example, when workers (3 persons in this example) are present only on the left side of the shovel 60. As indicated by the output image D1 in fig. 17, the image generation means 11 outputs the input image of the left side camera 2L, which has captured the 3 workers, as it is as the output image. An output image that uses an input image as it is will hereinafter be referred to as a "through image".
Pattern 2 represents: when it is determined that a person is present only in the rear monitoring space ZB and that no person is present in the left monitoring space ZL and the right monitoring space ZR, the output image is generated using the input image of the rear camera 2B. This pattern 2 is employed, for example, when workers (3 persons in this example) are present only behind the shovel 60. As indicated by the output image D2 in fig. 17, the image generation means 11 outputs the input image of the rear camera 2B, which has captured the 3 workers, as it is as the output image.
Pattern 3 represents: when it is determined that a person is present only in the right monitoring space ZR and that no person is present in the left monitoring space ZL and the rear monitoring space ZB, an output image is generated using the input image of the right side camera 2R. This pattern 3 is employed, for example, when workers (3 persons in this example) are present only on the right side of the shovel 60. As indicated by the output image D3 in fig. 17, the image generation means 11 outputs the input image of the right side camera 2R, which has captured the 3 workers, as it is as the output image.
Pattern 4 represents: when it is determined that a person is present in the left monitoring space ZL and the rear monitoring space ZB and that no person is present in the right monitoring space ZR, all 3 input images are used to generate the output image. This pattern 4 is employed, for example, when workers (in this example, 6 workers in total, 3 on each side) are present on the left side and behind the shovel 60. As indicated by the output image D4 in fig. 17, the image generation means 11 outputs, as the output image, a virtual viewpoint image for periphery monitoring generated from the 3 input images, in which the 6 workers are captured.
Pattern 5 represents: when it is determined that a person is present in the rear monitoring space ZB and the right monitoring space ZR and that no person is present in the left monitoring space ZL, all 3 input images are used to generate the output image. This pattern 5 is employed, for example, when workers (in this example, 6 workers in total, 3 on each side) are present behind and on the right side of the shovel 60. As indicated by the output image D5 in fig. 17, the image generation means 11 outputs, as the output image, a virtual viewpoint image for periphery monitoring generated from the 3 input images, in which the 6 workers are captured.
Pattern 6 represents: when it is determined that a person is present in the left monitoring space ZL and the right monitoring space ZR and that no person is present in the rear monitoring space ZB, all 3 input images are used to generate the output image. This pattern 6 is employed, for example, when workers (in this example, 6 workers in total, 3 on each side) are present on the left and right sides of the shovel 60. As indicated by the output image D6 in fig. 17, the image generation means 11 outputs, as the output image, a virtual viewpoint image for periphery monitoring generated from the 3 input images, in which the 6 workers are captured.
Pattern 7 represents: when it is determined that a person is present in all of the left monitoring space ZL, the rear monitoring space ZB, and the right monitoring space ZR, an output image is generated using all 3 input images. This pattern 7 is employed, for example, when workers (in this example, 9 workers in total, 3 on each side) are present on the left side, behind, and on the right side of the shovel 60. As shown in the output image of fig. 15, the image generating means 11 outputs, as the output image, a virtual viewpoint image for periphery monitoring generated from the 3 input images, in which the 9 workers are captured.
Pattern 8 represents: when it is determined that no person is present in any of the left monitoring space ZL, the rear monitoring space ZB, and the right monitoring space ZR, an output image is generated using all 3 input images. This pattern 8 is employed, for example, when no worker is present on the left side, behind, or on the right side of the shovel 60. As shown by the output image D7 in fig. 17, the image generating means 11 outputs, as the output image, a virtual viewpoint image for periphery monitoring generated from the 3 input images, showing a state in which no worker is present in the surroundings.
As described above, when determining that a person is present in only 1 of the 3 monitored spaces, the image generating means 11 outputs the through image of the corresponding camera as the output image. This is to display a person present in the monitored space as large as possible on the display section 5. On the other hand, when determining that persons are present in two or more of the 3 monitored spaces, the image generating means 11 outputs the virtual viewpoint image for periphery monitoring without outputting a through image. This is because not all the people present around the shovel 60 can be displayed on the display unit 5 with only 1 through image, whereas all of them can be displayed if the virtual viewpoint image for periphery monitoring is output. When it is determined that no person is present in any of the 3 monitored spaces, the image generating means 11 likewise outputs the virtual viewpoint image for periphery monitoring without outputting a through image. This is because there is no person to be displayed in an enlarged manner, and in addition, it allows wide monitoring of objects other than persons present around the shovel 60.
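The correspondence table of fig. 16 can be summarized by the following sketch; the boolean arguments stand for the determination results for the monitored spaces ZL, ZB, and ZR, and the return values are illustrative labels rather than actual image data.

```python
def select_output_1st(left, rear, right):
    """1st input image determination process sketch (patterns 1 to 8)."""
    present = [cam for cam, flag in (('left_2L', left),
                                     ('rear_2B', rear),
                                     ('right_2R', right)) if flag]
    if len(present) == 1:
        # A person in exactly one monitored space: output the through image
        # of the corresponding camera (patterns 1 to 3).
        return ('through', present[0])
    # No person, or persons in two or more spaces: output the virtual
    # viewpoint image for periphery monitoring (patterns 4 to 8).
    return ('virtual_viewpoint', None)
```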
When displaying the through image, the image generating means 11 may display a text message that can indicate which input image is used.
Next, another example of processing for determining an input image to be used for generating an output image from among a plurality of input images (hereinafter referred to as "2 nd input image determination processing") based on the determination result of the human presence/absence determining means 12 will be described with reference to fig. 18 to 20. Fig. 18 is a plan view of a shovel 60 showing another arrangement example of the human detection sensor 6, and corresponds to fig. 11. Fig. 19 is a correspondence table showing the correspondence relationship between the determination result of the presence/absence determination means 12 and the input image used for generating the output image, and corresponds to fig. 16. Fig. 20 is a display example of an output image generated based on the input image determined in the 2 nd input image determination processing.
In the embodiment shown in fig. 18, the shovel 60 includes two cameras 2 (a right side camera 2R and a rear camera 2B) and 3 human detection sensors 6 (a right side human detection sensor 6R, a right rear human detection sensor 6BR, and a rear human detection sensor 6B). Regions CR and CB indicated by one-dot chain lines in fig. 18 respectively indicate imaging spaces of the right side camera 2R and the rear camera 2B. Regions ZR, ZBR, and ZB indicated by dotted lines in fig. 18 respectively indicate the monitoring spaces of the right-side person detection sensor 6R, the right-side rear person detection sensor 6BR, and the rear person detection sensor 6B. In fig. 18, a region X indicated by hatching indicates an overlapping portion between the imaging space CR and the imaging space CB (hereinafter referred to as "overlapping imaging space X").
The arrangement example of fig. 18 differs from the arrangement example of fig. 11 in that the monitoring space ZR and the monitoring space ZB have no overlapping portion, and in that the right rear human detection sensor 6BR, whose monitoring space ZBR includes the overlapping imaging space X, is provided.
According to this arrangement of the human detection sensors 6, the image generation apparatus 100 can determine whether or not a person is present in the overlapping imaging space X. The image generation apparatus 100 can use this determination result when determining the input image to be used for generating the output image, and can thus switch the content of the output image more appropriately.
Here, the correspondence relationship between the determination result of the human presence/absence determining means 12 and the input image for generating the output image will be described with reference to the correspondence table of fig. 19.
Pattern A represents: when it is determined that no person is present in any of the rear monitored space ZB, the right rear monitored space ZBR, and the right side monitored space ZR, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern A is employed, for example, when no worker is present around the shovel 60. As shown in the output image D7 of fig. 17, the image generating means 11 generates and outputs, based on the two input images, a virtual viewpoint image for periphery monitoring showing a state in which no worker is present in the surroundings.
Pattern B represents: when it is determined that a person is present only in the rear monitored space ZB and that no person is present in the right rear monitored space ZBR and the right side monitored space ZR, an output image is generated using the input image of the rear camera 2B. This pattern B is employed, for example, when one worker P1 is present behind the shovel 60. As shown in the output image D10 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the worker P1, as it is as the output image.
Pattern C represents: when it is determined that a person is present only in the right side monitored space ZR and that no person is present in the rear monitored space ZB and the right rear monitored space ZBR, an output image is generated using the input image of the right side camera 2R. This pattern C is employed, for example, when one worker P2 is present on the right side of the shovel 60. As shown in the output image D11 of fig. 20, the image generation means 11 outputs the input image of the right side camera 2R, which has captured the worker P2, as it is as the output image.
Pattern D represents: when it is determined that a person is present only in the right rear monitored space ZBR and that no person is present in the rear monitored space ZB and the right side monitored space ZR, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern D is employed, for example, when one worker P3 is present in the overlapping imaging space X to the right rear of the shovel 60. As shown in the output image D12 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the worker P3, as it is as the 1st output image (left side in the figure), and outputs the input image of the right side camera 2R, which has captured the same worker P3, as it is as the 2nd output image (right side in the figure).
Pattern E represents: when it is determined that a person is present in the rear monitored space ZB and the right rear monitored space ZBR and that no person is present in the right side monitored space ZR, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern E is employed, for example, when one worker P4 is present behind the shovel 60 and another worker P5 is present in the overlapping imaging space X to the right rear of the shovel 60. As shown in the output image D13 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the workers P4 and P5, as it is as the 1st output image (left side in the figure), and outputs the input image of the right side camera 2R, which has captured only the worker P5, as it is as the 2nd output image (right side in the figure).
Pattern F represents: when it is determined that a person is present in the rear monitored space ZB and the right side monitored space ZR and that no person is present in the right rear monitored space ZBR, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern F is employed, for example, when one worker P6 is present behind the shovel 60 and another worker P7 is present on the right side of the shovel 60. As shown in the output image D14 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the worker P6, as it is as the 1st output image (left side in the figure), and outputs the input image of the right side camera 2R, which has captured the worker P7, as it is as the 2nd output image (right side in the figure).
Pattern G represents: when it is determined that a person is present in the right rear monitored space ZBR and the right side monitored space ZR and that no person is present in the rear monitored space ZB, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern G is employed, for example, when one worker P8 is present in the overlapping imaging space X to the right rear of the shovel 60 and another worker P9 is present on the right side of the shovel 60. As shown in the output image D15 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the worker P8, as it is as the 1st output image (left side in the figure), and outputs the input image of the right side camera 2R, which has captured the workers P8 and P9, as it is as the 2nd output image (right side in the figure).
Pattern H represents: when it is determined that a person is present in all of the rear monitored space ZB, the right rear monitored space ZBR, and the right side monitored space ZR, an output image is generated using the input images of the rear camera 2B and the right side camera 2R. This pattern H is employed, for example, when one worker P10 is present in the overlapping imaging space X to the right rear of the shovel 60, another worker P11 is present behind the shovel 60, and still another worker P12 is present on the right side of the shovel 60. As shown in the output image D16 of fig. 20, the image generating means 11 outputs the input image of the rear camera 2B, which has captured the workers P10 and P11, as it is as the 1st output image (left side in the figure), and outputs the input image of the right side camera 2R, which has captured the workers P10 and P12, as it is as the 2nd output image (right side in the figure).
In the 2nd input image determination process, when a person is captured in the input image of only 1 camera, the image generation means 11 outputs the through image of the corresponding camera as the output image. This is to display a person present in the monitored space as large as possible on the display section 5. On the other hand, when a person is captured in the input images of both cameras, the image generation unit 11 outputs the through images of both cameras simultaneously and independently. This is because not all the people present around the shovel 60 can be displayed on the display unit 5 with only 1 through image, and because outputting the virtual viewpoint image for periphery monitoring would make the workers harder to recognize than outputting the through images. When it is determined that no person is present in any of the 3 monitored spaces, the image generating means 11 outputs the virtual viewpoint image for periphery monitoring without outputting a through image. This is because there is no person to be displayed in an enlarged manner, and in addition, it allows wide monitoring of objects other than persons present around the shovel 60.
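The correspondence table of fig. 19 can likewise be summarized as follows; the boolean arguments stand for the determinations for ZB, ZBR, and ZR, and a person in the overlapping imaging space X is treated as visible to both cameras. The return values are illustrative labels only.

```python
def select_output_2nd(rear, rear_right, right):
    """2nd input image determination process sketch (patterns A to H)."""
    use_rear = rear or rear_right       # someone visible to the rear camera 2B
    use_right = right or rear_right     # someone visible to the right camera 2R
    if not (use_rear or use_right):
        # Pattern A: no person detected; output the virtual viewpoint image.
        return [('virtual_viewpoint', None)]
    views = []
    if use_rear:
        views.append(('through', 'rear_2B'))
    if use_right:
        views.append(('through', 'right_2R'))
    return views                        # patterns B to H: one or two through images
```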
In the case of displaying the through image, the image generating means 11 may display a text message indicating which input image is used, as in the 1 st input image determining process.
Next, another display example of an output image generated based on the input image determined in the 1 st input image determination process or the 2 nd input image determination process will be described with reference to fig. 21.
As shown in fig. 21, the image generation unit 11 may display the virtual viewpoint image for periphery monitoring and a through image simultaneously. For example, when a worker is present behind the shovel 60, the image generating means 11 may display the virtual viewpoint image for periphery monitoring as the 1st output image and the through image of the rear camera 2B as the 2nd output image, as shown in the output image D20 in fig. 21. In this case, the through image is displayed below the virtual viewpoint image for periphery monitoring. This allows the operator of the shovel 60 to intuitively grasp that a worker is present behind the shovel 60. When a worker is present on the left side of the shovel 60, the image generating means 11 may display the virtual viewpoint image for periphery monitoring as the 1st output image and the through image of the left side camera 2L as the 2nd output image, as shown in the output image D21 in fig. 21. In this case, the through image is displayed on the left side of the virtual viewpoint image for periphery monitoring. Similarly, when a worker is present on the right side of the shovel 60, the image generating means 11 may display the virtual viewpoint image for periphery monitoring as the 1st output image and the through image of the right side camera 2R as the 2nd output image, as shown in the output image D22 in fig. 21. In this case, the through image is displayed on the right side of the virtual viewpoint image for periphery monitoring.
Alternatively, the image generation unit 11 may display the virtual viewpoint image for periphery monitoring and a plurality of through images simultaneously. For example, when workers are present behind and on the right side of the shovel 60, the image generating means 11 may display the virtual viewpoint image for periphery monitoring as the 1st output image, the through image of the rear camera 2B as the 2nd output image, and the through image of the right side camera 2R as the 3rd output image, as shown in the output image D23 in fig. 21. In this case, the through image of the rear camera 2B is displayed below the virtual viewpoint image for periphery monitoring, and the through image of the right side camera 2R is displayed on the right side of the virtual viewpoint image for periphery monitoring.
In addition, when a plurality of output images are simultaneously displayed, the image generating means 11 may display information indicating the content of the output images. For example, the image generation unit 11 may display a text such as "rear camera through image" on or around the through image of the rear camera 2B.
When a plurality of output images are simultaneously displayed, the image generating means 11 may pop up 1 or more through images on the virtual viewpoint image for monitoring the periphery.
With the above configuration, the image generating apparatus 100 switches the content of the output image based on the determination result of the human presence/absence determining means 12. Specifically, the image generating apparatus 100 switches between, for example, the virtual viewpoint image for periphery monitoring, in which the content of each input image is reduced and transformed before being displayed, and the through image, in which the content of a single input image is displayed as it is. In this way, by displaying the through image when a worker is detected in the vicinity of the shovel 60, the image generating apparatus 100 can prevent the operator of the shovel 60 from overlooking the worker more reliably than when only the virtual viewpoint image for periphery monitoring is displayed, because the worker is displayed large and in an easily recognizable manner.
Further, in the above-described embodiment, when generating an output image based on the input image of 1 camera, the image generation means 11 uses the through image of that camera as the output image. However, the present invention is not limited to this configuration. For example, the image generating unit 11 may generate a virtual viewpoint image for rear monitoring as the output image based on the input image of the rear camera 2B.
Further, in the above-described embodiment, the image generation apparatus 100 makes the monitoring space of 1 human detection sensor correspond to the imaging space of 1 camera, but the monitoring space of 1 human detection sensor may be made to correspond to the imaging spaces of a plurality of cameras, or the monitoring spaces of a plurality of human detection sensors may be made to correspond to the imaging space of 1 camera.
In the above-described embodiment, the image generation apparatus 100 overlaps parts of the monitoring spaces of two adjacent human detection sensors, but the monitoring spaces need not overlap. In addition, the image generating apparatus 100 may include the monitoring space of one human detection sensor entirely within the monitoring space of another human detection sensor.
Further, in the above-described embodiment, the image generation apparatus 100 switches the content of the output image at the instant when the determination result of the human presence/absence determination means 12 changes. However, the present invention is not limited to this configuration. For example, the image generating apparatus 100 may set a predetermined delay time from when the determination result of the human presence/absence determining unit 12 changes until the content of the output image is switched. This is to prevent the content of the output image from being switched too frequently.
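A minimal sketch of such a delay, implemented as a hold time before the displayed content follows a changed determination result; the hold time is an illustrative value and the class is not part of the original document.

```python
import time

class OutputSwitchDelay:
    """Switch the output image content only after the determination result
    of the human presence/absence determination means has stayed at a new
    value for hold_s seconds."""
    def __init__(self, hold_s=1.0):
        self.hold_s = hold_s
        self.current = None             # content currently displayed
        self.pending = None             # candidate content not yet adopted
        self.pending_since = 0.0

    def update(self, new_result):
        if new_result == self.current:
            self.pending = None
            return self.current
        if new_result != self.pending:
            self.pending = new_result
            self.pending_since = time.monotonic()
        elif time.monotonic() - self.pending_since >= self.hold_s:
            self.current, self.pending = new_result, None
        return self.current
```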
Next, a process (hereinafter referred to as "alarm control process") in which the alarm control means 13 controls the alarm output unit 7 based on the determination result of the human presence determination means 12 will be described with reference to fig. 22 and 23. Fig. 22 is a flowchart showing a flow of the alarm control process, and fig. 23 is an example of transition of the output image displayed in the alarm control process. The alarm control means 13 repeatedly executes the alarm control process at predetermined intervals. The image generating apparatus 100 is mounted on the shovel 60 shown in fig. 11.
First, the human presence/absence determination means 12 determines whether or not a human is present around the shovel 60 (step S11). At this time, the image generating means 11 generates and displays a virtual viewpoint image for monitoring the surroundings, such as the output image D31 shown in fig. 23, for example.
When it is determined that a person is present around the shovel 60 (YES at step S11), the person presence/absence determination means 12 determines which of the left monitored space ZL, the rear monitored space ZB, and the right monitored space ZR is occupied by a person (step S12).
When the presence/absence determination means 12 determines that a person is present only in the left monitoring space ZL (on the left side in step S12), it outputs a left detection signal to the alarm control means 13. Then, the alarm control means 13 that has received the left side detection signal outputs an alarm start signal to the left side alarm output unit 7L, and outputs an alarm from the left side alarm output unit 7L (step S13). The image generation unit 11 displays, as an output image, a through image of the left camera 2L as shown in the output image D32 of fig. 23, for example. Alternatively, for example, as shown in the output image D33 of fig. 23, the image generating means 11 may display the virtual viewpoint image for monitoring the surroundings as the 1 st output image (the right image in the figure) and the through image of the left camera 2L as the 2 nd output image (the left image in the figure).
When the human presence/absence determination means 12 determines that a human is present only in the rear monitoring space ZB (the rear side in step S12), it outputs a rear detection signal to the alarm control means 13. Then, the alarm control means 13 that has received the rear detection signal outputs an alarm start signal to the rear alarm output unit 7B, and outputs an alarm from the rear alarm output unit 7B (step S14). The image generation unit 11 displays, for example, a through image of the rear camera 2B as the output image. Alternatively, the image generation unit 11 may display the virtual viewpoint image for periphery monitoring as the 1st output image and the through image of the rear camera 2B as the 2nd output image.
When determining that a person is present only in the right side monitoring space ZR (the right side in step S12), the person presence/absence determination means 12 outputs a right side detection signal to the alarm control means 13. Then, the alarm control means 13 that has received the right side detection signal outputs an alarm start signal to the right side alarm output unit 7R, and outputs an alarm from the right side alarm output unit 7R (step S15). Further, the image generation mechanism 11 displays, for example, a through image of the right side camera 2R as an output image. Alternatively, the image generation unit 11 may display the virtual viewpoint image for periphery monitoring as the 1 st output image and the through image of the right camera 2R as the 2 nd output image.
On the other hand, if it is determined that NO person is present around the shovel 60 (NO in step S11), the person presence/absence determination means 12 ends the current alarm control process without outputting a detection signal to the alarm control means 13.
When the presence/absence determination means 12 determines that a person is present in two or more of the left monitored space ZL, the rear monitored space ZB, and the right monitored space ZR, the alarm control means 13 outputs an alarm from corresponding two or more of the left alarm output unit 7L, the rear alarm output unit 7B, and the right alarm output unit 7R. Further, when the presence/absence determination means 12 determines that a person is present in two or more of the left monitored space ZL, the rear monitored space ZB, and the right monitored space ZR, the image generation means 11 displays the output image by the method described in the above embodiment. Specifically, the image generating means 11 may display only the virtual viewpoint image for monitoring the periphery as shown in the output image D34 of fig. 23. The image generation means 11 may simultaneously display two or more through images as shown in the output image D35 of fig. 23. The image generation means 11 may display the virtual viewpoint image for monitoring the periphery and two or more through images at the same time as shown in the output image D36 of fig. 23.
In the configuration in which the alarm sound is output from the alarm output unit 7, the image generating apparatus 100 may be configured such that the contents (level, output interval, and the like) of the alarm sounds of the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R are different. Similarly, in the configuration in which light is output from alarm output unit 7, image generating apparatus 100 may be configured such that the contents of light (color, light emission interval, and the like) of left alarm output unit 7L, rear alarm output unit 7B, and right alarm output unit 7R are different. This is to enable the operator of the shovel 60 to more intuitively recognize the general position of a person present in the periphery of the shovel 60 from the difference in the content of the alarm.
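As a sketch of this direction-dependent alarm output, the following assumes each alarm output unit is driven through a callable and that the tone frequency and repetition interval differ per direction; the specific values and the outputs mapping are illustrative assumptions, not values from the document.

```python
def drive_alarms(left_detected, rear_detected, right_detected, outputs):
    """Alarm control sketch: sound the alarm output unit(s) corresponding to
    the monitored space(s) in which a person was determined to be present,
    with a different tone and interval for each direction so that the
    operator can tell roughly where the person is.
    outputs: maps 'left'/'rear'/'right' to a callable like buzzer(freq, gap)."""
    tones = {'left': (600, 0.5), 'rear': (800, 0.3), 'right': (1000, 0.5)}
    detections = {'left': left_detected, 'rear': rear_detected,
                  'right': right_detected}
    for direction, detected in detections.items():
        if detected:
            freq, gap = tones[direction]
            outputs[direction](freq, gap)   # e.g. drive 7L, 7B, or 7R
```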
With the above configuration, the image generation device 100 enables the operator of the shovel 60 to intuitively grasp the approximate position of a worker present around the shovel 60. For example, the image generating apparatus 100 can determine whether a worker is present on the left side, behind, or on the right side of the shovel 60 without detecting the worker's exact position, and can intuitively convey that direction to the operator of the shovel 60.
In the above embodiment, the alarm output unit 7 is constituted by 3 independent buzzers, but sound may be localized using a surround sound system including a plurality of speakers.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various modifications and substitutions can be made to the above embodiments without departing from the scope of the present invention.
For example, in the above-described embodiment, the image generation apparatus 100 employs the cylindrical spatial model MD as the spatial model, but may employ a spatial model having another columnar shape such as a polygonal column, may employ a spatial model composed of both the bottom surface and the side surface, or may employ a spatial model having only the side surface.
The image generating apparatus 100 is mounted, together with a camera and a human detection sensor, on a self-propelled shovel equipped with movable members such as a bucket, an arm, a boom, and a swing mechanism. The image generating apparatus 100 constitutes an operation support system that supports the movement of the shovel and the operation of the movable members while presenting a surrounding image to the operator. However, the image generating apparatus 100 may be mounted, together with a camera and a human detection sensor, on a working machine that has no swing mechanism, such as a forklift or an asphalt finisher. Alternatively, the image generating apparatus 100 may be mounted, together with a camera and a human detection sensor, on a working machine that has movable members but does not travel by itself, such as an industrial machine or a stationary crane. The image generation apparatus 100 may then constitute an operation support system that supports the operation of these work machines.
The periphery monitoring apparatus has been described by taking the image generating apparatus 100 including the camera 2 and the display unit 5 as an example, but the periphery monitoring apparatus may be configured without the image display function provided by the camera 2, the display unit 5, and the like. For example, as shown in fig. 24, the periphery monitoring apparatus 100A, as an apparatus for executing the alarm control process, may omit the camera 2, the input unit 3, the storage unit 4, the display unit 5, the coordinate association establishing means 10, and the image generating means 11.
Further, the present application claims priority based on japanese patent application No. 2013-057401 filed on 3/19/2013, the entire contents of which are incorporated by reference in the present application.
Description of the reference symbols
1 a control unit; 2 a camera; 2L left side camera; 2R right side camera; 2B rear camera; 3 an input unit; 4 a storage section; 5 a display unit; 6 human detection sensors; 6L left side human detection sensor; 6R right side human detection sensor; 6B rear human detection sensor; 7 an alarm output section; 7L left side alarm output part; 7B rear alarm output part; 7R right side alarm output part; 10 coordinate association establishing means; 11 an image generating means; 12 a human presence/absence determination means; 13 an alarm control means; 40 input image-spatial model correspondence map; 41 spatial model-processing target image correspondence map; 42 processing target image-output image correspondence map; 60 an excavator; 61 a lower traveling body; 62 a swing mechanism; 63 an upper slewing body; 64 a cab; 100 an image generating device; 100A periphery monitoring device.
The claims (modification according to treaty clause 19)
(corrected) a periphery monitoring device for a working machine, characterized in that,
comprising:
a human presence/absence determination unit configured to determine the presence/absence of a human in each of a plurality of monitoring spaces around the working machine; and
an alarm control mechanism that is provided in a cab of the work machine and controls a plurality of alarm output units that output an alarm to an operator;
the plurality of monitored spaces include a 1 st monitored space and a 2 nd monitored space which are two of the plurality of monitored spaces;
the plurality of alarm output units include a 1 st alarm output unit corresponding to the 1 st monitored space and a 2 nd alarm output unit corresponding to the 2 nd monitored space, which are two of the plurality of alarm output units;
the human presence/absence determination means determines the presence/absence of a human on the basis of an output of a human detection sensor that detects the presence/absence of a human without determining the position of the human in the monitored space;
the alarm control means outputs an alarm from the 1st alarm output unit when it is determined that a person is present in the 1st monitored space, and outputs an alarm from the 2nd alarm output unit when it is determined that a person is present in the 2nd monitored space.
2. The periphery monitoring device for a working machine according to claim 1,
the plurality of monitored spaces include a rear monitored space located behind the working machine and a side monitored space located on a side of the working machine;
the plurality of alarm output units include a rear alarm output unit provided behind the cab and a side alarm output unit provided on a side of the cab;
the rear alarm output unit corresponds to the rear monitored space;
the side alarm output unit corresponds to the side monitored space.
3. The periphery monitoring device for a working machine according to claim 1,
the alarm output units output an alarm by at least one of sound and light.
4. The periphery monitoring device for a working machine according to claim 1,
the human presence/absence determination means determines the presence/absence of a human in each of the plurality of monitored spaces based on outputs of a plurality of moving-body detection sensors or image sensors mounted on the working machine.
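Purely as an illustration of the correspondence recited in claims 2 to 4 above, the following sketch pairs each monitored space with the alarm output unit on the matching side and judges presence from the outputs of several sensors assigned to that space. The OR-combination of the sensor outputs and every identifier are assumptions made for this sketch, not limitations taken from the claims.

    # Illustrative only: directional space-to-output correspondence.
    from typing import Callable, Dict, List, NamedTuple

    class SpaceConfig(NamedTuple):
        detectors: List[Callable[[], bool]]    # moving-body or image sensors watching this space
        sound_alarm: Callable[[bool], None]    # e.g. buzzer on the matching side of the cab
        light_alarm: Callable[[bool], None]    # e.g. indicator lamp on the matching side of the cab

    def update_alarms(spaces: Dict[str, SpaceConfig]) -> None:
        for cfg in spaces.values():
            present = any(read() for read in cfg.detectors)  # presence only; no position is estimated
            cfg.sound_alarm(present)   # alarm by sound and/or
            cfg.light_alarm(present)   # by light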
Statement under Article 19(1) of the PCT
Claim 1 specifies that the human presence/absence determination means determines the presence/absence of a human based on "an output of a human detection sensor that detects the presence/absence of a human without determining the position of the human in the monitored space".
Document 1 (JP 2010-112100 A), Document 2 (JP 2008-179940 A), and Document 3 (JP 10-090406 A) neither disclose nor suggest a human presence/absence determination means that determines the presence/absence of a human based on the output of a human detection sensor that detects the presence/absence of a human without determining the position of the human in the monitored space.

Claims (4)

1. A periphery monitoring device for a working machine, characterized by
comprising:
a human presence/absence determination means that determines the presence/absence of a human in each of a plurality of monitored spaces around the working machine; and
an alarm control means that is provided in a cab of the working machine and controls a plurality of alarm output units that output an alarm to an operator;
the plurality of monitored spaces include a 1st monitored space and a 2nd monitored space which are two of the plurality of monitored spaces;
the plurality of alarm output units include a 1st alarm output unit corresponding to the 1st monitored space and a 2nd alarm output unit corresponding to the 2nd monitored space, which are two of the plurality of alarm output units;
the alarm control means outputs an alarm from the 1st alarm output unit when it is determined that a person is present in the 1st monitored space, and outputs an alarm from the 2nd alarm output unit when it is determined that a person is present in the 2nd monitored space.
2. The periphery monitoring device for a working machine according to claim 1,
the plurality of monitored spaces include a rear monitored space located behind the working machine and a side monitored space located on a side of the working machine;
the plurality of alarm output units include a rear alarm output unit provided behind the cab and a side alarm output unit provided on a side of the cab;
the rear alarm output unit corresponds to the rear monitored space;
the side alarm output unit corresponds to the side monitored space.
3. The periphery monitoring device for a working machine according to claim 1,
the alarm output units output an alarm by at least one of sound and light.
4. The periphery monitoring device for a working machine according to claim 1,
the human presence/absence determination means determines the presence/absence of a human in each of the plurality of monitored spaces based on outputs of a plurality of moving-body detection sensors or image sensors mounted on the working machine.
CN201480007706.9A 2013-03-19 2014-02-24 Periphery monitoring device for work machine Pending CN104982030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910680197.0A CN110318441A (en) 2013-03-19 2014-02-24 Work machine

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2013057401A JP6545430B2 (en) 2013-03-19 2013-03-19 Shovel
JP2013-057401 2013-03-19
PCT/JP2014/054289 WO2014148203A1 (en) 2013-03-19 2014-02-24 Periphery monitoring device for work machine

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201910680197.0A Division CN110318441A (en) 2013-03-19 2014-02-24 Work machine

Publications (1)

Publication Number Publication Date
CN104982030A true CN104982030A (en) 2015-10-14

Family

ID=51579895

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201480007706.9A Pending CN104982030A (en) 2013-03-19 2014-02-24 Periphery monitoring device for work machine
CN201910680197.0A Pending CN110318441A (en) 2013-03-19 2014-02-24 Work machine

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910680197.0A Pending CN110318441A (en) 2013-03-19 2014-02-24 Work machine

Country Status (5)

Country Link
US (1) US9836938B2 (en)
EP (1) EP2978213B1 (en)
JP (1) JP6545430B2 (en)
CN (2) CN104982030A (en)
WO (1) WO2014148203A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108892042A (en) * 2018-09-13 2018-11-27 郑州大学 A kind of steel ladle trunnion lifting contraposition identification device and method
CN109312557A (en) * 2016-07-04 2019-02-05 住友建机株式会社 Excavator
CN109314769A (en) * 2016-11-01 2019-02-05 住友建机株式会社 Construction machinery surroundings monitoring system
CN109993936A (en) * 2019-03-19 2019-07-09 上海电力高压实业有限公司 Transfomer Substation Reconstruction geofence system and method
CN110166751A (en) * 2019-06-03 2019-08-23 三一汽车制造有限公司 Arm support equipment, arm support equipment safety early warning device and method
CN110325687A (en) * 2017-02-24 2019-10-11 住友重机械工业株式会社 Excavator, the control method of excavator and portable information terminal
CN111954637A (en) * 2018-04-27 2020-11-17 株式会社多田野 Crane vehicle
CN113614320A (en) * 2019-03-13 2021-11-05 神钢建机株式会社 Periphery monitoring device for working machine
CN114067511A (en) * 2020-07-31 2022-02-18 株式会社东芝 Attention reminding system, attention reminding method, and storage medium

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3041227A4 (en) * 2013-08-26 2017-05-24 Hitachi Construction Machinery Co., Ltd. Device for monitoring area around working machine
CN106462962B (en) * 2014-06-03 2020-08-04 住友重机械工业株式会社 Human detection system for construction machinery and excavator
WO2016082881A1 (en) * 2014-11-27 2016-06-02 Abb Technology Ltd Distribution of audible notifications in a control room
CN108026715B (en) 2015-09-15 2021-06-18 住友建机株式会社 Excavator
JP6419677B2 (en) * 2015-11-30 2018-11-07 住友重機械工業株式会社 Perimeter monitoring system for work machines
WO2017094626A1 (en) 2015-11-30 2017-06-08 住友重機械工業株式会社 Periphery monitoring system for work machine
CN114640827A (en) * 2016-01-29 2022-06-17 住友建机株式会社 Shovel and autonomous flying body flying around shovel
JP6689669B2 (en) * 2016-05-13 2020-04-28 住友重機械工業株式会社 Excavator
CN109313840A (en) * 2016-11-01 2019-02-05 住友建机株式会社 Construction machinery safety management system, managing device, method for managing security
KR102508693B1 (en) * 2017-02-22 2023-03-09 스미토모 겐키 가부시키가이샤 shovel
KR20210037607A (en) * 2018-07-31 2021-04-06 스미토모 겐키 가부시키가이샤 Shovel
JP7020359B2 (en) * 2018-09-28 2022-02-16 株式会社豊田自動織機 Warning device
JP2020063566A (en) 2018-10-15 2020-04-23 日立建機株式会社 Hydraulic backhoe
JP2020125672A (en) * 2019-01-15 2020-08-20 株式会社カナモト Safety device for construction machine
WO2020204007A1 (en) 2019-03-30 2020-10-08 住友建機株式会社 Shovel excavator and construction system
FR3095392B1 (en) * 2019-04-24 2021-04-16 Option Automatismes Anti-collision system for construction machinery, and construction machinery equipped with such an anti-collision system
JP7189074B2 (en) * 2019-04-26 2022-12-13 日立建機株式会社 working machine
EP3960937A4 (en) * 2019-04-26 2022-06-22 Sumitomo Construction Machinery Co., Ltd. Shovel, and safety equipment confirmation system for worksite
EP4023820A4 (en) * 2019-08-29 2022-11-09 Sumitomo Construction Machinery Co., Ltd. Excavator and excavator diagnostic system
GB2588186A (en) * 2019-10-11 2021-04-21 Caterpillar Inc System for monitoring surrounding area of cold planer
KR20220154080A (en) 2020-03-25 2022-11-21 스미도모쥬기가이고교 가부시키가이샤 Construction machinery, construction machinery management system, machine learning device, and construction machinery work site management system
WO2021192655A1 (en) * 2020-03-27 2021-09-30 日立建機株式会社 Work machine
US11521397B2 (en) * 2020-09-08 2022-12-06 Caterpillar Inc. Object tracking for work machines
US11958403B2 (en) * 2022-05-23 2024-04-16 Caterpillar Inc. Rooftop structure for semi-autonomous CTL
US20240246510A1 (en) * 2023-01-20 2024-07-25 Caterpillar Inc. Machine security system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0517973A (en) * 1991-07-16 1993-01-26 Yutani Heavy Ind Ltd Safety device for construction machine
JP2008179940A (en) 2005-03-31 2008-08-07 Hitachi Constr Mach Co Ltd Surrounding monitoring equipment of working machine
JP4776491B2 (en) * 2006-10-06 2011-09-21 日立建機株式会社 Work machine ambient monitoring device
JP2009193494A (en) 2008-02-18 2009-08-27 Shimizu Corp Warning system
JP5227841B2 (en) * 2009-02-27 2013-07-03 日立建機株式会社 Ambient monitoring device
AT509253A1 (en) * 2009-08-05 2011-07-15 Moeller Gebaeudeautomation Gmbh ELECTRICAL INSTALLATION ARRANGEMENT
US9633563B2 (en) * 2010-04-19 2017-04-25 Caterpillar Inc. Integrated object detection and warning system
JP3161937U (en) * 2010-06-04 2010-08-12 東京通信機株式会社 Heavy equipment rear check device
WO2011158955A1 (en) * 2010-06-18 2011-12-22 日立建機株式会社 Device for monitoring area around work machine
US8948975B2 (en) * 2011-10-28 2015-02-03 Agco Corporation Agriculture combination machines for dispensing compositions
US9169973B2 (en) * 2012-12-18 2015-10-27 Agco Corporation Zonal operator presence detection

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07102596A (en) * 1993-10-01 1995-04-18 Kensetsusho Kanto Chiho Kensetsu Kyokucho Monitoring device of construction machinery
JPH1090406A (en) * 1996-09-13 1998-04-10 Omron Corp Alarm device
JP2007310587A (en) * 2006-05-17 2007-11-29 Sumitomonacco Materials Handling Co Ltd Safety device for industrial vehicle
JP2010112100A (en) * 2008-11-07 2010-05-20 Hitachi Constr Mach Co Ltd Monitoring device for working machine
CN102792334A (en) * 2010-04-12 2012-11-21 住友重机械工业株式会社 Image generation device and operation support system
CN103502541A (en) * 2011-05-26 2014-01-08 住友重机械工业株式会社 Shovel provided with electric rotating device and control method therefor

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109312557A (en) * 2016-07-04 2019-02-05 住友建机株式会社 Excavator
CN109312557B (en) * 2016-07-04 2022-02-08 住友建机株式会社 Excavator
CN109314769A (en) * 2016-11-01 2019-02-05 住友建机株式会社 Construction machinery surroundings monitoring system
CN110325687A (en) * 2017-02-24 2019-10-11 住友重机械工业株式会社 Excavator, the control method of excavator and portable information terminal
CN110325687B (en) * 2017-02-24 2022-06-14 住友重机械工业株式会社 Shovel, shovel control method, and portable information terminal
CN111954637A (en) * 2018-04-27 2020-11-17 株式会社多田野 Crane vehicle
CN108892042A (en) * 2018-09-13 2018-11-27 郑州大学 A kind of steel ladle trunnion lifting contraposition identification device and method
CN108892042B (en) * 2018-09-13 2024-05-03 郑州大学 Steel ladle trunnion hoisting alignment recognition device and method
CN113614320A (en) * 2019-03-13 2021-11-05 神钢建机株式会社 Periphery monitoring device for working machine
CN109993936A (en) * 2019-03-19 2019-07-09 上海电力高压实业有限公司 Transfomer Substation Reconstruction geofence system and method
WO2020244010A1 (en) * 2019-06-03 2020-12-10 三一汽车制造有限公司 Crane boom apparatus, and safety early-warning device and method for crane boom apparatus
CN110166751A (en) * 2019-06-03 2019-08-23 三一汽车制造有限公司 Arm support equipment, arm support equipment safety early warning device and method
CN114067511A (en) * 2020-07-31 2022-02-18 株式会社东芝 Attention reminding system, attention reminding method, and storage medium

Also Published As

Publication number Publication date
EP2978213A1 (en) 2016-01-27
JP2014183500A (en) 2014-09-29
WO2014148203A1 (en) 2014-09-25
EP2978213A4 (en) 2016-06-01
EP2978213B1 (en) 2019-11-06
JP6545430B2 (en) 2019-07-17
CN110318441A (en) 2019-10-11
US20160005286A1 (en) 2016-01-07
US9836938B2 (en) 2017-12-05

Similar Documents

Publication Publication Date Title
CN104982030A (en) Periphery monitoring device for work machine
JP6029306B2 (en) Perimeter monitoring equipment for work machines
JP6386213B2 (en) Excavator
JP6541734B2 (en) Shovel
JP6740259B2 (en) Work machine
JP6045865B2 (en) Excavator
JP6324665B2 (en) Perimeter monitoring equipment for work machines
JP5960007B2 (en) Perimeter monitoring equipment for work machines
JP6378801B2 (en) Excavator
JP6169381B2 (en) Excavator
JP5805574B2 (en) Perimeter monitoring equipment for work machines
JP6091821B2 (en) Excavator
JP7560163B2 (en) Excavator
JP6257919B2 (en) Excavator
JP2019004484A (en) Shovel
JP6257918B2 (en) Excavator
JP2018186575A (en) Shovel
JP6302622B2 (en) Perimeter monitoring equipment for work machines
JP7454461B2 (en) excavator
JP6295026B2 (en) Excavator
JP7237882B2 (en) Excavator
JP7130547B2 (en) Excavator and perimeter monitoring device for excavator
JP2019071677A (en) Shovel
JP2018186574A (en) Shovel

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151014

RJ01 Rejection of invention patent application after publication