WO2014148202A1 - Periphery monitoring device for work machine

Periphery monitoring device for work machine

Info

Publication number: WO2014148202A1
Authority: WO (WIPO / PCT)
Prior art keywords: image, state, shovel, work, person
Application number: PCT/JP2014/054287
Other languages: French (fr), Japanese (ja)
Inventor: Yoshihisa Kiyota (芳永 清田)
Original Assignee: Sumitomo Heavy Industries, Ltd. (住友重機械工業株式会社)
Application filed by Sumitomo Heavy Industries, Ltd. (住友重機械工業株式会社)
Publication of WO2014148202A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view
    • B60R1/27: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • E: FIXED CONSTRUCTIONS
    • E02: HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F: DREDGING; SOIL-SHIFTING
    • E02F9/00: Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/26: Indicating devices
    • E02F9/261: Surveying the work-site to be treated
    • E02F9/262: Surveying the work-site to be treated with follow-up actions to control the work tool, e.g. controller
    • E: FIXED CONSTRUCTIONS
    • E02: HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F: DREDGING; SOIL-SHIFTING
    • E02F9/00: Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20: Drives; Control devices
    • E02F9/2025: Particular purposes of control systems not otherwise provided for
    • E02F9/2033: Limiting the movement of frames or implements, e.g. to avoid collision between implements and the cabin
    • E: FIXED CONSTRUCTIONS
    • E02: HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F: DREDGING; SOIL-SHIFTING
    • E02F9/00: Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/24: Safety devices, e.g. for preventing overload
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/607: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source

Definitions

  • The present invention relates to a periphery monitoring device for a work machine, the device having a function of notifying people around the work machine of the work machine's state.
  • The display device of Patent Document 1 merely conveys the status of the hydraulic excavator one way, from the excavator side to surrounding people. Even if a worker around the hydraulic excavator learns the excavator's status by looking at the indicator, the worker cannot tell whether the excavator or its operator is aware of the worker's presence, and so has no assurance that it is safe to continue working around the hydraulic excavator.
  • Moreover, the indicator of Patent Document 1 cannot convey the information that a surrounding worker actually wants to know, such as whether the operator of the hydraulic excavator has noticed the worker's presence, or what state the hydraulic excavator is in when the excavator and the worker approach each other.
  • A periphery monitoring device for a work machine according to an embodiment of the present invention includes a notification unit visible from the periphery of the work machine; work machine state determination means for determining whether the work machine is in a workable state; person presence/absence determination means for determining the presence or absence of a person around the work machine; notification unit control means for controlling the notification unit; and work permission determination means for determining whether to permit work by the work machine.
  • The notification unit control means puts the notification unit in a different notification state depending on whether or not the work machine is determined to be in the workable state, and the work permission determination means determines whether to permit work by the work machine based on the determination result of the person presence/absence determination means.
  • This makes it possible to provide a work machine periphery monitoring device that can more appropriately notify people present around the work machine of the work machine's state.
  • FIG. 6 is a diagram showing an example of the relationship between a space model and the processing target image plane. Further drawings explain the correspondence between coordinates on the input image plane and coordinates on the space model, the correspondence between coordinates established by the coordinate associating means, and the effect of that correspondence.
  • FIG. 13 is a contrast diagram showing the difference between the output image shown in FIG. 12 and the output image obtained by applying the image loss prevention process to the output image of FIG. 12.
  • FIG. 7 shows another example of the relationship between the space model and the processing target image plane. Further drawings explain the relationship between the two output images switched by the first output-image switching process and the relationship between the three output images switched by the second output-image switching process.
  • FIG. 1 is a block diagram schematically showing a configuration example of an image generation apparatus 100 according to an embodiment of the present invention.
  • The image generation device 100 is an example of a work machine periphery monitoring device that monitors the periphery of a work machine, and includes a control unit 1, a camera 2, an input unit 3, a storage unit 4, a display unit 5, a human detection sensor 6, an alarm output unit 7, a gate lock lever 8, an ignition switch 9, and a notification unit 20.
  • The image generation device 100 generates an output image based on the input images captured by the camera 2 mounted on the work machine and presents that output image to the operator. It also switches the content of the presented output image based on the output of the human detection sensor 6.
  • FIG. 2 is a view showing a configuration example of a shovel 60 as a working machine on which the image generating apparatus 100 is mounted.
  • In the shovel 60, an upper swing body 63 is mounted on a crawler-type lower traveling body 61 via a swing mechanism 62 so as to be rotatable about a swing axis PV.
  • The upper swing body 63 has a cab (operator's cab) 64 on its front left side and an excavation attachment E at its front center, and carries the camera 2 (a right-side camera 2R on its right surface and a rear camera 2B on its rear surface) and the notification unit 20 (a left notification unit 20L, a right notification unit 20R, and a rear notification unit 20B).
  • The display unit 5 is installed in the cab 64 at a position visible to the operator.
  • The cab 64 is also fitted with the alarm output unit 7 (a right-side alarm output unit 7R and a rear alarm output unit 7B), the gate lock lever 8, and the ignition switch 9.
  • the control unit 1 is a computer provided with a central processing unit (CPU), a random access memory (RAM), a read only memory (ROM), a non-volatile random access memory (NVRAM), and the like.
  • The control unit 1 stores, in the ROM or NVRAM, programs corresponding to each of the coordinate associating means 10, image generation means 11, person presence/absence determination means 12, alarm control means 13, work machine state determination means 14, notification unit control means 15, and work permission determination means 16 described later, and causes the CPU to execute the processing corresponding to each means while using the RAM as a temporary storage area.
  • the camera 2 is a device for acquiring an input image that reflects the surroundings of the shovel 60.
  • In the present embodiment, the camera 2 comprises the right-side camera 2R and the rear camera 2B, attached to the right surface and rear surface of the upper swing body 63 so as to be able to capture areas that are blind spots for the operator in the cab 64 (see FIG. 2).
  • the camera 2 includes an imaging element such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS).
  • The camera 2 may be attached to positions other than the right and rear surfaces of the upper swing body 63 (for example, the front and left surfaces), and may be fitted with a wide-angle or fisheye lens so as to capture a wide range.
  • the camera 2 acquires an input image according to a control signal from the control unit 1, and outputs the acquired input image to the control unit 1.
  • In that case, the camera 2 sends the control unit 1 an input image corrected for the apparent distortion and tilt caused by using such lenses.
  • the camera 2 may output an input image not corrected for the apparent distortion or tilt to the control unit 1 as it is. In that case, the control unit 1 corrects the apparent distortion or tilt.
  • the input unit 3 is a device for enabling an operator to input various information to the image generation device 100, and is, for example, a touch panel, a button switch, a pointing device, a keyboard, or the like.
  • the storage unit 4 is a device for storing various information, and is, for example, a hard disk, an optical disk, a semiconductor memory, or the like.
  • The display unit 5 is a device for displaying image information, for example a liquid crystal display or a projector installed in the cab 64 of the shovel 60 (see FIG. 2), and displays the images output by the control unit 1.
  • the human detection sensor 6 is a device for detecting a person present around the shovel 60.
  • In the present embodiment, the human detection sensors 6 are attached to the right surface and rear surface of the upper swing body 63 so as to be able to detect people present in areas that are blind spots for the operator in the cab 64 (see FIG. 2).
  • The human detection sensor 6 is a sensor that detects humans as distinct from other objects, for example a sensor that detects energy changes in its monitoring space, such as a pyroelectric infrared sensor, a bolometer-type infrared sensor, or a moving-body detection sensor that uses the output signal of an infrared camera.
  • In the present embodiment, the human detection sensor 6 uses a pyroelectric infrared sensor and detects a moving body (a moving heat source) as a human.
  • The monitoring space of the right-side human detection sensor 6R is included in the imaging space of the right-side camera 2R, and the monitoring space of the rear human detection sensor 6B is included in the imaging space of the rear camera 2B.
  • Like the camera 2, the human detection sensor 6 may be attached to positions other than the right and rear surfaces of the upper swing body 63 (for example, the front and left surfaces); it may be attached to any one of the front, left, right, and rear surfaces, or to all of them.
  • the alarm output unit 7 is a device that outputs an alarm to the operator of the shovel 60.
  • the alarm output unit 7 is an alarm device that outputs at least one of sound and light, and includes a sound output device such as a buzzer and a speaker, and a light emitting device such as an LED and a flash light.
  • In the present embodiment, the alarm output unit 7 is a buzzer that outputs an alarm sound, and comprises the right-side alarm output unit 7R attached to the right inner wall of the cab 64 and the rear alarm output unit 7B attached to the rear inner wall of the cab 64 (see FIG. 2).
  • the gate lock lever 8 is a device that switches the state of the shovel 60.
  • The gate lock lever 8 has a locked state, in which the shovel 60 cannot be operated, and an unlocked state, in which the shovel 60 can be operated.
  • Here, the "workable state" means a state in which the operator can operate the shovel 60, and the "non-workable state" means a state in which the operator cannot operate the shovel 60.
  • The gate lock lever 8 repeatedly outputs a signal representing its current state to the control unit 1 at a predetermined cycle.
  • The operator pulls the gate lock lever 8 up to a substantially horizontal position to put it in the unlocked state, and pushes it down to put it in the locked state.
  • In the unlocked state, the gate lock lever 8 blocks the entrance/exit opening of the cab 64 to restrict the operator's exit from the cab 64, and enables operation of the operation levers and operation pedals (not shown) in the cab 64 (hereinafter, "operation levers etc."), allowing the operator to operate the shovel 60.
  • In the locked state, the gate lock lever 8 opens the entrance/exit opening of the cab 64 to allow the operator to leave the cab 64, and disables operation by the operation levers etc. in the cab 64, preventing the operator from operating the shovel 60.
  • the gate lock lever 8 closes the gate lock valve in the locked state, and opens the gate lock valve in the unlocked state.
  • The gate lock valve is a switching valve provided in the oil passage between a control valve (not shown) and the operation levers etc.
  • the control valve is a flow control valve that controls the flow of hydraulic fluid between a hydraulic pump (not shown) and various hydraulic actuators.
  • the gate lock valve shuts off the flow of hydraulic fluid between the control valve and the operation lever or the like to invalidate the operation lever or the like.
  • the gate lock valve allows hydraulic fluid to communicate between the control valve and the operation lever or the like to make the operation lever or the like effective.
  • the ignition switch 9 is a device that switches the state of the shovel 60.
  • The ignition switch 9 has an off state, in which the shovel 60 cannot operate, and an on state, in which the shovel 60 can operate.
  • the ignition switch 9 repeatedly outputs a signal representing the current state of itself to the control unit 1 at a predetermined cycle.
  • the operator switches the ignition switch 9 between the on state and the off state by pressing the ignition switch 9.
  • In the on state, the ignition switch 9 starts the engine, which starts the control pump that supplies hydraulic fluid to the oil passage between the control valve and the operation levers etc. The ignition switch 9 thereby enables operation by the operation levers etc. in the cab 64, allowing the operator to operate the shovel 60.
  • In the off state, the ignition switch 9 stops the engine, which stops the control pump. Operation by the operation levers etc. in the cab 64 is then disabled, so the operator cannot operate the shovel 60.
  • the notification unit 20 is a device that notifies the surrounding people of the state of the shovel 60.
  • In the present embodiment, the notification unit 20 is an indicator lamp composed of LEDs, a flash light, or the like, and is attached to the left surface, right surface, and rear surface of the upper swing body 63 so as to be visible to surrounding people (see FIG. 2).
  • The notification unit 20 may be attached to any position visible to surrounding people on the outer surface of the shovel 60, including the front surface of the upper swing body 63.
  • the notification unit 20 may be configured to be attached to the camera 2.
  • The image generation device 100 may generate a processing target image based on the input image, apply image conversion processing to the processing target image to generate an output image from which the positional relationship with surrounding objects and the sense of distance can be grasped intuitively, and then present that output image to the operator.
  • the “processing target image” is an image to be generated based on an input image and to be a target of image conversion processing (for example, scale conversion processing, affine conversion processing, distortion conversion processing, viewpoint conversion processing, and the like).
  • The “processing target image” is generated, for example, from an input image captured by a camera that images the ground surface from above and that, because of its wide angle of view, also contains an image in the horizontal direction (for example, a part of the sky); it is an image made suitable for image conversion processing. More specifically, the input image is projected onto a predetermined space model so that the horizontal image is not displayed unnaturally (for example, so that the sky part is not treated as being on the ground surface), and the projection image projected on the space model is then reprojected onto another two-dimensional plane to generate the processing target image.
  • the processing target image may be used as an output image as it is without performing image conversion processing.
  • the "space model” is a projection target of the input image.
  • the “space model” is configured by one or more planes or curved surfaces including at least a plane or curved surface other than the processing target image plane which is a plane on which the processing target image is located.
  • Such a plane or curved surface other than the processing target image plane is, for example, a plane parallel to the processing target image plane, or a plane or curved surface forming an angle with the processing target image plane.
  • the image generation apparatus 100 may generate an output image by performing an image conversion process on the projection image projected on the space model without generating the processing target image.
  • the projection image may be used as an output image as it is without performing image conversion processing.
  • FIG. 3 is a view showing an example of the space model MD onto which input images are projected; the left figure of FIG. 3 shows the relationship between the shovel 60 and the space model MD when the shovel 60 is viewed from the side, and the right figure of FIG. 3 shows the relationship when the shovel 60 is viewed from above.
  • the space model MD has a semi-cylindrical shape, and has a flat region R1 inside the bottom and a curved region R2 inside the side.
  • FIG. 4 is a diagram showing an example of the relationship between the space model MD and the processing target image plane, and the processing target image plane R3 is, for example, a plane including the plane region R1 of the space model MD.
  • Although FIG. 4 shows the space model MD as a full cylinder rather than the half cylinder of FIG. 3 for clarity, the space model MD may be either semi-cylindrical or cylindrical. The same applies to the subsequent figures.
  • the processing target image plane R3 may be a circular area including the plane area R1 of the space model MD, or may be an annular area not including the plane area R1 of the space model MD.
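  • As an aside, the geometry just described is easy to sketch in code. The following minimal Python sketch is illustrative only and not from the patent (the projection rule and the radius value, which echoes the 5-meter example given later, are assumptions); it models a cylindrical space model MD centered on the swing axis, with a flat bottom region R1 and a curved side region R2:

```python
import math

CYLINDER_RADIUS = 5.0  # example value; the description later cites 5 meters


def project_to_model(x: float, y: float, z: float = 0.0):
    """Map a world point onto the space model MD.

    Illustrative assumption: points whose ground distance from the swing
    axis is within the cylinder radius fall on the plane region R1 (z = 0);
    points outside are mapped horizontally onto the cylinder wall R2,
    keeping their height.
    """
    d = math.hypot(x, y)
    if d <= CYLINDER_RADIUS:
        return ("R1", (x, y, 0.0))
    s = CYLINDER_RADIUS / d
    return ("R2", (x * s, y * s, z))


# A nearby point lands on the flat bottom, a distant one on the curved wall.
print(project_to_model(2.0, 1.0))       # ('R1', (2.0, 1.0, 0.0))
print(project_to_model(8.0, 6.0, 1.5))  # ('R2', (4.0, 3.0, 1.5))
```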
  • Next, the various means included in the control unit 1 will be described.
  • the coordinate associating means 10 is a means for associating the coordinates on the input image plane where the input image captured by the camera 2 is located, the coordinates on the space model MD, and the coordinates on the processing object image plane R3.
  • For example, the coordinate associating means 10 associates the coordinates on the input image plane, the coordinates on the space model MD, and the coordinates on the processing target image plane R3 based on various parameters of the camera 2 that are preset or input through the input unit 3, and on the predetermined mutual positional relationship among the input image plane, the space model MD, and the processing target image plane R3.
  • The various parameters of the camera 2 are, for example, its optical center, focal length, CCD size, optical-axis direction vector, camera horizontal direction vector, and projection method. The coordinate associating means 10 then stores these correspondences in the input image/space model correspondence map 40 and the space model/processing target image correspondence map 41 of the storage unit 4.
  • the image generation unit 11 is a unit for generating an output image.
  • For example, the image generation means 11 applies scale conversion, affine conversion, or distortion conversion to the processing target image to associate the coordinates on the processing target image plane R3 with the coordinates on the output image plane on which the output image is located.
  • the image generation unit 11 stores the correspondence in the processing target image / output image correspondence map 42 of the storage unit 4.
  • Then, referring to the input image/space model correspondence map 40 and the space model/processing target image correspondence map 41, the image generation means 11 associates the value of each pixel in the output image with the value of the corresponding pixel in the input image to generate the output image.
  • the value of each pixel is, for example, a luminance value, a hue value, a saturation value or the like.
  • Alternatively, the image generation means 11 associates the coordinates on the processing target image plane R3 with the coordinates on the output image plane based on various parameters of a virtual camera that are preset or input through the input unit 3.
  • the various parameters relating to the virtual camera are, for example, the optical center of the virtual camera, the focal length, the CCD size, the optical axis direction vector, the camera horizontal direction vector, the projection method, and the like.
  • the image generation unit 11 stores the correspondence in the processing target image / output image correspondence map 42 of the storage unit 4.
  • Then, referring to the input image/space model correspondence map 40 and the space model/processing target image correspondence map 41, the image generation means 11 associates the value of each pixel in the output image with the value of the corresponding pixel in the input image to generate the output image.
  • the image generation unit 11 may generate the output image by changing the scale of the processing target image without using the concept of the virtual camera.
  • When generating an output image without generating a processing target image, the image generation means 11 associates the coordinates on the space model MD with the coordinates on the output image plane according to the applied image conversion processing. Then, referring to the input image/space model correspondence map 40, it associates the value of each pixel in the output image with the value of the corresponding pixel in the input image to generate the output image. In this case, the image generation means 11 omits the association between the coordinates on the processing target image plane R3 and the coordinates on the output image plane, and omits storing that correspondence in the processing target image/output image correspondence map 42.
  • the image generation unit 11 switches the content of the output image based on the determination result of the person presence / absence determination unit 12. The details of the switching of the output image by the image generation unit 11 will be described later.
  • the human presence / absence determination means 12 is a means for determining the presence / absence of a person in each of a plurality of monitoring spaces set around the work machine. In the present embodiment, the human presence / absence determination means 12 determines the presence / absence of a person around the shovel 60 based on the output of the human detection sensor 6.
  • the human presence / absence determination means 12 may determine the presence / absence of a person in each of a plurality of monitoring spaces set around the work machine based on the input image captured by the camera 2. Specifically, the human presence / absence judgment means 12 may judge presence / absence of a person around the work machine using an image processing technology such as optical flow, pattern matching and the like. The human presence / absence determination means 12 may determine the presence / absence of a person around the work machine based on the output of an image sensor other than the camera 2.
  • the human presence / absence determination means 12 determines the presence / absence of a person in each of the plurality of monitoring spaces set around the work machine based on the output of the human detection sensor 6 and the output of the image sensor such as the camera 2 It is also good.
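  • As an illustration of one way such a per-monitoring-space decision could combine the two sources (the combination rule below is an assumption for illustration, not a rule stated in the patent):

```python
from typing import Optional


def person_present(sensor_detects: bool,
                   image_detects: Optional[bool] = None) -> bool:
    """Presence/absence decision for one monitoring space.

    sensor_detects: output of the human detection sensor 6 (e.g. a
    pyroelectric infrared sensor reporting a moving heat source).
    image_detects: optional result of image-based detection on the camera 2
    input image (e.g. optical flow or pattern matching); None if unused.

    Requiring agreement of both sources (AND) suppresses false alarms,
    while OR-ing them errs on the side of safety; which rule to use is a
    design choice the patent text leaves open.
    """
    if image_detects is None:
        return sensor_detects
    return sensor_detects and image_detects
```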
  • the alarm control unit 13 is a unit that controls the alarm output unit 7.
  • The alarm control means 13 controls the alarm output unit 7 based on the determination result of the person presence/absence determination means 12, or based on both that result and the determination result of the work machine state determination means 14. The control of the alarm output unit 7 by the alarm control means 13 will be described in detail later.
  • The work machine state determination means 14 is a means for determining the state of the work machine. In the present embodiment, it determines whether the shovel 60 is in the workable state. The determination of the state of the shovel 60 by the work machine state determination means 14 will be described in detail later.
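  • From the descriptions of the gate lock lever 8 and the ignition switch 9 above, the workable state plausibly requires the lever to be unlocked and the ignition to be on, since both are needed for the operation levers etc. to be effective. A minimal sketch under that assumption (the exact rule is not spelled out at this point in the text):

```python
from dataclasses import dataclass


@dataclass
class ShovelSignals:
    """Signals the control unit 1 receives repeatedly at a predetermined cycle."""
    gate_lock_unlocked: bool  # True when the gate lock lever 8 is pulled up
    ignition_on: bool         # True when the ignition switch 9 is on


def is_workable(signals: ShovelSignals) -> bool:
    # Assumed rule: the operation levers etc. are effective only when the
    # gate lock valve is open (lever unlocked) and the engine and control
    # pump are running (ignition on).
    return signals.gate_lock_unlocked and signals.ignition_on
```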
  • the notification unit control unit 15 is a unit that controls the notification unit 20.
  • The notification unit control means 15 controls the notification unit 20 based on the determination result of the work machine state determination means 14, or based on both the determination result of the person presence/absence determination means 12 and that of the work machine state determination means 14. The control of the notification unit 20 by the notification unit control means 15 will be described in detail later.
  • the work permission determination means 16 is a means for determining whether to permit the work of the shovel 60.
  • In the present embodiment, the work permission determination means 16 determines whether to permit work by the shovel 60 based on the determination result of the person presence/absence determination means 12; details will be described later.
  • the coordinate correlating means 10 can associate the coordinates on the input image plane with the coordinates on the space model using, for example, the quaternion of Hamilton.
  • FIG. 5 is a diagram for explaining the correspondence between the coordinates on the input image plane and the coordinates on the space model.
  • the input image plane of the camera 2 is represented as a plane in the UVW orthogonal coordinate system whose origin is the optical center C of the camera 2.
  • The space model is represented as planes in a three-dimensional XYZ orthogonal coordinate system.
  • The coordinate associating means 10 translates the origin of the XYZ coordinate system to the optical center C (the origin of the UVW coordinate system), and then rotates the XYZ coordinate system so that the X axis coincides with the U axis, the Y axis with the V axis, and the Z axis with the -W axis.
  • This is to convert coordinates on the space model (coordinates on the XYZ coordinate system) into coordinates on the input image plane (coordinates on the UVW coordinate system).
  • The minus sign in "-W axis" indicates that the Z axis points in the direction opposite to the W axis. This is because the UVW coordinate system takes the +W direction as the front of the camera, while the XYZ coordinate system takes the -Z direction as vertically downward.
  • When there are a plurality of cameras 2, each camera 2 has its own UVW coordinate system, so the coordinate associating means 10 translates and rotates the XYZ coordinate system with respect to each of those UVW coordinate systems.
  • The conversion described above is realized by translating the XYZ coordinate system so that the optical center C of the camera 2 becomes its origin, and then rotating it so that the Z axis coincides with the -W axis and the X axis coincides with the U axis. By describing this conversion with Hamilton's quaternions, the coordinate associating means 10 can integrate these two rotations into a single rotation operation.
  • The rotation that makes one vector A coincide with another vector B corresponds to rotating by the angle formed by A and B about the normal of the plane spanned by A and B.
  • Denoting that angle by θ, it follows from the inner product of A and B that θ = arccos( (A · B) / (|A||B|) ).
  • The unit normal vector N of the plane spanned by A and B follows from their outer product as N = (A × B) / (|A||B| sin θ).
  • A quaternion Q with real component t and pure imaginary components a, b, and c is written Q = (t; a, b, c) = t + ai + bj + ck.
  • The quaternion Q can express a three-dimensional vector (a, b, c) by its pure imaginary components with the real component t set to 0 (zero), and through its components t, a, b, c it can express a rotational motion about an arbitrary vector as axis.
  • Moreover, the quaternion Q can integrate a plurality of consecutive rotation operations into a single rotation operation. For example, rotating an arbitrary point S(sx, sy, sz) by an angle θ about an arbitrary unit vector C(l, m, n) as axis yields a point D(ex, ey, ez) that can be expressed as follows: with Q = (cos(θ/2); l sin(θ/2), m sin(θ/2), n sin(θ/2)) and the point S written as the pure-imaginary quaternion X = (0; sx, sy, sz), the rotated point is D = Q X Q*, where Q* denotes the conjugate of Q.
  • Thereafter, simply by executing this operation, the coordinate associating means 10 can convert coordinates on the space model (XYZ coordinate system) into coordinates on the input image plane (UVW coordinate system).
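  • As a concrete illustration of the rotation D = Q X Q* described above, here is a small self-contained Python sketch using the standard Hamilton product (illustrative code, not taken from the patent):

```python
import math


def quat_mul(q, r):
    """Hamilton product of quaternions given as (t, a, b, c)."""
    t1, a1, b1, c1 = q
    t2, a2, b2, c2 = r
    return (
        t1 * t2 - a1 * a2 - b1 * b2 - c1 * c2,
        t1 * a2 + a1 * t2 + b1 * c2 - c1 * b2,
        t1 * b2 - a1 * c2 + b1 * t2 + c1 * a2,
        t1 * c2 + a1 * b2 - b1 * a2 + c1 * t2,
    )


def rotate_point(point, axis, theta):
    """Rotate point S = (sx, sy, sz) about the unit vector C = (l, m, n)
    by angle theta, via D = Q X Q* with X the pure-imaginary quaternion of S."""
    l, m, n = axis
    half = theta / 2.0
    q = (math.cos(half), l * math.sin(half), m * math.sin(half), n * math.sin(half))
    q_conj = (q[0], -q[1], -q[2], -q[3])
    x = (0.0,) + tuple(point)
    d = quat_mul(quat_mul(q, x), q_conj)
    return d[1:]  # (ex, ey, ez)


# Rotating (1, 0, 0) by 90 degrees about the Z axis gives approximately (0, 1, 0).
print(rotate_point((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```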
  • After converting the coordinates on the space model (XYZ coordinate system) into coordinates on the input image plane (UVW coordinate system), the coordinate associating means 10 calculates the incident angle α formed by the line segment CP′ and the optical axis G of the camera 2.
  • the line segment CP ′ is a line connecting the optical center C (coordinates on the UVW coordinate system) of the camera 2 and coordinates P ′ that represent arbitrary coordinates P on the space model in the UVW coordinate system.
  • The coordinate associating means 10 also calculates, in the plane H that is parallel to the input image plane R4 of the camera 2 (for example, the CCD plane) and contains the coordinate P′, the argument φ and the length of the line segment EP′.
  • The line segment EP′ connects the intersection point E of the plane H with the optical axis G to the coordinate P′, and the argument φ is the angle formed by the U′ axis in the plane H and the line segment EP′.
  • the image height h is usually a function of the incident angle ⁇ and the focal length f.
  • The coordinate associating means 10 decomposes the calculated image height h into a U component and a V component on the UV coordinate system using the argument φ, and divides them by the numerical value corresponding to the pixel size of the input image plane R4. The coordinate associating means 10 can thereby associate the coordinate P (P′) on the space model MD with coordinates on the input image plane R4.
  • The coordinate associating means 10 associates the coordinates on the space model MD with the coordinates on the one or more input image planes R4 that exist, one per camera, and stores the coordinates on the space model MD, the camera identifier, and the coordinates on the input image plane R4 in association with one another in the input image/space model correspondence map 40.
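  • For reference, common lens projection models give the image height h from the incident angle α and focal length f by standard optics formulas; the decomposition by the argument φ is then plain trigonometry. The following Python sketch illustrates this (generic optics, not code from the patent):

```python
import math


def image_height(alpha: float, f: float, model: str = "normal") -> float:
    """Image height h as a function of incident angle alpha (radians) and
    focal length f, for common projection models (standard optics formulas)."""
    if model == "normal":         # perspective projection, h = f tan(alpha)
        return f * math.tan(alpha)
    if model == "equidistant":    # typical fisheye, h = f alpha
        return f * alpha
    if model == "stereographic":  # h = 2 f tan(alpha / 2)
        return 2.0 * f * math.tan(alpha / 2.0)
    if model == "equisolid":      # h = 2 f sin(alpha / 2)
        return 2.0 * f * math.sin(alpha / 2.0)
    raise ValueError(f"unknown projection model: {model}")


def to_pixel(alpha: float, phi: float, f: float, pixel_size: float,
             model: str = "normal"):
    """Decompose the image height into U and V components by the argument phi
    and divide by the per-pixel size, as the coordinate associating means does."""
    h = image_height(alpha, f, model)
    return (h * math.cos(phi) / pixel_size, h * math.sin(phi) / pixel_size)
```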
  • Since the coordinate associating means 10 computes coordinate transformations using quaternions, it has the advantage that, unlike transformations computed with Euler angles, no gimbal lock occurs.
  • the coordinate correlating means 10 is not limited to one that calculates transformation of coordinates using a quaternion, and may calculate the transformation of coordinates using Euler angles.
  • When a coordinate P (P′) on the space model MD can correspond to coordinates on a plurality of input image planes R4, the coordinate associating means 10 may associate it with coordinates on the input image plane R4 of the camera with the smallest incident angle α, or with coordinates on an input image plane R4 selected by the operator.
  • FIG. 6 is a diagram for explaining the association between the coordinates by the coordinate association means 10.
  • the coordinate correlating means 10 causes each of the line segments connecting the coordinates on the input image plane R4 of the camera 2 and the coordinates on the space model MD corresponding to the coordinates to pass through the optical center C of the camera 2, Correspond both coordinates.
  • the coordinate associating unit 10 associates the coordinate K1 on the input image plane R4 of the camera 2 with the coordinate L1 on the plane region R1 of the space model MD, and the coordinate K2 on the input image plane R4 of the camera 2 Are associated with the coordinate L2 on the curved surface area R2 of the space model MD.
  • When the camera 2 adopts normal projection, the line segment K1-L1 and the line segment K2-L2 both pass through the optical center C of the camera 2.
  • When the camera 2 adopts another projection method (for example, orthographic, stereographic, equi-solid-angle, or equidistant projection), the coordinate associating means 10 associates the coordinates K1 and K2 on the input image plane R4 of the camera 2 with the coordinates L1 and L2 on the space model MD in accordance with that projection method.
  • In this case, the line segment K1-L1 and the line segment K2-L2 do not pass through the optical center C of the camera 2.
  • F6B is a diagram showing the correspondence between the coordinates on the curved surface area R2 of the space model MD and the coordinates on the processing object image plane R3.
  • The coordinate associating means 10 introduces a parallel line group PL lying on the XZ plane and forming an angle β with the processing target image plane R3. Then, the coordinate associating means 10 associates a coordinate on the curved surface region R2 of the space model MD with the corresponding coordinate on the processing target image plane R3 so that both lie on one line of the parallel line group PL.
  • the coordinate associating unit 10 associates both coordinates on the assumption that the coordinate L2 on the curved surface area R2 of the space model MD and the coordinate M2 on the processing target image plane R3 lie on a common parallel line.
  • The coordinate associating means 10 can likewise associate the coordinates on the plane region R1 of the space model MD with the coordinates on the processing target image plane R3 using the parallel line group PL.
  • In the example of FIG. 6, however, the plane region R1 and the processing target image plane R3 lie in a common plane, so the coordinate L1 on the plane region R1 of the space model MD and the coordinate M1 on the processing target image plane R3 have the same coordinate value.
  • the coordinate associating unit 10 associates the coordinates on the space model MD with the coordinates on the processing object image plane R3, and associates the coordinates on the space model MD and the coordinates on the processing object image plane R3. Are stored in the space model / processing target image correspondence map 41.
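  • The following numeric sketch illustrates the parallel-line-group reprojection for a cylinder wall of radius r about the Z axis with the processing target image plane at z = 0 (illustrative geometry and names, not from the patent):

```python
import math

CYLINDER_RADIUS = 5.0  # example radius from the description


def reproject_wall_to_plane(z_on_wall: float, beta: float) -> float:
    """Radial position on the processing target image plane R3 of a point on
    the curved region R2 at height z_on_wall, carried along a line of the
    parallel line group PL that forms angle beta with the plane."""
    return CYLINDER_RADIUS + z_on_wall / math.tan(beta)


# The larger beta is, the more the reprojected coordinates crowd toward the
# cylinder wall: the intervals shrink uniformly, matching the linear
# enlargement or reduction of the curved-region image portion described below.
for beta in (math.radians(30), math.radians(60)):
    print([round(reproject_wall_to_plane(z, beta), 2) for z in (1.0, 2.0, 3.0)])
```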
  • the image generation unit 11 causes each of the line segments connecting the coordinates on the output image plane R5 of the virtual camera 2V and the coordinates on the processing target image plane R3 corresponding to the coordinates to pass through the optical center CV of the virtual camera 2V. And associate both coordinates.
  • the image generation unit 11 associates the coordinates N1 on the output image plane R5 of the virtual camera 2V with the coordinates M1 on the processing target image plane R3 (the plane region R1 of the space model MD).
  • the coordinate N2 on the output image plane R5 is associated with the coordinate M2 on the processing object image plane R3.
  • When the virtual camera 2V adopts normal projection, the line segment M1-N1 and the line segment M2-N2 both pass through the optical center CV of the virtual camera 2V.
  • When the virtual camera 2V adopts another projection method (for example, orthographic, stereographic, equi-solid-angle, or equidistant projection), the image generation means 11 associates the coordinates N1 and N2 on the output image plane R5 of the virtual camera 2V with the coordinates M1 and M2 on the processing target image plane R3 in accordance with that projection method.
  • In this case, the line segment M1-N1 and the line segment M2-N2 do not pass through the optical center CV of the virtual camera 2V.
  • In this way, the image generation means 11 associates the coordinates on the output image plane R5 with the coordinates on the processing target image plane R3, and stores them in association in the processing target image/output image correspondence map 42. Then, referring to the input image/space model correspondence map 40 and the space model/processing target image correspondence map 41, the image generation means 11 associates the value of each pixel in the output image with the value of the corresponding pixel in the input image to generate the output image.
  • F6D is a diagram combining F6A to F6C, and shows the mutual positional relationship between the camera 2, the virtual camera 2V, the plane region R1 and the curved region R2 of the space model MD, and the processing target image plane R3.
  • The left figure of FIG. 7 shows the case where the parallel line group PL lying on the XZ plane forms an angle β with the processing target image plane R3, and the right figure of FIG. 7 shows the case where it forms an angle β1 (β1 > β) with the processing target image plane R3.
  • each of the coordinates La to Ld on the curved surface region R2 of the space model MD in the left side of FIG. 7 and the right side of FIG. 7 corresponds to each of the coordinates Ma to Md on the processing target image plane R3.
  • The intervals between the coordinates Ma to Md on the processing target image plane R3 decrease linearly as the angle between the parallel line group PL and the processing target image plane R3 increases; that is, they decrease uniformly regardless of the distance between the curved surface region R2 of the space model MD and each of the coordinates Ma to Md.
  • On the other hand, in the example of FIG. 7, the coordinate group on the plane region R1 of the space model MD is not converted to coordinates on the processing target image plane R3, so its intervals do not change.
  • These changes in interval mean that, of the image portions on the output image plane R5 (see FIG. 6), only the image portion corresponding to the image projected on the curved surface region R2 of the space model MD is enlarged or reduced linearly.
  • the left side of FIG. 8 is a diagram in the case where all the auxiliary line groups AL located on the XZ plane extend from the starting point T1 on the Z axis toward the processing object image plane R3.
  • the right side of FIG. 8 is a diagram in the case where all the auxiliary line groups AL extend from the start point T2 (T2> T1) on the Z axis toward the processing object image plane R3.
  • each of the coordinates La to Ld on the curved surface region R2 of the space model MD in the left drawing of FIG. 8 and the right drawing of FIG. 8 corresponds to each of the coordinates Ma to Md on the processing target image plane R3.
  • the coordinates Mc and Md are not shown because they are outside the area of the processing target image plane R3. Further, the interval of each of the coordinates La to Ld in the left drawing of FIG. 8 is equal to the interval of each of the coordinates La to Ld in the right drawing of FIG.
  • Although the auxiliary line group AL is drawn on the XZ plane for the purpose of explanation, in reality it extends radially from an arbitrary point on the Z axis toward the processing target image plane R3. As in FIG. 7, the Z axis in this case is referred to as the "reprojection axis".
  • The intervals between the coordinates Ma to Md on the processing target image plane R3 decrease non-linearly as the distance (height) between the start point of the auxiliary line group AL and the origin O increases; that is, the larger the distance between the curved surface region R2 of the space model MD and each of the coordinates Ma to Md, the larger the reduction of the corresponding interval.
  • On the other hand, in the example of FIG. 8, the coordinate group on the plane region R1 of the space model MD is not converted to coordinates on the processing target image plane R3, so its intervals do not change.
  • As with the parallel line group PL, these changes in interval mean that, of the image portions on the output image plane R5 (see FIG. 6), only the image portion corresponding to the image projected on the curved surface region R2 of the space model MD is enlarged or reduced, here non-linearly.
  • In this way, the image generation device 100 can enlarge or reduce, linearly or non-linearly, the image portion of the output image corresponding to the image projected on the curved surface region R2 of the space model MD (for example, the horizontal image) without affecting the image portion corresponding to the image projected on the plane region R1 (for example, the road surface image).
  • That is, the image generation device 100 can quickly and flexibly enlarge or reduce an object located around the shovel 60 (an object in the image of the surroundings viewed horizontally from the shovel 60) without affecting the road surface image in the vicinity of the shovel 60 (a virtual image of the shovel 60 viewed from directly above), and can thereby improve the visibility of the shovel 60's blind areas.
  • FIG. 9 is a flowchart showing a flow of processing target image generation processing (steps S1 to S3) and output image generation processing (steps S4 to S6). Further, the arrangement of the camera 2 (input image plane R4), the space model (plane area R1 and curved area R2), and the processing target image plane R3 are determined in advance.
  • First, the control unit 1 causes the coordinate associating means 10 to associate the coordinates on the processing target image plane R3 with the coordinates on the space model MD (step S1).
  • the coordinate associating unit 10 acquires an angle formed between the parallel line group PL and the processing target image plane R3. Then, the coordinate associating unit 10 calculates a point at which one of the parallel line group PL extending from one coordinate on the processing target image plane R3 intersects the curved surface region R2 of the space model MD. Then, the coordinate associating unit 10 derives the coordinates on the curved surface area R2 corresponding to the calculated point as one coordinate on the curved surface area R2 corresponding to the one coordinate on the processing target image plane R3, and the correspondence relationship It is stored in the space model / processing target image correspondence map 41.
  • The angle formed between the parallel line group PL and the processing target image plane R3 may be a value stored in advance in the storage unit 4 or the like, or a value dynamically input by the operator through the input unit 3.
  • When a coordinate on the processing target image plane R3 coincides with a coordinate on the plane region R1 of the space model MD, the coordinate associating means 10 derives that coordinate on the plane region R1 as the coordinate corresponding to the coordinate on the processing target image plane R3, and stores the correspondence in the space model/processing target image correspondence map 41.
  • Next, the control unit 1 causes the coordinate associating means 10 to associate the coordinate on the space model MD derived by the above processing with coordinates on the input image plane R4 (step S2).
  • Specifically, the coordinate associating means 10 calculates the point at which a line segment extending from the coordinate on the space model MD and passing through the optical center C intersects the input image plane R4, derives the coordinate on the input image plane R4 corresponding to that point as the coordinate corresponding to the coordinate on the space model MD, and stores the correspondence in the input image/space model correspondence map 40.
  • control unit 1 determines whether all the coordinates on the processing target image plane R3 are associated with the coordinates on the space model MD and the coordinates on the input image plane R4 (step S3). Then, when it is determined that all the coordinates have not been associated yet (NO in step S3), the control unit 1 repeats the processing in step S1 and step S2.
  • step S3 when it is determined that all the coordinates are associated (YES in step S3), the control unit 1 ends the processing target image generation processing and then starts the output image generation processing. Then, the control unit 1 causes the image generation unit 11 to associate the coordinates on the processing target image plane R3 with the coordinates on the output image plane R5 (step S4).
  • the image generation unit 11 generates an output image by performing scale conversion, affine conversion, or distortion conversion on the processing target image. Then, the image generation unit 11 processes the correspondence between the coordinates on the processing target image plane R3 and the coordinates on the output image plane R5, which is determined by the contents of the applied scale conversion, affine conversion, or distortion conversion. Store in the output image correspondence map 42.
  • Alternatively, when generating the output image using a virtual camera 2V, the image generation means 11 may calculate the coordinates on the output image plane R5 from the coordinates on the processing target image plane R3 according to the adopted projection method, and store the correspondence in the processing target image/output image correspondence map 42.
  • the image generation unit 11 of the control unit 1 refers to the input image / space model correspondence map 40, the space model / processing target image correspondence map 41, and the processing target image / output image correspondence map 42. Then, the image generation means 11 corresponds the correspondence between the coordinates on the input image plane R4 and the coordinates on the space model MD, the correspondence between the coordinates on the space model MD and the coordinates on the processing object image plane R3, and the processing object The correspondence between the coordinates on the image plane R3 and the coordinates on the output image plane R5 is traced.
  • the image generation unit 11 acquires values (for example, luminance value, hue value, saturation value, etc.) of the coordinates on the input image plane R4 corresponding to the respective coordinates on the output image plane R5, The acquired value is adopted as the value of each coordinate on the corresponding output image plane R5 (step S5).
  • When a plurality of coordinates on a plurality of input image planes R4 correspond to one coordinate on the output image plane R5, the image generation means 11 may derive a statistical value based on the values of those coordinates (for example, the average, maximum, minimum, or median value) and adopt it as the value of that coordinate on the output image plane R5.
  • control unit 1 determines whether or not the values of all the coordinates on the output image plane R5 are associated with the values of the coordinates on the input image plane R4 (step S6). Then, when it is determined that the values of all the coordinates are not associated yet (NO in step S6), the control unit 1 repeats the processes of step S4 and step S5.
  • When it is determined that the values of all the coordinates have been associated (YES in step S6), the control unit 1 has generated the output image, and this series of processes ends.
  • When the image generation device 100 does not generate a processing target image, the processing target image generation process is omitted; in that case, the "coordinates on the processing target image plane" in step S4 of the output image generation process is read as "coordinates on the space model".
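  • The flow of steps S1 to S6 amounts to building lookup tables backward (processing target plane to space model to input plane, and output plane to processing target plane) and then pulling pixel values forward through them. A compact structural sketch follows (hypothetical helper names; the geometric details are the associations described above):

```python
# Sketch of the two-pass flow of FIG. 9 (steps S1 to S6). The callables
# to_space_model, to_input_plane, and to_target_plane stand in for the
# geometric associations described in the text.


def build_maps(target_coords, to_space_model, to_input_plane):
    """Steps S1 to S3: for every coordinate on the processing target image
    plane R3, record its space model coordinate and input image coordinate."""
    model_map = {}  # space model / processing target image correspondence map 41
    input_map = {}  # input image / space model correspondence map 40
    for m in target_coords:
        p = to_space_model(m)             # S1: follow the parallel line group PL
        model_map[m] = p
        input_map[p] = to_input_plane(p)  # S2: line through the optical center C
    return model_map, input_map


def generate_output(output_coords, to_target_plane, model_map, input_map, sample):
    """Steps S4 to S6: trace each output coordinate back to the input image
    and adopt the pixel value found there (e.g. luminance, hue, saturation)."""
    output = {}
    for n in output_coords:
        m = to_target_plane(n)  # S4: scale/affine/distortion conversion
        k = input_map[model_map[m]]
        output[n] = sample(k)   # S5: value of the corresponding input pixel
    return output               # S6: all output coordinates associated
```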
  • With the configuration described above, the image generation device 100 can generate a processing target image and an output image that let the operator intuitively grasp the positional relationship between the shovel 60 and the objects around it.
  • The image generation device 100 performs the coordinate association so as to trace back from the processing target image plane R3 through the space model MD to the input image plane R4. This lets it reliably associate each coordinate on the processing target image plane R3 with one or more coordinates on the input image plane R4, so it can generate a better-quality processing target image more quickly than when the coordinate association is executed in the order from the input image plane R4 through the space model MD to the processing target image plane R3.
  • When the coordinate association is executed in that input-to-target order, each coordinate on the input image plane R4 can be reliably associated with one or more coordinates on the processing target image plane R3, but some coordinates on the processing target image plane R3 may end up associated with no coordinate on the input image plane R4; in that case, interpolation processing or the like must be applied to those coordinates.
  • When enlarging or reducing only the image corresponding to the curved surface region R2 of the space model MD, the image generation device 100 simply changes the angle formed between the parallel line group PL and the processing target image plane R3 and rewrites only the part of the space model/processing target image correspondence map 41 related to the curved surface region R2; the desired enlargement or reduction is realized without rewriting the contents of the input image/space model correspondence map 40.
  • Similarly, when changing the appearance of the output image, the image generation device 100 simply changes the values of the various parameters related to scale conversion, affine conversion, or distortion conversion and rewrites the processing target image/output image correspondence map 42; a desired output image (a scale-converted, affine-converted, or distortion-converted image) can be generated without rewriting the contents of the input image/space model correspondence map 40 or the space model/processing target image correspondence map 41.
  • Likewise, when changing the viewpoint of the output image, the image generation device 100 simply changes the values of the various parameters of the virtual camera 2V and rewrites the processing target image/output image correspondence map 42; an output image viewed from the desired viewpoint (a viewpoint-converted image) can be generated without rewriting the contents of the input image/space model correspondence map 40 or the space model/processing target image correspondence map 41.
  • FIG. 10 shows a display example in which an output image generated from the input images of the two cameras 2 mounted on the shovel 60 (the right-side camera 2R and the rear camera 2B) is displayed on the display unit 5.
  • The image generation device 100 projects the input images of the two cameras 2 onto the plane region R1 and the curved surface region R2 of the space model MD and reprojects them onto the processing target image plane R3 to generate the processing target image. It then generates an output image by applying image conversion processing (for example, scale conversion, affine conversion, distortion conversion, or viewpoint conversion) to the generated processing target image. In this way, the image generation device 100 generates an output image that simultaneously displays an image looking down on the vicinity of the shovel 60 from above (the image in the plane region R1) and an image looking out horizontally from the shovel 60 (the image in the processing target image plane R3).
  • When the image generation device 100 does not generate a processing target image, this output image (the periphery-monitoring virtual viewpoint image) is generated by applying image conversion processing (for example, viewpoint conversion processing) to the image projected on the space model MD.
  • The periphery-monitoring virtual viewpoint image is trimmed into a circular shape so that the image can be displayed naturally while the shovel 60 performs a turning operation, and it is generated so that the center CTR of the circle lies on the cylinder central axis of the space model MD and on the swing axis PV of the shovel 60. The periphery-monitoring virtual viewpoint image is therefore displayed rotating about the center CTR as the shovel 60 turns.
  • the cylinder central axis of the space model MD may or may not coincide with the reprojection axis.
  • the radius of the space model MD is, for example, 5 meters.
• The angle formed by the parallel line group PL with the processing target image plane R3, or the starting point height of the auxiliary line group AL, may be set so that an object (for example, a worker) present at the maximum reach distance of the excavating attachment E from the turning center of the shovel 60 (for example, 12 meters) is displayed sufficiently large on the display unit 5 (for example, 7 millimeters or more).
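• A rough worked example of this sizing rule (the display radius and worker width below are assumptions for illustration, not values taken from this disclosure):

```python
# Check whether a worker at the attachment's maximum reach would still occupy
# at least 7 mm on screen under an assumed display geometry.
DISPLAY_RADIUS_MM = 100.0  # assumed radius of the circular output image on screen
MONITOR_RADIUS_M = 12.0    # maximum reach distance of the excavating attachment E
WORKER_WIDTH_M = 0.5       # assumed shoulder width of a worker

scale_mm_per_m = DISPLAY_RADIUS_MM / MONITOR_RADIUS_M  # about 8.3 mm per meter
worker_on_screen_mm = WORKER_WIDTH_M * scale_mm_per_m  # about 4.2 mm

# Below 7 mm, so in this assumed geometry the angle of the parallel line group
# PL (or the start height of the auxiliary line group AL) would be adjusted to
# enlarge distant objects until the requirement is met.
print(worker_on_screen_mm < 7.0)  # True
```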
• In the periphery monitoring virtual viewpoint image, the CG image of the shovel 60 may be arranged so that the front of the shovel 60 coincides with the upper side of the screen of the display unit 5 and its turning center coincides with the center CTR.
• In addition, a frame image containing various information, such as orientation, may be arranged around the periphery monitoring virtual viewpoint image.
  • FIG. 11 is a top view of a shovel 60 on which the image generating apparatus 100 is mounted.
• The shovel 60 includes three cameras 2 (a left side camera 2L, a right side camera 2R, and a rear camera 2B), three human detection sensors 6 (a left side human detection sensor 6L, a right side human detection sensor 6R, and a rear human detection sensor 6B), and three notification units 20 (a left direction notification unit 20L, a right direction notification unit 20R, and a rear notification unit 20B).
  • Regions CL, CR, and CB indicated by alternate long and short dash lines in FIG. 11 indicate imaging spaces of the left side camera 2L, the right side camera 2R, and the rear camera 2B, respectively.
• The shovel 60 also includes the display unit 5, three alarm output units 7 (a left direction alarm output unit 7L, a right direction alarm output unit 7R, and a rear alarm output unit 7B), a gate lock lever 8, and an ignition switch 9.
• In the present embodiment, the monitoring space of each human detection sensor 6 is narrower than the imaging space of the corresponding camera 2; however, the monitoring space may be the same as, or wider than, the imaging space. In addition, although the monitoring space of the human detection sensor 6 is located near the shovel 60 within the imaging space of the camera 2, it may extend to an area farther from the shovel 60. Also, the monitoring spaces of the human detection sensors 6 overlap where the imaging spaces of the cameras 2 overlap; for example, in the overlapping portion of the imaging space CR of the right side camera 2R and the imaging space CB of the rear camera 2B, the monitoring space ZR of the right side human detection sensor 6R overlaps the monitoring space ZB of the rear human detection sensor 6B. However, the monitoring spaces of the human detection sensors 6 may also be arranged so that no overlap occurs.
  • FIG. 12 shows an input image of each of the three cameras 2 mounted on the shovel 60 and an output image generated using the input images.
• The image generation apparatus 100 projects the input images of the three cameras 2 onto the plane region R1 and the curved region R2 of the space model MD, and reprojects them onto the processing target image plane R3 to generate the processing target image. Further, the image generation apparatus 100 generates an output image by performing image conversion processing (for example, scale conversion, affine conversion, distortion conversion, or viewpoint conversion processing) on the generated processing target image. As a result, the image generation apparatus 100 generates a periphery monitoring virtual viewpoint image that simultaneously displays an image looking down on the vicinity of the shovel 60 from above (the image in the plane region R1) and an image looking outward from the shovel 60 in the horizontal direction (the image in the processing target image plane R3). The image displayed at the center of the periphery monitoring virtual viewpoint image is a CG image 60CG of the shovel 60.
• The input image of the right side camera 2R and the input image of the rear camera 2B each capture a person located in the overlapping portion of the imaging space of the right side camera 2R and the imaging space of the rear camera 2B (see the region R10 surrounded by a two-dot chain line in the input image of the right side camera 2R and the region R11 surrounded by a two-dot chain line in the input image of the rear camera 2B).
• In the output image, however, the person in that overlapping portion disappears (see the region R12 enclosed by an alternate long and short dash line in the output image).
• Therefore, in the output image portion corresponding to the overlapping portion, the area associated with coordinates on the input image plane of the rear camera 2B and the area associated with coordinates on the input image plane of the right side camera 2R are mixed, so that the object in the overlapping portion is prevented from disappearing.
• FIG. 13 is a diagram for explaining stripe pattern processing, which is an example of the image loss prevention processing for preventing the loss of an object in the overlapping portion of the imaging spaces of two cameras.
• F13A shows an output image portion corresponding to the overlapping portion of the imaging space of the right side camera 2R and the imaging space of the rear camera 2B, and corresponds to the rectangular region R13 indicated by a dotted line in FIG. 12.
• The area PR1 filled with gray is an image area in which the input image portion of the rear camera 2B is arranged; each coordinate on the output image plane corresponding to the area PR1 is associated with a coordinate on the input image plane of the rear camera 2B.
• The area PR2 filled with white is an image area in which the input image portion of the right side camera 2R is arranged; each coordinate on the output image plane corresponding to the area PR2 is associated with a coordinate on the input image plane of the right side camera 2R.
• The areas PR1 and PR2 are arranged to form a stripe pattern, and the boundary lines between the alternately striped areas PR1 and PR2 are defined by concentric circles on a horizontal plane centered on the turning center of the shovel 60.
• F13B is a top view showing the state of the space area diagonally right behind the shovel 60, which is imaged by both the rear camera 2B and the right side camera 2R. F13B also shows that a rod-like three-dimensional object OB is present diagonally right behind the shovel 60.
  • F13C shows a part of an output image generated based on an input image obtained by actually imaging the space area indicated by F13B with the rear camera 2B and the right side camera 2R.
• Through the viewpoint conversion used to generate a road surface image, the image of the three-dimensional object OB in the input image of the rear camera 2B is stretched in the direction of the line connecting the rear camera 2B and the object OB. The image OB1 is the part of the image of the three-dimensional object OB displayed when the road surface image in this output image portion is generated using the input image of the rear camera 2B.
• Likewise, the image of the three-dimensional object OB in the input image of the right side camera 2R is stretched in the direction of the line connecting the right side camera 2R and the object OB. The image OB2 is the part of the image of the three-dimensional object OB displayed when the road surface image in this output image portion is generated using the input image of the right side camera 2R.
• In the output image portion corresponding to the overlapping portion, the image generation device 100 mixes the area PR1, which is associated with coordinates on the input image plane of the rear camera 2B, and the area PR2, which is associated with coordinates on the input image plane of the right side camera 2R. As a result, the image generation apparatus 100 displays both images OB1 and OB2 of the single three-dimensional object OB in the output image and prevents the object OB from disappearing from the output image.
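• A minimal sketch of this stripe pattern selection in Python (the band width and camera labels are illustrative assumptions; the actual boundaries are the concentric circles described above):

```python
import math

def source_camera_for(x_m: float, y_m: float, band_width_m: float = 0.5) -> str:
    """Decide which camera supplies the output pixel whose ground position is
    (x_m, y_m), measured from the turning center of the shovel 60: pixels fall
    into concentric annular bands whose parity alternates between cameras."""
    r = math.hypot(x_m, y_m)            # distance from the turning center
    band_index = int(r / band_width_m)  # index of the concentric band
    return "rear_2B" if band_index % 2 == 0 else "right_2R"
```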
• FIG. 14 is a comparison diagram showing the difference between the output image of FIG. 12 and an output image obtained by applying the image loss prevention processing (stripe pattern processing) to the output image of FIG. 12.
• The upper part of FIG. 14 shows the output image of FIG. 12, and the lower part of FIG. 14 shows the output image after the image loss prevention processing (stripe pattern processing) is applied. While the person disappears in the region R12 surrounded by an alternate long and short dash line in the upper diagram of FIG. 14, the person is displayed without disappearing in the region R14 surrounded by an alternate long and short dash line in the lower diagram of FIG. 14.
• The image generation apparatus 100 may prevent the disappearance of an object in the overlapping portion by applying mesh pattern processing, averaging processing, or the like instead of the stripe pattern processing. Specifically, in the averaging processing, the image generation apparatus 100 adopts, as the value of each pixel in the output image portion corresponding to the overlapping portion, the average of the values (for example, luminance values) of the corresponding pixels in the input images of the two cameras. Alternatively, in the mesh pattern processing, the image generation apparatus 100 arranges the areas associated with pixel values from one camera's input image and the areas associated with pixel values from the other camera's input image so that they form a mesh pattern in the output image portion corresponding to the overlapping portion. Thereby, the image generation device 100 prevents the loss of the object in the overlapping portion.
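• As a sketch of the averaging variant, assuming the two input image portions have already been warped onto a common output grid (the warping itself is the map-based projection described earlier):

```python
import numpy as np

def average_overlap(img_2b: np.ndarray, img_2r: np.ndarray) -> np.ndarray:
    """Blend the overlapping output portion by per-pixel averaging of the two
    cameras' values (e.g. 8-bit luminance); widen first to avoid overflow."""
    acc = img_2b.astype(np.uint16) + img_2r.astype(np.uint16)
    return (acc // 2).astype(np.uint8)
```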
• FIG. 15 shows the input images of the three cameras 2 mounted on the shovel 60 and an output image generated using those input images, and corresponds to FIG. 12.
• FIG. 16 is a diagram showing an example of the relationship between the space model MD used in the first output image switching process and the processing target image plane R3, and corresponds to FIG. 4.
  • FIG. 17 is a diagram for explaining the relationship between two output images switched in the first output image switching process.
• The image generation apparatus 100 projects the input images of the three cameras 2 onto the plane region R1 and the curved region R2 of the space model MD, and reprojects them onto the processing target image plane R3 to generate the processing target image. Further, the image generation apparatus 100 generates an output image by performing image conversion processing (for example, scale conversion, affine conversion, distortion conversion, or viewpoint conversion processing) on the generated processing target image. As a result, the image generation device 100 generates a periphery monitoring virtual viewpoint image that simultaneously displays an image of the vicinity of the shovel 60 viewed from above and an image of the periphery of the shovel 60 viewed in the horizontal direction.
• The input images of the left side camera 2L, the rear camera 2B, and the right side camera 2R each show three workers.
  • the output image also shows that nine workers are present around the shovel 60.
• In FIG. 15, the height of the plane region R1 of the space model MD is set to a height corresponding to the road surface, which is the ground contact surface of the shovel 60. Therefore, in the image in the plane region R1 of FIG. 15, the distances between the ground contact positions (at the feet) of the nine workers and the CG image 60CG of the shovel 60 accurately represent the actual distances between each of the nine workers and the shovel 60. However, each worker's image is displayed larger the farther it extends from the ground contact position; in particular, as shown in FIG. 15, a worker's head is displayed noticeably larger relative to the size of the CG image 60CG of the shovel 60. Therefore, the operator of the shovel 60 viewing the output image of FIG. 15 may get the illusion that the distance between the shovel 60 and a worker is larger than the actual distance.
• Therefore, the image generation unit 11 switches the content of the output image when the human presence / absence determination unit 12 determines that a person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR.
• Specifically, the image generation means 11 changes the height of the plane region R1 of the space model MD from the height corresponding to the road surface to a height corresponding to the height of a person's head (hereinafter referred to as the "head height").
• The head height is a value set in advance, for example 150 cm. However, the head height may also be a dynamically determined value.
• For example, when the height of a detected worker can be obtained, the image generation device 100 may determine the head height based on the detected height. Specifically, the head height may be determined according to the height of the worker closest to the shovel 60, or according to a statistical value (a maximum value, a minimum value, an average value, or the like) of the heights of a plurality of workers present around the shovel 60.
• FIG. 16 explains the relationship between the space model MD of FIG. 4, which has the plane region R1 at a height corresponding to the road surface, and the head height reference space model MDM, which has the plane region R1M at the head height. The upper diagram of FIG. 16 shows the relationship between the space model MD including the plane region R1 and the processing target image plane R3, and the lower diagram of FIG. 16 shows the relationship between the head height reference space model MDM including the head height reference plane region R1M and the head height reference processing target image plane R3M. Further, in FIG. 17, the output image D1 is a road surface height-based periphery monitoring virtual viewpoint image generated using the space model MD, and the output image D2 is a head height-based periphery monitoring virtual viewpoint image generated using the space model MDM.
• The image D3 is an explanatory image representing the difference in size between the road surface height-based periphery monitoring virtual viewpoint image and the head height-based periphery monitoring virtual viewpoint image. The size of the image portion D10 in the road surface height-based periphery monitoring virtual viewpoint image corresponds to the size of the head height-based periphery monitoring virtual viewpoint image.
• The head height reference plane region R1M and the head height reference processing target image plane R3M are higher than the plane region R1 and the processing target image plane R3, which correspond to the road surface height, by the head height HT.
• As a result, the image generation device 100 can prevent the operator of the shovel 60 viewing the output image from perceiving the distance between the shovel 60 and a worker as larger than the actual distance.
• A region D10M on the space model MDM corresponding to the image portion D10 is included in the head height reference plane region R1M. That is, the head height-based periphery monitoring virtual viewpoint image does not use the input image portions reprojected onto the annular portion of the head height reference processing target image plane R3M (the portion other than the head height reference plane region R1M). Therefore, in the present embodiment, the image generating apparatus 100 omits the association of coordinates in areas other than the region D10M.
• In this manner, the image generation apparatus 100 switches the content of the output image based on the determination result of the person presence / absence determination unit 12. Specifically, when it is determined that a person is present around the shovel 60, the image generating apparatus 100 switches the road surface height-based periphery monitoring virtual viewpoint image to the head height-based periphery monitoring virtual viewpoint image. As a result, when the image generation device 100 detects a worker around the shovel 60, it can convey the distance between the shovel 60 and the worker to the operator of the shovel 60 more accurately. When the image generating apparatus 100 subsequently determines that there is no person around the shovel 60, it switches the head height-based periphery monitoring virtual viewpoint image back to the road surface height-based periphery monitoring virtual viewpoint image. This allows the operator of the shovel 60 to monitor the surroundings of the shovel 60 more broadly.
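• A minimal sketch of this first output image switching process (constants and the dynamic variant are illustrative assumptions):

```python
ROAD_SURFACE_CM = 0
DEFAULT_HEAD_HEIGHT_CM = 150  # the preset head height mentioned above

def select_plane_height(person_detected: bool, worker_heights_cm=()) -> int:
    """Pick the height of the plane region R1 for the periphery monitoring
    virtual viewpoint image based on the person presence/absence result."""
    if not person_detected:
        return ROAD_SURFACE_CM         # road surface height-based image
    if worker_heights_cm:              # optionally derive from detected heights
        return max(worker_heights_cm)  # e.g. a statistical value (maximum)
    return DEFAULT_HEAD_HEIGHT_CM      # head height-based image
```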
  • FIG. 18 is a diagram for explaining the relationship between three output images switched in the second output image switching process.
• As described above, the image generation apparatus 100 executes image loss prevention processing, such as stripe pattern processing, mesh pattern processing, or averaging processing, to prevent the loss of an object in the overlapping portion of the imaging spaces of two cameras. Specifically, in the output image portion corresponding to the overlapping portion, the image loss prevention processing mixes areas associated with coordinates on the input image planes of the two cameras, thereby preventing the disappearance of objects in the overlapping portion. However, the image loss prevention processing has the problem that it reduces the visibility of objects in the overlapping portion.
  • the image generation unit 11 switches the content of the output image according to the determination result of the person presence / absence determination unit 12.
• Specifically, the image generation unit 11 sets a camera corresponding to a monitoring space determined to contain a person as a priority camera, and sets a camera corresponding to a monitoring space determined to contain no person as a non-priority camera; the imaging space of a priority camera and the imaging space of a non-priority camera have an overlapping portion. When a priority camera and a non-priority camera cannot be determined, the image generation unit 11 generates a periphery monitoring virtual viewpoint image to which the image loss prevention processing is applied.
• On the other hand, when a priority camera and a non-priority camera can be determined, the image generation unit 11 generates the periphery monitoring virtual viewpoint image by associating the output image portion corresponding to the overlapping portion with coordinates on the input image plane of the priority camera, without applying the image loss prevention processing.
• For example, the image generation unit 11 generates a periphery monitoring virtual viewpoint image to which the image loss prevention processing is applied when it is determined that a person is present in all of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR. This is because no camera can be a priority camera and no camera can be a non-priority camera; if any one camera were used as a priority camera, a person in the overlapping portion might be lost.
• The image generation means 11 also generates a periphery monitoring virtual viewpoint image to which the image loss prevention processing is applied when it is determined that no person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR. This is because there is no object that would justify treating any camera as a priority camera. In this case, however, the image generation unit 11 may instead generate the periphery monitoring virtual viewpoint image without applying the image loss prevention processing, by associating the output image portion corresponding to the overlapping portion with coordinates on the input image plane of one of the cameras. This is because there is no object that could be lost in the first place, and omitting the image loss prevention processing reduces the processing load on the control unit 1.
• The output image D11 of FIG. 18 shows the periphery monitoring virtual viewpoint image generated when it is determined that a person is present in all of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR.
• Specifically, in the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, the image generation unit 11 mixes the area associated with coordinates on the input image plane of the left side camera 2L and the area associated with coordinates on the input image plane of the rear camera 2B.
• Similarly, in the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB, the image generation unit 11 mixes the area associated with coordinates on the input image plane of the right side camera 2R and the area associated with coordinates on the input image plane of the rear camera 2B. This prevents the loss of a worker in the output image portions R15 and R16.
• The output image D12 of FIG. 18 shows the side camera priority periphery monitoring virtual viewpoint image generated when it is determined that a person is present in the left side monitoring space ZL and the right side monitoring space ZR and that no person is present in the rear monitoring space ZB.
• In this case, the image generation unit 11 sets each of the left side camera 2L and the right side camera 2R as a priority camera and sets the rear camera 2B as a non-priority camera.
• Therefore, the image generation unit 11 associates coordinates on the input image plane of the left side camera 2L with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, and associates coordinates on the input image plane of the right side camera 2R with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB.
• Further, the image generation unit 11 associates the output image portion R17P with coordinates on the input image plane of the left side camera 2L, associates the output image portion R18P with coordinates on the input image plane of the right side camera 2R, and associates the output image portion R19N with coordinates on the input image plane of the rear camera 2B.
• The output image D13 of FIG. 18 shows the rear camera priority periphery monitoring virtual viewpoint image generated when it is determined that no person is present in the left side monitoring space ZL and the right side monitoring space ZR and that a person is present in the rear monitoring space ZB.
• In this case, the image generation unit 11 sets the rear camera 2B as the priority camera and sets each of the left side camera 2L and the right side camera 2R as a non-priority camera.
• Therefore, the image generation unit 11 associates coordinates on the input image plane of the rear camera 2B with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, and also associates coordinates on the input image plane of the rear camera 2B with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB.
• Further, the image generation unit 11 associates the output image portion R17N with coordinates on the input image plane of the left side camera 2L, associates the output image portion R18N with coordinates on the input image plane of the right side camera 2R, and associates the output image portion R19P with coordinates on the input image plane of the rear camera 2B.
• Note that the output image portion R17P corresponds to the output image portion R17N plus the output image portion R15, the output image portion R18P corresponds to the output image portion R18N plus the output image portion R16, and the output image portion R19P corresponds to the output image portion R19N plus the output image portions R15 and R16.
  • FIG. 19 is a correspondence table showing the correspondence between the determination result of the human presence / absence determination means 12 and the content of the output image.
• In FIG. 19, a circle indicates that the human presence / absence determination unit 12 has determined that a person is present, and a cross indicates that it has determined that no person is present.
• Pattern B represents that the side camera priority periphery monitoring virtual viewpoint image, as shown in the output image D12 of FIG. 18, is generated when it is determined that a person is present only in the right side monitoring space ZR and that no person is present in the left side monitoring space ZL and the rear monitoring space ZB.
• Pattern C represents that the side camera priority periphery monitoring virtual viewpoint image, as shown in the output image D12 of FIG. 18, is generated when it is determined that a person is present in the left side monitoring space ZL and the right side monitoring space ZR and that no person is present in the rear monitoring space ZB.
• Pattern H represents that the periphery monitoring virtual viewpoint image to which the image loss prevention processing is applied, as shown in the output image D11 of FIG. 18, is generated when it is determined that no person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR.
• In this manner, the image generation apparatus 100 switches the content of the output image based on the determination result of the person presence / absence determination unit 12. Specifically, when a priority camera and a non-priority camera cannot be determined, the image generation apparatus 100 generates a periphery monitoring virtual viewpoint image to which the image loss prevention processing is applied. On the other hand, when a priority camera and a non-priority camera can be determined, the image generation unit 11 generates the periphery monitoring virtual viewpoint image by associating the output image portion corresponding to the overlapping portion with coordinates on the input image plane of the priority camera, without applying the image loss prevention processing. As a result, the image generation apparatus 100 can display a worker present in the overlapping portion of the imaging spaces of two cameras more clearly.
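• A minimal sketch of this priority camera selection for one overlap region (the return labels are illustrative; "loss_prevention_mix" stands for the stripe, mesh, or averaging processing described above):

```python
def overlap_source(person_in_side: bool, person_in_rear: bool) -> str:
    """Decide how to fill an output region (such as R15 or R16) where a side
    camera's imaging space overlaps the rear camera's imaging space."""
    if person_in_side and not person_in_rear:
        return "side_camera"         # the side camera is the priority camera
    if person_in_rear and not person_in_side:
        return "rear_camera"         # the rear camera is the priority camera
    return "loss_prevention_mix"     # no priority camera can be determined
```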
• In the above, the image generation device 100 generates the side camera priority periphery monitoring virtual viewpoint image with the left side camera 2L and the right side camera 2R as priority cameras and the rear camera 2B as the non-priority camera. Alternatively, the image generating apparatus 100 generates the rear camera priority periphery monitoring virtual viewpoint image with the rear camera 2B as the priority camera and the left side camera 2L and the right side camera 2R as non-priority cameras.
  • the present invention is not limited to this configuration.
• For example, the image generation apparatus 100 may apply the image loss prevention processing to the output image portion R16 while associating the output image portion R15 with coordinates on the input image plane of either the left side camera 2L or the rear camera 2B. Conversely, the image generation apparatus 100 may apply the image loss prevention processing to the output image portion R15 while associating the output image portion R16 with coordinates on the input image plane of either the right side camera 2R or the rear camera 2B.
  • the image generation apparatus 100 may combine the first output image switching process and the second output image switching process.
• In the above, the image generation device 100 associates the monitoring space of one human detection sensor with the imaging space of one camera; however, the monitoring space of one human detection sensor may be associated with the imaging spaces of a plurality of cameras, and the monitoring spaces of a plurality of human detection sensors may be associated with the imaging space of one camera.
  • the image generation apparatus 100 switches the content of the output image at the moment when the determination result of the person presence / absence determination means 12 changes.
  • the present invention is not limited to this configuration.
• For example, the image generation apparatus 100 may wait a predetermined delay time after the determination result of the person presence / absence determination unit 12 changes before switching the content of the output image. This suppresses frequent switching of the content of the output image.
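• A minimal sketch of such a delay, assuming a simple time-based debounce (the delay value and timing scheme are illustrative):

```python
import time

DELAY_S = 1.0  # assumed delay before the displayed content follows a change

class OutputModeSwitcher:
    """Switch the output image mode only after the determination result has
    remained stable for DELAY_S seconds, suppressing frequent switching."""

    def __init__(self) -> None:
        self.displayed = False  # mode currently shown (person present?)
        self.pending = False    # latest raw determination result
        self.changed_at = time.monotonic()

    def update(self, person_present: bool) -> bool:
        if person_present != self.pending:      # result changed: restart timer
            self.pending = person_present
            self.changed_at = time.monotonic()
        if (self.pending != self.displayed
                and time.monotonic() - self.changed_at >= DELAY_S):
            self.displayed = self.pending       # stable long enough: switch
        return self.displayed
```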
  • FIG. 20 is a flowchart showing the flow of the first alarm control process
  • FIG. 21 shows an example of the transition of the output image displayed during the first alarm control process.
  • the alarm control means 13 repeatedly executes this first alarm control process at a predetermined cycle.
  • the human presence / absence determination means 12 determines whether or not a person is present around the shovel 60 (step S11).
• At this point, the image generation unit 11 generates and displays, for example, a road surface height-based periphery monitoring virtual viewpoint image as shown in the output image D21 of FIG. 21.
• When it is determined that a person is present around the shovel 60 (YES in step S11), the person presence / absence determination means 12 outputs a detection signal to the alarm control means 13.
• Upon receiving the detection signal, the alarm control means 13 outputs an alarm start signal for starting an alarm to the alarm output unit 7 and causes the alarm output unit 7 to output an alarm (step S12).
  • the alarm output unit 7 outputs an alarm sound.
• Further, the image generation means 11 switches the road surface height-based periphery monitoring virtual viewpoint image to the head height-based periphery monitoring virtual viewpoint image, for example, as shown in the output image D22 of FIG. 21.
• Specifically, the human presence / absence determination means 12 notifies the alarm control means 13 with a detection signal when it determines that the worker P1 is present in the rear monitoring space ZB.
• Then, the alarm control means 13 outputs an alarm start signal to the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R, and causes all three alarm output units to output an alarm sound.
  • the image generation unit 11 superimposes and displays the alarm stop button G1 on the periphery monitoring virtual viewpoint image based on head height.
• The alarm stop button G1 is a software button implemented in cooperation with the touch panel serving as the input unit 3. The operator can stop the alarm sound by pressing the alarm stop button G1.
  • the alarm stop button G1 may be a hardware button installed in the vicinity of the display unit 5.
• In this case, the image generation unit 11 may superimpose, on the periphery monitoring virtual viewpoint image, a text message indicating that the alarm can be stopped by pressing the hardware button serving as the alarm stop button G1.
• Note that the image generation means 11 may superimpose the alarm stop button G1 while leaving the road surface height-based periphery monitoring virtual viewpoint image unchanged, as shown in the output image D23 of FIG. 21.
• On the other hand, when it is determined that no person is present around the shovel 60 (NO in step S11), the person presence / absence determination unit 12 does not output a detection signal to the alarm control unit 13. Therefore, the alarm control means 13 does not output an alarm start signal to the alarm output unit 7. Further, even when the alarm control means 13 has already output an alarm, it does not output an alarm stop signal for stopping the alarm to the alarm output unit 7.
• Thereafter, the alarm control means 13 determines whether or not the alarm stop button G1 has been pressed (step S13). When it is determined that the alarm stop button G1 has been pressed (YES in step S13), the alarm control unit 13 outputs an alarm stop signal to the alarm output unit 7 and stops the alarm output from the alarm output unit 7 (step S14). In addition, when the head height-based periphery monitoring virtual viewpoint image is being displayed, the image generation unit 11 switches it to the road surface height-based periphery monitoring virtual viewpoint image and displays it.
• On the other hand, when it is determined that the alarm stop button G1 has not been pressed (NO in step S13), the alarm control unit 13 does not output an alarm stop signal to the alarm output unit 7. That is, even if the human presence / absence determination means 12 subsequently determines that there is no person around the shovel 60, the alarm output by the alarm control means 13 does not stop until the alarm stop button G1 is pressed. Further, when the head height-based periphery monitoring virtual viewpoint image is being displayed, the image generation unit 11 continues to display it.
• Note that the alarm control means 13 may change the content of the alarm when the human presence / absence determination means 12 determines that there is no person around the shovel 60, even before the alarm stop button G1 is pressed. Specifically, the alarm control means 13 may change the intensity, pitch, output interval, or the like of the alarm sound already being output, or may change the intensity, emission color, emission interval, or the like of an alarm lamp that is already emitting light. This allows the operator receiving the alarm to distinguish between the case where the person presence / absence determination means 12 determines that a person is present and the case where it determines that no person is present.
• With the above configuration, the image generating apparatus 100 outputs an alarm when it is determined that a person is present in the monitoring space, and does not stop the alarm until the operator indicates the intention to stop it. That is, the image generation apparatus 100 does not stop the alarm even when a person who was present in the monitoring space has left it, or when detection of a person present in the monitoring space has failed. Therefore, the image generating apparatus 100 can urge the operator to confirm that a worker who was present in the monitoring space around the shovel 60 has actually left the monitoring space.
• For example, in a configuration in which the human presence / absence determination means 12 determines the presence or absence of a worker (moving body) using a moving object detection sensor, the image generation device 100 can prevent the alarm from being stopped merely because the worker stands still in the monitoring space. Similarly, in a configuration in which the human presence / absence determination means 12 determines the presence or absence of a worker (moving body) using optical flow, the image generation device 100 can prevent the alarm from being stopped merely because the worker has stopped moving in the monitoring space. Note that a moving object detection sensor or optical flow that does not treat stationary objects as detection targets cannot distinguish between a worker standing still in the monitoring space and a worker having left the monitoring space; without the behavior described above, the alarm could therefore be stopped even though the worker is still present in the monitoring space.
• In contrast, the image generating apparatus 100 can call the attention of the operator of the shovel 60 by not stopping the alarm while there is a possibility that a worker is present in the monitoring space.
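• The latching behavior of this first alarm control process can be sketched as follows (method names are illustrative):

```python
class AlarmLatch:
    """Once a person is detected the alarm turns on and stays on until the
    operator presses the alarm stop button, even if detection drops out."""

    def __init__(self) -> None:
        self.alarm_on = False

    def on_detection(self, person_present: bool) -> None:
        # Steps S11/S12: start the alarm when a person is detected; loss of
        # detection never clears it.
        if person_present:
            self.alarm_on = True

    def on_stop_button(self) -> None:
        # Steps S13/S14: only the operator's button press stops the alarm.
        self.alarm_on = False
```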
  • FIG. 22 is a flowchart showing the flow of the second alarm control process
  • FIG. 23 shows an example of the transition of the output image displayed during the second alarm control process
• FIG. 24 is a flowchart showing the flow of the process by which the work machine state determination means 14 determines the state of the work machine (hereinafter referred to as the "work machine state determination process").
• FIG. 25 is a flowchart showing the flow of the process (hereinafter referred to as the "work start alarm control process") in which the alarm control means 13 controls the alarm output unit 7 when the shovel 60 switches from the work impossible state to the work available state.
• The alarm control means 13 repeatedly executes the second alarm control process at a predetermined cycle, and the work machine state determination means 14 repeatedly executes the work machine state determination process at a predetermined cycle. Further, the alarm control means 13 executes the work start alarm control process when the shovel 60 switches from the work impossible state to the work available state.
  • the human existence determining means 12 determines whether or not a person exists around the shovel 60 (step S21).
• At this point, the image generation unit 11 generates and displays, for example, a road surface height-based periphery monitoring virtual viewpoint image as shown in the output image D31 of FIG. 23.
• When it is determined that a person is present around the shovel 60 (YES in step S21), the human presence / absence determination means 12 refers to the result of the determination by the work machine state determination means 14 (see FIG. 24, described later) (step S22). In this case, the human presence / absence determination means 12 may fix the gate lock lever 8 in the locked state. Specifically, the human presence / absence determination means 12 may prevent the operator from pulling up the gate lock lever 8, or may prevent the unlocked state from being established even if the gate lock lever 8 is pulled up.
• When the work machine state determination means 14 determines that the shovel 60 is in the work available state (YES in step S22), the human presence / absence determination means 12 outputs a detection signal to the alarm control means 13.
• Upon receiving the detection signal, the alarm control means 13 outputs an alarm start signal to the alarm output unit 7 and causes the alarm output unit 7 to output an alarm (step S23).
• In the present embodiment, the alarm output unit 7 outputs an alarm sound. Specifically, when it is determined that persons (here, the workers P10 to P12) are present in the left side monitoring space ZL, the human presence / absence determination means 12 notifies the alarm control means 13 with a left direction detection signal.
• Then, the alarm control means 13 outputs an alarm start signal to the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R, and causes all three alarm output units to output an alarm sound. In this case, the control unit 1 activates the display unit 5 if the display unit 5 is not already activated.
• Further, the image generation unit 11 switches the road surface height-based periphery monitoring virtual viewpoint image to the head height-based periphery monitoring virtual viewpoint image and superimposes the alarm stop button G1 on it, as shown in the output image D22 of FIG. 21. Alternatively, as shown in the output image D23 of FIG. 21, the image generation means 11 may superimpose the alarm stop button G1 on the road surface height-based periphery monitoring virtual viewpoint image.
  • the alarm control means 13 may prohibit the work by the shovel 60 until the alarm stop button G1 is pressed.
  • the alarm control means 13 may close the gate lock valve and shut off the flow of hydraulic fluid between the control valve and the operation lever or the like to invalidate the operation lever or the like.
• On the other hand, when the work machine state determination means 14 determines that the shovel 60 is not in the work available state (NO in step S22), the human presence / absence determination means 12 does not output a detection signal to the alarm control means 13, and instead sets the value of a human detection flag prepared in the NVRAM or the like to "1" (on) (step S24).
• Note that the human detection flag is initially set to the value "0" (off); the value "1" (on) represents that a person has been detected, and the value "0" (off) represents that no person has been detected.
• In this case, since the alarm control means 13 does not receive a detection signal, it does not output an alarm start signal to the alarm output unit 7, and no alarm is output from the alarm output unit 7.
• In addition, if an alarm is already being output, the alarm control unit 13 may output an alarm stop signal to the alarm output unit 7.
• When the display unit 5 is activated, the image generation means 11 displays the same output image as when it is determined that no person is present, even though it has been determined that a person is present around the shovel 60. Specifically, the image generation unit 11 displays, for example, a road surface height-based periphery monitoring virtual viewpoint image as shown in the output image D32 of FIG. 23. However, the image generation unit 11 may instead switch the road surface height-based periphery monitoring virtual viewpoint image to the head height-based periphery monitoring virtual viewpoint image.
• When it is determined that no person is present around the shovel 60 (NO in step S21), the human presence / absence determination means 12 does not refer to the result of the determination by the work machine state determination means 14, and does not output a detection signal to the alarm control means 13.
• In this case, the image generation means 11 displays the same output image as when it is determined that a person is present, even though no person is present around the shovel 60. Specifically, the image generation unit 11 displays, for example, a head height-based periphery monitoring virtual viewpoint image together with the alarm stop button G1, as shown in the output image D33 of FIG. 23. However, the image generation unit 11 may instead switch the head height-based periphery monitoring virtual viewpoint image to the road surface height-based periphery monitoring virtual viewpoint image, as shown in the output image D34 of FIG. 23.
• Thereafter, the alarm control means 13 monitors whether the alarm stop button G1 has been pressed (step S25). When it is determined that the alarm stop button G1 has been pressed (YES in step S25), the alarm control unit 13 outputs an alarm stop signal to the alarm output unit 7 and stops the alarm output from the alarm output unit 7 (step S26).
• In addition, when the head height-based periphery monitoring virtual viewpoint image is being displayed, the image generation unit 11 switches it to the road surface height-based periphery monitoring virtual viewpoint image and displays it.
• On the other hand, when it is determined that the alarm stop button G1 has not been pressed (NO in step S25), the alarm control unit 13 does not output an alarm stop signal to the alarm output unit 7. That is, even if the human presence / absence determination means 12 subsequently determines that there is no person around the shovel 60, the alarm output by the alarm control means 13 does not stop until the alarm stop button G1 is pressed.
• Further, when the head height-based periphery monitoring virtual viewpoint image is being displayed, the image generation means 11 continues to display it.
• Note that the alarm control means 13 may change the content of the alarm when the human presence / absence determination means 12 determines that there is no person around the shovel 60, even before the alarm stop button G1 is pressed. Specifically, the alarm control means 13 may change the intensity, pitch, output interval, or the like of the alarm sound already being output, or may change the intensity, emission color, emission interval, or the like of an alarm lamp that is already emitting light. This allows the operator receiving the alarm to distinguish between the case where the person presence / absence determination means 12 determines that a person is present and the case where it determines that no person is present.
  • the work machine state determination means 14 determines whether the ignition switch 9 is in the on state based on the output of the ignition switch 9 (step S31).
• When it is determined that the ignition switch 9 is in the on state (YES in step S31), the work machine state determination unit 14 determines whether the gate lock lever 8 is in the unlocked state based on the output of the gate lock lever 8 (step S32).
• When it is determined that the gate lock lever 8 is in the unlocked state (YES in step S32), the work machine state determination unit 14 determines that the shovel 60 is in the work available state (step S33).
• On the other hand, when it is determined that the ignition switch 9 is not in the on state (NO in step S31), or when it is determined that the gate lock lever 8 is not in the unlocked state (NO in step S32), the work machine state determination unit 14 determines that the shovel 60 is not in the work available state, that is, that the shovel 60 is in the work impossible state (step S34).
  • the work machine state determination unit 14 sets the value of the work available flag prepared in the NVRAM or the like according to the determination result.
• Thereafter, the human presence / absence determination means 12 refers to the value of the work available flag in step S22 of the second alarm control process shown in FIG. 22, and determines whether to output an alarm based on the referenced value. Note that the work available flag is initially set to the value "0" (off); the value "1" (on) indicates that work is possible, and the value "0" (off) indicates that work is not possible.
• Note that the work machine state determination means 14 may determine whether the shovel 60 is in the work available state based on the state of only one of the gate lock lever 8 and the ignition switch 9.
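• A minimal sketch of this work machine state determination (FIG. 24), using both inputs as in steps S31 to S34 (the flag convention follows the description above; the function name is illustrative):

```python
def determine_work_state(ignition_on: bool, gate_lock_unlocked: bool) -> int:
    """Return the work available flag: 1 (work possible) or 0 (not possible)."""
    # Steps S31-S33: both conditions must hold for the work available state.
    if ignition_on and gate_lock_unlocked:
        return 1
    # Step S34: otherwise the shovel is in the work impossible state.
    return 0
```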
  • the alarm control means 13 refers to the value of the human detection flag (step S41).
• When the value of the human detection flag is "1" (on), the alarm control unit 13 outputs an alarm start signal to the alarm output unit 7 and causes the alarm output unit 7 to output an alarm (step S42).
• That is, the alarm control means 13 outputs an alarm even when the human presence / absence determination means 12 determines that there is no person around the shovel 60 at the present time.
• Specifically, the alarm control unit 13 outputs an alarm start signal to the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R, and causes all three alarm output units to output an alarm sound.
  • the alarm control means 13 may prohibit the work by the shovel 60 until the output of the alarm is stopped. Specifically, the alarm control means 13 may close the gate lock valve and shut off the flow of hydraulic fluid between the control valve and the operation lever or the like to invalidate the operation lever or the like.
• Thereafter, the alarm control means 13 measures the elapsed time after the start of the alarm and determines whether a predetermined time (for example, 2 seconds) has elapsed since the alarm started (step S43).
• When it is determined that the predetermined time has not elapsed since the start of the alarm (NO in step S43), the alarm control unit 13 waits until the predetermined time elapses.
• When it is determined that the predetermined time has elapsed since the start of the alarm (YES in step S43), the alarm control unit 13 outputs an alarm stop signal to the alarm output unit 7 and stops the alarm output from the alarm output unit 7 (step S44). Further, the alarm control means 13 resets the value of the human detection flag to "0" (off) (step S45). Note that the alarm control means 13 may instead refrain from stopping the alarm until the alarm stop button G1 is pressed.
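• A minimal sketch of this work start alarm control process (the callbacks standing in for the alarm output unit 7 are illustrative, and the blocking wait is a simplification):

```python
import time

ALARM_DURATION_S = 2.0  # the "predetermined time" in the text

def on_switch_to_work_available(human_detection_flag: int,
                                start_alarm, stop_alarm) -> int:
    """Run steps S41-S45 when the shovel switches from the work impossible
    state to the work available state; returns the new flag value."""
    if human_detection_flag == 1:     # S41: a person was detected earlier
        start_alarm()                 # S42: alert the operator
        time.sleep(ALARM_DURATION_S)  # S43: wait the predetermined time
        stop_alarm()                  # S44: stop the alarm
        return 0                      # S45: reset the human detection flag
    return human_detection_flag
```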
• As described above, the image generating apparatus 100 does not output an alarm while the shovel 60 is in the work impossible state, even when it is determined that a person is present in the monitoring space. Therefore, the image generation apparatus 100 can prevent an unnecessary alarm from being output while the shovel 60 is in the work impossible state.
• In addition, the image generating apparatus 100 can urge the operator to confirm that a worker who was present in the monitoring space around the shovel 60 has actually left the monitoring space.
  • the alarm control means 13 outputs an alarm as long as the value of the human detection flag is “1” (on).
  • the present invention is not limited to this configuration.
• For example, the alarm control unit 13 may refrain from outputting an alarm if the elapsed time since the value of the human detection flag was set to "1" (on) is equal to or longer than a predetermined time.
• Alternatively, the human presence / absence determination means 12 may reset the value of the human detection flag to "0" (off) when the elapsed time since it was set to "1" (on) reaches a predetermined time. This prevents an alarm from being output based on a detection record that is too old.
• For example, in a configuration in which the human presence / absence determination means 12 determines the presence or absence of a worker (moving body) based on a moving object detection sensor, a worker who enters the monitoring space while the shovel 60 is in the work impossible state and then stands still can no longer be detected when the shovel 60 becomes ready to work. Even in that case, the image generating apparatus 100 can prevent the operator of the shovel 60 from starting work with the shovel 60 without being aware of that worker.
• Note that a moving object detection sensor or optical flow that does not treat stationary objects as detection targets cannot distinguish between a worker standing still in the monitoring space and a worker having left the monitoring space. Therefore, in a configuration in which no alarm is output whenever it is determined that no person is present at the current time, work by the shovel 60 could be permitted even though a person who entered the monitoring space before the shovel 60 became ready to work is still standing there. In contrast, the image generating apparatus 100 can call the attention of the operator of the shovel 60 by outputting an alarm whenever there is a possibility that a worker is present in the monitoring space.
• In the above, the image generation device 100 causes the display unit 5 to display the output image even when the shovel 60 is in the work impossible state; however, the output image need not be displayed on the display unit 5 in that state. When the shovel 60 is in the work impossible state, there is no possibility that a worker will come into contact with the shovel 60, and the operator does not have to monitor the periphery of the shovel 60 through the output image.
• FIG. 26 is a flowchart showing the flow of the first notification unit control process executed by the image generation apparatus 100 mounted on the shovel 60 shown in FIG. 11; the notification unit control means 15 repeatedly executes this first notification unit control process at a predetermined cycle.
  • the notification unit control means 15 refers to the determination result of the work machine state determination means 14 and determines whether or not the shovel 60 is in the work available state (step S51).
• When it is determined that the shovel 60 is in the work available state (YES in step S51), the notification unit control means 15 outputs a work available state signal to the notification unit 20 and notifies the surroundings that the shovel 60 is in the work available state (step S52).
• In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit red light, thereby notifying people in the surroundings that the shovel 60 is ready for work. The notification unit control means 15 may also keep the indicator lamps lit steadily or blink them. Hereinafter, this state of the notification unit 20 is referred to as the "work available notification state".
• On the other hand, when it is determined that the shovel 60 is not in the work available state (NO in step S51), the notification unit control means 15 outputs a work impossible state signal to the notification unit 20 and notifies the surroundings that the shovel 60 is in the work impossible state.
• Specifically, the notification unit control unit 15 causes the three indicator lamps serving as the notification unit 20 to emit green light, thereby notifying people in the surroundings that the shovel 60 is not in the work available state, that is, that the shovel 60 is in the work impossible state. The notification unit control means 15 may also keep the indicator lamps lit steadily or blink them. Hereinafter, this state of the notification unit 20 is referred to as the "work impossible notification state".
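• A minimal sketch of this first notification unit control process (the color strings stand in for driving the indicator lamps):

```python
def lamp_color(work_available: bool) -> str:
    """Steps S51 onward: red signals the work available notification state,
    green the work impossible notification state."""
    return "red" if work_available else "green"
```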
• FIG. 27 is a flowchart showing the flow of the work permission determination process executed by the image generation apparatus 100 mounted on the shovel 60 shown in FIG. 11; the work permission determination means 16 repeatedly executes this work permission determination process at a predetermined cycle.
  • the work permission determination means 16 refers to the determination result of the work machine state determination means 14 to determine whether the shovel 60 is in the work enable state (step S61).
• When it is determined that the shovel 60 is in the work available state (YES in step S61), the work permission determination means 16 refers to the determination result of the person presence / absence determination means 12 and determines whether or not a person is present around the shovel 60 (step S62).
• When it is determined that a person is present around the shovel 60 (YES in step S62), the work permission determination unit 16 prohibits work by the shovel 60 (step S63). Specifically, the work permission determination means 16, for example, outputs a work prohibition signal to the gate lock valve, shuts off the flow of hydraulic oil between the control valve and the operation lever or the like, and disables the operation lever or the like, thereby prohibiting work by the shovel 60. In this case, the work permission determination means 16 prohibits work by the shovel 60 even when the gate lock lever 8 is in the unlocked state. However, the work permission determination unit 16 may release the prohibition of work by the shovel 60 when a predetermined condition is satisfied, for example when the alarm output by the alarm control unit 13 is stopped.
• On the other hand, when it is determined that no person is present around the shovel 60 (NO in step S62), the work permission determination means 16 permits work by the shovel 60 (step S64). Specifically, the work permission determination means 16, for example, outputs a work permission signal to the gate lock valve to allow hydraulic oil to flow between the control valve and the operation levers, etc., thereby enabling the operation levers, etc. and permitting work by the shovel 60.
• When it is determined in step S61 that the shovel 60 is not in the workable state (NO in step S61), the work permission determination means 16 ends the current round of the work permission determination process.
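For illustration only, the decision logic of steps S61 to S64 might look as follows in Python; `state_judge`, `person_judge`, and `gate_lock_valve` are hypothetical stand-ins, and the open/close calls model the work permission and work prohibition signals sent to the gate lock valve.

```python
# Illustrative sketch of the work permission determination process (FIG. 27);
# all names are hypothetical stand-ins for the means described in the text.

def work_permission_determination(state_judge, person_judge, gate_lock_valve):
    """Executed repeatedly at a predetermined cycle (steps S61 to S64)."""
    if not state_judge.is_workable():   # step S61, NO: nothing to decide
        return
    if person_judge.person_present():   # step S62
        gate_lock_valve.close()         # step S63: prohibit work (levers disabled)
    else:
        gate_lock_valve.open()          # step S64: permit work (levers enabled)
```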
• With the above configuration, the image generating apparatus 100 can inform persons around the shovel 60 of whether or not the shovel 60 is in the workable state. In addition, when a person is present around the shovel 60, the image generating apparatus 100 can prohibit work by the shovel 60 even if the shovel 60 is in the workable state. Therefore, a person who sees the notification unit 20 in the workable notification state can recognize that, if a person is present around the shovel 60, work by the shovel 60 is prohibited and the shovel 60 will not unexpectedly start moving even if that person approaches it. As a result, a person who sees the notification unit 20 in the workable notification state can approach the shovel 60 with confidence.
• Similarly, a person who sees the notification unit 20 in the inoperable notification state can recognize that the shovel 60 will not unexpectedly start moving, because work by the shovel 60 is prohibited regardless of whether a person is present around the shovel 60. Therefore, a person who sees the notification unit 20 in the inoperable notification state can also approach the shovel 60 with confidence. In this manner, the image generating apparatus 100 can more appropriately inform a person present around the shovel 60 of the state of the shovel 60.
• FIG. 28 is a flowchart showing the flow of the second notification unit control process executed by the image generation apparatus 100 mounted on the shovel 60 shown in FIG. 11. The notification unit control means 15 repeatedly executes this second notification unit control process at a predetermined cycle. In parallel with the second notification unit control process by the notification unit control means 15, the work permission determination means 16 repeatedly executes the work permission determination process at a predetermined cycle.
• First, the notification unit control means 15 refers to the determination result of the work machine state determination means 14 and determines whether or not the shovel 60 is in the workable state (step S71).
• When it is determined that the shovel 60 is in the workable state (YES in step S71), the notification unit control means 15 refers to the determination result of the person presence determination means 12 and determines whether or not a person is present around the shovel 60 (step S72).
• When it is determined that a person is present around the shovel 60 (YES in step S72), the notification unit control means 15 outputs a workable / person-detected state signal to the notification unit 20 to inform surrounding people that the shovel 60 is in the workable state and in a state where a person is detected by the person detection sensor 6 (hereinafter, the "person-detected state") (step S73).
• Specifically, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit red light to inform surrounding people that the shovel 60 is in the workable state and in the person-detected state. The notification unit control means 15 may also turn the indicator lamps on or off. Hereinafter, this state of the notification unit 20 is referred to as the "workable / person-detected notification state".
• In this case, work by the shovel 60 is prohibited by the work permission determination means 16. However, the work permission determination means 16 may release the prohibition of work by the shovel 60 when a predetermined condition is satisfied, for example, when the alarm output by the alarm control means 13 is stopped.
• On the other hand, when it is determined that no person is present around the shovel 60 (NO in step S72), the notification unit control means 15 outputs a workable / person-not-detected state signal to the notification unit 20 to inform surrounding people that the shovel 60 is in the workable state and in a state where no person is detected by the person detection sensor 6 (hereinafter, the "person-not-detected state") (step S74).
• Specifically, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit yellow light to inform surrounding people that the shovel 60 is in the workable state and in the person-not-detected state. The notification unit control means 15 may also turn the indicator lamps on or off. Hereinafter, this state of the notification unit 20 is referred to as the "workable / person-not-detected notification state".
• When it is determined that the shovel 60 is not in the workable state (NO in step S71), the notification unit control means 15 outputs an inoperable state signal to the notification unit 20 to set the notification unit 20 to the inoperable notification state, thereby informing surrounding people that the shovel 60 is in the inoperable state (step S75). Specifically, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit green light to inform surrounding people that the shovel 60 is in the inoperable state. The notification unit control means 15 may also turn the indicator lamps on or off.
• Even when it determines that the shovel 60 is not in the workable state, the notification unit control means 15 may refer to the determination result of the person presence determination means 12 and determine whether or not a person is present around the shovel 60.
• In that case, when it is determined that a person is present around the shovel 60, the notification unit control means 15 may output an inoperable / person-detected state signal to the notification unit 20 to inform surrounding people that the shovel 60 is in the inoperable state and in the person-detected state. Specifically, the notification unit control means 15 may cause the three indicator lamps serving as the notification unit 20 to emit, for example, orange light to inform surrounding people that the shovel 60 is in the inoperable state and in the person-detected state. The notification unit control means 15 may also turn the indicator lamps on or off. Hereinafter, this state of the notification unit 20 is referred to as the "inoperable / person-detected notification state".
• When it is determined that no person is present around the shovel 60, the notification unit control means 15 may output an inoperable / person-not-detected state signal to the notification unit 20 to inform surrounding people that the shovel 60 is in the inoperable state and in the person-not-detected state. Specifically, the notification unit control means 15 may cause the three indicator lamps serving as the notification unit 20 to emit, for example, blue light to inform surrounding people that the shovel 60 is in the inoperable state and in the person-not-detected state. The notification unit control means 15 may also turn the indicator lamps on or off. Hereinafter, this state of the notification unit 20 is referred to as the "inoperable / person-not-detected notification state".
• In this manner, the notification unit control means 15 controls each notification unit 20 (indicator lamp) so that three notification states (the inoperable notification state, the workable / person-detected notification state, and the workable / person-not-detected notification state) can be distinguished from one another. Alternatively, the notification unit control means 15 may control each notification unit 20 (indicator lamp) so that four notification states (the inoperable / person-detected notification state, the inoperable / person-not-detected notification state, the workable / person-detected notification state, and the workable / person-not-detected notification state) can be distinguished from one another, as summarized in the sketch below.
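The color assignments above (red, yellow, green, and optionally orange and blue) can be restated in one hypothetical mapping; this sketch merely summarizes the variants described above and is not part of the embodiment.

```python
# Hypothetical mapping from the two determination results to an indicator lamp
# color, covering both the three-state and the four-state variants above.

def lamp_color(workable: bool, person_detected: bool, four_states: bool = False) -> str:
    if workable:
        # Workable / person-detected -> red, workable / person-not-detected -> yellow.
        return "red" if person_detected else "yellow"
    if four_states:
        # Optional variant: inoperable states are also split by person detection.
        return "orange" if person_detected else "blue"
    return "green"  # single inoperable notification state
```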
• Alternatively, the image generation apparatus 100 may include, separately from the three notification units 20, three additional notification units (not shown) for informing persons around the shovel 60 of the determination result of the person presence determination means 12. In this case, the notification unit control means 15 controls the notification state of each notification unit 20 so that the inoperable notification state and the workable notification state can be distinguished, and controls the notification state of each additional notification unit so that the person-detected state and the person-not-detected state can be distinguished.
• With the above configuration, the image generating apparatus 100 can inform people around the shovel 60 of whether or not the shovel 60 is in the workable state and, at the same time, of whether or not the shovel 60 is in the person-detected state. As a result, a person who sees the notification unit 20 can know whether or not he or she has been detected by the shovel 60, independently of whether the shovel 60 is in the workable state. In addition, when a person is present around the shovel 60, the image generating apparatus 100 can prohibit work by the shovel 60 even if the shovel 60 is in the workable state. Consequently, the image generating apparatus 100 can more appropriately inform a person present around the shovel 60 of the state of the shovel 60.
• In the above-described embodiment, the notification unit 20 makes the plurality of notification states distinguishable by changing the emission color; however, it may instead make them distinguishable by changing displayed character information.
• In the above-described embodiments, the image generation apparatus 100 adopts the cylindrical space model MD as the space model; however, it may adopt a space model having another columnar shape such as a polygonal prism, may adopt a space model composed of two surfaces, a bottom surface and a side surface, or may adopt a space model having only a side surface.
• Further, the image generation apparatus 100 is mounted, together with a camera and a human detection sensor, on a self-propelled shovel equipped with movable members such as a bucket, an arm, a boom, and a swing mechanism. The image generation apparatus 100 thus constitutes an operation support system that supports movement of the shovel and operation of those movable members while presenting an image of the surroundings to the operator.
• However, the image generation apparatus 100 may also be mounted, together with the camera and the human detection sensor, on a work machine that has no swing mechanism, such as a forklift or an asphalt finisher, or on a work machine that has movable members but is not self-propelled, such as an industrial machine or a fixed crane, and may thus constitute an operation support system that supports the operation of those work machines.
• Further, although the periphery monitoring device has been described taking as an example the image generation device 100 including the camera 2 and the display unit 5, the periphery monitoring device may be configured as a device without an image display function based on the camera 2, the display unit 5, and the like.
• For example, as shown in FIG. 29, a periphery monitoring device 100A serving as a device that executes the first notification unit control process or the second notification unit control process may omit the camera 2, the input unit 3, the storage unit 4, the display unit 5, the coordinate association means 10, the image generation means 11, and the alarm control means 13.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mining & Mineral Resources (AREA)
  • Civil Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Component Parts Of Construction Machinery (AREA)

Abstract

A periphery monitoring device (100A) for a shovel (60) provided with a notification unit (20) visible from the periphery is provided with: a work-machine-state determination means (14) for determining whether the shovel (60) is in a state in which work can be performed; a person's presence determination means (12) for determining whether a person is present in the periphery of the shovel (60); a notification-unit control means (15) for controlling the notification unit (20); and a work-permission determination means (16) for determining whether the shovel (60) is permitted to perform work. The notification-unit control means (15) sets the notification state of the notification unit (20) such that the notification state in cases when it is determined that the shovel (60) is in a state in which work can be performed differs from the notification state in cases when it is determined that the shovel (60) is not in such a state. The work-permission determination means (16) determines, on the basis of a determination result from the person's presence determination means (12), whether the shovel (60) is permitted to perform work.

Description

Periphery monitoring device for work machine
The present invention relates to a periphery monitoring device for a work machine having a function of informing people around the work machine of the state of the work machine.
A hydraulic excavator is known that is equipped with an indicator for informing surrounding people of its starting state, swing state, traveling state, and the like (see, for example, Patent Document 1).
Patent Document 1: JP 2002-327469 A
However, the indicator of Patent Document 1 merely informs surrounding people of the state of the hydraulic excavator unilaterally from the excavator side. Therefore, even if a worker working around the hydraulic excavator understands the state of the excavator by looking at the indicator, the worker cannot be sure that the excavator or its operator has noticed the worker's presence, and thus cannot continue working around the excavator with confidence. In this way, the indicator of Patent Document 1 cannot sufficiently present the information the worker wants to know, such as whether the operator of the excavator has noticed the worker's presence, or what state the excavator will be in when the excavator and the worker come close to each other.
In view of the above, it is desirable to provide a periphery monitoring device for a work machine that can more appropriately inform a person present around the work machine of the state of the work machine.
A periphery monitoring device for a work machine according to an embodiment of the present invention is a periphery monitoring device for a work machine including a notification unit visible from the periphery of the work machine, the device including: work machine state determination means for determining whether the work machine is in a workable state; person presence determination means for determining whether a person is present around the work machine; notification unit control means for controlling the notification unit; and work permission determination means for determining whether to permit work by the work machine. The notification unit control means sets the notification state of the notification unit to different states depending on whether the work machine is determined to be in the workable state or determined not to be in the workable state, and the work permission determination means determines whether to permit work by the work machine based on the determination result of the person presence determination means.
The above-described means provide a periphery monitoring device for a work machine that can more appropriately inform a person present around the work machine of the state of the work machine.
FIG. 1 is a block diagram schematically showing a configuration example of an image generation device according to an embodiment of the present invention.
FIG. 2 is a diagram showing a configuration example of a shovel on which the image generation device is mounted.
FIG. 3 is a diagram showing an example of a space model onto which an input image is projected.
FIG. 4 is a diagram showing an example of the relationship between the space model and the processing-target image plane.
FIG. 5 is a diagram for explaining the correspondence between coordinates on the input image plane and coordinates on the space model.
FIG. 6 is a diagram for explaining the correspondence between coordinates established by the coordinate association means.
FIG. 7 is a diagram for explaining the action of a group of parallel lines.
FIG. 8 is a diagram for explaining the action of a group of auxiliary lines.
FIG. 9 is a flowchart showing the flow of the processing-target image generation process and the output image generation process.
FIG. 10 is an example of an output image.
FIG. 11 is a top view of the shovel on which the image generation device is mounted.
FIG. 12 is an example of a diagram showing the input images of three cameras mounted on the shovel and an output image generated using those input images.
FIG. 13 is a diagram for explaining an image loss prevention process that prevents the disappearance of an object in the overlapping portion of the imaging spaces of two cameras.
FIG. 14 is a comparison diagram showing the difference between the output image shown in FIG. 12 and an output image obtained by applying the image loss prevention process to the output image of FIG. 12.
FIG. 15 is another example of a diagram showing the input images of three cameras mounted on the shovel and an output image generated using those input images.
FIG. 16 is a diagram showing another example of the relationship between the space model and the processing-target image plane.
FIG. 17 is a diagram for explaining the relationship between two output images switched by a first output image switching process.
FIG. 18 is a diagram for explaining the relationship between three output images switched by a second output image switching process.
FIG. 19 is a correspondence table showing the correspondence between the determination result of the person presence determination means and the content of the output image.
FIG. 20 is a flowchart showing the flow of a first alarm control process.
FIG. 21 is a diagram showing an example of the transition of the output image displayed during the first alarm control process.
FIG. 22 is a flowchart showing the flow of a second alarm control process.
FIG. 23 is a diagram showing an example of the transition of the output image displayed during the second alarm control process.
FIG. 24 is a flowchart showing the flow of a work machine state determination process.
FIG. 25 is a flowchart showing the flow of a work-start alarm control process.
FIG. 26 is a flowchart showing the flow of a first notification unit control process.
FIG. 27 is a flowchart showing the flow of a work permission determination process.
FIG. 28 is a flowchart showing the flow of a second notification unit control process.
FIG. 29 is a block diagram schematically showing another configuration example of the periphery monitoring device.
Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram schematically showing a configuration example of an image generation apparatus 100 according to an embodiment of the present invention.
The image generation apparatus 100 is an example of a periphery monitoring device for a work machine that monitors the periphery of the work machine, and comprises a control unit 1, a camera 2, an input unit 3, a storage unit 4, a display unit 5, a human detection sensor 6, an alarm output unit 7, a gate lock lever 8, an ignition switch 9, and a notification unit 20. Specifically, the image generation apparatus 100 generates an output image based on an input image captured by the camera 2 mounted on the work machine and presents the output image to the operator. Further, the image generation apparatus 100 switches the content of the output image to be presented based on the output of the human detection sensor 6.
FIG. 2 is a diagram showing a configuration example of a shovel 60 as a work machine on which the image generation apparatus 100 is mounted. In the shovel 60, an upper swing body 63 is mounted on a crawler-type lower traveling body 61 via a swing mechanism 62 so as to be able to swing around a swing axis PV.
The upper swing body 63 has a cab (operator's cab) 64 at its front left portion and an excavation attachment E at its front center portion, and carries the camera 2 (right-side camera 2R, rear camera 2B) and the human detection sensor 6 (right-side human detection sensor 6R, rear human detection sensor 6B) on its right side surface and rear surface, as well as the notification unit 20 (left-side notification unit 20L, right-side notification unit 20R, rear notification unit 20B). The display unit 5 is installed at a position in the cab 64 that is easily visible to the operator. In the cab 64, the alarm output unit 7 (right-side alarm output unit 7R, rear alarm output unit 7B), the gate lock lever 8, and the ignition switch 9 are also installed.
Next, each component of the image generation apparatus 100 will be described.
The control unit 1 is a computer including a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), an NVRAM (Non-Volatile Random Access Memory), and the like. In the present embodiment, the control unit 1 stores, in the ROM or NVRAM, programs corresponding to each of the coordinate association means 10, the image generation means 11, the person presence determination means 12, the alarm control means 13, the work machine state determination means 14, the notification unit control means 15, and the work permission determination means 16 described later, and causes the CPU to execute the processing corresponding to each means while using the RAM as a temporary storage area.
The camera 2 is a device for acquiring an input image showing the surroundings of the shovel 60. In the present embodiment, the camera 2 comprises, for example, a right-side camera 2R and a rear camera 2B attached to the right side surface and the rear surface of the upper swing body 63 so as to be able to image areas that are blind spots for the operator in the cab 64 (see FIG. 2). The camera 2 includes an imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor. The camera 2 may be attached to positions other than the right side surface and the rear surface of the upper swing body 63 (for example, the front surface and the left side surface), and may be fitted with a wide-angle lens or a fisheye lens so as to image a wide range.
The camera 2 acquires an input image in response to a control signal from the control unit 1 and outputs the acquired input image to the control unit 1. When the camera 2 acquires the input image using a fisheye lens or a wide-angle lens, it outputs to the control unit 1 a corrected input image in which the apparent distortion and tilt caused by using such a lens have been corrected. Alternatively, the camera 2 may output the uncorrected input image as-is to the control unit 1; in that case, the control unit 1 corrects the apparent distortion and tilt.
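As one possible realization of such a correction (the embodiment does not specify an algorithm), a fisheye input image could be remapped with OpenCV as in the following sketch; the camera matrix `K` and the fisheye distortion coefficients `D` are assumed to be known from calibration.

```python
# Illustrative fisheye correction with OpenCV; K (3x3 camera matrix) and
# D (fisheye distortion coefficients) are assumed to come from calibration.
import cv2
import numpy as np

def undistort_fisheye(frame, K, D):
    h, w = frame.shape[:2]
    # The remap tables need to be built only once per camera.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```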
The input unit 3 is a device for enabling the operator to input various kinds of information to the image generation apparatus 100, and is, for example, a touch panel, button switches, a pointing device, or a keyboard.
The storage unit 4 is a device for storing various kinds of information, and is, for example, a hard disk, an optical disk, or a semiconductor memory.
The display unit 5 is a device for displaying image information, and is, for example, a liquid crystal display or a projector installed in the cab 64 (see FIG. 2) of the shovel 60; it displays the various images output by the control unit 1.
The human detection sensor 6 is a device for detecting a person present around the shovel 60. In the present embodiment, the human detection sensor 6 is attached, for example, to the right side surface and the rear surface of the upper swing body 63 so as to be able to detect a person present in areas that are blind spots for the operator in the cab 64 (see FIG. 2).
The human detection sensor 6 is a sensor that detects a person as distinguished from objects other than people; for example, it is a sensor that detects an energy change in the corresponding monitoring space, and includes motion detection sensors using the output signal of a pyroelectric infrared sensor, a bolometer-type infrared sensor, an infrared camera, or the like. In the present embodiment, the human detection sensor 6 uses a pyroelectric infrared sensor and detects a moving body (a moving heat source) as a person. The monitoring space of the right-side human detection sensor 6R is included in the imaging space of the right-side camera 2R, and the monitoring space of the rear human detection sensor 6B is included in the imaging space of the rear camera 2B.
Like the camera 2, the human detection sensor 6 may be attached to positions other than the right side surface and the rear surface of the upper swing body 63 (for example, the front surface and the left side surface); it may be attached to any one of the front, left, right, and rear surfaces of the upper swing body 63, or to all of those surfaces.
The alarm output unit 7 is a device that outputs an alarm to the operator of the shovel 60. For example, the alarm output unit 7 is an alarm device that outputs at least one of sound and light, and includes audio output devices such as buzzers and speakers and light-emitting devices such as LEDs and flashlights. In the present embodiment, the alarm output unit 7 is a buzzer that outputs an alarm sound, and comprises a right-side alarm output unit 7R attached to the right inner wall of the cab 64 and a rear alarm output unit 7B attached to the rear inner wall of the cab 64 (see FIG. 2).
The gate lock lever 8 is a device that switches the state of the shovel 60. In the present embodiment, the gate lock lever 8 has a locked state that puts the shovel 60 in the inoperable state and an unlocked state that puts the shovel 60 in the workable state. The "workable state" means a state in which the operator can operate the shovel 60, and the "inoperable state" means a state in which the operator cannot operate the shovel 60. The gate lock lever 8 repeatedly outputs a signal representing its current state to the control unit 1 at a predetermined cycle.
Specifically, the operator pulls the gate lock lever 8 up to a roughly horizontal position to place it in the unlocked state, and pushes the gate lock lever 8 down to place it in the locked state.
In the unlocked state, the gate lock lever 8 blocks the entrance of the cab 64 to keep the operator from leaving the cab 64, while enabling operation by the operation levers, operation pedals, and the like (hereinafter, "operation levers, etc."), not shown, in the cab 64 so that the operator can operate the shovel 60.
In the locked state, on the other hand, the gate lock lever 8 opens the entrance of the cab 64 to allow the operator to leave the cab 64, while disabling operation by the operation levers, etc. in the cab 64 so that the operator cannot operate the shovel 60.
More specifically, the gate lock lever 8 closes the gate lock valve in the locked state and opens it in the unlocked state. The gate lock valve is a switching valve provided in the oil passage between a control valve (not shown) and the operation levers, etc. The control valve is a flow control valve that controls the flow of hydraulic oil between a hydraulic pump (not shown) and the various hydraulic actuators. In the closed state, the gate lock valve shuts off the flow of hydraulic oil between the control valve and the operation levers, etc., thereby disabling the operation levers, etc. In the open state, the gate lock valve allows hydraulic oil to flow between the control valve and the operation levers, etc., thereby enabling the operation levers, etc.
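The interlock described above can be pictured with a small, purely illustrative state sketch; the class and method names are hypothetical and carry no meaning in the embodiment.

```python
# Purely illustrative model of the gate lock lever / gate lock valve interlock.

class GateLockLever:
    def __init__(self):
        self.unlocked = False          # pushed down: locked state

    def pull_up(self):
        """Lever raised to a roughly horizontal position: unlocked state."""
        self.unlocked = True           # gate lock valve opens

    def push_down(self):
        """Lever pushed down: locked state."""
        self.unlocked = False          # gate lock valve closes

    def operation_levers_effective(self) -> bool:
        # An open valve lets hydraulic oil flow between the control valve and
        # the operation levers, etc., making them effective.
        return self.unlocked
```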
The ignition switch 9 is a device that switches the state of the shovel 60. In the present embodiment, the ignition switch 9 has an off state that puts the shovel 60 in the inoperable state and an on state that puts the shovel 60 in the workable state. The ignition switch 9 repeatedly outputs a signal representing its current state to the control unit 1 at a predetermined cycle.
Specifically, the operator switches the ignition switch 9 between the on state and the off state by pressing the ignition switch 9.
In the on state, the ignition switch 9 starts the engine, thereby starting the control pump that supplies hydraulic oil to the oil passage between the control valve and the operation levers, etc. The ignition switch 9 thus enables operation by the operation levers, etc. in the cab 64 so that the operator can operate the shovel 60.
In the off state, on the other hand, the ignition switch 9 stops the engine, thereby stopping the control pump. The ignition switch 9 thus disables operation by the operation levers, etc. in the cab 64 so that the operator cannot operate the shovel 60.
The notification unit 20 is a device that informs surrounding people of the state of the shovel 60. In the present embodiment, the notification unit 20 is an indicator lamp composed of LEDs, flashlights, or the like, and is attached to the left side surface, the right side surface, and the rear surface of the upper swing body 63 so as to be visible to surrounding people (see FIG. 2). The notification unit 20 may be attached to any position visible to surrounding people, such as the outer surface of the shovel 60 including the front surface of the upper swing body 63, and may also be configured to be attached to the camera 2.
The image generation apparatus 100 may also generate a processing-target image based on the input image, apply image conversion processing to the processing-target image to generate an output image that allows the positional relationship with, and sense of distance to, surrounding objects to be grasped intuitively, and present that output image to the operator.
The "processing-target image" is an image generated based on the input image and subjected to image conversion processing (for example, scale conversion, affine conversion, distortion conversion, or viewpoint conversion processing). Specifically, the "processing-target image" is an image suitable for image conversion processing that is generated from an input image captured by a camera imaging the ground surface from above and containing, due to its wide angle of view, an image in the horizontal direction (for example, a portion of the sky). More specifically, that input image is projected onto a predetermined space model so that the horizontal-direction image is not displayed unnaturally (for example, so that the sky portion is not treated as being on the ground surface), and the projection image projected onto the space model is then reprojected onto another two-dimensional plane to generate the processing-target image. The processing-target image may also be used as an output image as-is, without image conversion processing.
The "space model" is the projection target of the input image. Specifically, the "space model" is composed of one or more planes or curved surfaces including at least a plane or curved surface other than the processing-target image plane, which is the plane on which the processing-target image is located. The plane or curved surface other than the processing-target image plane is, for example, a plane parallel to the processing-target image plane, or a plane or curved surface forming an angle with the processing-target image plane.
The image generation apparatus 100 may generate an output image by applying image conversion processing to the projection image projected onto the space model, without generating a processing-target image. The projection image may also be used as an output image as-is, without image conversion processing.
FIG. 3 is a diagram showing an example of the space model MD onto which the input image is projected. The left part of FIG. 3 shows the relationship between the shovel 60 and the space model MD when the shovel 60 is viewed from the side, and the right part of FIG. 3 shows that relationship when the shovel 60 is viewed from above.
As shown in FIG. 3, the space model MD has a semi-cylindrical shape, with a plane region R1 inside its bottom surface and a curved region R2 inside its side surface.
FIG. 4 is a diagram showing an example of the relationship between the space model MD and the processing-target image plane. The processing-target image plane R3 is, for example, a plane containing the plane region R1 of the space model MD. Although FIG. 4 shows the space model MD as a cylinder rather than the semi-cylinder of FIG. 3 for clarity, the space model MD may be either semi-cylindrical or cylindrical; the same applies to the subsequent figures. As described above, the processing-target image plane R3 may be a circular region containing the plane region R1 of the space model MD, or an annular region not containing the plane region R1.
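For illustration, deciding whether a position belongs to the plane region R1 or the curved region R2 of the cylindrical model reduces to comparing its horizontal distance from the cylinder axis with the model radius; the radius parameter in the sketch below is an assumption, since the embodiment does not fix a value.

```python
# Illustrative region test for the cylindrical space model MD; the radius is
# an assumed parameter, not a value specified by the embodiment.
import math

def model_region(x: float, y: float, radius: float) -> str:
    """x, y: horizontal position relative to the cylinder axis of the model.
    Returns "R1" for the plane region inside the bottom, "R2" otherwise."""
    return "R1" if math.hypot(x, y) < radius else "R2"
```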
Next, the various means of the control unit 1 will be described.
The coordinate association means 10 is a means for associating coordinates on the input image plane on which the input image captured by the camera 2 is located, coordinates on the space model MD, and coordinates on the processing-target image plane R3. In the present embodiment, the coordinate association means 10 associates these coordinates based on various camera parameters of the camera 2 that are preset or input via the input unit 3, and on the predetermined mutual positional relationship among the input image plane, the space model MD, and the processing-target image plane R3. The various camera parameters of the camera 2 are, for example, its optical center, focal length, CCD size, optical axis direction vector, camera horizontal direction vector, and projection method. The coordinate association means 10 stores these correspondences in the input image / space model correspondence map 40 and the space model / processing-target image correspondence map 41 of the storage unit 4.
When no processing-target image is generated, the coordinate association means 10 omits the association between coordinates on the space model MD and coordinates on the processing-target image plane R3, and the storage of that correspondence in the space model / processing-target image correspondence map 41.
The image generation means 11 is a means for generating the output image. In the present embodiment, the image generation means 11 associates coordinates on the processing-target image plane R3 with coordinates on the output image plane on which the output image is located, for example by applying scale conversion, affine conversion, or distortion conversion to the processing-target image. The image generation means 11 stores this correspondence in the processing-target image / output image correspondence map 42 of the storage unit 4. The image generation means 11 then generates the output image by associating the value of each pixel in the output image with the value of each pixel in the input image while referring to the input image / space model correspondence map 40 and the space model / processing-target image correspondence map 41. The value of each pixel is, for example, a luminance value, hue value, or saturation value.
The image generation means 11 also associates coordinates on the processing-target image plane R3 with coordinates on the output image plane based on various parameters of a virtual camera that are preset or input via the input unit 3. The various parameters of the virtual camera are, for example, its optical center, focal length, CCD size, optical axis direction vector, camera horizontal direction vector, and projection method. The image generation means 11 stores this correspondence in the processing-target image / output image correspondence map 42 of the storage unit 4, and generates the output image by associating the value of each pixel in the output image with the value of each pixel in the input image while referring to the input image / space model correspondence map 40 and the space model / processing-target image correspondence map 41.
The image generation means 11 may generate the output image by changing the scale of the processing-target image without using the concept of a virtual camera.
When no processing-target image is generated, the image generation means 11 associates coordinates on the space model MD with coordinates on the output image plane according to the image conversion processing applied. In that case, the image generation means 11 generates the output image by associating the value of each pixel in the output image with the value of each pixel in the input image while referring to the input image / space model correspondence map 40, and omits the association between coordinates on the processing-target image plane R3 and coordinates on the output image plane and the storage of that correspondence in the processing-target image / output image correspondence map 42.
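The chained use of the three correspondence maps can be sketched as follows; modeling each map as a Python dict from destination coordinate to source coordinate is an illustrative simplification, not the storage format of the embodiment.

```python
# Illustrative chaining of the three correspondence maps described above.

def generate_output_image(input_image, input_space_map, space_target_map,
                          target_output_map):
    output = {}
    for out_px, target_px in target_output_map.items():
        space_coord = space_target_map[target_px]  # target plane R3 -> space model MD
        in_px = input_space_map[space_coord]       # space model MD -> input image plane
        output[out_px] = input_image[in_px]        # copy the pixel value (e.g. luminance)
    return output
```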
The image generation means 11 also switches the content of the output image based on the determination result of the person presence determination means 12. The switching of the output image by the image generation means 11 will be described in detail later.
The person presence determination means 12 is a means for determining the presence or absence of a person in each of a plurality of monitoring spaces set around the work machine. In the present embodiment, the person presence determination means 12 determines the presence or absence of a person around the shovel 60 based on the output of the human detection sensor 6.
The person presence determination means 12 may also determine the presence or absence of a person in each of the plurality of monitoring spaces set around the work machine based on the input image captured by the camera 2. Specifically, the person presence determination means 12 may determine the presence or absence of a person around the work machine using image processing techniques such as optical flow or pattern matching. The person presence determination means 12 may also determine the presence or absence of a person around the work machine based on the output of an image sensor other than the camera 2.
Alternatively, the person presence determination means 12 may determine the presence or absence of a person in each of the plurality of monitoring spaces set around the work machine based on both the output of the human detection sensor 6 and the output of an image sensor such as the camera 2.
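Since the embodiment leaves the combination rule open, the following one-line sketch assumes a simple logical OR of the sensor-based and image-based detections; this fusion rule is an assumption for illustration only.

```python
# Illustrative fusion of the two detection sources; the logical OR is an
# assumption, not a rule specified by the embodiment.

def person_present(sensor_detected: bool, image_detected: bool) -> bool:
    """True if a person is judged to be present in the monitoring space."""
    return sensor_detected or image_detected
```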
The alarm control means 13 is a means for controlling the alarm output unit 7. In the present embodiment, the alarm control means 13 controls the alarm output unit 7 based on the determination result of the person presence determination means 12, or based on the determination results of the person presence determination means 12 and the work machine state determination means 14. The control of the alarm output unit 7 by the alarm control means 13 will be described in detail later.
The work machine state determination means 14 is a means for determining the state of the work machine. In the present embodiment, the work machine state determination means 14 determines whether the shovel 60 is in the workable state. The determination of the state of the shovel 60 by the work machine state determination means 14 will be described in detail later.
The notification unit control means 15 is a means for controlling the notification unit 20. In the present embodiment, the notification unit control means 15 controls the notification unit 20 based on the determination result of the work machine state determination means 14, or based on the determination results of the person presence determination means 12 and the work machine state determination means 14. The control of the notification unit 20 by the notification unit control means 15 will be described in detail later.
The work permission determination means 16 is a means for determining whether to permit work by the shovel 60. In the present embodiment, the work permission determination means 16 determines whether to permit work by the shovel 60 based on the determination result of the person presence determination means 12. This determination by the work permission determination means 16 will be described in detail later.
Next, an example of specific processing by the coordinate association means 10 and the image generation means 11 will be described.
The coordinate association means 10 can associate coordinates on the input image plane with coordinates on the space model using, for example, Hamilton's quaternions.
FIG. 5 is a diagram for explaining the correspondence between coordinates on the input image plane and coordinates on the space model. The input image plane of the camera 2 is represented as a plane in a UVW orthogonal coordinate system whose origin is the optical center C of the camera 2, and the space model is represented as solid surfaces in an XYZ orthogonal coordinate system.
First, the coordinate association means 10 translates the origin of the XYZ coordinate system to the optical center C (the origin of the UVW coordinate system), and then rotates the XYZ coordinate system so that the X axis coincides with the U axis, the Y axis with the V axis, and the Z axis with the −W axis. This is done in order to convert coordinates on the space model (coordinates in the XYZ coordinate system) into coordinates on the input image plane (coordinates in the UVW coordinate system). The sign "−" in "−W axis" means that the direction of the Z axis is opposite to the direction of the W axis; this is because the UVW coordinate system takes the direction in front of the camera as the +W direction, while the XYZ coordinate system takes the vertically downward direction as the −Z direction.
When there are a plurality of cameras 2, each camera 2 has its own UVW coordinate system, so the coordinate association means 10 translates and rotates the XYZ coordinate system with respect to each of the plurality of UVW coordinate systems.
The above conversion is realized by translating the XYZ coordinate system so that the optical center C of the camera 2 becomes its origin, then rotating it so that the Z axis coincides with the −W axis, and further rotating it so that the X axis coincides with the U axis. By describing this conversion with Hamilton's quaternions, the coordinate association means 10 can combine these two rotations into a single rotation operation.
Incidentally, the rotation that makes a vector A coincide with another vector B corresponds to rotating by the angle formed by the vectors A and B about the normal of the plane spanned by them. Denoting that angle by θ, it follows from the inner product of the vectors A and B that

$$\theta = \cos^{-1}\left(\frac{A \cdot B}{|A|\,|B|}\right)$$
Also, the unit vector N of the normal of the plane spanned by the vectors A and B is obtained from their outer product as

$$N = \frac{A \times B}{|A|\,|B|\sin\theta}$$
A quaternion is a hypercomplex number satisfying

$$i^2 = j^2 = k^2 = ijk = -1$$

where i, j, and k are imaginary units. In the present embodiment, a quaternion Q is expressed, with real component t and pure imaginary components a, b, and c, as

$$Q = (t;\ a,\ b,\ c) = t + ai + bj + ck$$

and the conjugate quaternion of Q is expressed as

$$Q^* = (t;\ -a,\ -b,\ -c) = t - ai - bj - ck$$
A quaternion Q can represent a three-dimensional vector (a, b, c) with its pure imaginary components a, b, and c while setting the real component t to 0 (zero), and its components t, a, b, and c can also represent a rotation about an arbitrary vector as an axis.
 更に、四元数Qは、連続する複数回の回転動作を統合して一回の回転動作として表現することができる。具体的には、四元数Qは、例えば、任意の点S(sx,sy,sz)を、任意の単位ベクトルC(l,m,n)を軸としながら角度θだけ回転させたときの点D(ex,ey,ez)を以下のように表現することができる。 Furthermore, the quaternion Q can be expressed as one rotation operation by integrating a plurality of consecutive rotation operations. Specifically, for example, the quaternion Q is obtained by rotating an arbitrary point S (sx, sy, sz) by an angle θ with an arbitrary unit vector C (l, m, n) as an axis. The point D (ex, ey, ez) can be expressed as follows.
  Q = ( cos(θ/2); l·sin(θ/2), m·sin(θ/2), n·sin(θ/2) ),  D = (0; ex, ey, ez) = Q·(0; sx, sy, sz)·Q*

 Here, in the present embodiment, if the quaternion representing the rotation that brings the Z axis into coincidence with the -W axis is denoted by Qz, then the point X on the X axis of the XYZ coordinate system is moved to a point X', so the point X' is expressed as
  X' = Qz · X · Qz*
 Further, in the present embodiment, if the quaternion representing the rotation that brings the line connecting the point X' and the origin into coincidence with the U axis is denoted by Qx, then the quaternion R representing "the rotation that brings the Z axis into coincidence with the -W axis and further brings the X axis into coincidence with the U axis" is expressed as
  R = Qx · Qz
 From the above, the coordinate P' obtained when an arbitrary coordinate P on the space model (XYZ coordinate system) is expressed as a coordinate on the input image plane (UVW coordinate system) is given by
  P' = R · P · R*

 Moreover, since the quaternion R is invariant for each camera 2, the coordinate associating means 10 can thereafter convert a coordinate on the space model (XYZ coordinate system) into a coordinate on the input image plane (UVW coordinate system) simply by executing this operation.
 After converting the coordinates on the space model (XYZ coordinate system) into coordinates on the input image plane (UVW coordinate system), the coordinate associating means 10 calculates the incident angle α formed by the line segment CP' and the optical axis G of the camera 2. Here, the line segment CP' is the segment connecting the optical center C of the camera 2 (a coordinate in the UVW coordinate system) and the coordinate P' that represents an arbitrary coordinate P on the space model in the UVW coordinate system.
 The coordinate associating means 10 also calculates the argument φ in a plane H that is parallel to the input image plane R4 (for example, the CCD plane) of the camera 2 and contains the coordinate P', as well as the length of the line segment EP'. Here, the line segment EP' is the segment connecting the coordinate P' and the intersection point E of the plane H with the optical axis G, and the argument φ is the angle formed by the U' axis in the plane H and the line segment EP'.
 In a camera's optical system, the image height h is normally a function of the incident angle α and the focal length f. The coordinate associating means 10 therefore calculates the image height h by selecting an appropriate projection scheme such as normal projection (h = f·tanα), orthographic projection (h = f·sinα), stereographic projection (h = 2f·tan(α/2)), equisolid-angle projection (h = 2f·sin(α/2)), or equidistant projection (h = f·α).
 Thereafter, the coordinate associating means 10 decomposes the calculated image height h into a U component and a V component in the UV coordinate system according to the argument φ, and divides them by numerical values corresponding to the pixel size per pixel of the input image plane R4. In this way, the coordinate associating means 10 can associate the coordinate P (P') on the space model MD with a coordinate on the input image plane R4.
 Denoting the pixel size per pixel of the input image plane R4 in the U-axis direction by a_U and the pixel size per pixel in the V-axis direction by a_V, the coordinates (u, v) on the input image plane R4 corresponding to the coordinate P (P') on the space model MD are given by

  u = h·cos φ / a_U

  v = h·sin φ / a_V
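 The chain described above, from a camera-frame point through the incident angle α, the image height h under a chosen projection scheme, and the decomposition by the argument φ, down to pixel coordinates divided by a_U and a_V, can be summarized in a short sketch. The following is an illustrative, assumption-laden example rather than code from the patent; the focal length and pixel sizes are placeholder values.

```python
import math

def to_pixel(p_uvw, f, a_u, a_v, projection="normal"):
    """p_uvw: point in the camera's UVW frame (optical center at the origin,
    +W along the optical axis G). Returns (u, v) in pixels."""
    U, V, W = p_uvw
    r = math.sqrt(U*U + V*V)            # distance from the optical axis in plane H
    alpha = math.atan2(r, W)            # incident angle between CP' and axis G
    h = {
        "normal":        f * math.tan(alpha),
        "orthographic":  f * math.sin(alpha),
        "stereographic": 2 * f * math.tan(alpha / 2),
        "equisolid":     2 * f * math.sin(alpha / 2),
        "equidistant":   f * alpha,
    }[projection]
    phi = math.atan2(V, U)              # argument in plane H, from the U' axis
    return (h * math.cos(phi) / a_u,    # U component divided by pixel size a_U
            h * math.sin(phi) / a_v)    # V component divided by pixel size a_V

print(to_pixel((0.5, 0.2, 2.0), f=0.004, a_u=3e-6, a_v=3e-6, projection="equidistant"))
```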
 In this way, the coordinate associating means 10 associates the coordinates on the space model MD with the coordinates on the one or more input image planes R4 that exist for each camera, and stores the coordinate on the space model MD, the camera identifier, and the coordinate on the input image plane R4 in the input image/space model correspondence map 40 in association with one another.
 Since the coordinate associating means 10 computes the coordinate conversion using quaternions, it has the advantage of never causing gimbal lock, unlike computation using Euler angles. The coordinate associating means 10 is not, however, limited to computing the coordinate conversion with quaternions, and may compute it using Euler angles.
 When association with coordinates on a plurality of input image planes R4 is possible, the coordinate associating means 10 may associate the coordinate P (P') on the space model MD with a coordinate on the input image plane R4 of the camera whose incident angle α is smallest, or with a coordinate on an input image plane R4 selected by the operator.
 Next, the process of re-projecting, among the coordinates on the space model MD, the coordinates in the curved region R2 (coordinates having a component in the Z-axis direction) onto the processing-target image plane R3 lying on the XY plane will be described.
 FIG. 6 is a diagram for explaining the association of coordinates by the coordinate associating means 10. F6A shows, as an example, the correspondence between coordinates on the input image plane R4 of a camera 2 adopting normal projection (h = f·tanα) and coordinates on the space model MD. The coordinate associating means 10 associates the two sets of coordinates so that each line segment connecting a coordinate on the input image plane R4 of the camera 2 and the corresponding coordinate on the space model MD passes through the optical center C of the camera 2.
 In the example of F6A, the coordinate associating means 10 associates the coordinate K1 on the input image plane R4 of the camera 2 with the coordinate L1 in the plane region R1 of the space model MD, and the coordinate K2 on the input image plane R4 of the camera 2 with the coordinate L2 in the curved region R2 of the space model MD. At this time, the line segments K1-L1 and K2-L2 both pass through the optical center C of the camera 2.
 When the camera 2 adopts a projection scheme other than normal projection (for example, orthographic projection, stereographic projection, equisolid-angle projection, or equidistant projection), the coordinate associating means 10 associates the coordinates K1 and K2 on the input image plane R4 of the camera 2 with the coordinates L1 and L2 on the space model MD according to the respective projection scheme.
 Specifically, the coordinate associating means 10 associates the coordinates on the input image plane with the coordinates on the space model MD on the basis of a predetermined function (for example, orthographic projection (h = f·sinα), stereographic projection (h = 2f·tan(α/2)), equisolid-angle projection (h = 2f·sin(α/2)), or equidistant projection (h = f·α)). In this case, the line segments K1-L1 and K2-L2 do not pass through the optical center C of the camera 2.
 F6B shows the correspondence between the coordinates in the curved region R2 of the space model MD and the coordinates on the processing-target image plane R3. The coordinate associating means 10 introduces a parallel line group PL lying on the XZ plane and forming an angle β with the processing-target image plane R3, and associates the two sets of coordinates so that a coordinate in the curved region R2 of the space model MD and the corresponding coordinate on the processing-target image plane R3 both lie on one of the lines of the parallel line group PL.
 In the example of F6B, the coordinate associating means 10 associates the coordinate L2 in the curved region R2 of the space model MD with the coordinate M2 on the processing-target image plane R3 on the ground that they lie on a common parallel line.
 The coordinate associating means 10 can also associate the coordinates in the plane region R1 of the space model MD with coordinates on the processing-target image plane R3 using the parallel line group PL, just as for the coordinates in the curved region R2. In the example of F6B, however, the plane region R1 and the processing-target image plane R3 lie in a common plane, so the coordinate L1 in the plane region R1 of the space model MD and the coordinate M1 on the processing-target image plane R3 have the same coordinate value.
 In this way, the coordinate associating means 10 associates the coordinates on the space model MD with the coordinates on the processing-target image plane R3, and stores the coordinates on the space model MD and the coordinates on the processing-target image plane R3 in the space model/processing-target image correspondence map 41 in association with each other.
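 As an illustration of F6B, the sketch below re-projects a point of the cylindrical curved region R2 onto the plane R3 along the parallel line group PL. It is a minimal reading of the geometry described above, not code from the patent, and it assumes the cylinder of R2 is centered on the reprojection axis.

```python
import math

def reproject_pl(radius, z, psi, beta):
    """Point on R2 at cylinder radius `radius`, height z, azimuth psi [rad];
    returns its (x, y) on the processing-target plane R3 (z = 0), following the
    parallel line that meets R3 at angle beta."""
    d = radius + z / math.tan(beta)   # horizontal run of the line from height z
    return (d * math.cos(psi), d * math.sin(psi))

# A larger beta pulls the re-projected points inward, i.e. the image part from
# R2 shrinks linearly, while points already on the flat region R1 are untouched.
for beta_deg in (30, 60):
    print(beta_deg, reproject_pl(radius=5.0, z=2.0, psi=0.0, beta=math.radians(beta_deg)))
```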
 F6C shows the correspondence between the coordinates on the processing-target image plane R3 and the coordinates on the output image plane R5 of a virtual camera 2V adopting, as an example, normal projection (h = f·tanα). The image generating means 11 associates the two sets of coordinates so that each line segment connecting a coordinate on the output image plane R5 of the virtual camera 2V and the corresponding coordinate on the processing-target image plane R3 passes through the optical center CV of the virtual camera 2V.
 In the example of F6C, the image generating means 11 associates the coordinate N1 on the output image plane R5 of the virtual camera 2V with the coordinate M1 on the processing-target image plane R3 (in the plane region R1 of the space model MD), and the coordinate N2 on the output image plane R5 of the virtual camera 2V with the coordinate M2 on the processing-target image plane R3. At this time, the line segments M1-N1 and M2-N2 both pass through the optical center CV of the virtual camera 2V.
 When the virtual camera 2V adopts a projection scheme other than normal projection (for example, orthographic projection, stereographic projection, equisolid-angle projection, or equidistant projection), the image generating means 11 associates the coordinates N1 and N2 on the output image plane R5 of the virtual camera 2V with the coordinates M1 and M2 on the processing-target image plane R3 according to the respective projection scheme.
 Specifically, the image generating means 11 associates the coordinates on the output image plane R5 with the coordinates on the processing-target image plane R3 on the basis of a predetermined function (for example, orthographic projection (h = f·sinα), stereographic projection (h = 2f·tan(α/2)), equisolid-angle projection (h = 2f·sin(α/2)), or equidistant projection (h = f·α)). In this case, the line segments M1-N1 and M2-N2 do not pass through the optical center CV of the virtual camera 2V.
 In this way, the image generating means 11 associates the coordinates on the output image plane R5 with the coordinates on the processing-target image plane R3, and stores the coordinates on the output image plane R5 and the coordinates on the processing-target image plane R3 in the processing-target image/output image correspondence map 42 in association with each other. The image generating means 11 then generates the output image by relating the value of each pixel in the output image to the value of each pixel in the input image while referring to the input image/space model correspondence map 40 and the space model/processing-target image correspondence map 41.
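 The three correspondence maps thus form a lookup chain from each output pixel back to an input pixel. The sketch below shows that chaining under simplifying assumptions (maps held as Python dicts, single-valued correspondences); it is illustrative only and not the device's implementation.

```python
def generate_output(out_size, map_out_to_r3, map_r3_to_md, map_md_to_input, inputs):
    """Each map_* is a dict from one plane's coordinate to the next; `inputs`
    maps a camera identifier to its input image (here a dict pixel -> value)."""
    w, h = out_size
    out = {}
    for y in range(h):
        for x in range(w):
            r3 = map_out_to_r3.get((x, y))   # output plane R5 -> plane R3
            md = map_r3_to_md.get(r3)        # plane R3 -> space model MD
            hit = map_md_to_input.get(md)    # MD -> (camera id, (u, v))
            if hit is not None:
                cam, uv = hit
                out[(x, y)] = inputs[cam].get(uv, 0)
    return out

# Usage would pass the three precomputed maps; only the map contents, never
# this loop, need to change when the view or scaling parameters change.
```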
 F6D is a diagram combining F6A to F6C, and shows the mutual positional relationship among the camera 2, the virtual camera 2V, the plane region R1 and curved region R2 of the space model MD, and the processing-target image plane R3.
 Next, the effect of the parallel line group PL that the coordinate associating means 10 introduces in order to associate the coordinates on the space model MD with the coordinates on the processing-target image plane R3 will be described with reference to FIG. 7.
 The left part of FIG. 7 shows the case where an angle β is formed between the parallel line group PL lying on the XZ plane and the processing-target image plane R3, while the right part of FIG. 7 shows the case where an angle β1 (β1 > β) is formed between them. The coordinates La to Ld in the curved region R2 of the space model MD in the left and right parts of FIG. 7 correspond to the coordinates Ma to Md on the processing-target image plane R3, respectively, and the intervals between the coordinates La to Ld in the left part of FIG. 7 are equal to those in the right part. Although the parallel line group PL is placed on the XZ plane for the purpose of explanation, it actually exists so as to extend radially from every point on the Z axis toward the processing-target image plane R3. The Z axis in this case is referred to as the "reprojection axis".
 As shown in the left and right parts of FIG. 7, the intervals between the coordinates Ma to Md on the processing-target image plane R3 decrease linearly as the angle between the parallel line group PL and the processing-target image plane R3 increases; that is, they decrease uniformly regardless of the distance between the curved region R2 of the space model MD and each of the coordinates Ma to Md. In the example of FIG. 7, on the other hand, the coordinate group in the plane region R1 of the space model MD is not converted to a coordinate group on the processing-target image plane R3, so its intervals do not change.
 These changes in the intervals mean that, of the image portions on the output image plane R5 (see FIG. 6), only the image portion corresponding to the image projected onto the curved region R2 of the space model MD is enlarged or reduced linearly.
 Next, an alternative to the parallel line group PL that the coordinate associating means 10 introduces in order to associate the coordinates on the space model MD with the coordinates on the processing-target image plane R3 will be described with reference to FIG. 8.
 The left part of FIG. 8 shows the case where all the lines of an auxiliary line group AL lying on the XZ plane extend from a start point T1 on the Z axis toward the processing-target image plane R3, while the right part of FIG. 8 shows the case where they all extend from a start point T2 (T2 > T1) on the Z axis toward the processing-target image plane R3. The coordinates La to Ld in the curved region R2 of the space model MD in the left and right parts of FIG. 8 correspond to the coordinates Ma to Md on the processing-target image plane R3, respectively. In the example of the left part of FIG. 8, the coordinates Mc and Md are not shown because they fall outside the area of the processing-target image plane R3. The intervals between the coordinates La to Ld in the left part of FIG. 8 are equal to those in the right part. Although the auxiliary line group AL is placed on the XZ plane for the purpose of explanation, it actually exists so as to extend radially from an arbitrary single point on the Z axis toward the processing-target image plane R3. As in FIG. 7, the Z axis in this case is referred to as the "reprojection axis".
 As shown in the left and right parts of FIG. 8, the intervals between the coordinates Ma to Md on the processing-target image plane R3 decrease non-linearly as the distance (height) between the start point of the auxiliary line group AL and the origin O increases; that is, the greater the distance between the curved region R2 of the space model MD and each of the coordinates Ma to Md, the greater the reduction in the corresponding interval. In the example of FIG. 8, on the other hand, the coordinate group in the plane region R1 of the space model MD is not converted to a coordinate group on the processing-target image plane R3, so its intervals do not change.
 As with the parallel line group PL, these changes in the intervals mean that, of the image portions on the output image plane R5 (see FIG. 6), only the image portion corresponding to the image projected onto the curved region R2 of the space model MD is enlarged or reduced non-linearly.
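 For comparison with the parallel line group PL, the sketch below re-projects a point of R2 along an auxiliary line radiating from a start point on the reprojection axis. It is again a minimal reading of the figure, not code from the patent; raising the start point compresses the result non-linearly, most strongly for the highest points on R2.

```python
import math

def reproject_al(radius, z, psi, t_start):
    """Point on R2 at cylinder radius `radius`, height z (z < t_start), azimuth
    psi [rad]; returns its (x, y) on plane R3 (z = 0), following the line from
    the start point (0, 0, t_start) through the point on R2."""
    d = radius * t_start / (t_start - z)   # radial distance where the line hits z = 0
    return (d * math.cos(psi), d * math.sin(psi))

# A higher start point (T2 > T1) pulls the re-projected points inward, and the
# shrinkage grows with z, i.e. the scaling is non-linear over the image of R2.
for t in (6.0, 12.0):
    print(t, reproject_al(radius=5.0, z=2.0, psi=0.0, t_start=t))
```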
 In this way, the image generating device 100 can enlarge or reduce, linearly or non-linearly, the image portion of the output image corresponding to the image projected onto the curved region R2 of the space model MD (for example, the horizontal image) without affecting the image portion of the output image corresponding to the image projected onto the plane region R1 of the space model MD (for example, the road surface image). The image generating device 100 can therefore quickly and flexibly enlarge or reduce objects located around the shovel 60 (objects in the image seen when looking around horizontally from the shovel 60) without affecting the road surface image in the vicinity of the shovel 60 (a virtual image of the shovel 60 viewed from directly above), and can improve the visibility of the blind-spot areas of the shovel 60.
 Next, the process by which the image generating device 100 generates a processing-target image (hereinafter, the "processing-target image generation process") and the process of generating an output image using the generated processing-target image (hereinafter, the "output image generation process") will be described with reference to FIG. 9. FIG. 9 is a flowchart showing the flow of the processing-target image generation process (steps S1 to S3) and the output image generation process (steps S4 to S6). The arrangement of the camera 2 (the input image plane R4), the space model (the plane region R1 and the curved region R2), and the processing-target image plane R3 is determined in advance.
 First, the control unit 1 causes the coordinate associating means 10 to associate a coordinate on the processing-target image plane R3 with a coordinate on the space model MD (step S1).
 Specifically, the coordinate associating means 10 acquires the angle formed between the parallel line group PL and the processing-target image plane R3, and calculates the point at which the line of the parallel line group PL extending from one coordinate on the processing-target image plane R3 intersects the curved region R2 of the space model MD. It then derives the coordinate in the curved region R2 corresponding to the calculated point as the coordinate in the curved region R2 corresponding to that one coordinate on the processing-target image plane R3, and stores the correspondence in the space model/processing-target image correspondence map 41. The angle formed between the parallel line group PL and the processing-target image plane R3 may be a value stored in advance in the storage unit 4 or the like, or may be a value dynamically entered by the operator via the input unit 3.
 When a coordinate on the processing-target image plane R3 coincides with a coordinate in the plane region R1 of the space model MD, the coordinate associating means 10 derives that coordinate in the plane region R1 as the coordinate corresponding to that coordinate on the processing-target image plane R3, and stores the correspondence in the space model/processing-target image correspondence map 41.
 Thereafter, the control unit 1 causes the coordinate associating means 10 to associate the coordinate on the space model MD derived by the above process with a coordinate on the input image plane R4 (step S2).
 Specifically, the coordinate associating means 10 acquires the coordinates of the optical center C of a camera 2 adopting normal projection (h = f·tanα), and calculates the point at which the line segment extending from the coordinate on the space model MD and passing through the optical center C intersects the input image plane R4. It then derives the coordinate on the input image plane R4 corresponding to the calculated point as the coordinate on the input image plane R4 corresponding to that coordinate on the space model MD, and stores the correspondence in the input image/space model correspondence map 40.
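 For a normal-projection camera, step S2 thus reduces to a line-plane intersection. The following is a minimal sketch under the stated geometry, with hypothetical argument names, not code from the patent.

```python
def intersect_plane(p, c, plane_point, plane_normal):
    """Intersection of the line through the space-model point p and the optical
    center c with the plane given by (plane_point, plane_normal), representing
    the input image plane R4. Returns None if the line is parallel to the plane."""
    d = tuple(pc - cc for pc, cc in zip(p, c))               # line direction p - c
    denom = sum(dc * nc for dc, nc in zip(d, plane_normal))
    if abs(denom) < 1e-12:
        return None
    # Solve (c + t*d - plane_point) . n = 0 for t.
    t = sum((pp - cc) * nc for pp, cc, nc in zip(plane_point, c, plane_normal)) / denom
    return tuple(cc + t * dc for cc, dc in zip(c, d))

print(intersect_plane((2.0, 1.0, 0.0), (0.0, 0.0, 3.0), (0.0, 0.0, 2.5), (0.0, 0.0, 1.0)))
```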
 Thereafter, the control unit 1 determines whether all the coordinates on the processing-target image plane R3 have been associated with coordinates on the space model MD and coordinates on the input image plane R4 (step S3). If it determines that not all the coordinates have been associated yet (NO in step S3), it repeats the processing of steps S1 and S2.
 If, on the other hand, the control unit 1 determines that all the coordinates have been associated (YES in step S3), it ends the processing-target image generation process and starts the output image generation process. The control unit 1 then causes the image generating means 11 to associate a coordinate on the processing-target image plane R3 with a coordinate on the output image plane R5 (step S4).
 Specifically, the image generating means 11 generates the output image by applying scale conversion, affine conversion, or distortion conversion to the processing-target image, and stores in the processing-target image/output image correspondence map 42 the correspondence between the coordinates on the processing-target image plane R3 and the coordinates on the output image plane R5, which is determined by the content of the applied scale, affine, or distortion conversion.
 Alternatively, when generating the output image using the virtual camera 2V, the image generating means 11 may calculate the coordinates on the output image plane R5 from the coordinates on the processing-target image plane R3 according to the adopted projection scheme, and store the correspondence in the processing-target image/output image correspondence map 42.
 Alternatively, when generating the output image using a virtual camera 2V adopting normal projection (h = f·tanα), the image generating means 11 acquires the coordinates of the optical center CV of the virtual camera 2V, and calculates the point at which the line segment extending from a coordinate on the output image plane R5 and passing through the optical center CV intersects the processing-target image plane R3. It then derives the coordinate on the processing-target image plane R3 corresponding to the calculated point as the coordinate on the processing-target image plane R3 corresponding to that coordinate on the output image plane R5. In this way, the image generating means 11 may store the correspondence in the processing-target image/output image correspondence map 42.
 Thereafter, the image generating means 11 of the control unit 1 refers to the input image/space model correspondence map 40, the space model/processing-target image correspondence map 41, and the processing-target image/output image correspondence map 42, and traces the correspondences between the coordinates on the input image plane R4 and the coordinates on the space model MD, between the coordinates on the space model MD and the coordinates on the processing-target image plane R3, and between the coordinates on the processing-target image plane R3 and the coordinates on the output image plane R5. The image generating means 11 then acquires the values (for example, luminance, hue, and saturation values) held by the coordinates on the input image plane R4 corresponding to each coordinate on the output image plane R5, and adopts those values as the values of the corresponding coordinates on the output image plane R5 (step S5). When a plurality of coordinates on a plurality of input image planes R4 correspond to one coordinate on the output image plane R5, the image generating means 11 may derive a statistic based on the values of those coordinates and adopt that statistic as the value of the coordinate on the output image plane R5. The statistic is, for example, an average, a maximum, a minimum, or a median.
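 The statistic used in step S5 when several input coordinates feed one output coordinate could look like the sketch below; the function name and the set of modes are illustrative assumptions, not the patent's API.

```python
def combine(values, mode="mean"):
    """values: pixel values (e.g. luminance) of the candidate input-image
    coordinates that all map to the same output coordinate."""
    if mode == "mean":
        return sum(values) / len(values)
    if mode == "max":
        return max(values)
    if mode == "min":
        return min(values)
    if mode == "median":
        s = sorted(values)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    raise ValueError(mode)

print(combine([100, 140, 120], mode="median"))
```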
 Thereafter, the control unit 1 determines whether the values of all the coordinates on the output image plane R5 have been associated with the values of coordinates on the input image plane R4 (step S6). If it determines that not all the coordinate values have been associated yet (NO in step S6), it repeats the processing of steps S4 and S5.
 If, on the other hand, the control unit 1 determines that the values of all the coordinates have been associated (YES in step S6), it generates the output image and ends this series of processes.
 When the image generating device 100 does not generate a processing-target image, the processing-target image generation process is omitted. In this case, "coordinates on the processing-target image plane" in step S4 of the output image generation process is read as "coordinates on the space model".
 With the above configuration, the image generating device 100 can generate a processing-target image and an output image that allow the operator to grasp intuitively the positional relationship between the shovel 60 and the objects around it.
 Furthermore, the image generating device 100 performs the coordinate association by tracing back from the processing-target image plane R3, through the space model MD, to the input image plane R4. This allows it to reliably associate each coordinate on the processing-target image plane R3 with one or more coordinates on the input image plane R4, so it can generate a better-quality processing-target image more quickly than when the coordinate association is performed in the order from the input image plane R4 through the space model MD to the processing-target image plane R3. In the latter case, each coordinate on the input image plane R4 can certainly be associated with one or more coordinates on the processing-target image plane R3, but some of the coordinates on the processing-target image plane R3 may not be associated with any coordinate on the input image plane R4, in which case those coordinates must be subjected to interpolation processing or the like.
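 The advantage of tracing backward can be made concrete: backward mapping visits every output pixel exactly once, so no output pixel can be left without a source. The sketch below is illustrative only; representing the mapping as a callable and the image as a dict are assumptions for brevity.

```python
def backward_map(out_size, out_to_in, input_image):
    """out_to_in(x, y) -> input-plane coordinate; input_image: dict pixel -> value.
    Every output pixel gets exactly one value, so no interpolation pass is needed."""
    w, h = out_size
    return {(x, y): input_image.get(out_to_in(x, y), 0)
            for y in range(h) for x in range(w)}

# A forward pass (input -> output) can instead leave output pixels with no
# source, which would then require the interpolation mentioned in the text.
```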
 Also, when enlarging or reducing only the image corresponding to the curved region R2 of the space model MD, the image generating device 100 can realize the desired enlargement or reduction merely by changing the angle formed between the parallel line group PL and the processing-target image plane R3 and rewriting only the part of the space model/processing-target image correspondence map 41 relating to the curved region R2, without rewriting the content of the input image/space model correspondence map 40.
 Also, when changing the appearance of the output image, the image generating device 100 can generate the desired output image (a scale-converted, affine-converted, or distortion-converted image) merely by changing the values of the various parameters relating to scale conversion, affine conversion, or distortion conversion and rewriting the processing-target image/output image correspondence map 42, without rewriting the contents of the input image/space model correspondence map 40 or the space model/processing-target image correspondence map 41.
 Similarly, when changing the viewpoint of the output image, the image generating device 100 can generate an output image viewed from the desired viewpoint (a viewpoint-converted image) merely by changing the values of the various parameters of the virtual camera 2V and rewriting the processing-target image/output image correspondence map 42, without rewriting the contents of the input image/space model correspondence map 40 or the space model/processing-target image correspondence map 41.
 FIG. 10 is a display example when an output image generated using the input images of two cameras 2 (the right-side camera 2R and the rear camera 2B) mounted on the shovel 60 is displayed on the display unit 5.
 The image generating device 100 generates the processing-target image by projecting the input images of the two cameras 2 onto the plane region R1 and the curved region R2 of the space model MD and then re-projecting them onto the processing-target image plane R3, and generates the output image by applying image conversion processing (for example, scale conversion, affine conversion, distortion conversion, and viewpoint conversion processing) to the generated processing-target image. In this way, the image generating device 100 generates an output image that simultaneously displays an image looking down on the vicinity of the shovel 60 from above (the image in the plane region R1) and an image looking around horizontally from the shovel 60 (the image in the processing-target image plane R3). Such an output image is hereinafter referred to as the periphery-monitoring virtual viewpoint image.
 When the image generating device 100 does not generate a processing-target image, the periphery-monitoring virtual viewpoint image is generated by applying image conversion processing (for example, viewpoint conversion processing) to the image projected onto the space model MD.
 The periphery-monitoring virtual viewpoint image is also trimmed into a circle so that the image shown while the shovel 60 performs a swiveling operation can be displayed without unnaturalness, and it is generated so that the center CTR of the circle lies on the cylinder center axis of the space model MD and on the swivel axis PV of the shovel 60. The periphery-monitoring virtual viewpoint image is therefore displayed so as to rotate about the center CTR in response to the swiveling operation of the shovel 60. In this case, the cylinder center axis of the space model MD may or may not coincide with the reprojection axis.
 The radius of the space model MD is, for example, 5 meters. The angle formed between the parallel line group PL and the processing-target image plane R3, or the start-point height of the auxiliary line group AL, can be set so that, when an object (for example, a worker) is present at a position away from the swivel center of the shovel 60 by the maximum reach of the excavation attachment E (for example, 12 meters), the object is displayed sufficiently large on the display unit 5 (for example, 7 millimeters or larger).
 Furthermore, the periphery-monitoring virtual viewpoint image may have the CG image of the shovel 60 arranged so that the front of the shovel 60 coincides with the top of the screen of the display unit 5 and its swivel center coincides with the center CTR. This makes the positional relationship between the shovel 60 and the objects appearing in the output image easier to understand. A frame image containing various kinds of information, such as the compass bearing, may be arranged around the periphery-monitoring virtual viewpoint image.
 Next, the details of the periphery-monitoring virtual viewpoint image generated by the image generating device 100 will be described with reference to FIGS. 11 to 14.
 FIG. 11 is a top view of the shovel 60 on which the image generating device 100 is mounted. In the embodiment shown in FIG. 11, the shovel 60 includes three cameras 2 (the left-side camera 2L, the right-side camera 2R, and the rear camera 2B), three human detection sensors 6 (the left-side human detection sensor 6L, the right-side human detection sensor 6R, and the rear human detection sensor 6B), and three notification units 20 (the left-side notification unit 20L, the right-side notification unit 20R, and the rear notification unit 20B). The areas CL, CR, and CB indicated by alternate long and short dash lines in FIG. 11 show the imaging spaces of the left-side camera 2L, the right-side camera 2R, and the rear camera 2B, respectively, and the areas ZL, ZR, and ZB indicated by dotted lines show the monitoring spaces of the left-side human detection sensor 6L, the right-side human detection sensor 6R, and the rear human detection sensor 6B, respectively. The shovel 60 also includes, in the cab 64, the display unit 5, three alarm output units 7 (the left-side alarm output unit 7L, the right-side alarm output unit 7R, and the rear alarm output unit 7B), a gate lock lever 8, and an ignition switch 9.
 In the present embodiment, the monitoring space of each human detection sensor 6 is narrower than the imaging space of the camera 2, but the monitoring space of the human detection sensor 6 may be the same as the imaging space of the camera 2 or wider than it. The monitoring space of the human detection sensor 6 is located near the shovel 60 within the imaging space of the camera 2, but it may lie in an area farther from the shovel 60. The monitoring spaces of the human detection sensors 6 also overlap where the imaging spaces of the cameras 2 overlap: for example, in the overlap of the imaging space CR of the right-side camera 2R and the imaging space CB of the rear camera 2B, the monitoring space ZR of the right-side human detection sensor 6R overlaps the monitoring space ZB of the rear human detection sensor 6B. The monitoring spaces of the human detection sensors 6 may, however, be arranged so that no overlap occurs.
 FIG. 12 shows the input images of the three cameras 2 mounted on the shovel 60 and an output image generated using those input images.
 The image generating device 100 generates the processing-target image by projecting the input images of the three cameras 2 onto the plane region R1 and the curved region R2 of the space model MD and then re-projecting them onto the processing-target image plane R3, and generates the output image by applying image conversion processing (for example, scale conversion, affine conversion, distortion conversion, and viewpoint conversion processing) to the generated processing-target image. As a result, the image generating device 100 generates a periphery-monitoring virtual viewpoint image that simultaneously displays an image looking down on the vicinity of the shovel 60 from above (the image in the plane region R1) and an image looking around horizontally from the shovel 60 (the image in the processing-target image plane R3). The image displayed at the center of the periphery-monitoring virtual viewpoint image is the CG image 60CG of the shovel 60.
 In FIG. 12, the input image of the right-side camera 2R and the input image of the rear camera 2B each capture a person within the overlap of the imaging space of the right-side camera 2R and the imaging space of the rear camera 2B (see the region R10 enclosed by a two-dot chain line in the input image of the right-side camera 2R and the region R11 enclosed by a two-dot chain line in the input image of the rear camera 2B).
 If, however, each coordinate on the output image plane were associated with the coordinate on the input image plane of the camera with the smallest incident angle, the output image would make the person in the overlap disappear (see the region R12 enclosed by an alternate long and short dash line in the output image).
 The image generating device 100 therefore mixes, in the output image portion corresponding to the overlap, regions associated with coordinates on the input image plane of the rear camera 2B and regions associated with coordinates on the input image plane of the right-side camera 2R, thereby preventing objects within the overlap from disappearing.
 FIG. 13 is a diagram for explaining the stripe pattern process, an example of an image loss prevention process that prevents objects from disappearing in the overlap of the imaging spaces of two cameras.
 F13A shows the output image portion corresponding to the overlap of the imaging space of the right-side camera 2R and the imaging space of the rear camera 2B, and corresponds to the rectangular region R13 indicated by a dotted line in FIG. 12.
 In F13A, the region PR1 filled in gray is the image region where the input image portion of the rear camera 2B is arranged, and each coordinate on the output image plane corresponding to the region PR1 is associated with a coordinate on the input image plane of the rear camera 2B.
 The region PR2 filled in white, on the other hand, is the image region where the input image portion of the right-side camera 2R is arranged, and each coordinate on the output image plane corresponding to the region PR2 is associated with a coordinate on the input image plane of the right-side camera 2R.
 In the present embodiment, the regions PR1 and PR2 are arranged so as to form a stripe pattern, and the boundary lines of the alternating stripes of PR1 and PR2 are defined by concentric circles, on a horizontal plane, centered on the swivel center of the shovel 60.
 F13B is a top view showing the situation of the space region diagonally right behind the shovel 60, that is, the current situation of the space region imaged by both the rear camera 2B and the right-side camera 2R. F13B also shows that a rod-shaped solid object OB is present diagonally right behind the shovel 60.
 F13C shows a portion of the output image generated from the input images obtained by actually imaging, with the rear camera 2B and the right-side camera 2R, the space region indicated by F13B.
 Specifically, the image OB1 represents the image of the solid object OB in the input image of the rear camera 2B as stretched, by the viewpoint conversion for generating the road surface image, in the direction of extension of the line connecting the rear camera 2B and the solid object OB. That is, the image OB1 is part of the image of the solid object OB that would be displayed if the road surface image in this output image portion were generated using the input image of the rear camera 2B.
 Likewise, the image OB2 represents the image of the solid object OB in the input image of the right-side camera 2R as stretched, by the viewpoint conversion for generating the road surface image, in the direction of extension of the line connecting the right-side camera 2R and the solid object OB. That is, the image OB2 is part of the image of the solid object OB that would be displayed if the road surface image in this output image portion were generated using the input image of the right-side camera 2R.
 In this way, the image generating device 100 mixes, within the overlap, the region PR1 associated with coordinates on the input image plane of the rear camera 2B and the region PR2 associated with coordinates on the input image plane of the right-side camera 2R. As a result, the image generating device 100 displays both of the two images OB1 and OB2 of the single solid object OB on the output image, and prevents the solid object OB from disappearing from the output image.
 FIG. 14 is a comparison showing the difference between the output image of FIG. 12 and the output image obtained by applying the image loss prevention process (the stripe pattern process) to the output image of FIG. 12; the upper part of FIG. 14 shows the output image of FIG. 12, and the lower part of FIG. 14 shows the output image after the image loss prevention process (the stripe pattern process) has been applied. Whereas the person has disappeared in the region R12 enclosed by an alternate long and short dash line in the upper part of FIG. 14, the person is displayed without disappearing in the region R14 enclosed by an alternate long and short dash line in the lower part of FIG. 14.
 Instead of the stripe pattern process, the image generating device 100 may apply a mesh pattern process, an averaging process, or the like to prevent the disappearance of objects within the overlap. Specifically, with the averaging process, the image generating device 100 adopts the average of the values (for example, luminance values) of the corresponding pixels in the input images of the two cameras as the pixel value of the output image portion corresponding to the overlap. Alternatively, with the mesh pattern process, the image generating device 100 arranges, in the output image portion corresponding to the overlap, the regions associated with pixel values from one camera's input image and the regions associated with pixel values from the other camera's input image so as to form a mesh pattern. In this way, the image generating device 100 prevents objects within the overlap from disappearing.
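 A sketch of the three overlap-resolution variants named above follows; the band width, the parity rule, and the function signature are assumptions made for illustration, not the patent's implementation. In the stripe variant, the bands correspond to the concentric circles around the swivel center described for F13A.

```python
import math

def blend(x, y, value_b, value_r, mode="stripe", band=0.5):
    """Pick (or mix) the rear-camera value value_b and the right-camera value
    value_r for the output pixel at (x, y), measured from the swivel center."""
    if mode == "average":
        return (value_b + value_r) / 2
    if mode == "stripe":                  # alternate concentric distance rings
        ring = int(math.hypot(x, y) / band)
        return value_b if ring % 2 == 0 else value_r
    if mode == "mesh":                    # checkerboard-like cells
        cell = int(x / band) + int(y / band)
        return value_b if cell % 2 == 0 else value_r
    raise ValueError(mode)

print(blend(3.2, 1.1, value_b=90, value_r=130, mode="stripe"))
```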
 Next, the process by which the image generating means 11 switches the content of the output image on the basis of the determination result of the human presence determining means 12 (hereinafter, the "first output image switching process") will be described with reference to FIGS. 15 to 17. FIG. 15 shows the input images of the three cameras 2 mounted on the shovel 60 and an output image generated using those input images, and corresponds to FIG. 12. FIG. 16 shows an example of the relationship between the space model MD used in the first output image switching process and the processing-target image plane R3, and corresponds to FIG. 4. FIG. 17 illustrates the relationship between the two output images switched in the first output image switching process.
 As shown in FIG. 15, the image generation device 100 projects the input images of the three cameras 2 onto the plane region R1 and the curved region R2 of the space model MD, re-projects them onto the processing-target image plane R3, and thereby generates a processing-target image. The image generation device 100 then generates an output image by applying image conversion processing (for example, scale conversion, affine conversion, distortion conversion, or viewpoint conversion processing) to the generated processing-target image. As a result, the image generation device 100 generates a virtual viewpoint image for periphery monitoring that simultaneously shows an image of the vicinity of the shovel 60 as seen from above and an image of the surroundings as seen horizontally from the shovel 60.
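 The coordinate association underlying this pipeline can be illustrated with a basic pinhole projection of a point on the model plane onto one camera's input image plane. This is only a simplified sketch under assumed intrinsics and extrinsics; the actual device also maps the curved region R2 and handles lens distortion, which are omitted here, and all names are illustrative.

```python
import numpy as np

def plane_point_to_pixel(point_world, R, t, K):
    """Project a 3-D point on the space-model plane into one camera's
    input image plane with a basic pinhole model.

    point_world: (3,) point, e.g. (x, y, plane_height) on region R1.
    R, t: camera extrinsics (world -> camera); K: 3x3 intrinsics.
    A simplified stand-in for the patent's coordinate-association maps.
    """
    p_cam = R @ point_world + t        # world -> camera coordinates
    if p_cam[2] <= 0:                  # behind the camera: no association
        return None
    uvw = K @ p_cam                    # camera -> homogeneous pixel
    return uvw[:2] / uvw[2]            # (u, v) on the input image plane
```

 One such map would exist per camera; generating the output image amounts to evaluating the appropriate map for every output pixel, going through either the plane region R1 or the curved region R2.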
 In FIG. 15, the input images of the left side camera 2L, the rear camera 2B, and the right side camera 2R each show three workers, and the output image shows nine workers present around the shovel 60.
 Note that the height of the plane region R1 of the space model MD is set to a height corresponding to the road surface, which is the ground contact surface of the shovel 60. Therefore, in the image within the plane region R1 of FIG. 15, the distances between the ground contact positions of the nine workers and the CG image 60CG of the shovel 60 accurately represent the actual distances between each of the nine workers and the shovel 60. However, each worker's image is displayed larger the farther it extends from the ground contact position (the feet). In particular, as shown in FIG. 15, the workers' heads are displayed conspicuously large, even compared with the size of the CG image 60CG of the shovel 60. Consequently, an operator of the shovel 60 viewing the output image of FIG. 15 may be misled into perceiving the distance between the shovel 60 and a worker as larger than it actually is.
 Therefore, when the person presence/absence determination means 12 determines that a person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, or the right side monitoring space ZR, the image generation means 11 switches the content of the output image. In the present embodiment, the image generation means 11 switches the height of the plane region R1 of the space model MD from the height corresponding to the road surface to a height corresponding to a person's head (hereinafter, the "head height"). The head height is a preset value, for example 150 cm. However, the head height may be a dynamically determined value. For example, when the image generation device 100 can detect the height (stature) of workers present around the shovel 60 based on the output of the person detection sensors 6 or the like, it may determine the head height based on the detected stature. Specifically, the head height may be determined according to the stature of the worker closest to the shovel 60, or according to a statistic (maximum, minimum, average, etc.) of the statures of the workers present around the shovel 60.
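 As a sketch of the dynamic determination just described, the head height could be chosen from the detected worker statures as follows; the function name, the (distance, height) input format, and the policy names are assumptions for illustration.

```python
def head_height_cm(detected_workers, policy="nearest", default_cm=150.0):
    """Pick the head height used for the plane region.

    detected_workers: list of (distance_to_shovel_m, height_cm) pairs
    from the person detection sensors; an empty list falls back to the
    preset default (150 cm). 'nearest' uses the closest worker's height;
    'max'/'min'/'mean' use the statistics mentioned in the text.
    """
    if not detected_workers:
        return default_cm
    heights = [h for _, h in detected_workers]
    if policy == "nearest":
        return min(detected_workers)[1]   # pair with the smallest distance
    if policy == "max":
        return max(heights)
    if policy == "min":
        return min(heights)
    if policy == "mean":
        return sum(heights) / len(heights)
    raise ValueError(policy)
```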
 Here, with reference to FIGS. 16 and 17, the relationship between the space model MD of FIG. 4, which has the plane region R1 at a height corresponding to the road surface, and the head height reference space model MDM, which has the plane region R1M at the head height, will be described. The upper part of FIG. 16 shows the relationship between the space model MD and the processing-target image plane R3 including the plane region R1, and the lower part of FIG. 16 shows the relationship between the head height reference space model MDM and the head height reference processing-target image plane R3M including the head height reference plane region R1M. In FIG. 17, the output image D1 is a road-surface-height-based virtual viewpoint image for periphery monitoring generated using the space model MD, and the output image D2 is a head-height-based virtual viewpoint image for periphery monitoring generated using the space model MDM. The image D3 is an explanatory image representing the difference in size between the road-surface-height-based and head-height-based virtual viewpoint images for periphery monitoring. The size of the image portion D10 in the road-surface-height-based virtual viewpoint image corresponds to the size of the head-height-based virtual viewpoint image.
 As shown in FIG. 16, the head height reference plane region R1M and the head height reference processing-target image plane R3M are higher, by the head height HT, than the plane region R1 and the processing-target image plane R3, which correspond to the height of the road surface.
 As a result, as shown in FIG. 17, the output image D2, the head-height-based virtual viewpoint image for periphery monitoring, displays the distance between the shovel 60 and the body part (head) of a worker located at the head height HT so that it represents the actual distance between the worker and the shovel 60. The image generation device 100 can thereby prevent an operator of the shovel 60 viewing the output image from perceiving the distance between the shovel 60 and the worker as larger than the actual distance.
 As shown in FIG. 16, the region D10M on the space model MDM corresponding to the image portion D10 is included in the head height reference plane region R1M. That is, the head-height-based virtual viewpoint image for periphery monitoring does not use the input image portions that would be re-projected onto the annular portion of the head height reference processing-target image plane R3M (the portion other than the head height reference plane region R1M). Therefore, in the present embodiment, the image generation device 100 omits the association of coordinates for regions other than the region D10M.
 With the above configuration, the image generation device 100 switches the content of the output image based on the determination result of the person presence/absence determination means 12. Specifically, when it determines that a person is present around the shovel 60, the image generation device 100 switches the road-surface-height-based virtual viewpoint image for periphery monitoring to the head-height-based one. As a result, when a worker is detected around the shovel 60, the image generation device 100 can convey the distance between the shovel 60 and the worker to the operator of the shovel 60 more accurately. When the image generation device 100 subsequently determines that no person is present around the shovel 60, it switches the head-height-based virtual viewpoint image for periphery monitoring back to the road-surface-height-based one, so that the operator of the shovel 60 can monitor a wider area around the shovel 60.
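 Reduced to its decision, the first output image switching process amounts to selecting the plane height of the space model from the three presence determinations; a hedged sketch under the assumption that the road surface corresponds to a plane height of 0 cm, with all names illustrative:

```python
def select_plane_height(person_in_zl, person_in_zb, person_in_zr,
                        head_height_cm=150.0):
    """First output image switching process, reduced to its decision.

    Returns the plane-region height for the space model: the head height
    (model MDM) while any of the monitored spaces ZL/ZB/ZR contains a
    person, the road-surface height (model MD) otherwise, restoring the
    wider road-level monitoring view.
    """
    if person_in_zl or person_in_zb or person_in_zr:
        return head_height_cm      # head height reference model MDM
    return 0.0                     # road-surface reference model MD
```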
 Next, with reference to FIG. 18, another process by which the image generation means 11 switches the content of the output image based on the determination result of the person presence/absence determination means 12 (hereinafter, the "second output image switching process") will be described. FIG. 18 illustrates the relationship between the three output images switched in the second output image switching process.
 The image generation device 100 executes image loss prevention processing, such as stripe pattern processing, mesh pattern processing, or averaging processing, to prevent objects from disappearing in the overlapping portion of the imaging spaces of two cameras. Specifically, through the image loss prevention processing, the image generation device 100 prevents the disappearance of objects in the overlapping portion by intermixing, in the output image portion corresponding to the overlapping portion, regions associated with coordinates on the input image planes of each of the two cameras. However, the image loss prevention processing has the problem of reducing the visibility of objects in the overlapping portion.
 Therefore, the image generation means 11 switches the content of the output image according to the determination result of the person presence/absence determination means 12. In the present embodiment, the image generation means 11 treats a camera corresponding to a monitoring space determined to contain a person as a priority camera, and a camera corresponding to a monitoring space determined to contain no person as a non-priority camera. The imaging space of a priority camera and that of a non-priority camera have an overlapping portion. When the image generation means 11 cannot determine a priority camera and a non-priority camera, it generates a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied. On the other hand, when it can determine a priority camera and a non-priority camera, the image generation means 11 generates the virtual viewpoint image for periphery monitoring by associating coordinates on the input image plane of the priority camera with the output image portion corresponding to the overlapping portion, without applying the image loss prevention processing.
 Specifically, when it is determined that a person is present in all of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR, the image generation means 11 generates a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied. This is because no camera can then be a priority camera and no camera can be a non-priority camera: if any camera were made the priority camera, a person within an overlapping portion might be made to disappear.
 The image generation means 11 also generates a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied when it is determined that no person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR, because there is then no target to be displayed that would justify making any camera the priority camera. In this case, however, the image generation means 11 may instead generate the virtual viewpoint image for periphery monitoring by associating coordinates on the input image plane of either one of the cameras with the output image portion corresponding to the overlapping portion, without applying the image loss prevention processing. This is because there is no object at risk of disappearing in the first place, and omitting the image loss prevention processing reduces the processing load on the control unit 1.
 The output image D11 of FIG. 18 shows the virtual viewpoint image for periphery monitoring, with the image loss prevention processing applied, that is generated when it is determined that a person is present in all of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR. In this case, the image generation means 11 intermixes, in the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, regions associated with coordinates on the input image plane of the left side camera 2L and regions associated with coordinates on the input image plane of the rear camera 2B. Likewise, in the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB, it intermixes regions associated with coordinates on the input image plane of the right side camera 2R and regions associated with coordinates on the input image plane of the rear camera 2B. This prevents workers from disappearing in the output image portions R15 and R16.
 On the other hand, when it is determined that a person is present in the left side monitoring space ZL and that no person is present in the rear monitoring space ZB, the image generation means 11 makes the left side camera 2L the priority camera and the rear camera 2B the non-priority camera. That is, the image generation means 11 associates coordinates on the input image plane of the left side camera 2L with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB.
 Similarly, when it is determined that a person is present in the right side monitoring space ZR and that no person is present in the rear monitoring space ZB, the image generation means 11 makes the right side camera 2R the priority camera and the rear camera 2B the non-priority camera. That is, the image generation means 11 associates coordinates on the input image plane of the right side camera 2R with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB.
 The output image D12 of FIG. 18 shows the side-camera-priority virtual viewpoint image for periphery monitoring that is generated when it is determined that persons are present in the left side monitoring space ZL and the right side monitoring space ZR and that no person is present in the rear monitoring space ZB. In this case, the image generation means 11 makes the left side camera 2L and the right side camera 2R the priority cameras and the rear camera 2B the non-priority camera. It then associates coordinates on the input image plane of the left side camera 2L with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, and coordinates on the input image plane of the right side camera 2R with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB. As a result, the image generation means 11 associates coordinates on the input image plane of the left side camera 2L with the output image portion R17P, coordinates on the input image plane of the right side camera 2R with the output image portion R18P, and coordinates on the input image plane of the rear camera 2B with the output image portion R19N.
 Further, when it is determined that no person is present in the left side monitoring space ZL and that a person is present in the rear monitoring space ZB, the image generation means 11 makes the left side camera 2L the non-priority camera and the rear camera 2B the priority camera. That is, the image generation means 11 associates coordinates on the input image plane of the rear camera 2B with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB.
 Similarly, when it is determined that no person is present in the right side monitoring space ZR and that a person is present in the rear monitoring space ZB, the image generation means 11 makes the right side camera 2R the non-priority camera and the rear camera 2B the priority camera. That is, the image generation means 11 associates coordinates on the input image plane of the rear camera 2B with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB.
 The output image D13 of FIG. 18 shows the rear-camera-priority virtual viewpoint image for periphery monitoring that is generated when it is determined that no person is present in the left side monitoring space ZL or the right side monitoring space ZR and that a person is present in the rear monitoring space ZB. In this case, the image generation means 11 makes the left side camera 2L and the right side camera 2R the non-priority cameras and the rear camera 2B the priority camera. It then associates coordinates on the input image plane of the rear camera 2B with the output image portion R15 corresponding to the overlapping portion of the left side imaging space CL and the rear imaging space CB, and likewise with the output image portion R16 corresponding to the overlapping portion of the right side imaging space CR and the rear imaging space CB. As a result, the image generation means 11 associates coordinates on the input image plane of the left side camera 2L with the output image portion R17N, coordinates on the input image plane of the right side camera 2R with the output image portion R18N, and coordinates on the input image plane of the rear camera 2B with the output image portion R19P.
 As shown in FIG. 18, the output image portion R17P corresponds to the output image portion R17N plus the output image portion R15, and the output image portion R18P corresponds to the output image portion R18N plus the output image portion R16. The output image portion R19P corresponds to the output image portion R19N plus the output image portions R15 and R16.
 FIG. 19 is a correspondence table showing the relationship between the determination results of the person presence/absence determination means 12 and the content of the output image. A circle (○) indicates that the person presence/absence determination means 12 determined that a person is present, and a cross (×) indicates that it determined that no person is present.
 Pattern A indicates that, when it is determined that a person is present only in the left side monitoring space ZL and that no person is present in the rear monitoring space ZB or the right side monitoring space ZR, a side-camera-priority virtual viewpoint image for periphery monitoring such as the output image D12 of FIG. 18 is generated.
 Pattern B indicates that, when it is determined that a person is present only in the right side monitoring space ZR and that no person is present in the left side monitoring space ZL or the rear monitoring space ZB, a side-camera-priority virtual viewpoint image for periphery monitoring such as the output image D12 of FIG. 18 is generated.
 Pattern C indicates that, when it is determined that persons are present in the left side monitoring space ZL and the right side monitoring space ZR and that no person is present in the rear monitoring space ZB, a side-camera-priority virtual viewpoint image for periphery monitoring such as the output image D12 of FIG. 18 is generated.
 Pattern D indicates that, when it is determined that a person is present in the rear monitoring space ZB and that no person is present in the left side monitoring space ZL or the right side monitoring space ZR, a rear-camera-priority virtual viewpoint image for periphery monitoring such as the output image D13 of FIG. 18 is generated.
 Patterns E to G indicate that, when it is determined that persons are present both in at least one of the left side monitoring space ZL and the right side monitoring space ZR and in the rear monitoring space ZB, a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied, such as the output image D11 of FIG. 18, is generated.
 Pattern H indicates that, when it is determined that no person is present in any of the left side monitoring space ZL, the rear monitoring space ZB, and the right side monitoring space ZR, a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied, such as the output image D11 of FIG. 18, is generated.
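 Patterns A to H condense into a small decision function. The following sketch mirrors the table of FIG. 19; the mode names are chosen here for illustration only.

```python
def second_switching_mode(person_zl, person_zb, person_zr):
    """Second output image switching process as a decision table (FIG. 19).

    Returns which rendering to use for the virtual viewpoint image:
    'side_priority'   - side cameras 2L/2R win the overlaps (patterns A-C),
    'rear_priority'   - rear camera 2B wins the overlaps (pattern D),
    'loss_prevention' - stripe/mesh/average mixing (patterns E-H).
    """
    side = person_zl or person_zr
    if side and not person_zb:
        return "side_priority"       # patterns A, B, C
    if person_zb and not side:
        return "rear_priority"       # pattern D
    return "loss_prevention"         # patterns E, F, G (both) and H (none)
```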
 With the above configuration, the image generation device 100 switches the content of the output image based on the determination result of the person presence/absence determination means 12. Specifically, when it cannot determine a priority camera and a non-priority camera, the image generation device 100 generates a virtual viewpoint image for periphery monitoring with the image loss prevention processing applied. On the other hand, when it can determine a priority camera and a non-priority camera, the image generation means 11 generates the virtual viewpoint image for periphery monitoring by associating coordinates on the input image plane of the priority camera with the output image portion corresponding to the overlapping portion, without applying the image loss prevention processing. As a result, the image generation device 100 can display workers present in the overlapping portion of the imaging spaces of two cameras more clearly.
 In the embodiment described above, the image generation device 100 generates a side-camera-priority virtual viewpoint image for periphery monitoring in which the left side camera 2L and the right side camera 2R are the priority cameras and the rear camera 2B is the non-priority camera, or a rear-camera-priority virtual viewpoint image for periphery monitoring in which the left side camera 2L and the right side camera 2R are the non-priority cameras and the rear camera 2B is the priority camera. However, the present invention is not limited to this configuration. For example, the image generation device 100 may associate coordinates on the input image plane of either the left side camera 2L or the rear camera 2B with the output image portion R15 while applying the image loss prevention processing to the output image portion R16. Likewise, it may associate coordinates on the input image plane of either the right side camera 2R or the rear camera 2B with the output image portion R16 while applying the image loss prevention processing to the output image portion R15.
 The image generation device 100 may also combine the first output image switching process and the second output image switching process.
 Further, in the embodiment described above, the image generation device 100 associates the monitoring space of one person detection sensor with the imaging space of one camera; however, the monitoring space of one person detection sensor may be associated with the imaging spaces of a plurality of cameras, and the monitoring spaces of a plurality of person detection sensors may be associated with the imaging space of one camera.
 Further, in the embodiment described above, the image generation device 100 switches the content of the output image at the moment the determination result of the person presence/absence determination means 12 changes. However, the present invention is not limited to this configuration. For example, the image generation device 100 may set a predetermined delay time between a change in the determination result of the person presence/absence determination means 12 and the switching of the output image content, in order to suppress frequent switching of the content of the output image.
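 One plausible way to realize such a delay is to debounce the determination result, as sketched below; the class name and the delay value are assumptions.

```python
import time

class DebouncedSwitch:
    """Delays acting on a changed person-presence result so the output
    image is not switched on every momentary flicker of the determination.
    """
    def __init__(self, delay_s=1.0):
        self.delay_s = delay_s
        self.committed = False        # state the output image currently uses
        self.pending = False
        self.pending_since = None

    def update(self, person_present, now=None):
        """Feed one determination result; returns the state to display."""
        now = time.monotonic() if now is None else now
        if person_present == self.committed:
            self.pending_since = None          # change cancelled: reset timer
        else:
            if self.pending_since is None or person_present != self.pending:
                self.pending, self.pending_since = person_present, now
            if now - self.pending_since >= self.delay_s:
                self.committed = person_present    # change held long enough
                self.pending_since = None
        return self.committed
```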
 Next, with reference to FIGS. 20 and 21, the process by which the alarm control means 13 controls the alarm output unit 7 based on the determination result of the person presence/absence determination means 12 (hereinafter, the "first alarm control process") will be described. FIG. 20 is a flowchart showing the flow of the first alarm control process, and FIG. 21 shows an example of the transition of the output image displayed during the first alarm control process. The alarm control means 13 executes this first alarm control process repeatedly at a predetermined cycle.
 First, the person presence/absence determination means 12 determines whether a person is present around the shovel 60 (step S11). At this point, the image generation means 11 generates and displays, for example, a road-surface-height-based virtual viewpoint image for periphery monitoring such as the output image D21 of FIG. 21.
 When it determines that a person is present around the shovel 60 (YES in step S11), the person presence/absence determination means 12 outputs a detection signal to the alarm control means 13. Upon receiving the detection signal, the alarm control means 13 outputs an alarm start signal to the alarm output unit 7, causing the alarm output unit 7 to output an alarm (step S12).
 In the present embodiment, the alarm output unit 7 outputs an alarm sound. The image generation means 11 then switches the road-surface-height-based virtual viewpoint image for periphery monitoring to the head-height-based one, as shown, for example, in the output image D22 of FIG. 21. Specifically, when the person presence/absence determination means 12 determines that the worker P1 is present in the rear monitoring space ZB, it notifies the alarm control means 13 with a detection signal. The alarm control means 13 then outputs an alarm start signal to the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R, causing all three alarm output units to output an alarm sound. The image generation means 11 also superimposes an alarm stop button G1 on the head-height-based virtual viewpoint image for periphery monitoring. The alarm stop button G1 is a software button implemented in cooperation with a touch panel serving as the input unit 3; the operator can stop the alarm sound by pressing it. The alarm stop button G1 may instead be a hardware button installed near the display unit 5, in which case the image generation means 11 may superimpose on the virtual viewpoint image for periphery monitoring a text message indicating that the alarm can be stopped by pressing that hardware button. Even when it is determined that a person is present around the shovel 60, the image generation means 11 may keep the road-surface-height-based virtual viewpoint image for periphery monitoring and superimpose the alarm stop button G1 on it, as shown in the output image D23 of FIG. 21.
 On the other hand, when it determines that no person is present around the shovel 60 (NO in step S11), the person presence/absence determination means 12 does not output a detection signal to the alarm control means 13, so the alarm control means 13 does not output an alarm start signal to the alarm output unit 7. Moreover, even when an alarm is already being output, the alarm control means 13 does not output an alarm stop signal to the alarm output unit 7.
 Thereafter, the alarm control means 13 determines whether the alarm stop button G1 has been pressed (step S13). When it determines that the alarm stop button G1 has been pressed (YES in step S13), the alarm control means 13 outputs an alarm stop signal to the alarm output unit 7 and stops the output of the alarm from the alarm output unit 7 (step S14). When the head-height-based virtual viewpoint image for periphery monitoring is being displayed, the image generation means 11 switches it back to the road-surface-height-based one.
 On the other hand, when it determines that the alarm stop button G1 has not been pressed (NO in step S13), the alarm control means 13 does not output an alarm stop signal to the alarm output unit 7. That is, an alarm that the alarm control means 13 has caused to be output does not stop until the alarm stop button G1 is pressed, even if the person presence/absence determination means 12 subsequently determines that no person is present around the shovel 60. Likewise, when the head-height-based virtual viewpoint image for periphery monitoring is being displayed, the image generation means 11 continues displaying it.
 Note that, even before the alarm stop button G1 is pressed, the alarm control means 13 may change the content of the alarm when the person presence/absence determination means 12 determines that no person is present around the shovel 60. Specifically, the alarm control means 13 may change the volume, pitch, output interval, or the like of the alarm sound already being output, or the intensity, color, interval, or the like of an alarm lamp already emitting light. This allows the operator receiving the alarm to distinguish between the case where the person presence/absence determination means 12 determines that a person is present and the case where it determines that no person is present.
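 The control flow of steps S11 to S14 can be summarized as one periodic step in which the alarm latches on detection and is released only by the stop button; a sketch, with all names assumed:

```python
def first_alarm_control_step(person_present, stop_button_pressed,
                             alarm_active):
    """One cycle of the first alarm control process (FIG. 20).

    The alarm starts when a person is determined to be present
    (S11/S12) and is latched: only the stop button releases it
    (S13/S14), never the mere disappearance of the detection.
    Signal wiring and display switching are omitted.
    """
    if person_present:                          # S11 YES
        alarm_active = True                     # S12: alarm start signal
    if alarm_active and stop_button_pressed:    # S13 YES
        alarm_active = False                    # S14: alarm stop signal
    return alarm_active
```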
 With the above configuration, the image generation device 100 outputs an alarm when it determines that a person is present in a monitoring space, and does not stop that alarm until the operator indicates the intention to stop it. That is, the image generation device 100 does not stop the alarm even when a person who was in the monitoring space has left it, or when it has failed to keep detecting a person who is still in the monitoring space. The image generation device 100 can therefore prompt the operator to confirm that a worker who was in the monitoring space around the shovel 60 has actually left it.
 In a configuration in which the person presence/absence determination means 12 determines the presence or absence of a worker (moving object) based on a moving object detection sensor, the image generation device 100 can also prevent the alarm from being stopped merely because the worker has stopped moving within the monitoring space. The same applies to a configuration in which the person presence/absence determination means 12 determines the presence or absence of a worker (moving object) using optical flow. A moving object detection sensor and optical flow do not treat stationary objects as detection targets, and therefore cannot distinguish between a worker standing still within the monitoring space and a worker having left it. Consequently, a configuration that stops the alarm as soon as it determines that no person is present might stop the alarm even though a worker is still in the monitoring space. In contrast, the image generation device 100 can alert the operator of the shovel 60 by not stopping the alarm while a worker may still be in the monitoring space.
 Next, with reference to FIGS. 22 to 25, the process by which the alarm control means 13 controls the alarm output unit 7 based on the determination result of the person presence/absence determination means 12 and the determination result of the work machine state determination means 14 (hereinafter, the "second alarm control process") will be described. FIG. 22 is a flowchart showing the flow of the second alarm control process, and FIG. 23 shows an example of the transition of the output image displayed during the second alarm control process. FIG. 24 is a flowchart showing the flow of the process by which the work machine state determination means 14 determines the state of the work machine (hereinafter, the "work machine state determination process"), and FIG. 25 is a flowchart showing the flow of the process by which the alarm control means 13 controls the alarm output unit 7 when the machine switches from the work-disabled state to the work-enabled state (hereinafter, the "work start alarm control process"). The alarm control means 13 executes the second alarm control process repeatedly at a predetermined cycle, and the work machine state determination means 14 executes the work machine state determination process repeatedly at a predetermined cycle. The alarm control means 13 executes the work start alarm control process at the time of switching from the work-disabled state to the work-enabled state.
 First, the person presence/absence determination means 12 determines whether a person is present around the shovel 60 (step S21). At this point, if the display unit 5 is running, the image generation means 11 generates and displays, for example, a road-surface-height-based virtual viewpoint image for periphery monitoring such as the output image D31 of FIG. 23.
 When it determines that a person is present around the shovel 60 (YES in step S21), the person presence/absence determination means 12 refers to the determination result of the work machine state determination means 14 (see FIG. 24, described later) (step S22). In this case, the person presence/absence determination means 12 may fix the gate lock lever 8 in the locked state. Specifically, it may prevent the operator from pulling up the gate lock lever 8, or may keep the lever from entering the unlocked state even if it is pulled up.
 When the work machine state determination means 14 determines that the shovel 60 is in the work-enabled state (YES in step S22), the person presence/absence determination means 12 outputs a detection signal to the alarm control means 13. Upon receiving the detection signal, the alarm control means 13 outputs an alarm start signal to the alarm output unit 7, causing the alarm output unit 7 to output an alarm (step S23).
 In the present embodiment, the alarm output unit 7 outputs an alarm sound. Specifically, when the person presence/absence determination means 12 determines that persons (here, the workers P10 to P12) are present in the left side monitoring space ZL, it notifies the alarm control means 13 with a left side detection signal. The alarm control means 13 then outputs an alarm start signal to the left side alarm output unit 7L, the rear alarm output unit 7B, and the right side alarm output unit 7R, causing all three alarm output units to output an alarm sound. In this case, the control unit 1 starts the display unit 5 if it is not already running. The image generation means 11 then switches the road-surface-height-based virtual viewpoint image for periphery monitoring to the head-height-based one and superimposes the alarm stop button G1 on it, as in the output image D22 of FIG. 21. Alternatively, the image generation means 11 may superimpose the alarm stop button G1 on the road-surface-height-based virtual viewpoint image, as in the output image D23 of FIG. 21. In this case, the alarm control means 13 may prohibit work by the shovel 60 until the alarm stop button G1 is pressed; specifically, it may close the gate lock valve and shut off the flow of hydraulic oil between the control valve and the operation levers and the like, thereby disabling them.
 When the work machine state determination means 14 determines that the shovel 60 is not in the work-enabled state (NO in step S22), the person presence/absence determination means 12 does not output a detection signal to the alarm control means 13, and instead sets the value of a person detection flag prepared in NVRAM or the like to "1" (on) (step S24). The person detection flag has an initial value of "0" (off); the value "1" (on) indicates that a person has been detected, and the value "0" (off) indicates that no person has been detected. At this point, since the alarm control means 13 receives no detection signal, it does not output an alarm start signal to the alarm output unit 7 and does not cause the alarm output unit 7 to output an alarm. If the alarm control means 13 had already output an alarm start signal to the alarm output unit 7, it may output an alarm stop signal to that alarm output unit. Further, if the display unit 5 is running, the image generation means 11 displays the same output image as when it is determined that no person is present, even though it has been determined that a person is present around the shovel 60; specifically, it displays, for example, a road-surface-height-based virtual viewpoint image for periphery monitoring such as the output image D32 of FIG. 23. However, the image generation means 11 may switch the road-surface-height-based virtual viewpoint image for periphery monitoring to the head-height-based one.
 On the other hand, when it determines that no person is present around the shovel 60 (NO in step S21), the person presence/absence determination means 12 neither refers to the determination result of the work machine state determination means 14 nor outputs a detection signal to the alarm control means 13.
 In this case, if an alarm is already being output, the image generation means 11 displays the same output image as when it is determined that a person is present, even though it has been determined that no person is present around the shovel 60. Specifically, it displays, for example, a head-height-based virtual viewpoint image for periphery monitoring such as the output image D33 of FIG. 23 together with the alarm stop button G1. However, the image generation means 11 may switch the head-height-based virtual viewpoint image to a road-surface-height-based one such as the output image D34 of FIG. 23.
 Thereafter, if an alarm is already being output, the alarm control means 13 monitors whether the alarm stop button G1 has been pressed (step S25). When it determines that the alarm stop button G1 has been pressed (YES in step S25), the alarm control means 13 outputs an alarm stop signal to the alarm output unit 7 and stops the output of the alarm from the alarm output unit 7 (step S26). When the head-height-based virtual viewpoint image for periphery monitoring (see the output image D33) is being displayed, the image generation means 11 switches it to the road-surface-height-based one (see the output image D31).
 On the other hand, when it determines that the alarm stop button G1 has not been pressed (NO in step S25), the alarm control means 13 does not output an alarm stop signal to the alarm output unit 7. That is, an alarm that the alarm control means 13 has caused to be output does not stop until the alarm stop button G1 is pressed, even if the person presence/absence determination means 12 subsequently determines that no person is present around the shovel 60. Likewise, when the head-height-based virtual viewpoint image for periphery monitoring (see the output image D33) is being displayed, the image generation means 11 continues displaying it.
 Note that, even before the alarm stop button G1 is pressed, the alarm control means 13 may change the content of the alarm when the person presence/absence determination means 12 determines that no person is present around the shovel 60. Specifically, the alarm control means 13 may change the volume, pitch, output interval, or the like of the alarm sound already being output, or the intensity, color, interval, or the like of an alarm lamp already emitting light. This allows the operator receiving the alarm to distinguish between the case where the person presence/absence determination means 12 determines that a person is present and the case where it determines that no person is present.
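 Steps S21 to S26 likewise reduce to a periodic step that consults the work-enabled state; a sketch, with flag persistence in NVRAM and the display switching deliberately omitted, and all names illustrative:

```python
def second_alarm_control_step(person_present, work_enabled,
                              stop_button_pressed, alarm_active,
                              person_detected_flag):
    """One cycle of the second alarm control process (FIG. 22).

    While the machine cannot work, a detection only sets the person
    detection flag (S24) instead of sounding the alarm; in the
    work-enabled state a detection starts the latched alarm (S23),
    which only the stop button releases (S25/S26).
    """
    if person_present:                          # S21 YES
        if work_enabled:                        # S22 YES
            alarm_active = True                 # S23: alarm start signal
        else:
            person_detected_flag = True         # S24: flag set to 1 (on)
    if alarm_active and stop_button_pressed:    # S25 YES
        alarm_active = False                    # S26: alarm stop signal
    return alarm_active, person_detected_flag
```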
 Next, with reference to FIG. 24, the work machine state determination process performed by the work machine state determination means 14 will be described.
 First, the work machine state determination means 14 determines whether the ignition switch 9 is in the on state based on the output of the ignition switch 9 (step S31).
 When it determines that the ignition switch 9 is in the on state (YES in step S31), the work machine state determination means 14 determines whether the gate lock lever 8 is in the unlocked state based on the output of the gate lock lever 8 (step S32).
 When it determines that the gate lock lever 8 is in the unlocked state (YES in step S32), the work machine state determination means 14 determines that the shovel 60 is in the work-enabled state (step S33).
 On the other hand, when it determines that the ignition switch 9 is not in the on state (NO in step S31), or that the gate lock lever 8 is not in the unlocked state (NO in step S32), the work machine state determination means 14 determines that the shovel 60 is not in the work-enabled state, that is, that the shovel 60 is in the work-disabled state (step S34).
 Thereafter, the work machine state determination means 14 sets the value of a work-enabled flag prepared in NVRAM or the like according to the determination result. The person presence/absence determination means 12 refers to the value of this work-enabled flag in step S22 of the second alarm control process shown in FIG. 22, and decides whether an alarm is to be output based on the referenced value. The work-enabled flag has an initial value of "0" (off); the value "1" (on) indicates that work is possible, and the value "0" (off) indicates that work is not possible.
 Note that the work machine state determination means 14 may determine whether the shovel 60 is in the work-enabled state based on the state of only one of the gate lock lever 8 and the ignition switch 9.
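 The work machine state determination process of FIG. 24 is then a simple conjunction of the two inputs; a one-function sketch with assumed names:

```python
def work_enabled(ignition_on, gate_lock_unlocked):
    """Work machine state determination (FIG. 24): the shovel is in the
    work-enabled state only when the ignition switch is on (S31) and the
    gate lock lever is unlocked (S32); otherwise it is in the
    work-disabled state (S34). Per the note above, either input alone
    could also be used.
    """
    return ignition_on and gate_lock_unlocked   # S33 (True) / S34 (False)
```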
 Next, the work-start alarm control process performed by the alarm control means 13 will be described with reference to FIG. 25.
 First, the alarm control means 13 refers to the value of the person detection flag (step S41). When it determines that the value of the person detection flag is "1" (on) (YES in step S41), the alarm control means 13 outputs an alarm start signal to the alarm output unit 7 and causes the alarm output unit 7 to output an alarm (step S42). Specifically, the alarm control means 13 causes the alarm to be output even when the person presence/absence determination means 12 determines that no person is currently present around the shovel 60. In the present embodiment, the alarm control means 13 outputs the alarm start signal to the left-side alarm output unit 7L, the rear alarm output unit 7B, and the right-side alarm output unit 7R, causing all three alarm output units to output an alarm sound. In this case, the alarm control means 13 may prohibit work by the shovel 60 until the output of the alarm is stopped. Specifically, the alarm control means 13 may close the gate lock valve, cutting off the flow of hydraulic oil between the control valve and the operation levers and thereby disabling the operation levers.
 Thereafter, the alarm control means 13 measures the elapsed time since the alarm started and determines whether a predetermined time (for example, 2 seconds) has elapsed since the alarm started (step S43).
 When it determines that the predetermined time has not elapsed since the alarm started (NO in step S43), the alarm control means 13 waits until the predetermined time elapses.
 On the other hand, when it determines that the predetermined time has elapsed since the alarm started (YES in step S43), the alarm control means 13 outputs an alarm stop signal to the alarm output unit 7 and causes the alarm output unit 7 to stop outputting the alarm (step S44). The alarm control means 13 also resets the value of the person detection flag to "0" (off) (step S45). Alternatively, the alarm control means 13 may be configured not to stop the alarm until the alarm stop button G1 is pressed.
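 The timed variant of this flow (steps S41-S45) may be sketched as follows, again in Python with assumed names (flags, alarm_outputs, ALARM_DURATION_S); the two-second duration is the example value given above, and the start/stop interface of the alarm output units is hypothetical:

    import time

    ALARM_DURATION_S = 2.0  # example predetermined time from step S43

    def work_start_alarm_control(flags, alarm_outputs, disable_levers=None):
        """Sketch of the work-start alarm control process (steps S41-S45)."""
        # S41: proceed only if a person was detected earlier.
        if flags.get("person_detected", 0) != 1:
            return
        # S42: start the alarm on all three units (7L, 7B, 7R).
        for unit in alarm_outputs:
            unit.start()
        if disable_levers is not None:
            disable_levers()  # optionally prohibit work while the alarm sounds
        # S43: wait until the predetermined time has elapsed.
        time.sleep(ALARM_DURATION_S)
        # S44/S45: stop the alarm and reset the person detection flag.
        for unit in alarm_outputs:
            unit.stop()
        flags["person_detected"] = 0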
 With the above configuration, the image generation device 100 does not output an alarm when the shovel 60 is in the work-disabled state, even when it determines that a person is present in the monitored space. The image generation device 100 can therefore prevent unnecessary alarms from being output while the shovel 60 is in the work-disabled state.
 On the other hand, even when it determines that no person is currently present in the monitored space, the image generation device 100 outputs an alarm if a worker entered the monitored range while the shovel 60 was in the work-disabled state. The image generation device 100 can thereby prompt the operator to confirm that a worker who was present in the monitored space around the shovel 60 has actually left it.
 In the work-start alarm control process described above, the alarm control means 13 outputs an alarm whenever the value of the person detection flag is "1" (on). However, the present invention is not limited to this configuration. For example, the alarm control means 13 may suppress the alarm if the elapsed time since the person detection flag was set to "1" (on) is equal to or longer than a predetermined time. Alternatively, the person presence/absence determination means 12 may reset the person detection flag to "0" (off) when the elapsed time since it set the flag to "1" (on) reaches a predetermined time. This prevents an alarm from being output based on a detection record that is too old.
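 One way to realize this expiry, as a rough sketch (the MAX_AGE_S value and the timestamp field are assumptions made for illustration, not values from the disclosure):

    import time

    MAX_AGE_S = 60.0  # assumed maximum age of a detection record

    def set_person_detected(flags):
        flags["person_detected"] = 1
        flags["person_detected_at"] = time.monotonic()

    def person_detection_still_valid(flags):
        # Reset the flag once the detection record is older than MAX_AGE_S,
        # so that no alarm is raised from a stale detection.
        if flags.get("person_detected", 0) != 1:
            return False
        if time.monotonic() - flags["person_detected_at"] >= MAX_AGE_S:
            flags["person_detected"] = 0
            return False
        return True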
 When the person presence/absence determination means 12 determines the presence or absence of a worker (moving body) based on a moving body detection sensor, the image generation device 100 can prevent the following situation: a worker who entered the monitored space and came to rest while the shovel 60 was in the work-disabled state fails to be detected when the shovel 60 becomes workable, and the operator of the shovel 60 starts work without noticing that worker.
 Likewise, when the person presence/absence determination means 12 determines the presence or absence of a worker (moving body) using optical flow, the image generation device 100 can prevent a worker who entered the monitored space and came to rest while the shovel 60 was in the work-disabled state from going undetected when the shovel 60 becomes workable, with the operator starting work without noticing that worker.
 Note that a moving body detection sensor and optical flow do not take stationary objects as detection targets, and therefore cannot distinguish a worker who has come to rest within the monitored space from a worker who has left it. Consequently, a configuration that never outputs an alarm whenever it determines that no person is currently present in the monitored space risks permitting work by the shovel 60 even though a worker entered the monitored space and came to rest before the shovel 60 became workable. In contrast, the image generation device 100 can draw the attention of the operator of the shovel 60 by outputting an alarm whenever a worker may still be present in the monitored space.
 In the second alarm control process described above, the image generation device 100 displays the output image on the display unit 5 even when the shovel 60 is in the work-disabled state, but it may instead refrain from displaying the output image on the display unit 5. When the shovel 60 is in the work-disabled state, there is no risk of contact between the shovel 60 and a worker, and the operator has no need to monitor the surroundings of the shovel 60 through the output image.
 Next, the process by which the notification unit control means 15 controls the notification unit 20 based on the determination result of the work machine state determination means 14 (hereinafter, the "first notification unit control process") will be described with reference to FIG. 26. FIG. 26 is a flowchart showing the flow of the first notification unit control process executed by the image generation device 100 mounted on the shovel 60 shown in FIG. 11; the notification unit control means 15 repeatedly executes this process at a predetermined cycle.
 First, the notification unit control means 15 refers to the determination result of the work machine state determination means 14 and determines whether the shovel 60 is in the workable state (step S51).
 When it determines that the shovel 60 is in the workable state (YES in step S51), the notification unit control means 15 outputs a workable state signal to the notification unit 20 and causes it to announce that the shovel 60 is in the workable state (step S52). In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit red light, informing people in the surroundings that the shovel 60 is in the workable state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "workable notification state".
 On the other hand, when it determines that the shovel 60 is not in the workable state (NO in step S51), the notification unit control means 15 outputs a work-disabled state signal to the notification unit 20 and causes it to announce that the shovel 60 is in the work-disabled state (step S53). In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit green light, informing people in the surroundings that the shovel 60 is not in the workable state, that is, that it is in the work-disabled state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "work-disabled notification state".
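 Steps S51-S53 amount to a two-way mapping from the workable state to a lamp color. A minimal sketch under an assumed lamp interface (set_color and on are hypothetical method names, not from the disclosure):

    def first_notification_control(workable: bool, indicator_lamps):
        """Sketch of the first notification unit control process (S51-S53)."""
        # S52: red means workable; S53: green means work-disabled.
        color = "red" if workable else "green"
        for lamp in indicator_lamps:
            lamp.set_color(color)
            lamp.on()  # the lamps may equally well blink instead of staying lit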
 Next, the process by which the work permission determination means 16 decides whether to permit work by the shovel 60 based on the determination result of the person presence/absence determination means 12 (hereinafter, the "work permission determination process") will be described with reference to FIG. 27. FIG. 27 is a flowchart showing the flow of the work permission determination process executed by the image generation device 100 mounted on the shovel 60 shown in FIG. 11; the work permission determination means 16 repeatedly executes this process at a predetermined cycle.
 First, the work permission determination means 16 refers to the determination result of the work machine state determination means 14 and determines whether the shovel 60 is in the workable state (step S61).
 When it determines that the shovel 60 is in the workable state (YES in step S61), the work permission determination means 16 refers to the determination result of the person presence/absence determination means 12 and determines whether a person is present around the shovel 60 (step S62).
 When it determines that a person is present around the shovel 60 (YES in step S62), the work permission determination means 16 prohibits work by the shovel 60 (step S63). Specifically, the work permission determination means 16 outputs, for example, a work prohibition signal to the gate lock valve, cutting off the flow of hydraulic oil between the control valve and the operation levers and thereby disabling the operation levers, so that work by the shovel 60 is prohibited. In this case, the work permission determination means 16 prohibits work by the shovel 60 even when the gate lock lever 8 is in the unlocked state. However, the work permission determination means 16 may lift the prohibition on work by the shovel 60 when a predetermined condition is satisfied, for example, when the alarm output by the alarm control means 13 has been stopped.
 On the other hand, when it determines that no person is present around the shovel 60 (NO in step S62), the work permission determination means 16 permits work by the shovel 60 (step S64). Specifically, the work permission determination means 16 outputs, for example, a work permission signal to the gate lock valve, allowing hydraulic oil to flow between the control valve and the operation levers and thereby enabling the operation levers, so that work by the shovel 60 is permitted.
 When it determines that the shovel 60 is not in the workable state (NO in step S61), the work permission determination means 16 ends the current work permission determination process without deciding whether to permit work by the shovel 60, because work by the shovel 60 is already prohibited.
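 The decision of steps S61-S64 can be sketched as below; gate_lock_valve and its open/close methods are assumed stand-ins for the hydraulic interlock described above:

    def work_permission_determination(workable: bool, person_present: bool,
                                      gate_lock_valve):
        """Sketch of the work permission determination process (S61-S64)."""
        if not workable:
            return  # S61 NO: work is already prohibited; decide nothing
        if person_present:
            # S63: prohibit work by closing the gate lock valve, cutting the
            # hydraulic oil between the control valve and the operation levers.
            gate_lock_valve.close()
        else:
            # S64: permit work by letting the hydraulic oil through again.
            gate_lock_valve.open()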
 With the above configuration, the image generation device 100 can inform people around the shovel 60 whether the shovel 60 is in the workable state. In addition, when a person is present around the shovel 60, the image generation device 100 can prohibit work by the shovel 60 even if the shovel 60 is in the workable state. A person who sees the notification unit 20 in the workable notification state can therefore recognize that, even if they approach the shovel 60, work by the shovel 60 is prohibited while anyone is present around it, so the shovel 60 will not start moving unexpectedly. Such a person can thus approach the shovel 60 with confidence. Likewise, a person who sees the notification unit 20 in the work-disabled notification state can recognize that work by the shovel 60 is prohibited regardless of whether anyone is present around it, so the shovel 60 will not start moving unexpectedly, and can also approach the shovel 60 with confidence. In this way, the image generation device 100 can more appropriately inform people around the shovel 60 of the state of the shovel 60.
 Next, the process by which the notification unit control means 15 controls the notification unit 20 based on both the determination result of the person presence/absence determination means 12 and the determination result of the work machine state determination means 14 (hereinafter, the "second notification unit control process") will be described with reference to FIG. 28. FIG. 28 is a flowchart showing the flow of the second notification unit control process executed by the image generation device 100 mounted on the shovel 60 shown in FIG. 11; the notification unit control means 15 repeatedly executes this process at a predetermined cycle. In parallel with the second notification unit control process, the work permission determination means 16 repeatedly executes the work permission determination process at a predetermined cycle.
 First, the notification unit control means 15 refers to the determination result of the work machine state determination means 14 and determines whether the shovel 60 is in the workable state (step S71).
 When it determines that the shovel 60 is in the workable state (YES in step S71), the notification unit control means 15 refers to the determination result of the person presence/absence determination means 12 and determines whether a person is present around the shovel 60 (step S72).
 When it determines that a person is present around the shovel 60 (YES in step S72), the notification unit control means 15 outputs a workable/person-detected state signal to the notification unit 20 and causes it to announce that the shovel 60 is in the workable state and that a person has been detected by the person detection sensor 6 (hereinafter, the "person-detected state") (step S73). In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit red light, informing people in the surroundings that the shovel 60 is in the workable state and in the person-detected state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "workable/person-detected notification state". In this case, work by the shovel 60 is prohibited by the work permission determination means 16. However, the work permission determination means 16 may lift the prohibition on work by the shovel 60 when a predetermined condition is satisfied, for example, when the alarm output by the alarm control means 13 has been stopped.
 On the other hand, when it determines that no person is present around the shovel 60 (NO in step S72), the notification unit control means 15 outputs a workable/person-not-detected state signal to the notification unit 20 and causes it to announce that the shovel 60 is in the workable state and that no person is detected by the person detection sensor 6 (hereinafter, the "person-not-detected state") (step S74). In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit yellow light, informing people in the surroundings that the shovel 60 is in the workable state and in the person-not-detected state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "workable/person-not-detected notification state". In this case, work by the shovel 60 is permitted by the work permission determination means 16.
 When it determines that the shovel 60 is not in the workable state (NO in step S71), the notification unit control means 15 outputs a work-disabled state signal to the notification unit 20, putting it in the work-disabled notification state and causing it to announce that the shovel 60 is in the work-disabled state (step S75). In the present embodiment, the notification unit control means 15 causes the three indicator lamps serving as the notification unit 20 to emit green light, informing people in the surroundings that the shovel 60 is in the work-disabled state. The notification unit control means 15 may keep the indicator lamps lit or make them blink.
 Even when it determines that the shovel 60 is not in the workable state, the notification unit control means 15 may still refer to the determination result of the person presence/absence determination means 12 and determine whether a person is present around the shovel 60.
 In this case, when it determines that a person is present around the shovel 60, the notification unit control means 15 may output a work-disabled/person-detected state signal to the notification unit 20 and cause it to announce that the shovel 60 is in the work-disabled state and in the person-detected state. Specifically, the notification unit control means 15 may, for example, cause the three indicator lamps serving as the notification unit 20 to emit orange light, informing people in the surroundings that the shovel 60 is in the work-disabled state and in the person-detected state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "work-disabled/person-detected notification state".
 When it determines that no person is present around the shovel 60, the notification unit control means 15 may output a work-disabled/person-not-detected state signal to the notification unit 20 and cause it to announce that the shovel 60 is in the work-disabled state and in the person-not-detected state. Specifically, the notification unit control means 15 may, for example, cause the three indicator lamps serving as the notification unit 20 to emit blue light, informing people in the surroundings that the shovel 60 is in the work-disabled state and in the person-not-detected state. The notification unit control means 15 may keep the indicator lamps lit or make them blink. Hereinafter, this state of the notification unit 20 is referred to as the "work-disabled/person-not-detected notification state".
 In this way, the notification unit control means 15 controls each notification unit 20 (indicator lamp) so that three notification states (the work-disabled notification state, the workable/person-detected notification state, and the workable/person-not-detected notification state) can be distinguished. The notification unit control means 15 may also control each notification unit 20 so that four notification states (the work-disabled/person-detected notification state, the work-disabled/person-not-detected notification state, the workable/person-detected notification state, and the workable/person-not-detected notification state) can be distinguished.
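 The four-state variant reduces to a lookup from the pair (workable, person detected) to a lamp color. A sketch using the colors of the embodiment above (red, yellow, orange, blue); the function and table names, and the lamp interface, are assumptions:

    # (workable, person detected) -> lamp color, per the embodiment above
    NOTIFICATION_COLORS = {
        (True, True): "red",      # workable / person detected (work prohibited)
        (True, False): "yellow",  # workable / person not detected (work permitted)
        (False, True): "orange",  # work-disabled / person detected
        (False, False): "blue",   # work-disabled / person not detected
    }

    def second_notification_control(workable: bool, person_present: bool,
                                    indicator_lamps):
        """Sketch of the second notification unit control process (S71-S75)."""
        color = NOTIFICATION_COLORS[(workable, person_present)]
        for lamp in indicator_lamps:
            lamp.set_color(color)
            lamp.on()  # lit or blinking, per the embodiment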
 Alternatively, in addition to the three notification units 20, the image generation device 100 may be provided with three further notification units (not shown) for informing people around the shovel 60 of the determination result of the person presence/absence determination means 12. In this case, the notification unit control means 15 controls the notification state of each notification unit 20 so that the work-disabled notification state and the workable notification state can be distinguished, and controls the notification state of each additional notification unit so that the person-detected state and the person-not-detected state can be distinguished.
 With the above configuration, the image generation device 100 can inform people around the shovel 60 whether the shovel 60 is in the workable state and, at the same time, whether the shovel 60 is in the person-detected state. As a result, a person who sees the notification unit 20 can know whether they themselves have been detected by the shovel 60, independently of whether the shovel 60 is in the workable state. In addition, when a person is present around the shovel 60, the image generation device 100 can prohibit work by the shovel 60 even if the shovel 60 is in the workable state. A person who sees the notification unit 20 in the workable/person-detected notification state can therefore recognize that, even if they approach the shovel 60, work by the shovel 60 is prohibited while anyone is present around it, so the shovel 60 will not start moving unexpectedly; such a person can approach the shovel 60 with confidence. Likewise, a person who sees the notification unit 20 in the work-disabled/person-detected notification state or the work-disabled/person-not-detected notification state can recognize that work by the shovel 60 is prohibited regardless of whether anyone is present around it, so the shovel 60 will not start moving unexpectedly, and can also approach the shovel 60 with confidence. In this way, the image generation device 100 can more appropriately inform people around the shovel 60 of the state of the shovel 60.
 In the embodiment described above, the notification unit 20 makes the plural notification states distinguishable by changing the emission color, but it may instead make them distinguishable by changing displayed text or other information.
 Preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to these embodiments, and various modifications and substitutions can be made to them without departing from the scope of the present invention.
 For example, in the embodiment described above, the image generation device 100 adopts the cylindrical space model MD as the space model, but it may adopt a space model having another columnar shape such as a polygonal prism, a space model composed of two surfaces (a bottom surface and a side surface), or a space model having only a side surface.
 The image generation device 100 is mounted, together with the cameras and the person detection sensors, on a self-propelled shovel equipped with movable members such as a bucket, an arm, a boom, and a swing mechanism, and constitutes an operation support system that supports the movement of the shovel and the operation of those movable members while presenting surrounding images to the operator. However, the image generation device 100 may instead be mounted, together with cameras and person detection sensors, on a work machine without a swing mechanism, such as a forklift or an asphalt finisher, or on a work machine that has movable members but is not self-propelled, such as an industrial machine or a fixed crane, and may constitute an operation support system that supports the operation of such work machines.
 The periphery monitoring device has been described using the image generation device 100, which includes the cameras 2 and the display unit 5, as an example, but it may also be configured as a device without the image display function provided by the cameras 2, the display unit 5, and so on. For example, as shown in FIG. 29, a periphery monitoring device 100A that executes the first or second notification unit control process may omit the cameras 2, the input unit 3, the storage unit 4, the display unit 5, the coordinate association means 10, the image generation means 11, and the alarm control means 13.
 This application claims priority based on Japanese Patent Application No. 2013-057400 filed on March 19, 2013, the entire contents of which are incorporated herein by reference.
 DESCRIPTION OF SYMBOLS: 1 control unit; 2 camera; 2L left-side camera; 2R right-side camera; 2B rear camera; 3 input unit; 4 storage unit; 5 display unit; 6 person detection sensor; 6L left-side person detection sensor; 6R right-side person detection sensor; 6B rear person detection sensor; 7 alarm output unit; 7L left-side alarm output unit; 7B rear alarm output unit; 7R right-side alarm output unit; 8 gate lock lever; 9 ignition switch; 10 coordinate association means; 11 image generation means; 12 person presence/absence determination means; 13 alarm control means; 14 work machine state determination means; 15 notification unit control means; 16 work permission determination means; 20 notification unit; 40 input image/space model correspondence map; 41 space model/processing-target image correspondence map; 42 processing-target image/output image correspondence map; 60 shovel; 61 lower traveling body; 62 swing mechanism; 63 upper swing body; 64 cab; 100 image generation device; 100A periphery monitoring device

Claims (5)

  1.  A periphery monitoring device for a work machine, comprising:
     a notification unit visible from the surroundings of the work machine;
     work machine state determination means for determining whether the work machine is in a workable state;
     person presence/absence determination means for determining whether a person is present around the work machine;
     notification unit control means for controlling the notification unit; and
     work permission determination means for deciding whether to permit work by the work machine,
     wherein the notification unit control means puts the notification unit into different notification states depending on whether the work machine is determined to be in the workable state or determined not to be in the workable state, and
     the work permission determination means decides whether to permit work by the work machine based on the determination result of the person presence/absence determination means.
  2.  The periphery monitoring device for a work machine according to claim 1, wherein the notification unit control means puts the notification unit into different notification states depending on whether a person is determined to be present around the work machine or determined not to be present around the work machine.
  3.  The periphery monitoring device for a work machine according to claim 1, wherein the work permission determination means prohibits work by the work machine when a person is determined to be present around the work machine, even when the work machine is determined to be in the workable state.
  4.  The periphery monitoring device for a work machine according to claim 1, wherein the work machine state determination means determines that the work machine is not in the workable state when the ignition switch is in the off state or the gate lock lever is in the locked state, and determines that the work machine is in the workable state when the ignition switch is in the on state and the gate lock lever is in the unlocked state.
  5.  The periphery monitoring device for a work machine according to claim 1, wherein, when the notification unit control means operates the notification unit so that a person present around the work machine can identify the state in which the work machine is determined to be in the workable state and a person is determined to be present around the work machine, the work permission determination means permits work by the work machine even when the work machine is determined to be in the workable state and a person is determined to be present around the work machine.
PCT/JP2014/054287 2013-03-19 2014-02-24 Periphery monitoring device for work machine WO2014148202A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013057400A JP6352592B2 (en) 2013-03-19 2013-03-19 Excavator
JP2013-057400 2013-03-19

Publications (1)

Publication Number Publication Date
WO2014148202A1 true WO2014148202A1 (en) 2014-09-25

Family

ID=51579894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/054287 WO2014148202A1 (en) 2013-03-19 2014-02-24 Periphery monitoring device for work machine

Country Status (2)

Country Link
JP (1) JP6352592B2 (en)
WO (1) WO2014148202A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3197633U (en) * 2015-03-09 2015-05-28 青木あすなろ建設株式会社 Amphibious construction machine construction support system
JP6934077B2 (en) * 2015-11-30 2021-09-08 住友重機械工業株式会社 Excavator
CN108026714A (en) 2015-11-30 2018-05-11 住友重机械工业株式会社 Construction machinery surroundings monitoring system
JP7012643B2 (en) * 2016-07-04 2022-02-14 住友建機株式会社 Excavator
JP6589945B2 (en) * 2017-07-14 2019-10-16 コベルコ建機株式会社 Construction machinery
KR102627093B1 (en) 2017-12-04 2024-01-18 스미도모쥬기가이고교 가부시키가이샤 Surrounding monitoring device
JP2020190094A (en) * 2019-05-21 2020-11-26 日立建機株式会社 Work machine
JP7340996B2 (en) * 2019-09-03 2023-09-08 日立建機株式会社 Field management system
EP4242386A1 (en) * 2022-03-07 2023-09-13 Yanmar Holdings Co., Ltd. Work machine control system, work machine, work machine control method, and work machine control program
EP4242385A3 (en) * 2022-03-07 2023-11-22 Yanmar Holdings Co., Ltd. Work machine control system, work machine, work machine control method, and work machine control program


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4451224B2 (en) * 2004-06-09 2010-04-14 住友建機株式会社 Work status notification device for construction machinery
JP4597914B2 (en) * 2006-06-06 2010-12-15 日立建機株式会社 Work machine

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003105807A (en) * 2001-09-27 2003-04-09 Komatsu Ltd Stop control method in intrusion-prohibitive region for service car and its controller
JP2010121270A (en) * 2008-11-17 2010-06-03 Hitachi Constr Mach Co Ltd Monitoring equipment of working machine
JP2012097751A (en) * 2011-12-14 2012-05-24 Komatsu Ltd Working machine

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3020875A1 (en) * 2014-11-14 2016-05-18 Caterpillar Inc. System for improving safety in use of a machine of a kind comprising a body and an implement movable relative to the body
EP3168373A1 (en) * 2014-11-14 2017-05-17 Caterpillar Inc. System for improving safety in use of a machine of a kind comprising a body and an implement movable relative to the body
CN110268119A (en) * 2017-02-22 2019-09-20 住友建机株式会社 Excavator
EP3587675A4 (en) * 2017-02-22 2020-04-29 Sumitomo (S.H.I.) Construction Machinery Co., Ltd. Excavator
CN110268119B (en) * 2017-02-22 2022-04-15 住友建机株式会社 Excavator
US11479945B2 (en) 2017-02-22 2022-10-25 Sumitomo(S.H.I.) Construction Machinery Co., Ltd. Shovel
US11987954B2 (en) 2017-02-22 2024-05-21 Sumitomo(S.H.L.) Construction Machinery Co., Ltd. Shovel
CN110121739A (en) * 2017-12-06 2019-08-13 株式会社小松制作所 The surroundings monitoring system of working truck and the environment monitoring method of working truck
EP3522133A4 (en) * 2017-12-06 2020-01-22 Komatsu Ltd. Work vehicle vicinity monitoring system and work vehicle vicinity monitoring method
CN109706995A (en) * 2019-01-23 2019-05-03 山西德源宏泰科技有限公司 Flat shovel and excavator with same

Also Published As

Publication number Publication date
JP6352592B2 (en) 2018-07-04
JP2014181510A (en) 2014-09-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14767819

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14767819

Country of ref document: EP

Kind code of ref document: A1