WO2019163378A1 - Forklift image processing device and control program - Google Patents

Forklift image processing device and control program

Info

Publication number
WO2019163378A1
WO2019163378A1 (application PCT/JP2019/002062)
Authority
WO
WIPO (PCT)
Prior art keywords
image
distance
forklift
image processing
processing apparatus
Prior art date
Application number
PCT/JP2019/002062
Other languages
French (fr)
Japanese (ja)
Inventor
Hideyuki Fujimori (藤森 秀之)
Akihiro Nakamura (中村 彰宏)
Original Assignee
Konica Minolta, Inc. (コニカミノルタ株式会社)
Priority date
Filing date
Publication date
Application filed by Konica Minolta, Inc.
Priority to JP2020502093A (patent JP7259835B2)
Publication of WO2019163378A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075Constructional features or details
    • B66F9/20Means for actuating or controlling masts, platforms, or forks
    • B66F9/24Electrical devices or systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an image processing apparatus mounted on a forklift, and to a control program.
  • A forklift travels with a load placed on a pallet on its forks. When the load on the front forks is stacked higher than the driver's line of sight, a blind spot is formed ahead of the vehicle. In that situation the driver often drives the forklift in reverse; when forward travel is necessary, the driver has to lean out to the side to see ahead.
  • The forklift disclosed in Patent Document 1 is provided with a camera that photographs the area ahead of the fork, and the photographed image is displayed on a display.
  • Patent Document 2 discloses a technique for obtaining a forward view by providing a camera on each of the left and right forks of a forklift, calculating the distance to an object ahead by stereo vision, and displaying the calculation result.
  • In Patent Document 1, a single camera is provided on the fork. Although this makes it possible to see ahead, the distance to an object in front cannot be determined from the image alone.
  • The present invention has been made in view of the above circumstances. A first object of the present invention is to provide an image processing apparatus for a forklift that allows the area ahead to be checked in a loaded state and that can stably measure the distance to an object.
  • A second object is to provide a safe working environment for a forklift in which the situation ahead can be checked easily even in a loaded state.
  • An image processing apparatus used for a forklift, comprising: a camera that is provided at the tip portion of one fork among a plurality of forks supported so as to be movable up and down on the front side of the forklift, and that photographs the area ahead of the forklift; a detection sensor, provided at the tip portion of the fork provided with the camera, for detecting the distance to an object ahead of the forklift; a processing unit that processes the video acquired by the camera based on detection information of the detection sensor; and a display that displays the processed video processed by the processing unit.
  • A first imaging element and a second imaging element are provided on the one fork so that at least parts of their imaging regions overlap; at least one of the first and second imaging elements functions as part of the camera, and both the first and second imaging elements function as the detection sensor. The processing unit detects the distance to an object ahead of the fork based on the images acquired from both the first and second imaging elements, and processes the video based on the detected distance.
  • The image processing apparatus according to (1) or (2), further comprising a projector that emits light, or two-dimensional pattern light, toward the front of the forklift, wherein the projector and the processing unit also function as the detection sensor, and the processing unit detects the distance to an object ahead of the forklift based on an image from the camera that has captured the light or pattern light emitted from the projector, and processes the video based on the detected distance.
  • The detection sensor is a distance measuring sensor that detects the distance to an object ahead of the forklift and acquires distance measurement point group data at a plurality of points.
  • The front of the forklift is included in the imaging regions: a first imaging element and a second imaging element are provided on the one fork so that at least parts of their imaging regions overlap, at least one of the first and second imaging elements functions as part of the camera, both the first and second imaging elements function as the detection sensor, and the processing unit detects the distance to an object ahead of the forklift based on the images acquired from both the first and second imaging elements.
  • The tapered portion at the tip of the fork has a width that gradually decreases toward the tip when viewed from above, and a thickness that gradually decreases toward the tip, with the bottom surface inclined, when viewed from the side.
  • The image processing apparatus according to any one of (1) to (6), wherein the processing unit generates a distance map, which is a distance measuring point group indicating the position or shape of objects in the work space where the forklift operates, based on the image from the camera and the detection information of the detection sensor, and stores the distance map in the storage unit.
  • The image processing apparatus according to any one of (1) to (8), further including a position detection sensor that acquires the position state of the fork, wherein the processing unit acquires, from the position detection sensor, the position state of the fork provided with the camera.
  • The image processing device according to any one of (1) to (9), wherein the processing unit displays, on the display, an image obtained by adding additional information corresponding to the distance to a forward object to the video acquired by the camera.
  • The image processing apparatus according to any one of (1) to (10), wherein the processing unit displays, on the display, an image obtained by performing viewpoint conversion on the video acquired by the camera.
  • The image processing apparatus according to (13), wherein the combiner is disposed at a position through which the front side of the forklift can be seen, and the head-up display has a virtual image projection distance set in a range of 50 cm to 20 m.
  • At least some of the electronic components constituting the camera and the detection sensor are connected to the main body of the fork via a heat conductive member made of a flexible, highly heat-conductive material.
  • An image processing apparatus used for a forklift, comprising: a camera that photographs the area ahead of the forklift; a detection sensor that measures the distance to an object ahead of the forklift and acquires distance measurement point group data indicating a distribution of distance values; a processing unit that performs processing for adding an image of distance values, based on the acquired distance measurement point group data, to the video acquired by the camera; and a display that displays the processed video processed by the processing unit.
  • The image processing apparatus according to (21), further including a position detection sensor that acquires posture information of the camera, wherein the processing unit performs the viewpoint conversion processing using the posture information acquired from the position detection sensor.
  • The image processing apparatus according to (21) or (22), further including a storage unit, wherein the processing unit creates a three-dimensional distance map using the distance measurement point group data acquired by the detection sensor and stores the three-dimensional distance map in the storage unit.
  • The three-dimensional distance map stored in the storage unit includes drawing data relating to the building or facility in which the forklift is used, distance measurement point group data obtained from sensors installed in the building, and information on other vehicles.
  • The viewpoint conversion processing includes viewpoint conversion processing in which the viewpoint position of the driver sitting in the cab of the forklift is the virtual viewpoint position, and viewpoint conversion processing in which a position higher than the driver's viewpoint position is the virtual viewpoint position.
  • the image processing device according to any one of (21) to (25) above, wherein the viewpoint conversion processing uses a position away from the forklift as a virtual viewpoint position.
  • The image processing device according to (26), wherein the viewpoint conversion processing in which the driver's viewpoint position is the virtual viewpoint position is viewpoint conversion processing by keystone correction according to the angle or height of the camera with respect to the ground, or viewpoint conversion processing using the distance measurement point group data or a three-dimensional distance map stored in the storage unit.
  • the viewpoint conversion process in which a position higher than the driver's viewpoint position is a virtual viewpoint position is a viewpoint conversion process using the distance measurement point group data or a three-dimensional distance map stored in a storage unit.
  • (31) The image processing device according to any one of (21) to (30), wherein the presence or absence, or the intensity, of the viewpoint conversion processing is changed according to the distance to the object.
  • the display is a transparent screen or a head-up display attached to the forklift so that the front of the forklift can be seen through.
  • The image processing apparatus according to any one of (17) to (20), wherein the processing unit recognizes objects ahead of the forklift, generates an additional image corresponding to the distance and/or direction to each recognized object, and displays the generated additional images superimposed on the respective objects on the transparent screen or the head-up display.
  • The image processing device according to any one of (17) to (32), wherein the processing unit recognizes an object ahead of the forklift and, as the processing, generates an additional image corresponding to the type of the recognized object, the distance to the object, or its position, and adds the additional image to the video.
  • The processing unit determines an inclination with respect to the pallet based on the shape of the insertion port of the pallet, and generates the additional image corresponding to the determined amount of inclination of the pallet with respect to the horizontal plane.
  • The image processing device according to any one of (32) to (34), wherein the additional image generated by the processing unit includes at least one of: content information of a package, acquired from the physical distribution information system used in the building where the forklift is used; empty shelf information indicating the availability of shelves; and cargo handling procedure information indicating the procedure for handling the cargo.
  • The image processing apparatus according to any one of (17) to (35), wherein the processing unit generates an overhead image from an upper viewpoint according to the distance to the object, adds the generated overhead image, and displays it on the display.
  • The image processing device according to any one of (17) to (36), wherein the processing unit recognizes an object ahead of the forklift and, when the distance from the forklift or from the fork tip of the forklift becomes equal to or less than a predetermined value, issues a warning or switches the display to a proximity screen.
  • The image processing device according to any one of (17) to (37), wherein the processing unit recognizes an object ahead of the forklift and outputs information on the shortest distance from the fork tip to the recognized object in the image of distance values.
  • A control program executed by a computer that controls an image processing apparatus used for a forklift, the apparatus including a camera that photographs the area ahead of the forklift and a detection sensor that measures the distance to an object ahead of the forklift and acquires distance measurement point group data indicating a distribution of distance values, the control program causing the computer to execute a process including: a step (a) of acquiring video with the camera; a step (b) of acquiring distance measurement point group data with the detection sensor; a step (c) of performing processing for adding an image of distance values, based on the acquired distance measurement point group data, to the video acquired by the camera; and a step (d) of displaying the processed video on a display.
  • The control program according to (39) or (40), wherein the process further includes a step (e) of recognizing an object ahead of the forklift, and in the step (c), as the processing, an additional image corresponding to the type of the recognized object, the distance to the object, or its position is generated and added to the video.
  • A camera for photographing the area ahead of the fork and a detection sensor for detecting the distance to an object ahead are provided at the tip portion of one fork, and the video acquired by the camera is processed based on the detection information and displayed on the display. In this way, even when it is difficult to see ahead because of the load on the forks, the driver can check the area ahead on the screen displayed on the display, and the distance to an object can be measured stably.
  • A camera photographs the area ahead of the forklift, and a detection sensor measures the distance to an object ahead of the forklift and acquires distance measurement point group data indicating the distribution of distance values. Processing for adding an image of distance values based on the acquired distance measurement point group data is applied to the video acquired by the camera, and the processed video is displayed on a display.
  • FIG. 2 is a block diagram illustrating the hardware configuration of the image processing apparatus according to the first embodiment and the functional configuration of the processing unit. FIG. 3 is a schematic diagram showing the first and second cameras attached to the front-end portion of one fork. FIGS. 4(a) to 4(c) are enlarged views of the fork. FIG. 5 is a cross-sectional view taken along the line A-A′ of FIG. 4(a). FIGS. 6(a) to 6(c) are enlarged views of a fork of another example. FIG. 7 is a schematic diagram explaining the horizontal angle of view of the first and second cameras. FIG. 8 is a schematic diagram explaining the vertical angle of view of the first and second cameras.
  • FIG. 1 is a side view showing an appearance of a forklift.
  • the forklift 10 includes a main body 11, a cab 12, a mast 13, a finger bar 14, a pair of forks 15 and 16, and a head guard 17.
  • A pallet 91 and a load 92 on the pallet 91 are carried on the forks 15 and 16.
  • A mast 13 that can extend and retract in the vertical direction is provided at the front of the main body 11, and the forks 15 and 16 are supported by the finger bar 14 and attached to the mast 13 via the finger bar 14 so as to be vertically movable.
  • The positions of the forks 15 and 16 are controlled in the vertical direction. The inclination angle (tilt) of the forks 15 and 16 with respect to the ground (traveling surface) can also be changed within a predetermined range by a hydraulic cylinder (not shown) connected to the mast 13. In addition, the opening angle and the interval between the forks 15 and 16 may be changed within a predetermined range by a hydraulic cylinder (not shown) in the finger bar 14.
  • the forks 15 and 16 are generally made of hard metal.
  • FIG. 2 is a block diagram illustrating a hardware configuration of the image processing apparatus according to the first embodiment and a functional configuration of the processing unit.
  • FIG. 3 is a schematic diagram showing a state in which the first camera and the second camera are attached to the tip portion of one fork 15.
  • the image processing apparatus 20 includes a first camera 21, a second camera 22, a processing unit 23, a storage unit 24, and a display 25, and these components are mounted on the forklift 10.
  • Each of the first and second cameras 21 and 22 includes an imaging element 200 (first and second imaging elements) having sensitivity in the visible light region, such as a CCD or CMOS, and an optical system such as a lens, and photographs the area ahead of the forklift 10 to acquire an image (video). At least parts of the imaging regions of the first and second cameras 21 and 22 overlap.
  • The imaging elements 200 of both the first and second cameras 21 and 22 also function as a detection sensor that, in cooperation with the processing unit 23, detects the distance to an object and generates distance measurement point group data.
  • FIGS. 4(a) to 4(c) are enlarged views of the front end side (also referred to as the "claw" or "blade") of the fork 15, and FIG. 5 is a cross-sectional view taken along the line A-A′ of FIG. 4(a). In FIG. 5, the contour line of the upper surface s2 of FIG. 4(a) is also shown.
  • The cameras 21 and 22 are preferably arranged on the side surface or the lower surface near the boundary between the straight portion of the side surface of the fork 15 and the tip projection (the tip s1 described later), so that a wider angle of view can be obtained. More specifically, the cameras 21 and 22 are preferably arranged in the tapered portion s51 described below.
  • FIG. 4(a) is a side view of the fork 15 to which the two cameras 21 and 22 are attached, FIG. 4(b) is a plan view, and FIG. 4(c) is a front view seen from the front end side of the fork 15.
  • the fork 15 has a tip s1, an upper surface s2, a lower surface s3, a side surface s4, and a tapered portion s51 of the tip portion.
  • the tip s1 is a plane extending in the YZ plane.
  • The "tip portion" includes not only the tip s1 but also its peripheral portion; for example, a range of up to 20-odd centimeters from the tip s1 in the X direction is included (see FIG. 9 described later).
  • the peripheral portion includes a tapered portion s51.
  • As shown in FIG. 4(b), the tapered portion s51 has a width that gradually decreases toward the tip s1 in top view, and, as shown in FIG. 4(a), a thickness that gradually decreases toward the tip s1 in side view, with the bottom surface s3 inclined.
  • the tip s1 may be formed as a curved surface instead of a flat surface.
  • A cylindrical hole is provided on each of the left and right sides of the tapered portion s51 of the fork 15, and the cameras 21 and 22 are embedded in the respective holes. In terms of securing a wide angle of view, it is preferable to arrange the cameras 21 and 22 so that the front surface of each lens slightly protrudes from the outer peripheral surface of the tapered portion s51; from the viewpoint of preventing breakage due to collision, it is more preferable to dispose them inside the opening surface of the cylindrical hole.
  • When an object is viewed in stereo (stereoscopic view) for distance measurement, the shooting areas of the two cameras 21 and 22 need to overlap.
  • The two cameras 21 and 22 arranged in the tapered portion s51 can secure a wide angle of view toward the front side of the fork 15.
  • FIGS. 6A to 6C are enlarged views of a fork 15 according to another example.
  • The fork 15 shown in FIGS. 6(a) to 6(c) has a thickness of 10 mm and a width of 100 mm, with a tapered portion s51 whose width gradually decreases toward the tip s1 when viewed from above.
  • the tapered portion s51 has an R (radius) of 60 mm, and the tapered portion s51 extends from the tip s1 to 40 mm in the X direction and from the side surface s4 to 40 mm in the Y direction.
  • the first and second cameras 21 and 22 are provided in the tapered portion s51.
  • the cameras 21 and 22 are preferably separated from the upper surface s2 and the lower surface s3 by 2 mm or more, respectively.
  • the cameras 21 and 22 are both arranged in a size and position so as to be within a range of 2 to 8 mm from the lower surface.
  • The fork 15 sometimes comes into contact with, or collides with, the floor surface or a load, intentionally or otherwise. This arrangement therefore prevents the cameras 21 and 22 from directly striking the floor or the load.
  • The front surfaces of the lenses of the cameras 21 and 22 are preferably arranged on the inner side of the tip s1 and of the surface of the tapered portion s51 in top view. With such an arrangement, when the fork collides with another object from the front (intentionally or otherwise), the cameras 21 and 22 are prevented from directly striking that object.
  • FIG. 7 is a schematic diagram for explaining the angle of view in the horizontal direction.
  • The angle of view will be described using the fork 15 having the shape shown in FIGS. 6(a) to 6(c) as an example, but the present invention can be applied to any fork shape, including the shape shown in FIGS. 4(a) to 4(c) and FIG. 5 (the same applies to FIG. 8).
  • The angle of view of the first and second cameras 21 and 22 is ideally 180 degrees in the horizontal direction, centered on the front. However, it is difficult to arrange the cameras 21 and 22 at the tip s1 itself, considering the impact of a collision with another object.
  • As a minimum, a horizontal angle of view of 44 degrees or more is preferably secured so that an object 2 m wide can be photographed (in stereoscopic view) 5 m ahead of the fork 15 (a half angle of view of 22 degrees or more toward the inside for both cameras).
  • The reason for the 2 m width is that, on a large forklift, the distance between the two forks 15 and 16 is about 2 m, so at a minimum the area the forks will strike (the additional image 402 described later) can be viewed in stereo.
  • FIG. 8 is a schematic diagram for explaining the angle of view in the vertical direction.
  • In the vertical direction, an angle of view of 120 degrees (a half angle of view of 60 degrees above the horizontal) is preferable so that a range up to a height of 3 m can be photographed up to 5 m ahead.
  • As a minimum, a vertical angle of view of at least 44 degrees (a half angle of view of 22 degrees or more for both cameras) is preferably secured so that an object 2 m high can be photographed 5 m ahead of the fork 15. The figures of 5 m ahead and 2 m high are based on the fact that the overall height of the small forklifts widely used indoors is 2 m; this makes it possible to confirm that the top of the vehicle will not contact any object ahead.
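  • These angle-of-view figures follow from simple pinhole geometry. A minimal sketch (not from the patent; the atan-based model is an assumption) that reproduces the 44-degree figure:

```python
import math

def required_fov_deg(extent_m: float, distance_m: float) -> float:
    """Full angle of view needed when each camera must cover `extent_m`
    to one side at `distance_m` ahead (inward half angle, doubled)."""
    return 2.0 * math.degrees(math.atan(extent_m / distance_m))

# 2 m width (or height) at 5 m ahead -> half angle ~21.8 deg -> ~44 deg total
print(round(required_fov_deg(2.0, 5.0), 1))  # 43.6
```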
  • FIG. 9 shows a state in which the fork 15 is inserted from the insertion port of the pallet 91 and the pallet 91 is placed on the fork 15.
  • The position at which the fork 15 is inserted may be shifted to one side by up to 50% from the center (indicated by a broken line) of the insertion port on one side of the pallet 91. Accordingly, the tip positions of the cameras 21 and 22 are preferably arranged within 14 cm of the end of the pallet.
  • Since the standard pallet length is 110 cm and the standard fork length is 122 cm, the amount by which the fork 15 protrudes from the end of the pallet to the tip s1 is 12 cm. It is therefore preferable to arrange the cameras 21 and 22 so that the entire lens surface lies within the range from the leading edge (tip s1) of the fork 15 to 26 cm (14 + 12 cm) in the X direction.
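  • The same placement range can be checked with this arithmetic (a minimal sketch; the dimensions are the standard values quoted in the text):

```python
pallet_len_cm = 110                          # standard pallet length (from the text)
fork_len_cm = 122                            # standard fork length (from the text)
protrusion_cm = fork_len_cm - pallet_len_cm  # fork tip past the pallet end: 12
max_lens_from_tip_cm = 14 + protrusion_cm    # allowed range from tip s1: 26
print(protrusion_cm, max_lens_from_tip_cm)   # -> 12 26
```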
  • Because both cameras are fixed to the same fork, their relative positions are always constant, so the baseline length and parallelism between the two cameras 21 and 22 can always be kept constant.
  • In addition, the angle of view can be made larger than when the cameras 21 and 22 are arranged on the flat lower surface s3 or side surface s4, and a wider range can be photographed.
  • the processing unit 23 includes a CPU (Central Processing Unit) and a memory, and performs various controls of the entire image processing apparatus 20 by the CPU executing a control program stored in the memory. Each function performed by the processing unit 23 will be described later.
  • the storage unit 24 is a hard disk or a semiconductor memory, and stores a large amount of data.
  • the storage unit 24 stores a three-dimensional distance map, which will be described later, and accumulates or updates the distance map sent from the processing unit 23.
  • the storage unit 24 may be mounted on the forklift 10, but all or part of the storage unit 24 may be provided in an external file server. Providing a part of the storage unit 24 in an external device is particularly useful when using a distance measurement map from an external distance measurement sensor 80 (see FIGS. 8 and 22) described later.
  • Data transmission / reception with an external file server is performed via a LAN by a wireless communication unit included in the image processing apparatus 20.
  • the three-dimensional distance map may reflect drawing data relating to a building or facility such as a warehouse or a factory in which the forklift 10 is used, that is, a work space where the forklift travels.
  • the drawing data includes, for example, position information on the floor surface, wall surface, window, and lighting device.
  • the three-dimensional distance map may include position information of other vehicles such as forklifts that travel in the building.
  • Position information of the loads 92 used in the building, acquired from an external logistics system (see FIG. 24 described later) that manages the flow of goods inside, may also be included.
  • this logistics system has a server and is connected to the image processing apparatus 20 via a network, for example.
  • the server stores package position information in the building, package content information, empty shelf information indicating shelf availability, cargo handling procedure information indicating procedures for cargo handling, and the like.
  • an IC tag is attached to each load 92 or a pallet 91 on which the load 92 is placed, and the physical distribution system can grasp position information of each load 92 using the IC tag.
  • The position information of other vehicles (including forklifts) operating in the building may be grasped and acquired from signals from the external distance measuring sensor 80, or the image processing apparatus 20 may obtain it directly by performing P2P (peer-to-peer) communication with a communication unit mounted on the other vehicle.
  • the display 25 is attached to a frame that supports the head guard 17 in front of the driver as shown in FIG. 1, and displays a processed image generated and processed by the processing unit 23 as described below.
  • the processed image is, for example, an image obtained by performing viewpoint conversion processing of images acquired by the cameras 21 and 22 and / or processing processing for adding an image of a distance value.
  • the processing for adding the image of the distance value includes a process of superimposing an additional image corresponding to the type of the recognized object or the distance and position to the object on the video. Accordingly, the driver can check the status of the front of the luggage on the display screen of the display 25 even when the forward visibility is deteriorated due to the luggage loaded on the forks 15 and 16.
  • the display 25 is a liquid crystal display, for example.
  • the display 25 may be a HUD (head-up display) or a head-mounted display worn by the driver.
  • a display for HUD includes a combiner which is a concave mirror or a plane mirror having semi-transparency, and projects a virtual image on the combiner. As the virtual image, there is an additional image generated by the processing unit 23 described later.
  • a driver sitting on the driver's cab 12 can visually recognize the real image ahead through the combiner and simultaneously recognize the virtual image reflected by the combiner.
  • the combiner becomes transparent when the image is not projected, so that the forward view is not hindered.
  • The processing unit 23 functions as an image acquisition unit 301, a preprocessing unit 302, a feature point extraction unit 303, a distance map generation unit 304, an object position determination unit 305, an additional image generation unit 306, an association unit 307, a viewpoint conversion unit 308, an image composition unit 309, and an image output unit 310. These functions are realized by the processing unit 23 executing a program stored in its internal memory, although some of them may be performed by a built-in dedicated hardware circuit. In the embodiment described below, the processing unit 23 generates a distance map from which the position and distance of objects can be grasped; however, the present invention is not limited to this, and only distance measurement to an object may be performed.
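  • The data flow through these units can be pictured as follows. This is a hypothetical sketch in Python: every function is an illustrative stub standing in for the corresponding unit, not the patent's implementation:

```python
import numpy as np

def preprocess(left, right):                              # preprocessing unit 302 (stub)
    return left.astype(np.float32), right.astype(np.float32)

def generate_distance_map(left, right, f=700.0, b=0.08):  # unit 304 (toy stand-in)
    disparity = np.maximum(np.abs(left - right).mean(axis=-1), 1e-3)
    return f * b / disparity                              # placeholder depth in metres

def determine_objects(dmap, near_m=5.0):                  # unit 305 (stub): near pixels
    return dmap < near_m

def compose(img, near_mask):                              # units 306/309 (stub)
    out = img.copy()
    out[near_mask] = (255, 0, 0)                          # mark near objects in red
    return out

left = np.zeros((480, 640, 3), np.uint8)
right = np.zeros((480, 640, 3), np.uint8)
l, r = preprocess(left, right)
frame_out = compose(left, determine_objects(generate_distance_map(l, r)))
print(frame_out.shape)                                    # (480, 640, 3) -> image output 310
```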
  • the image acquisition unit 301 performs control such as synchronizing the two cameras 21 and 22 by applying a timing trigger, and acquires images (videos) photographed at a predetermined frame rate by these imaging elements 200.
  • The image acquisition unit 301 may perform exposure metering using the central portion of the shooting angle of view. This is because, until the fork is inserted into the insertion port of the pallet 91 and the fork tip passes through it, the image from the camera is bright only at the center and dark at the periphery; by metering on the central portion of the angle of view, the image of the central portion can be captured appropriately without being overexposed.
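  • One way to realize such center-weighted metering is sketched below (an assumption for illustration; the patent does not specify the algorithm). The gain is computed from the mean of the central region only, so the dark periphery seen inside the pallet is ignored:

```python
import numpy as np

def center_metered_gain(gray: np.ndarray, target: float = 118.0) -> float:
    """Gain that brings the mean of the central quarter-area to `target`,
    ignoring the dark periphery seen while the fork is inside the pallet."""
    h, w = gray.shape
    center = gray[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return target / max(float(center.mean()), 1.0)

frame = np.full((480, 640), 40.0)            # dark frame overall...
frame[120:360, 160:480] = 200.0              # ...bright only at the center
print(round(center_metered_gain(frame), 2))  # 0.59: exposure set from the center only
```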
  • The preprocessing unit 302 adjusts the brightness and contrast of the pair of images acquired from the two cameras 21 and 22 via the image acquisition unit 301; known adjustment processes can be applied. Post-processing such as binarization is then performed on the adjusted images, and the processed images are supplied to the feature point extraction unit 303. In parallel, the preprocessing unit 302 supplies the color images that have undergone the first-stage preprocessing to the viewpoint conversion unit 308. Stitch processing that joins the pair of images in a positional relationship according to the baseline length may also be performed, with the processed image supplied to the viewpoint conversion unit 308.
  • the feature point extraction unit 303 extracts feature points corresponding to the shape and contour of an object that is an index of association from each set of images. Note that color, contrast, edge, and / or frame information may be used as the association index.
  • The distance map generation unit 304 extracts common corresponding points from the extracted feature points of the pair of images and calculates the distance to each corresponding point using conversion parameters. For example, with the pair of cameras 21 and 22 arranged on the left and right, the distance value of each pixel is calculated from the shift amount between the left and right pixel positions of the same corresponding point, using the baseline length (distance measurement).
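  • The underlying relation is the standard stereo triangulation Z = f · B / d, where f is the focal length in pixels, B the baseline length, and d the pixel shift (disparity). A minimal sketch with assumed calibration values:

```python
focal_px = 700.0      # focal length in pixels (assumed calibration value)
baseline_m = 0.08     # separation of cameras 21 and 22 on the fork (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(11.2))  # an 11.2 px shift -> 5.0 m
```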
  • the distance map generation unit 304 may perform SLAM (Simultaneous localization and mapping) processing.
  • As software for executing SLAM processing, there is, for example, the SDK software for ZED cameras; as open-source SLAM implementations there are RGB-D SLAM V2 and others, and these may be used.
  • Through the SLAM processing, the moving position of the own vehicle, that is, of the forklift 10, within the three-dimensional distance map (hereinafter simply referred to as the "distance map") can be grasped in real time.
  • A distance map of the work space in which the forklift 10 is used, stored in the storage unit 24, may also be used. The distance map may be corrected using a map generated by the external distance measuring sensor 80 (see FIGS. 8 and 22 described later) or by past travel of the forklift 10 and stored in the storage unit 24.
  • The object position determination unit 305 determines the position of objects ahead of the forklift 10 in three-dimensional space. This object determination may be performed, for example, by clustering pixels according to the similarity of their distance values; the similarity of pixel colors may also be combined with the similarity of distance values for clustering. The size of each cluster determined by clustering is then calculated, for example its vertical dimension, horizontal dimension, and total area. The "size" here is the actual size: unlike the apparent size (angle of view, that is, the spread in pixels), it is determined taking the distance to the object into account.
  • The object position determination unit 305 determines whether the calculated size is equal to or larger than a predetermined size threshold for specifying the analysis target objects to be extracted.
  • The size threshold can be set arbitrarily depending on the measurement location, the behavior analysis target, and so on. If behavior is analyzed by tracking passing workers, the minimum size of a typical person may be set as the clustering size threshold. If the environment in which the forklift travels is limited, such as a specific warehouse, a size threshold corresponding to the objects existing in that environment may be applied.
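  • A toy version of such distance-based clustering is sketched below (hypothetical, not the patent's implementation): 4-connected pixels with similar distance values are grouped, and the physical size of a cluster is estimated from its pixel count and distance:

```python
import numpy as np
from collections import deque

def cluster_depth(dmap: np.ndarray, tol: float = 0.1) -> np.ndarray:
    """Label 4-connected pixels whose distance values differ by < `tol` metres."""
    h, w = dmap.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for y0 in range(h):
        for x0 in range(w):
            if labels[y0, x0]:
                continue
            current += 1
            labels[y0, x0] = current
            queue = deque([(y0, x0)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(dmap[ny, nx] - dmap[y, x]) < tol):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels

def physical_area_m2(labels, dmap, label, focal_px=700.0):
    """Actual (not apparent) size: each pixel covers roughly (Z / f)^2 m^2."""
    mask = labels == label
    return float(((dmap[mask] / focal_px) ** 2).sum())

dmap = np.full((100, 100), 10.0)
dmap[40:60, 40:60] = 3.0                   # a nearer object, 20 x 20 px
labels = cluster_depth(dmap)
area = physical_area_m2(labels, dmap, labels[50, 50])
print(round(area, 4))                      # ~0.0073 m^2, compared against the size threshold
```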
  • the object position determination unit 305 accumulates the generated three-dimensional distance map ahead of the forklift in the storage unit 24. This distance map includes information on the size and position of each object having a predetermined size or larger. As described above, only the distance measurement to the object may be performed to reduce the processing load.
  • As for the type of object, machine learning or proportion discrimination (aspect ratio) can be used to discriminate between a human and other objects.
  • In machine learning, learning is repeated by a computer using three-dimensional distance map data. In the work space in particular, discriminating workers is important for preventing serious accidents, and the use of these techniques is expected to improve discrimination accuracy.
  • The additional image generation unit 306 generates an additional image (also referred to as an annotation image) corresponding to the distance to the object ahead of the forklift 10 determined by the object position determination unit 305, more specifically, the distance to an object ahead on the extension of the forks 15 and 16. Examples of the additional image include a distance ladder indicating distances and a numerical display (see FIG. 10 described later).
  • The additional image may also be a rectangular frame whose color or form is changed according to the type of and distance to the object, or a mark or text alerting the driver when an object requiring attention is present.
  • The additional image may also be information on the inclination angle of the fork with respect to the pallet in the horizontal plane or the XY plane ahead of the fork, the opening angle, or the height from the ground.
  • An additional image may also be generated corresponding to the height-direction distance between the forks 15 and 16 and a truck bed, or between the mast 13 or the front upper end of the load 92 and the ceiling of the building or facility, the canopy of a truck, or the upper stage of a shelf.
  • The additional image corresponding to the relative positional relationship with the tips of the forks 15 and 16 is changed according to the posture information of the cameras 21 and 22 on the fork 15, that is, according to changes in the direction (opening angle), height, and inclination angle (tilt) of the fork 15 and the interval between the forks 15 and 16.
  • The direction, height, inclination angle, and interval of the forks 15 may be detected from the captured images of the cameras 21 and 22, or may be detected by various sensors attached to the main body 11 of the forklift 10.
  • an additional image corresponding to the inclination amount of the pallet 91 may be generated.
  • The object position determination unit 305 recognizes the shape of the insertion port of the pallet 91 ahead and determines the amount of inclination by comparison with the shape and dimensions of the insertion port registered in advance in the storage unit 24.
  • the additional image generation unit 306 generates an additional image corresponding to the tilt amount.
  • The tilt amount is, for example, the inclination angle (tilt) with respect to the horizontal plane that occurs when the upper surface of a load 92 is not horizontal while pallets 91 and loads 92 are stacked in two or more tiers, as shown in FIG. 10. This inclination angle may also be a relative angle with respect to the fork 15.
  • The tilt amount may also be the inclination angle (yaw angle) between the virtual extension line of the fork 15 and the pallet 91 in the horizontal plane (XY plane). When these tilt amounts are equal to or greater than a predetermined value, a warning may be issued.
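  • One simple way to obtain such a yaw angle, sketched here as an assumption rather than the patent's method, is from the asymmetry of the measured distances to the left and right edges of the insertion port:

```python
import math

def pallet_yaw_deg(dist_left_m: float, dist_right_m: float,
                   port_width_m: float = 1.0) -> float:
    """Yaw between the fork's virtual extension line and the pallet face,
    from the distances to the two edges of the insertion port."""
    return math.degrees(math.atan((dist_right_m - dist_left_m) / port_width_m))

yaw = pallet_yaw_deg(2.00, 2.12)   # right edge measured 12 cm farther away
print(round(yaw, 1))               # 6.8 degrees; warn if above a set threshold
```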
  • the association unit 307 associates the position of each object in the two-dimensional image captured by the cameras 21 and 22 with the position of each object in the distance map.
  • The viewpoint conversion unit 308 performs viewpoint conversion of the image by coordinate-converting the intervals and positions of the pixel points for each angle and direction as seen from a viewpoint position (virtual viewpoint position) designated in advance, corresponding to the eye height of the driver sitting in the cab 12.
  • the driver can set the viewpoint position (virtual viewpoint position) using an input device (not shown) such as a keyboard, a pointing device, or a touch sensor of the image processing apparatus 20.
  • Display processing is performed for blind spot areas that cannot be captured by the camera (areas with NULL data): such an area is replaced with the pixels in the storage unit 24 corresponding to that area.
  • Blind spot areas occur because the position of the camera differs from the viewpoint position of the driver. This viewpoint conversion includes both conversion by trapezoidal (keystone) correction of the image according to the tilt angle or height of the camera 21 or 22 with respect to the floor surface, and conversion according to the distance to objects in the image using the three-dimensional distance map. For example, when the distance map has not been generated sufficiently over the shooting region, trapezoidal correction may be used.
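  • A minimal trapezoidal-correction sketch using OpenCV (the corner coordinates, which in practice would follow from the camera's tilt angle and height, are illustrative assumptions):

```python
import cv2
import numpy as np

def keystone_correct(img, src_corners, out_w=640, out_h=480):
    """Warp the trapezoid seen by a tilted camera back to a rectangle."""
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(np.float32(src_corners), dst)
    return cv2.warpPerspective(img, H, (out_w, out_h))

frame = np.zeros((480, 640, 3), np.uint8)
# Trapezoid corners (TL, TR, BR, BL); in practice derived from camera tilt/height:
corrected = keystone_correct(frame, [[80, 60], [560, 60], [640, 480], [0, 480]])
print(corrected.shape)  # (480, 640, 3)
```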
  • The viewpoint conversion unit 308 may determine, from the images acquired from the cameras 21 and 22, the inclination angle or height of the cameras 21 and 22 themselves, provided on the fork 15, with respect to the ground or the main body 11, and reflect the determination in the viewpoint conversion. For example, when the fork 15 is tilted upward toward its tip or is moved upward, viewpoint conversion is performed so as to cancel the tilt angle and the upward movement amount.
  • The inclination angle or height of the fork 15 with respect to the ground or the main body 11, and the posture information of the fork 15 and the cameras 21 and 22, may also be detected by various sensors attached to the forklift 10 (see the modification of FIG. 22 described later).
  • Viewpoint conversion processing in which a higher position is set as the virtual viewpoint position, or in which a position away from the main body of the forklift 10 is used as the virtual viewpoint position, may also be performed.
  • Examples include viewpoint conversion processing in which the virtual viewpoint is the height position of the forks 15 and 16, or a height position obtained by adding a distance corresponding to the height of the pallet 91 and the load 92 above the fork 15; viewpoint conversion in which the forklift 10 is viewed from above (overhead view); and viewpoint conversion to a virtual viewpoint from behind toward the front (third-person viewpoint position).
  • In this way an overhead image can be obtained, and the area ahead of the forklift 10 can be confirmed easily.
  • the change of the virtual viewpoint position and the setting of the position can be appropriately set from an input device provided in the forklift 10.
  • The external distance measuring sensor 80 is, for example, a laser LiDAR (Light Detection And Ranging) device, and is provided on the ceiling of the building or facility in which the forklift 10 is used (see FIGS. 8 and 22).
  • The presence or absence of viewpoint conversion, or the viewpoint conversion amount (intensity), may be changed according to the distance from the forklift 10 to the object. For example, the viewpoint conversion processing may be turned off, the viewpoint conversion amount (viewpoint movement distance) may be reduced, or the object may be processed by two-dimensional trapezoidal correction.
  • The two-dimensional video used for the viewpoint conversion may be obtained by extracting an image from only the overlapping area of the two cameras 21 and 22 and performing viewpoint conversion processing on it. Alternatively, stitch processing that joins the images based on their positional relationship may be performed and the processed image used; in that case, viewpoint conversion processing using the three-dimensional distance map may be applied to the central overlapping area, while pixels in the non-overlapping areas are processed by two-dimensional trapezoidal correction.
  • The viewpoint conversion amount may also be reduced. For example, when viewpoint conversion to a higher viewpoint is performed on a photographed image, if the proportion of pixels belonging to blind spot areas is equal to or greater than a predetermined value relative to the total number of displayed pixels, the virtual viewpoint position is limited to a position lower than the set position, or the processing is switched to two-dimensional trapezoidal correction.
  • The image composition unit 309 superimposes the additional image generated by the additional image generation unit 306 on the image generated by the viewpoint conversion unit 308, at a display position corresponding to the position of the object detected by the object position determination unit 305, to generate a composite image.
  • When the display is a HUD, the display position and content of the additional image are calculated and generated so that the additional image is superimposed, in the matching direction and position, on the object (real image) the driver is looking at. When the HUD is configured so that the virtual image distance can be changed, the additional image may be generated at a virtual image distance corresponding to the position and direction of the object.
  • The image output unit 310 outputs the processed image (video) generated by the image composition unit 309, that is, the image after viewpoint conversion and/or superimposition of the additional images, to the display 25 in real time and displays it to the driver.
  • FIG. 10 is an example of a screen 250 displayed on the display 25.
  • The screen shows the video of the area ahead of the forklift 10 taken by the cameras 21 and 22 in a situation where the forklift 10 approaches the loading platform of a truck 95; visible are the truck 95 as an object, a pallet 91 with a load 92 on it, and a worker 96. Additional images 401 to 406 are superimposed on the video.
  • the additional image 401 is an illustration image (animation image) corresponding to the forks 15 and 16.
  • The additional images 402 and 403 are images showing lines extending the forks 15 and 16 forward and the vicinity of the contact position. The driver can thereby recognize the position where the forks 15 and 16 will strike (be inserted).
  • the additional images 404 and 405 are images indicating the distance to the object in front of the forklift 10.
  • the additional images 404 and 405 are also referred to as a distance ladder together with the additional image 403.
  • the additional image 406 shows the distance of the forks 15 and 16 to the upper surface of the loading platform of the truck 95 in the height direction.
  • the additional image 407 is a mark that alerts the driver when a person (worker 96) approaches the front of the forklift 10.
  • a warning sound is emitted from a speaker attached to the side of the display 25 as a notification process.
  • the additional image 407 may be changed in color or blinked to indicate a warning.
  • When the interval between the pair of forks 15 and 16 and the inclination angle of the forks 15 and 16 with respect to the ground can be changed, the shape, size, and orientation of the additional images 401 to 406 may be changed to match changes in the interval and inclination angle.
  • the additional images 403 to 406 are changed according to the tilt angle.
  • The interval and the inclination angle may be obtained by the processing unit 23 based on the images acquired from the cameras 21 and 22, or may be detected by various sensors attached to the forklift 10 (see the modifications described later).
  • As described above, in the first embodiment, a pair of cameras 21 and 22 provided on one fork 15 is used; based on the images acquired by the imaging elements of the cameras 21 and 22, the video of the area ahead of the forklift 10 is processed by superimposing additional images or converting the viewpoint, and the processed video is displayed on the display 25.
  • the driver can confirm the front on the screen displayed on the display 25 even when the front is difficult to see due to loading on the forks 15 and 16.
  • work support, safety warning, etc. can be performed, so that the forklift can be operated more safely.
  • FIG. 11 is a block diagram illustrating a hardware configuration of an image processing apparatus according to the second embodiment and a functional configuration of a processing unit.
  • In the image processing apparatus according to the first embodiment described above (FIG. 2 and elsewhere), the front of the forklift 10 is photographed and the distance is measured using the two cameras 21 and 22. In the second embodiment, forward shooting and distance measurement are performed using one camera 21 and a distance measuring sensor 22b.
  • the distance measuring sensor 22b functions as a “detection sensor” for detecting the distance.
  • The distance measuring sensor 22b is, for example, an area distance measuring device such as a laser LiDAR (Light Detection And Ranging), a TOF (Time of Flight) scanner, or a sonar. The area distance measuring device emits visible light, infrared light, sound waves, or the like from its output unit and measures the distance to an object ahead from the time difference until the energy reflected from the object reaches its input unit. It measures the distance to objects ahead of the forklift 10 at a plurality of points and acquires distance measurement point group data indicating the distribution of distance values.
  • When a laser LiDAR is used as the distance measuring sensor 22b, the emitted pulsed laser is irradiated while scanning the measurement space ahead, and the reflected light is detected. Distance information at each irradiation position is then obtained from the time difference between the emission timing and the light reception timing, and distance measurement point group data over the measurement space is acquired.
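  • The distance arithmetic behind such a time-of-flight measurement is simply half the round trip at the speed of light (a minimal sketch, not tied to any particular sensor):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance from the emission-to-reception time difference."""
    return C * round_trip_s / 2.0

print(round(tof_distance_m(33.4e-9), 2))  # a ~33.4 ns round trip -> ~5.01 m
```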
  • the acquired ranging point group data is sequentially sent to the distance map generation unit 304 and used for various processes.
  • the distance measurement point group data is stored in the storage unit 24 as a distance map.
  • the distance measuring sensor 22b and the camera 21 are disposed at the tip of one fork 15.
  • the measurement space of the distance measuring sensor 22b and the shooting area of the camera 21 are arranged so that part or all of them overlap.
  • the distance measuring sensor 22b and the camera 21 are arranged in the tapered portion s51 at the tip portion of the fork 15 as in the first embodiment (see FIGS. 4 to 9).
  • the position detection sensor 26 may be provided, or the distance map acquired by the external distance measuring sensor 80 and stored in the storage unit 24 may be used.
  • FIG. 12 is a block diagram illustrating a hardware configuration of the image processing apparatus 20 according to the third embodiment.
  • In the third embodiment, one camera 21 and a projector 22c are provided.
  • the projector 22c and the processing unit 23 described below function as a detection sensor for detecting the distance to the object in front of the forklift by cooperating with the camera 21.
  • the projector 22c projects pulsed pattern light toward the front of the forklift.
  • the pattern light is, for example, a plurality of vertical and horizontal line lights or lattice pattern light composed of dot light at predetermined vertical and horizontal intervals. Further, a random dot pattern may be projected as the pattern light.
  • This pattern light irradiation region overlaps part or all of the photographing region of the camera 21.
  • one point of irradiation light may be configured to sequentially scan the imaging region of the camera 21.
  • The projector 22c and the camera 21 are arranged at the tip portion of one fork 15, attached so as to be separated by a predetermined distance (baseline length) in the width direction (Y direction) of the fork 15. As in the first embodiment, the projector 22c and the camera 21 are preferably disposed in the tapered portion s51 at the tip portion of the fork 15 (see FIGS. 4 to 6).
  • The processing unit 23 (distance map generation unit 304) irradiates the pattern light at a predetermined timing and detects the interval or position of each dot light or line light constituting the pattern light from the image captured by the camera 21. Using conversion parameters based on the detected position of the pattern light and the baseline length, the distances at a plurality of pixels of the acquired image are calculated.
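  • Geometrically this is the same triangle as stereo vision, with the projector taking the place of the second camera. A minimal sketch with assumed calibration values:

```python
focal_px = 700.0      # camera focal length in pixels (assumed)
baseline_m = 0.06     # projector 22c to camera 21 separation (assumed)

def depth_from_dot_shift(shift_px: float) -> float:
    """Distance from the observed shift of a projected dot, same form as stereo."""
    return focal_px * baseline_m / shift_px

print(depth_from_dot_shift(8.4))  # an 8.4 px dot shift -> 5.0 m
```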
  • In this way the distance to objects ahead of the forklift is detected, the acquired video is processed by viewpoint conversion and/or superimposition of additional images, and the processed video is displayed on the display 25.
  • the front image of the forklift 10 and the distance to the front object can be obtained.
  • the processed video that adds the image of the distance value based on the acquired distance measurement point group data to the video acquired by the camera is displayed on the display.
  • Although an example using one camera 21 has been shown, the present invention is not limited to this; two cameras 21 and 22 arranged as in the first embodiment may also be used.
  • FIG. 13 is a block diagram illustrating a hardware configuration and a functional configuration of the processing unit according to the first modification.
  • In the first modification, a position detection sensor 26 that acquires the position state of the fork 15 provided with the two cameras 21 and 22 is added to the configuration of the first embodiment.
  • the position detection sensor 26 may be, for example, a sensor that detects the inclination angle (tilt) of the fork 15. Further, the position detection sensor 26 may be a sensor that detects the height of the fork 15 relative to the mast 13, that is, the height relative to the ground. These sensors are composed of an actuator and an optical element, for example. With these sensors, the relative positions of the cameras 21 and 22 provided on the fork 15 with respect to the main body 11 can be detected. Further, the position detection sensor 26 may be an acceleration sensor or a gyro sensor.
  • Acceleration information and turning angular velocity information can be acquired by the acceleration sensor and the gyro sensor, so the relative positions of the cameras 21 and 22 provided on the fork 15 with respect to the main body 11 or the surroundings can be grasped.
  • the relative position mentioned here includes grasping the angle (horizontal or inclined) of the fork 15 (cameras 21 and 22) and the horizontal plane.
  • In FIG. 13, the image processing apparatus 20 applies the position detection sensor 26 to the configuration of the first embodiment; however, the present invention is not limited to this, and the position detection sensor 26 may also be applied to the second and third embodiments.
  • FIGS. 14A and 14B are schematic views showing a state in which the first camera and the second camera are attached to the tip of one sheath fork 19 in the second modification.
  • When the sheath fork 19 is attached to the distal end side of the fork 15 (or the fork 16) as in this modification, the "tip portion of the fork" and the "tip of the fork" refer to the tip portion of the sheath fork 19 and the tip of the sheath fork 19, respectively.
  • the sheath fork 19 is mounted so as to cover the fork 15 and is fixed by a holder (not shown) such as a screw.
  • FIG. 14A is a side view showing a sheath fork 19 attached to the front end side of the fork 15, and FIG. 14B is a plan view.
  • the sheath fork 19 has a tip s11, an upper surface s12, a lower surface s13, a side surface s14, and a tapered portion s151 of the tip portion.
  • The left and right sides of the tapered portion s151 of the sheath fork 19 are each provided with a cylindrical hole, and the cameras 21 and 22 are embedded in the respective holes.
  • the arrangement positions of the cameras 21 and 22 and the effects thereof are the same as those in the first embodiment described with reference to FIGS.
  • The cameras 21 and 22 are connected to the main body 11 of the forklift 10 by cable or wirelessly, and the video signal is transmitted to the processing unit 23.
  • In the case of a cable connection, power is supplied from the main body 11; in the case of a wireless connection, power is supplied by a battery attached together with the cameras 21 and 22.
  • With this configuration as well, the same effects as those of the first embodiment can be obtained.
  • In the second and third embodiments too, a sheath fork 19 may be used instead of the fork 15; that is, the camera 21 and the distance measuring sensor 22b or the projector 22c may be provided at the tip of the sheath fork 19. This also provides the same effects as the second and third embodiments.
  • FIG. 15 is a block diagram illustrating a hardware configuration and a functional configuration of a processing unit according to the third modification.
  • the processing unit 23 includes an optical flow processing unit 320.
  • In the third modification, the image sensor 200 of the camera 21, the position detection sensor 27, and the processing unit 23 (optical flow processing unit 320) function as the detection sensor for detecting the distance to an object.
  • The position detection sensor 27 has the same configuration as the position detection sensor 26 of the first modification. From the position detection sensor 27, position data on the traveling direction and movement amount of the camera 21 while the forklift 10 moves is acquired.
  • The optical flow processing unit 320 acquires distance measurement information using known optical flow processing. Specifically, differences between time-series frames (a plurality of images) from the camera 21 are detected, and the distance value to each object in the image is obtained using the position data from the position detection sensor 27 at that time. This configuration also provides the same effects as each of the above-described embodiments. One possible realization is sketched below.
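  The patent does not specify the optical flow algorithm, so the following minimal sketch uses OpenCV's Farneback dense flow as one possible realization, under the simplifying assumption that the camera translated sideways by a known amount between the two frames (reported by the position detection sensor 27) with negligible rotation; all names are illustrative.

    import cv2
    import numpy as np

    def depth_from_flow(prev_gray, next_gray, cam_shift_m, focal_px):
        # Dense flow between two consecutive grayscale frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)          # flow magnitude per pixel
        mag = np.where(mag > 1e-3, mag, np.nan)     # drop unreliable pixels
        # For pure lateral translation: depth = focal * shift / flow.
        return focal_px * cam_shift_m / mag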
  • FIG. 16 is a block diagram illustrating the hardware configuration of the image processing apparatus, the functional configuration of the processing unit, and the configuration of the HUD according to the fourth modification.
  • FIG. 17 is a schematic diagram showing the configuration of the HUD.
  • the fourth modification has a HUD 25b as a display.
  • The combiner 522 of the HUD 25b is located at the same position as the display 25 in FIG. 1 and lets the front side of the forklift be seen through it. The driver can see the real image ahead through the combiner while images such as the additional images 401 to 407 shown in FIG. 10 are projected onto it as a virtual image by the HUD 25b.
  • The HUD 25b includes a display element 51 having a two-dimensional display surface, a virtual-image projection optical system 52 that includes a mirror 521 and a combiner 522 and that enlarges the image i formed on the display element 51, converts it into a virtual image f, and projects it, and a moving mechanism 53. Note that the mirror 521 may be omitted.
  • the display element 51 may be a liquid crystal, an OLED (Organic Light Emitting Diode), or an intermediate screen. The light reflected by the combiner 522 is guided to the driver's pupil 910 and recognized as a virtual image f.
  • the virtual image projection optical system 52 includes a mirror 521 and a combiner 522, enlarges the image i formed on the display element 51, converts the image i into a virtual image f, and projects the image onto the combiner 522.
  • The moving mechanism 53 moves the mirror 521 or the display element 51 for adjustment in accordance with the sitting height of the driver.
  • When working with the virtual image on the HUD 25b, the driver needs to look alternately at the load, the load shelf, and the fork tip. The virtual-image distance is therefore desirably set so that the fork tip, the load shelf, and the load are all easy to see.
  • With the display light from the display element 51 guided to the pupil 910 of the driver sitting in the driver's seat, the driver can observe the virtual image f as a display image that appears to lie in front of the body of the forklift.
  • the distance to the observed virtual image f is the virtual image distance, and corresponds to the distance between the pupil 910 and the virtual image f in FIG.
  • The virtual-image distance is preferably set according to the size of the forklift 10 or the distance between the tip of the fork 15 and the driver's seat. For example, for a small forklift of about 1.5 tons, the virtual image can be observed without taking the line of sight off the fork tip, so the projection distance is preferably set within a range of 1 m to 3 m.
  • The HUD may also be configured so that the virtual-image projection distance can be changed.
  • a moving mechanism for changing the position of the display element 51 in the optical axis AX direction or a moving mechanism for moving the mirror 521 may be provided, thereby changing the projection distance to the virtual image f.
  • the projection distance is changed according to the distance to the luggage 92 or the pallet insertion port 91a.
  • an intermediate screen may be provided, and the projection distance may be changed by changing the position of the intermediate screen in the optical axis direction.
  • The intermediate screen is a member onto whose surface the image formed on the display surface of the display element is projected; it has a diffusing function like ground glass and is disposed between the display element 51 and the mirror 521.
  • The HUD may also be configured as a 3D-HUD so that virtual images are displayed three-dimensionally at virtual-image distances corresponding to the distances to the actual objects, such as the load ahead, a load shelf, or a worker.
  • In that case, for example, the virtual-image distance is changed by moving the target member (the display element 51 or the mirror 521) with the moving mechanism at a period of several tens of Hz, and the display control unit controls the image formed on the display element in accordance with the movement timing of that member. To the driver it then appears that multiple virtual images with different virtual-image distances are displayed simultaneously.
  • FIG. 18 is a schematic diagram showing a state in which the first and second cameras 21 and 22 are attached to the tip of one sheath fork 19 in the fifth modification.
  • The fifth modification uses the sheath fork 19 as in the second modification, and additionally has a structure that mitigates the impact of a collision.
  • FIG. 18 is a side sectional view showing the sheath fork 19 attached to the front end side of the fork 15.
  • the sheath fork 19 includes a main body 191, a lid 192, a transparent plate 193, an impact relaxation member 194, and a heat conduction member 195.
  • The board unit 40 includes an aluminum plate 41, the first and second cameras 21 and 22, a camera board 42, and an IMU (Inertial Measurement Unit) board 43, disposed on the aluminum plate 41. Some or all of the functions of the processing unit 23 are arranged on the camera board 42.
  • the IMU substrate 43 corresponds to the position detection sensor 27 described above.
  • the lid 192 is fixed to the main body 191 with a bolt 90 or the like, so that the board unit 40 is accommodated inside the space of the main body 191.
  • the main body 191 and the lid 192 are made of steel.
  • the transparent plate 193 is a member that transmits light, and is made of, for example, polycarbonate.
  • the first and second cameras 21 and 22 photograph the outside through the transparent plate 193.
  • The impact relaxation member 194 is made of an elastic body such as silicone rubber or a gelled material.
  • The heat conduction member 195 is made of a flexible, highly heat-conductive material such as aluminum, copper, or carbon, or is a heat pipe, and is attached to the aluminum plate 41 and the main body 191. Heat generated by the electronic components of the board unit 40 is transmitted to the main body 191 through the aluminum plate 41 and the heat conduction member 195 and dissipated there.
  • For example, a carbon-based graphite sheet, a heat-conductive sheet, or a flexible heat pipe may be used as the heat conduction member 195.
  • The impact relaxation member 194 is affixed to the inner surface of the lid portion 192, and its end portions are also in contact with the main body portion 191. Through this impact relaxation member 194, the camera and the electronic components constituting the detection sensor are connected only indirectly to the main body 191. Specifically, in the example of FIG. 18, the entire board unit 40 is covered with the impact relaxation member 194, so that an impact caused by an object colliding with the sheath fork 19 is attenuated before it is transmitted to the electronic components of the board unit 40.
  • In this way, the camera and the detection sensor are protected by the impact relaxation member 194, and the influence on these components of an impact to the sheath can be mitigated.
  • Furthermore, with the flexible heat conduction member 195, vibration and impact on the electronic components can be mitigated while the heat they generate is carried away.
  • In the fifth modification the sheath fork 19 is used, but the present invention is not limited to this; the same configuration may also be applied to the fork 15 shown in FIGS.
  • FIG. 19 is a flowchart showing display processing performed by the image processing apparatus 20 according to the fourth embodiment. Hereinafter, the display process will be described with reference to FIG.
  • Step S101: First, the processing unit 23 (image acquisition unit 301) of the image processing apparatus 20 controls the first and second cameras 21 and 22 to acquire video captured at a predetermined frame rate.
  • Step S102: The processing unit 23 acquires a distance map by processing the two corresponding images acquired from the cameras 21 and 22. This processing is performed by the preprocessing unit 302, the feature point extraction unit 303, and the distance map generation unit 304 described above. That is, after preprocessing the acquired images, a distance map is generated by calculating the distance value of each pixel from the correspondence between the feature points of the two images, using the baseline length. One possible realization of this step is sketched below.
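  The embodiment describes feature-point correspondence using the baseline length; as a hedged, concrete stand-in, the sketch below uses OpenCV's semi-global block matching on an already rectified pair from the cameras 21 and 22 and converts disparity to metric distance. Parameter values are illustrative.

    import cv2
    import numpy as np

    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                   blockSize=5)

    def distance_map(left_gray, right_gray, focal_px, baseline_m):
        # SGBM returns fixed-point disparities scaled by 16.
        disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disp = np.where(disp > 0, disp, np.nan)     # unmatched pixels
        return focal_px * baseline_m / disp         # distance map in meters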
  • Step S103: The processing unit 23 performs viewpoint conversion processing on the two-dimensional video acquired in step S101, using the generated distance map.
  • This processing is performed by the object position determination unit 305, the association unit 307, and the viewpoint conversion unit 308. That is, the positions of the objects ahead are determined in three-dimensional space, and each determined object is associated with the pixels of the two-dimensional image. Then the coordinates, distance, and position of each pixel are converted into the angle and direction seen from a virtual viewpoint position specified in advance, for example one corresponding to the viewpoint of the driver sitting in the cab 12, thereby performing the viewpoint conversion of the image (see the sketch below).
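  The following is a minimal sketch, not the embodiment's exact procedure, of such a viewpoint conversion: each pixel is back-projected with its distance-map depth, moved into a virtual camera whose pose (R, t) is specified in advance, and re-projected. Occlusion handling (z-buffering) is omitted for brevity, and pixels that receive no data remain black, which corresponds to the blind-spot (Null) areas discussed later.

    import numpy as np

    def viewpoint_convert(image, depth, K, R, t):
        # K: 3x3 intrinsics; (R, t): virtual-camera pose relative to the real one.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
        pts = rays * depth.ravel()                 # back-project to 3D points
        proj = K @ (R @ pts + t.reshape(3, 1))     # project into the virtual view
        z = proj[2]
        ok = np.isfinite(z) & (z > 1e-6)
        u2 = np.round(proj[0, ok] / z[ok]).astype(int)
        v2 = np.round(proj[1, ok] / z[ok]).astype(int)
        inb = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
        src = image.reshape(h * w, -1)
        out = np.zeros_like(src)
        out[v2[inb] * w + u2[inb]] = src[np.flatnonzero(ok)[inb]]
        return out.reshape(image.shape)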
  • Step S104: The processing unit 23 creates an additional image. Specifically, the additional image generation unit 306 generates an additional image corresponding to the distance between the object in front of the forklift 10 and the tips of the forks 15 and 16. The processing unit 23 also performs approach prediction; that is, when the fork 15 or the like gets too close to an object and the distance to the object becomes equal to or less than a predetermined value, it determines that the notification condition is satisfied.
  • Step S105: The processing unit 23 performs processing that adds (superimposes) the additional image generated in step S104 onto the video that underwent the viewpoint conversion processing in step S103. Specifically, the image composition unit 309 generates a composite image in which the additional image is superimposed at the display position corresponding to the position of the object. Note that the viewpoint conversion processing of step S103 may be omitted, and the video may be processed only by adding the additional image.
  • Step S106: The image output unit 310 of the processing unit 23 outputs the composite image generated in step S105 to the display 25 in real time and shows it to the driver. For example, a screen 250 as shown in FIG. 10 is displayed in step S106. If it was determined in step S104, as the approach prediction, that the notification condition is satisfied, the processing unit 23 performs a notification process such as outputting a warning sound from a speaker provided around the display 25 or showing a predetermined warning display on the display. For example, when a person approaches within a predetermined range and an approach is predicted, a warning sound is emitted from the speaker attached to the side of the display 25 as the notification process. At this time, the additional image 407 may be changed in color or blinked to indicate the warning. A minimal sketch of this notification decision is given below.
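  The following sketch shows one way the decision of steps S104 and S106 could be expressed; the 2 m and 3 m thresholds and the return labels are illustrative assumptions, since the text only speaks of "a predetermined value" and "a predetermined range".

    def notification(min_distance_m, person_detected,
                     object_threshold_m=2.0, person_threshold_m=3.0):
        # Person within range: beep from the speaker beside the display 25
        # and, e.g., blink the additional image 407.
        if person_detected and min_distance_m <= person_threshold_m:
            return "warn_person"
        # Fork tip too close to any object: warning display on the display 25.
        if min_distance_m <= object_threshold_m:
            return "warn_object"
        return None  # notification condition not satisfied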
  • information on the shortest distance from the tips of the forks 15 and 16 may be output.
  • In that case, the distance information may be output by voice, or indicated by a warning lamp (not shown).
  • distance information for a plurality of grid-like points arranged at predetermined intervals (in the angle of view) on the top, bottom, left, and right of the additional image 402 is displayed.
  • The distance in the YZ plane is displayed as the relative position to the pallet 91.
  • As described above, in this embodiment the pair of cameras 21 and 22 provided on the fork 15 is used; the distance to an object in front of the forklift 10 is measured based on the images acquired by the imaging elements of the cameras 21 and 22, and distance-measurement point cloud data indicating the distribution of distance values is acquired.
  • Processing that adds an image of distance values based on the acquired point cloud data to the video acquired by the camera is performed, and the processed video is displayed on the display.
  • In this way, the driver can easily confirm the situation ahead on the screen displayed on the display 25 even when the view ahead is obstructed by the load on the forks 15 and 16.
  • Moreover, with a processed video that has also undergone the viewpoint conversion processing, the driver can check the area ahead even more easily.
  • FIG. 20 shows a modified screen 251 displayed on the display 25.
  • In this modification an overhead image is added: the screen 251 includes a front-view image 2501 and an overhead image 2502.
  • The front image 2501 is the same as the screen 250 of FIG. 10, shown at reduced size.
  • The overhead image 2502 is generated by the additional image generation unit 306 as a top-viewpoint (bird's-eye) image in accordance with the distance values to the objects, and is displayed side by side with the front image 2501.
  • the overhead image 2502 displays an additional image 409 indicating a distance and an additional image 410 that is an illustration image corresponding to the forks 15 and 16.
  • Note that distance values are obtained only for the surfaces of an object facing the forklift 10 (the front, and in some cases the sides and top); no distance information is obtained for the back side of an object, so there is no display data for the back side.
  • The front side can nevertheless be displayed without knowing the distance values on the back side.
  • Alternatively, the contour of an object existing in the blind-spot area may be generated and superimposed.
  • The broken lines in FIG. 20 show the outlines of the objects (a truck and a worker) in the blind-spot area. By displaying the bird's-eye image 2502 in this way, the driver can correctly grasp the surrounding situation and can drive the forklift more safely.
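  One simple way to realize such a bird's-eye image, sketched below under assumed conventions (points expressed in the forklift frame with X forward, Y lateral, and Z up, as in the drawings; the grid extent and cell size are illustrative), is to project the distance-measurement points onto a horizontal grid and keep the tallest point per cell.

    import numpy as np

    def overhead_image(points_xyz, x_range=(0.0, 10.0),
                       y_range=(-5.0, 5.0), cell=0.05):
        # points_xyz: (N, 3) measured points in the forklift frame.
        w = int((x_range[1] - x_range[0]) / cell)
        h = int((y_range[1] - y_range[0]) / cell)
        img = np.zeros((h, w), dtype=np.float32)
        xi = ((points_xyz[:, 0] - x_range[0]) / cell).astype(int)
        yi = ((points_xyz[:, 1] - y_range[0]) / cell).astype(int)
        ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        for x, y, z in zip(xi[ok], yi[ok], points_xyz[ok, 2]):
            img[y, x] = max(img[y, x], z)   # keep the tallest point per cell
        return img                          # brighter = taller obstacle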
  • FIG. 21A is an image (original image) taken by the cameras 21 and 22, and FIG. 21B is a schematic diagram in which reference numerals are given to the objects in the image of FIG. 21A for the purpose of explanation.
  • The objects 971 to 975 are rectangular-parallelepiped objects; the object 971 is closest to the cameras 21 and 22, followed in order by the objects 972, 973, 974, and 975.
  • FIG. 21C is an image obtained by performing viewpoint conversion with the viewpoint conversion unit 308 using a position higher than the cameras 21 and 22 as the virtual viewpoint position with respect to the original image in FIG.
  • The back side of each object is a blind-spot area that cannot be captured by the cameras 21 and 22; therefore, after the upward viewpoint conversion, the pixel information on the back side of each object is missing (Null).
  • FIG. 21D is a schematic diagram in which reference numerals are given to the image of FIG. 21C. In FIG. 21D, blind-spot areas b1 to b3 appear on the back side of each object. Black pixels may simply be assigned to the blind-spot areas b1 to b3, but for ease of viewing the following pixels may be assigned instead.
  • That is, the texture of the surface that lies above the object forming the blind-spot area and at a greater distance than that object, in other words the color and pattern of the pixels constituting that surface, is assigned to the blind-spot area.
  • Alternatively, using the contour information of the objects in the three-dimensional distance map stored in the storage unit 24, the contour of the object existing in the blind-spot area may be generated and superimposed on the image. Either approach reduces the visual discomfort caused by the blind-spot areas in the display.
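  A minimal sketch of the first fill rule, under the assumption that in the converted image "up" corresponds to "farther away" (so the valid surface just above a blind-spot area belongs to a farther object), is:

    import numpy as np

    def fill_blind_spots(warped, valid):
        # warped: viewpoint-converted image; valid: True where a pixel
        # actually received data during the conversion (False = Null).
        out = warped.copy()
        for x in range(out.shape[1]):
            last_y = -1
            for y in range(out.shape[0]):   # scan from far (top) to near
                if valid[y, x]:
                    last_y = y
                elif last_y >= 0:
                    out[y, x] = out[last_y, x]  # copy the farther surface's texture
        return out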
  • FIG. 22 is a block diagram illustrating a hardware configuration and a functional configuration of the processing unit according to the sixth modification.
  • In the sixth modification, a position detection sensor 26 that acquires posture information of the two cameras 21 and 22 provided on the fork 15 is added to the configuration of the first embodiment.
  • a distance measurement map based on distance measurement point group data acquired by an external distance measurement sensor 80 (see FIG. 8) installed in a building or facility is stored in the storage unit 24.
  • The posture information includes the inclination angle and height with respect to the ground or to the main body 11 of the forklift 10, and the opening angle and interval between the forks 15 and 16.
  • the position detection sensor 26 is a sensor that detects an inclination angle (tilt) of the fork 15 with respect to the ground.
  • the position detection sensor 26 is a sensor that detects the height of the fork 15 relative to the mast 13, that is, the height relative to the ground.
  • These sensors are composed of, for example, an actuator and an optical element, or are encoders provided in a motor of the forklift 10.
  • the position detection sensor 26 can detect the relative positions of the cameras 21 and 22 provided on the fork 15 with respect to the main body 11 by these sensors.
  • the position detection sensor 26 may be an acceleration sensor or a gyro sensor.
  • Acceleration information and turning angular-velocity information can be acquired by the acceleration sensor and the gyro sensor, from which the relative positions of the cameras 21 and 22 provided on the fork 15 with respect to the main body 11 or the surroundings can be grasped.
  • The relative position mentioned here includes the angle (horizontal or inclined) of the fork 15 (and hence of the cameras 21 and 22) with respect to the horizontal plane.
  • With the sixth modification, the same effects as those of the first embodiment can be obtained, and in addition, because the position detection sensor 26 is provided, the processing load on the processing unit 23 for superimposing an additional image or converting the viewpoint can be reduced.
  • Furthermore, by using the distance measurement map obtained from the external distance measurement sensor 80 and stored in the storage unit 24, the distance measurement accuracy for an object in front of the forklift 10 can be further improved. This map also makes it possible to obtain shape information, such as the contour of the back side of an object, that cannot be captured by the cameras 21 and 22.
  • FIG. 23 is an example of the proximity screen displayed on the display in the seventh modified example.
  • In step S104 of FIG. 19 described above, when the distance to the object becomes equal to or less than a predetermined value, it is determined that the notification condition is satisfied, and warning processing such as a warning display is performed.
  • In the seventh modification, when the distance from the tip of the fork 15 to the object becomes equal to or less than a predetermined distance, for example 2 m, the processing unit 23 generates a proximity screen 252 as shown in FIG. 23 instead of the warning processing, and switches the display to this proximity screen 252.
  • a proximity screen 252 shown in FIG. 23 shows a state immediately before the tip of the fork 15 is inserted into the insertion port of the pallet 91 on which the luggage 92 is placed.
  • Unlike the screen 250 of FIG. 10 displayed up to that point, the proximity screen 252 displays the object close to the fork 15 together with an additional image indicating the virtual position of the tip of the fork 15 relative to that object. Specifically, on the proximity screen 252 the distance ladder of the additional images 403, 404, and 405 is removed, and an additional image 411 indicating the virtual tip position of the fork 15 is displayed instead.
  • When the fork 15 comes close to the pallet 91, the positional relationship between the insertion port 91a of the pallet 91 and the tip of the fork 15 becomes the main concern, rather than the distance to the object ahead, so switching to the proximity screen 252 is effective.
  • In the illustrated state, the tip of the fork 15 is located slightly to the left of the center of the insertion port 91a; by shifting the tip of the fork 15 slightly to the right, the driver can insert the fork 15 into the center of the insertion port 91a.
  • The present invention is not limited to this; two virtual positions may be displayed.
  • In this way, by switching to the proximity screen 252, the driver can easily grasp whether the tip of the fork 15 is in a position where it can enter the insertion port 91a of the pallet 91. A minimal sketch of this screen-switching rule follows.
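  The screen selection itself reduces to a simple rule; in the sketch below, the 2 m threshold is taken from the example above and the return labels are illustrative.

    def select_screen(fork_tip_distance_m, switch_at_m=2.0):
        # Below the threshold, drop the distance ladder (403-405) and show
        # the proximity screen 252 with the virtual fork-tip image 411.
        if fork_tip_distance_m <= switch_at_m:
            return "proximity_screen_252"
        return "normal_screen_250"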
  • FIG. 24 is a block diagram illustrating the hardware configuration of the image processing apparatus according to the eighth modification, the functional configuration of the processing unit, and the configuration of the HUD.
  • FIG. 25 is an example of a virtual image related to the package content information, empty shelf information, and cargo handling procedure information displayed on the HUD.
  • The eighth modification has the HUD 25b as a display.
  • the HUD 25b has the same configuration as the HUD 25b shown in FIG.
  • The image processing apparatus 20 of the eighth modification is connected to an external physical distribution system 60 via a network. The system 60 stores location information of packages in the building, content information of the packages, empty-shelf information indicating the availability of shelves, cargo-handling procedure information indicating the procedures for handling cargo, and the like.
  • the driver can directly see the floor, the shelf, the luggage 92, and the pallet 91, which are real images (objects), through the combiner 522.
  • the virtual images f11 to f13 are projected onto the combiner 522 by the HUD 25b, and the driver can see the virtual image at a predetermined virtual image distance.
  • This virtual image distance is set to a value close to the measured distance to the real image. As long as the virtual image is displayed in a three-dimensional manner, different virtual image distances may be set according to the distance to each target real image.
  • The virtual image f11 shows package content information ("product number", "raw material"), the virtual image f12 shows empty-shelf information ("space for raw material BB"), and the virtual image f13 shows cargo-handling procedure information ("loading ranking").
  • These pieces of information are information acquired from the external logistics system 60.
  • FIG. 25 shows a display example in the HUD, but the present invention is not limited to this, and the display may be applied to the liquid crystal display 25 used in the first embodiment and the like. For example, additional images relating to the package content information, empty shelf information, and cargo handling procedure information corresponding to the virtual images f11 to f13 are added to the video displayed on the liquid crystal display 25 used in the first embodiment and the like.
  • With the HUD, the information in the virtual image can be understood without refocusing the eyes, so the recognition time can be shortened.
  • the degree of fatigue can be reduced by reducing the burden on the eyes.
  • As a result, the driver can carry out the cargo-handling work more safely and smoothly without being confused.
  • The configuration of the image processing apparatus 20 for a forklift described above is the main configuration used to describe the features of the embodiments; it is not limited to the configurations described, and various modifications can be made within the scope of the claims. The configuration of a general image processing apparatus is not excluded either.
  • In the embodiments above, the display 25 attached to the forklift 10 is used, but the display is not limited to this; a display may also be provided in a management office in the work space where the forklift is used, and the video signal transmitted wirelessly or otherwise by the processing unit 23 may be shown on that display. In this way, the work situation can be supervised from the management office, and work records can be kept.
  • the means and method for performing various processes in the image processing apparatus can be realized by either a dedicated hardware circuit or a programmed computer.
  • the program may be provided by a computer-readable recording medium such as a USB memory or a DVD (Digital Versatile Disc) -ROM, or may be provided online via a network such as the Internet.
  • the program recorded on the computer-readable recording medium is usually transferred to and stored in a storage unit such as a hard disk.
  • the program may be provided as a single application software, or may be incorporated in the software of the apparatus as a function of the image processing apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Mechanical Engineering (AREA)
  • Forklifts And Lifting Vehicles (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The purpose of the invention is to allow the area in front of a forklift to be checked while in a loaded state and the distance to an object to be measured stably. An image processing device 20 is provided with, at the distal end of one fork 15, cameras 21 and 22 that photograph the area in front of the fork 15, and a detection sensor for detecting the distance to an object ahead. Based on the detection information from the detection sensor, the image processing device processes the images acquired by the cameras 21 and 22 and displays them on a display 25.

Description

Image processing apparatus for forklift, and control program

The present invention relates to an image processing apparatus mounted on a forklift, and to a control program.

A forklift moves with its load carried on a pallet on the forks. For example, in a seat-type forklift in which the driver sits in the cab facing the traveling direction, a blind spot is created ahead when a load is stacked on the front forks higher than the driver's line of sight. When traveling, the driver therefore moves the forklift backward. At the time of cargo handling, however, the forklift must move forward, and the driver has to lean out to the side to see.

There is also a demand, in loading and unloading work at high shelves where the driver cannot easily see ahead of the forks, to view the area in front of the forks, or the front side of the pallet with the forks inserted. For such problems, the forklift disclosed in Patent Document 1 is provided with a camera that photographs the area in front of the fork, and the photographed image is displayed on a display.

There is also a technique that obtains a forward view by providing a camera on each of the left and right forks of a forklift, calculates the distance to an object ahead by stereo vision, and displays the calculation result (Patent Document 2).

Patent Document 1: JP 2003-246597 A; Patent Document 2: JP 2013-86959 A

When the two forks of a forklift are attached to the finger bar, a certain play is generally provided between them intentionally. For this reason, the two forks may move independently of each other due to vibration during traveling, or may move differently from the finger bar to which they are attached or from the mast portion that supports them.

In Patent Document 1, one camera is provided on the fork. In this case, although the area ahead can be seen, the distance to an object ahead cannot be known just by looking at the image.

In this regard, the technique disclosed in Patent Document 2 can measure distance by stereo vision, with a camera attached to each of the two forks. However, as described above, the interval between the two forks is not stable, so the interval between the two cameras (the baseline length) is not stable, and the two forks move independently because of the play in their mounting; in many cases the parallelism of the two forks cannot be ensured either. This makes it impossible, or very difficult, to maintain accuracy in the stereo vision calculation.

Furthermore, in the technique disclosed in Patent Document 2, the cameras are arranged at the base of each fork, so in the loaded state the forward field of view becomes very narrow and cannot be sufficiently secured.

In the technique disclosed in Patent Document 1, the camera is provided at the tip of the fork. In the loaded state the fork tip, and hence the camera, is generally close to the ground, and the image from the camera has an angle of view looking up from below, so even if that image is displayed, it is difficult for the driver to grasp the surrounding situation.

The present invention has been made in view of the above circumstances. A first object is to provide an image processing apparatus for a forklift with which the area ahead can be checked in the loaded state and the distance to an object can be measured stably.

A second object is to provide, in a forklift, a safe working environment in which the situation ahead can be checked easily even in the loaded state.

The above objects of the present invention are achieved by the following means.
(1) An image processing apparatus used for a forklift, comprising:
a camera that is provided at a tip portion of one fork among a plurality of forks supported on the front side of the forklift so as to be raised and lowered, and that photographs the area in front of the forklift;
a detection sensor that is provided at the tip portion of the fork on which the camera is provided, for detecting the distance to an object in front of the forklift;
a processing unit that processes the video acquired by the camera, based on detection information of the detection sensor; and
a display that displays the processed video produced by the processing unit.

(2) The image processing apparatus according to (1) above, wherein a first imaging element and a second imaging element are provided on the one fork so that at least parts of their respective imaging regions overlap; at least one of the first and second imaging elements functions as part of the camera, and both of the first and second imaging elements function as the detection sensor; and the processing unit detects the distance to an object in front of the fork based on the images acquired from both the first and second imaging elements, and processes the video based on the detected distance.

(3) The image processing apparatus according to (1) or (2) above, further comprising a projector that emits light, or two-dimensional pattern light, toward the front of the forklift, wherein the projector and the processing unit also function as the detection sensor, and the processing unit detects the distance to an object in front of the forklift based on the video from the camera that has captured the light emitted by the projector or the pattern light, and processes the video based on the detected distance.

(4) The image processing apparatus according to (1) above, wherein the detection sensor is a distance measuring sensor that detects the distance to an object in front of the forklift and acquires distance-measurement point cloud data of a plurality of points.

(5) The image processing apparatus according to any one of (1) to (4) above, wherein the camera and the detection sensor are provided on a tapered portion at the tip of the fork, the tapered portion becoming gradually narrower toward the tip in top view and/or gradually thinner toward the tip in side view owing to an inclined lower surface.

(6) The image processing apparatus according to (1) above, wherein a first imaging element and a second imaging element are provided on the one fork, with the area in front of the forklift as their imaging region, so that at least parts of their respective imaging regions are shared; at least one of the first and second imaging elements functions as part of the camera, and both function as the detection sensor; the processing unit detects the distance to an object in front of the forklift based on the images acquired from both the first and second imaging elements; and the first and second imaging elements are arranged on the left and right sides of a tapered portion at the tip of the fork, the tapered portion becoming gradually narrower toward the tip in top view and gradually thinner toward the tip in side view owing to an inclined lower surface.

(7) The image processing apparatus according to any one of (1) to (6) above, further comprising a storage unit, wherein the processing unit generates, based on the video from the camera and the detection information of the detection sensor, a distance map, which is a group of distance-measurement points indicating the positions or shapes of objects in the work space in which the forklift operates, and accumulates the distance map in the storage unit.

(8) The image processing apparatus according to (7) above, wherein the processing unit corrects part of the data on the distance to an object, detected by the detection sensor or from the detection information of the detection sensor and the video from the camera, using the distance map accumulated in the storage unit.

(9) The image processing apparatus according to any one of (1) to (8) above, further comprising a position detection sensor that acquires the position state of the fork, wherein the processing unit acquires, via the position detection sensor, the position state of the fork on which the camera is provided.

(10) The image processing apparatus according to any one of (1) to (9) above, wherein the processing unit causes the display to display, as the processed video, a video in which additional information corresponding to the distance to an object ahead has been added to the video acquired by the camera.

(11) The image processing apparatus according to any one of (1) to (10) above, wherein the processing unit causes the display to display, as the processed video, a video obtained by converting the viewpoint of the video acquired by the camera.

(12) The image processing apparatus according to any one of (1) to (11) above, wherein the camera performs exposure using the central portion of its angle of view.

(13) The image processing apparatus according to any one of (1) to (12) above, wherein the display is a head-up display that projects a virtual image onto a combiner attached to the forklift.

(14) The image processing apparatus according to (13) above, wherein the combiner is disposed at a position through which the front side of the forklift can be seen, and the virtual-image projection distance of the head-up display is set in the range of 50 cm to 20 m.
(15) The image processing apparatus according to any one of (1) to (14) above, wherein the camera and the detection sensor provided at the tip portion of the fork are attached to the main body of the fork via an impact relaxation member.

(16) The image processing apparatus according to (15) above, wherein at least some of the electronic components constituting the camera and the detection sensor are connected to the main body of the fork via a heat conduction member made of a flexible, highly heat-conductive material.

(17) An image processing apparatus used for a forklift, comprising:
a camera that photographs the area in front of the forklift;
a detection sensor for measuring the distance to an object in front of the forklift and acquiring distance-measurement point cloud data indicating the distribution of distance values;
a processing unit that performs processing for adding an image of distance values based on the acquired distance-measurement point cloud data to the video acquired by the camera; and
a display that displays the processed video produced by the processing unit.

(18) The image processing apparatus according to (17) above, wherein the camera includes an imaging element having sensitivity in the visible light region.

(19) The image processing apparatus according to (17) or (18) above, wherein the camera is installed, so as to photograph the area ahead, on a fork supported on the front side of the forklift so as to be raised and lowered.

(20) The image processing apparatus according to (19) above, wherein the camera performs exposure using the central portion of its angle of view.

(21) The image processing apparatus according to any one of (17) to (20) above, wherein the processing unit further performs, as the processing, viewpoint conversion processing on the video based on the distance-measurement point cloud data.

(22) The image processing apparatus according to (21) above, further comprising a position detection sensor that acquires posture information of the camera, wherein the processing unit performs the viewpoint conversion processing using the posture information acquired from the position detection sensor.

(23) The image processing apparatus according to (21) or (22) above, further comprising a storage unit, wherein the processing unit creates a three-dimensional distance map using the distance-measurement point cloud data acquired by the detection sensor and stores it in the storage unit.

(24) The image processing apparatus according to (23) above, wherein the three-dimensional distance map stored in the storage unit reflects drawing data relating to the building or facility in which the forklift is used, distance-measurement point cloud data obtained from sensors installed in the building, position information of other vehicles, and/or position information of packages acquired from the physical distribution information system used in the building.

(25) The image processing apparatus according to (24) above, wherein the three-dimensional distance map includes position information of floor surfaces, wall surfaces, windows, or lighting devices relating to the building or facility.
(26) The image processing apparatus according to any one of (21) to (25) above, wherein the viewpoint conversion processing is viewpoint conversion processing in which the viewpoint position of the driver sitting in the cab of the forklift is used as the virtual viewpoint position, viewpoint conversion processing in which a position higher than the driver's viewpoint position is used as the virtual viewpoint position, or viewpoint conversion processing in which a position away from the forklift is used as the virtual viewpoint position.

(27) The image processing apparatus according to (26) above, wherein the viewpoint conversion processing in which the driver's viewpoint position is used as the virtual viewpoint position is viewpoint conversion processing by keystone correction according to the angle or height of the camera with respect to the ground, or viewpoint conversion processing using the distance-measurement point cloud data or the three-dimensional distance map stored in the storage unit.

(28) The image processing apparatus according to (26) above, wherein the viewpoint conversion processing in which a position higher than the driver's viewpoint position is used as the virtual viewpoint position is viewpoint conversion processing using the distance-measurement point cloud data or the three-dimensional distance map stored in the storage unit.

(29) The image processing apparatus according to (26) or (28) above, wherein, in the viewpoint conversion processing in which a position higher than the driver's viewpoint position is used as the virtual viewpoint position, or in the viewpoint conversion processing in which a position away from the forklift is used as the virtual viewpoint position, for a blind-spot area of the camera, the texture of the surface of an object that, within the camera's angle of view, lies above the object forming the blind-spot area and at a greater distance than that object is arranged in the blind-spot area.

(30) The image processing apparatus according to (26) or (28) above, wherein, in the viewpoint conversion processing in which a position higher than the driver's viewpoint position is used as the virtual viewpoint position, or in the viewpoint conversion processing in which a position away from the forklift is used as the virtual viewpoint position, for a blind-spot area of the camera, the contour of the object existing in the blind-spot area is superimposed on the blind-spot area using the contour information of the object in the three-dimensional distance map stored in the storage unit.

(31) The image processing apparatus according to any one of (21) to (30) above, wherein the presence or absence, or the strength, of the viewpoint conversion processing is changed according to the distance to the object.

(32) The image processing apparatus according to any one of (17) to (20) above, wherein the display is a transparent screen or a head-up display attached to the forklift so that the area in front of the forklift can be seen through it, and the processing unit recognizes objects in front of the forklift, generates additional images corresponding to the distances and/or directions to the recognized objects, and displays the generated additional images on the transparent screen or the head-up display in a manner superimposed on the respective objects.

(33) The image processing apparatus according to any one of (17) to (32) above, wherein the processing unit recognizes an object in front of the forklift and, as the processing, generates an additional image corresponding to the type of the recognized object, or to the distance and position of the object, and adds it to the video.

(34) The image processing apparatus according to (33) above, wherein, when the processing unit recognizes a pallet as the object, it determines the inclination with respect to the pallet from the shape of the insertion port of the pallet, and generates the additional image according to the determined amount of inclination of the pallet's horizontal plane.

(35) The image processing apparatus according to any one of (32) to (34) above, wherein the additional image generated by the processing unit includes at least one of content information of packages acquired from the physical distribution information system used in the building in which the forklift is used, empty-shelf information indicating the availability of shelves, and cargo-handling procedure information indicating the procedure for handling cargo.

(36) The image processing apparatus according to any one of (17) to (35) above, wherein the processing unit generates an overhead image from an upper viewpoint according to the distance to the object, and additionally displays the generated overhead image on the display.

(37) The image processing apparatus according to any one of (17) to (36) above, wherein the processing unit recognizes an object in front of the forklift and, when the distance from the forklift or from the fork tip of the forklift becomes equal to or less than a predetermined value, issues a warning or switches the display to a proximity screen.

(38) The image processing apparatus according to any one of (17) to (37) above, wherein the processing unit recognizes an object in front of the forklift and outputs information on the shortest distance from the fork tip to the object recognized in the image of distance values.

(39) A control program executed by a computer that controls an image processing apparatus used for a forklift, the image processing apparatus comprising a camera that photographs the area in front of the forklift and a detection sensor for measuring the distance to an object in front of the forklift and acquiring distance-measurement point cloud data indicating the distribution of distance values, the control program causing the computer to execute a process comprising:
a step (a) of acquiring a video with the camera;
a step (b) of acquiring distance-measurement point cloud data with the detection sensor;
a step (c) of performing processing for adding an image of distance values based on the acquired distance-measurement point cloud data to the video acquired by the camera; and
a step (d) of displaying the processed video on a display.

(40) The control program according to (39) above, wherein, in the step (c), viewpoint conversion processing is further performed on the video as the processing, based on the distance-measurement point cloud data.

(41) The control program according to (39) or (40) above, wherein the process further comprises a step (e) of recognizing an object in front of the forklift, and, in the step (c), an additional image corresponding to the type of the recognized object, or to the distance and position of the object, is generated and added to the video as the processing.
According to the first aspect of the invention, a camera that photographs the area in front of the fork and a detection sensor for detecting the distance to an object ahead are provided at the tip portion of one fork, and the video acquired by the camera is processed based on the detection information of the detection sensor and displayed on a display. In this way, even when the view ahead is obstructed by the load on the fork, the driver can check the area ahead on the screen shown on the display, and the distance to an object can be measured stably.

According to the second aspect of the invention, the apparatus comprises a camera that photographs the area in front of the forklift and a detection sensor for measuring the distance to an object in front of the forklift and acquiring distance-measurement point cloud data indicating the distribution of distance values; processing that adds an image of distance values based on the acquired point cloud data to the video acquired by the camera is performed, and the processed video is displayed on a display. In this way, even when the view ahead is obstructed by the load on the forks, the driver can easily check the situation ahead on the screen shown on the display, and a safe working environment can be provided.
A side view showing the appearance of a forklift.
A block diagram showing the hardware configuration of the image processing apparatus according to the first embodiment and the functional configuration of the processing unit.
A schematic diagram showing a state in which the first and second cameras are attached to the tip portion of one fork.
An enlarged view of the fork.
A cross-sectional view taken along the line A-A' of FIG. 4(a).
An enlarged view of a fork of another example.
A schematic diagram explaining the horizontal angle of view of the first and second cameras.
A schematic diagram explaining the vertical angle of view of the first and second cameras.
A schematic diagram explaining the mounting position and angle of view of the camera.
An example of the display screen shown on the display.
A block diagram showing the hardware configuration of the image processing apparatus according to the second embodiment and the functional configuration of the processing unit.
A block diagram showing the hardware configuration of the image processing apparatus according to the third embodiment.
A block diagram showing the hardware configuration of the image processing apparatus according to the first modification and the functional configuration of the processing unit.
A schematic diagram showing a state in which the first and second cameras are attached to the tip portion of one sheath fork.
A block diagram showing the hardware configuration of the image processing apparatus according to the third modification and the functional configuration of the processing unit.
A block diagram showing the hardware configuration of the image processing apparatus according to the fourth modification, the functional configuration of the processing unit, and the configuration of the HUD.
A schematic diagram showing the configuration of the HUD.
A schematic diagram showing a state in which the first and second cameras are attached to the tip portion of one sheath fork in the fifth modification.
A flowchart showing the display processing executed by the image processing apparatus according to the fourth embodiment.
A modification of the display screen, to which an overhead image is added.
A diagram explaining the processing of blind-spot areas caused by viewpoint conversion.
A block diagram showing the hardware configuration of the image processing apparatus according to the sixth modification and the functional configuration of the processing unit.
An example of the proximity screen shown on the display in the seventh modification.
A block diagram showing the hardware configuration of the image processing apparatus according to the eighth modification, the functional configuration of the processing unit, and the configuration of the HUD.
An example of virtual images relating to package content information, empty-shelf information, and cargo-handling procedure information shown on the HUD in the eighth modification.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the description of the drawings, identical elements are given identical reference numerals, and duplicate descriptions are omitted. The dimensional ratios in the drawings are exaggerated for convenience of explanation and may differ from the actual ratios. In the drawings, the vertical direction is the Z direction, the traveling direction of the forklift is the X direction, and the direction orthogonal to both is the Y direction.
(Forklift)
FIG. 1 is a side view showing the appearance of a forklift. The forklift 10 has a main body 11, a cab 12, a mast 13, a finger bar 14, a pair of forks 15 and 16, and a head guard 17. A pallet 91 and a load 92 on the pallet 91 are carried on the forks 15 and 16. The mast 13, which can be extended and retracted in the vertical direction, is provided at the front of the main body 11; the forks 15 and 16 are supported by the finger bar 14 and attached to the mast 13 via the finger bar 14 so as to be vertically movable. As the finger bar 14 moves up and down along the mast 13 via a chain (not shown) attached to the mast 13 and a wheel, the positions of the forks 15 and 16 are controlled in the vertical direction. The inclination angle (tilt) of the forks 15 and 16 with respect to the ground (traveling surface) can be changed within a predetermined range by a hydraulic cylinder (not shown) connected to the mast 13. The opening angle and the spacing between the forks 15 and 16 may also be changeable within a predetermined range by a hydraulic cylinder (not shown) inside the finger bar 14. The forks 15 and 16 are generally made of hard metal.
(Image Processing Apparatus)
FIG. 2 is a block diagram showing the hardware configuration of the image processing apparatus according to the first embodiment and the functional configuration of its processing unit. FIG. 3 is a schematic diagram showing a state in which the first and second cameras are attached to the tip portion of one fork 15.
The image processing apparatus 20 has a first camera 21, a second camera 22, a processing unit 23, a storage unit 24, and a display 25, and these components are mounted on the forklift 10.
As shown in FIG. 2, the first and second cameras 21 and 22 each include an imaging element 200 (first and second imaging elements) having sensitivity in the visible light region, such as a CCD or CMOS, and an optical system such as a lens; they photograph the area in front of the forklift 10 and acquire images (video). The imaging areas of the first and second cameras 21 and 22 overlap at least partially. In the first embodiment, the imaging elements 200 of both cameras 21 and 22 cooperate with the processing unit 23 to detect the distance to objects, and thus also function as the detection sensor for generating ranging point cloud data.
(Cameras 21 and 22)
As shown in FIG. 3, in the first embodiment, two cameras 21 and 22 for stereoscopic viewing (also called compound-eye viewing) are attached to the tip portion of one fork 15 of the two forks 15 and 16 of the forklift 10, so that the area in front of the main body 11 becomes the imaging region. In the width direction (Y direction) of the fork 15, the two cameras 21 and 22 are separated by a predetermined distance (baseline length). In the example shown in the figure, the two cameras are attached to the left fork 15, but they may instead be attached to the right fork 16. The cameras 21 and 22 are connected to the processing unit 23 by a cable (not shown) or wirelessly, and the video signal is transmitted to the processing unit 23.
Next, the mounting positions of the cameras 21 and 22 on the fork 15 will be described with reference to FIGS. 4(a) to 4(c) and FIG. 5. FIGS. 4(a) to 4(c) are enlarged views of the tip side of the fork 15 (also called the "claw" or "blade"), and FIG. 5 is a cross-sectional view taken along line A-A' of FIG. 4(a); in FIG. 5, the contour of the upper surface s2 of FIG. 4(b) is indicated by a broken line.
To obtain a wider angle of view, the cameras 21 and 22 are preferably arranged on the side surface or the lower surface near the boundary between the straight portion of the side surface of the fork 15 and the tip protrusion (the tip s1 described later). More specifically, the cameras 21 and 22 are preferably arranged on the tapered portion s51 described below.
FIG. 4(a) is a side view of the fork 15 with the two cameras 21 and 22 attached, FIG. 4(b) is a plan view, and FIG. 4(c) is a front view seen from the tip side of the fork 15.
The fork 15 has a tip s1, an upper surface s2, a lower surface s3, side surfaces s4, and a tapered portion s51 at the tip portion. The tip s1 is a plane extending in the YZ plane. Here, the "tip portion" includes not only the tip s1 but also its surrounding portion; for example, a range of twenty-odd centimeters from the tip s1 in the X direction is included (see FIG. 9 described later). This surrounding portion further includes the tapered portion s51. The tapered portion s51 is formed by tapered surfaces whose width gradually narrows toward the tip s1 in top view, as shown in FIG. 4(b), and whose thickness gradually decreases toward the tip s1 as the lower surface s3 slopes in side view, as shown in FIG. 4(a). The tip s1 may be formed as a curved surface rather than a plane.
Cylindrical holes are provided on the left and right sides of the tapered portion s51 of the fork 15, and the cameras 21 and 22 are embedded in these holes. Arranging the cameras 21 and 22 so that the front of each lens protrudes slightly from the outer peripheral surface of the tapered portion s51 is preferable for securing a wide angle of view, but from the viewpoint of preventing damage when the fork 15 strikes the floor or other surfaces during use, it is more preferable to place them inward of the opening plane of the cylindrical holes.
When an object is viewed in stereo (stereoscopically) for distance measurement, the imaging areas of the two cameras 21 and 22 must overlap. To secure a wide field of view (angle of view) and to make more of the areas overlap, the cameras 21 and 22 are preferably provided on the tapered portion s51 rather than on the side surface s4 or the lower surface s3 proper. As shown in FIG. 5, the two cameras 21 and 22 arranged on the tapered portion s51 can secure a wide angle of view toward the front of the fork 15.
FIGS. 6(a) to 6(c) are enlarged views of a fork 15 according to another example. The fork 15 shown in FIGS. 6(a) to 6(c) is 10 mm thick and 100 mm wide and has a tapered portion s51 whose width gradually narrows toward the tip s1 in top view. The tapered portion s51 has a radius R of 60 mm and extends 40 mm from the tip s1 in the X direction and 40 mm from the side surface s4 in the Y direction. The first and second cameras 21 and 22 are provided on this tapered portion s51.
In the height direction (Z direction), the cameras 21 and 22 are preferably separated from the upper surface s2 and the lower surface s3 by at least 2 mm each. If the fork 15 is 10 mm thick as described above, the cameras 21 and 22 are preferably sized and positioned so that both fall within a range of 2 to 8 mm from the lower surface. In general, during loading work the fork 15 may be brought into deliberate contact or collision with the floor or a load; this arrangement prevents the cameras 21 and 22 from striking the floor or the load directly. In addition, the front surfaces of the lenses of the cameras 21 and 22 are preferably placed inward of the tip-side surfaces in top view, that is, inward of the tip s1 and the surface of the tapered portion s51. With this arrangement, when the fork is (deliberately) brought into contact with another object from the front, the cameras 21 and 22 are prevented from striking that object directly.
(Angle of View)
The angle of view will now be described with reference to FIGS. 7 and 8. FIG. 7 is a schematic diagram explaining the horizontal angle of view. FIG. 7 uses the fork 15 with the shape shown in FIGS. 6(a) to 6(c) as an example, but the explanation applies to any fork shape, such as that of FIGS. 4(a) to 4(c) and FIG. 5 (the same applies to FIG. 8). Ideally, the angle of view of the first and second cameras 21 and 22 is 180 degrees in the horizontal direction, centered on the front. However, placing the cameras 21 and 22 at the tip s1 is difficult when impacts from collisions with other objects are considered. As the minimum horizontal angle of view, at least 44 degrees (a half angle of at least 22 degrees toward the inside for each camera) is preferably secured so that an object 2 m wide can be photographed (viewed in stereo) 5 m in front of the fork 15. Arranging the cameras 21 and 22 on the tapered portion s51 secures an angle of view at or above this minimum. The basis for the 2 m width is that in a large forklift the spacing between the two forks 15 and 16 is about 2 m, so the region the forks will strike (the additional image 402 described later) can at minimum be viewed in stereo.
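As a quick check of this figure under a simple pinhole-camera assumption (not stated in this form in the specification), the inner half angle needed to cover a 2 m wide target at a 5 m working distance is:

\[
\theta = \arctan\!\left(\frac{2\,\mathrm{m}}{5\,\mathrm{m}}\right) \approx 21.8^{\circ}
\quad\Rightarrow\quad
2\theta \approx 43.6^{\circ} \approx 44^{\circ}
\]

which agrees with the 44-degree minimum stated above.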
FIG. 8 is a schematic diagram explaining the vertical angle of view. In the vertical direction, it is ideally preferable that a rack 5 m high located 3 m in front of the forklift can be photographed. That is, the vertical angle of view is preferably 120 degrees (a half angle of 60 degrees above horizontal) so that, with the fork 15 positioned just above the ground, a range up to 5 m high at 3 m ahead can be photographed.
As the minimum vertical angle of view, as in the horizontal direction, at least 44 degrees (a half angle of at least 22 degrees for each camera) is preferably secured so that an object 2 m high can be photographed 5 m in front of the fork 15. The basis for 2 m high at 5 m ahead is that the overall height of the small forklifts widely used indoors is 2 m; it can thus be confirmed that neither the load nor the top of the forklift will contact any object ahead even when the forklift advances while loaded to its own height.
FIG. 9 shows a state in which the fork 15 has been inserted through an insertion opening of a pallet 91 and the pallet 91 rests on the fork 15. The figure assumes that the insertion position of the fork 15 is shifted to one side by 50% from the center (shown by a broken line) of the insertion opening on one side of the pallet 91. To secure a half angle of view of 30 degrees in the horizontal plane (XY plane) even at such a 50%-shifted position, the tip positions of the cameras 21 and 22 are preferably placed within 14 cm of the pallet edge.
Given a standard pallet length of 110 cm and a standard fork length of 122 cm, the tip s1 of the fork 15 protrudes 12 cm beyond the pallet edge. Therefore, the cameras 21 and 22 are preferably arranged so that the entire front surface of each lens lies within 26 cm (14 cm + 12 cm) of the leading edge (tip s1) of the fork 15 in the X direction.
Thus, in this embodiment, by arranging the two cameras 21 and 22 on one hard (rigid) fork 15, the relative positions of the two cameras remain constant at all times. The baseline length and parallelism between the cameras 21 and 22 can therefore always be kept constant, and when distance measurement is performed from the images of the two cameras 21 and 22 as described later, it can be performed stably and with high accuracy.
Furthermore, by arranging the cameras 21 and 22 on the tapered portion s51 at the tip portion of the fork 15, the angle of view can be made wider than when they are arranged on the flat lower surface s3 or side surface s4, so a wider area can be photographed.
Referring again to FIG. 2, the processing unit 23 and related components will be described. The processing unit 23 includes a CPU (Central Processing Unit) and memory, and performs the various controls of the entire image processing apparatus 20 by having the CPU execute a control program stored in the memory. The functions performed by the processing unit 23 are described later.
The storage unit 24 is a hard disk or semiconductor memory and stores large volumes of data. The storage unit 24 also stores a three-dimensional distance map, described later, and accumulates or updates the distance maps sent from the processing unit 23. The storage unit 24 may be mounted on the forklift 10, or all or part of it may be provided in an external file server. Providing part of the storage unit 24 in an external device is particularly useful when using ranging maps from the external ranging sensor 80 described later (see FIGS. 8 and 22). Data transmission to and from the external file server is performed via a LAN by a wireless communication unit provided in the image processing apparatus 20.
The three-dimensional distance map may also reflect drawing data on the building or facilities, such as a warehouse or factory, in which the forklift 10 is used, that is, the work space in which the forklift travels. This drawing data includes, for example, position information on floors, walls, windows, and lighting equipment. The three-dimensional distance map may also include position information on other vehicles, such as other forklifts, traveling in the building, and may include position information on loads 92 acquired from an external logistics system (see FIG. 24 described later) used in the building to manage the distribution of loads inside it.
For example, this logistics system has a server connected to the image processing apparatus 20 over a network. The server stores position information on loads in the building, content information on the loads, empty-shelf information indicating shelf availability, cargo-handling procedure information indicating handling procedures, and the like. For example, an IC tag is attached to each load 92 or to the pallet 91 carrying it, and the logistics system can grasp the position of each load 92 through this IC tag. The position information of other vehicles (including forklifts) operating in the building may be grasped and acquired from the signals of the external ranging sensor 80, or the image processing apparatus 20 may acquire it directly through P2P (peer-to-peer) communication with communication units mounted on the other vehicles.
The display 25 is attached to the frame supporting the head guard 17 in front of the driver, as shown in FIG. 1, and displays the video generated and processed by the processing unit 23 as described below. The processed video is, for example, video that has undergone viewpoint conversion of the images acquired by the cameras 21 and 22 and/or processing that adds images of distance values. The processing that adds distance-value images includes superimposing on the video an additional image corresponding to the type of recognized object or to the distance and position of the object. This allows the driver to check the situation beyond the load on the screen of the display 25 even when forward visibility is degraded by the load on the forks 15 and 16. The display 25 is, for example, a liquid crystal display; it may also be a HUD (head-up display) or a head-mounted display worn by the driver. A HUD display includes a combiner, a semi-transmissive concave or flat mirror, onto which a virtual image is projected. The virtual image is, for example, an additional image generated by the processing unit 23 described later. A driver sitting in the cab 12 can see the real image beyond the combiner while simultaneously perceiving the virtual image reflected by it. With a HUD, even if the combiner is placed directly in front of the driver's seat, the combiner is transparent when no video is being projected, so the forward view is not obstructed.
(Processing Unit 23)
The processing unit 23 functions as an image acquisition unit 301, a preprocessing unit 302, a feature point extraction unit 303, a distance map generation unit 304, an object position determination unit 305, an additional image generation unit 306, an association unit 307, a viewpoint conversion unit 308, an image composition unit 309, and an image output unit 310. These functions are realized by the processing unit 23 executing a program stored in its internal memory, but some of them may be performed by embedded dedicated hardware circuits. In the embodiments described below, the processing unit 23 generates a distance map from which the positions and distances of objects can be grasped, but this is not a limitation; only distance measurement to objects may be performed.
(Image Acquisition Unit 301)
The image acquisition unit 301 controls the two cameras 21 and 22, for example by applying a timing trigger to synchronize them, and acquires the images (video) captured by their imaging elements 200 at a predetermined frame rate. In controlling the cameras 21 and 22, the image acquisition unit 301 may meter exposure using the central portion of the imaging field of view. This is because, particularly from the time the fork is inserted into the insertion opening of the pallet 91 until the fork tip passes through it, only the central portion of the camera image is bright while its surroundings are dark. That is, exposure is metered on the central portion of the field of view in order to photograph, at an appropriate exposure, the space beyond the far side of the insertion opening. This allows video of the central portion to be captured appropriately without overexposure.
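A minimal sketch of such center-weighted exposure metering, assuming a camera whose exposure time can be set programmatically; the target brightness, gain, and limits are illustrative assumptions, not values from the specification:

import numpy as np

def center_exposure_error(gray_frame: np.ndarray, target_mean: float = 118.0) -> float:
    """Signed exposure error metered on the central third of the frame only."""
    h, w = gray_frame.shape
    center = gray_frame[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    return target_mean - float(center.mean())

def update_exposure(current_exposure_us: float, gray_frame: np.ndarray,
                    gain: float = 5.0, lo: float = 50.0, hi: float = 20000.0) -> float:
    """Simple proportional control of exposure time (microseconds)."""
    err = center_exposure_error(gray_frame)
    return float(np.clip(current_exposure_us + gain * err, lo, hi))

Because only the central pixels drive the error term, the bright opening at the far end of the pallet insertion hole sets the exposure, and the dark surroundings do not pull it upward.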
(Preprocessing Unit 302)
The preprocessing unit 302 adjusts the brightness and contrast of each pair of images acquired from the two cameras 21 and 22 via the image acquisition unit 301; known adjustment processes can be applied. It further applies later-stage preprocessing, such as binarization, to the adjusted images and supplies the processed images to the feature point extraction unit 303. Meanwhile, the preprocessing unit 302 supplies the color images that have undergone the first-stage preprocessing to the viewpoint conversion unit 308. Alternatively, stitch processing may be performed to join each pair of images in a positional relationship according to the baseline length and the result supplied to the viewpoint conversion unit 308, or the images of the imaging area common to both cameras 21 and 22 may be supplied to the viewpoint conversion unit 308.
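A hedged sketch of the two preprocessing paths described above, using OpenCV; the gain/offset values and the use of Otsu thresholding are illustrative assumptions:

import cv2
import numpy as np

def preprocess_pair(img_left: np.ndarray, img_right: np.ndarray):
    out = []
    for img in (img_left, img_right):
        # First-stage preprocessing: brightness/contrast adjustment (gain, offset).
        adjusted = cv2.convertScaleAbs(img, alpha=1.2, beta=10)
        # Later-stage preprocessing for the feature extractor: binarization.
        gray = cv2.cvtColor(adjusted, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        out.append((adjusted, binary))
    # The adjusted color images would go to viewpoint conversion; the binary
    # images would go to feature point extraction.
    return out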
(Feature Point Extraction Unit 303)
The feature point extraction unit 303 extracts, from each image of a pair, feature points corresponding to the shapes and contours of objects, which serve as indices for association. Color, contrast, edge, and/or frame information may also be used as indices for association.
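The specification does not name a particular detector; as one common way to obtain such feature points and their correspondences between the left and right images, an ORB-based sketch (an assumption chosen for illustration) might look like this:

import cv2

def extract_and_match(binary_left, binary_right, max_matches: int = 500):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(binary_left, None)
    kp_r, des_r = orb.detectAndCompute(binary_right, None)
    # Cross-checked brute-force matching keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
    return kp_l, kp_r, matches[:max_matches]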
(Distance Map Generation Unit 304)
The distance map generation unit 304 extracts common corresponding points from the feature points extracted from a pair of images and calculates the distance to each feature point from those corresponding points using conversion parameters. For example, with the pair of cameras 21 and 22 arranged left and right, the distance value of each pixel is calculated (ranging) from the baseline length according to the amount of shift between the left and right pixel positions of the same corresponding point.
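A hedged sketch of this disparity-to-distance step: with rectified cameras, depth Z follows from the baseline B, the focal length f in pixels, and the per-pixel disparity d as Z = f * B / d. The baseline, focal length, and matcher settings below are illustrative assumptions:

import cv2
import numpy as np

BASELINE_M = 0.06   # assumed baseline between the two fork-tip cameras
FOCAL_PX = 700.0    # assumed focal length in pixels

def disparity_to_depth(gray_left: np.ndarray, gray_right: np.ndarray) -> np.ndarray:
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(gray_left, gray_right).astype(np.float32) / 16.0
    depth = np.full(disp.shape, np.nan, dtype=np.float32)
    valid = disp > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]  # Z = f * B / d, meters
    return depth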
The distance map generation unit 304 may also perform SLAM (Simultaneous Localization and Mapping) processing. Software that executes SLAM processing includes the SDK software for the ZED camera, and open-source SLAM implementations such as RGB-D SLAM V2 may also be used. SLAM processing makes it possible to grasp in real time the position of the vehicle itself, that is, the forklift 10, within its three-dimensional distance map (hereinafter simply "distance map"). The SLAM processing may also use a distance map of the work space in which the forklift 10 is used, stored in the storage unit 24; this makes it possible to grasp the situation even in regions outside the imaging areas (angle-of-view ranges) of the cameras 21 and 22. Furthermore, when phenomena such as halos, ghosts, flare, light streaks, or reflections of external light sources (such as sunlight) produce differences between the pair of images captured by the cameras 21 and 22, so that ranging fails temporarily or in some pixel regions, correction may be performed using the external ranging sensor 80 described later (see FIG. 8 and FIG. 22 described later) or distance maps generated by past travel of the forklift 10 and accumulated in the storage unit 24. Such ranging failures occur, for example, when external light shines in through a factory or warehouse window. One such correction is, for example, to replace regions of the imaging area where no distance value could be detected with the data at the corresponding positions in a past distance map.
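A minimal sketch of that replacement correction, assuming the current depth image and the stored distance map have already been registered to the same pixel grid (an assumption for illustration):

import numpy as np

def patch_depth(depth_now: np.ndarray, depth_stored: np.ndarray) -> np.ndarray:
    """Fill pixels where ranging failed with values from an accumulated map."""
    patched = depth_now.copy()
    missing = np.isnan(patched)          # regions where no distance was detected
    patched[missing] = depth_stored[missing]
    return patched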
(Object Position Determination Unit 305)
The object position determination unit 305 determines the position in three-dimensional space of objects in front of the forklift 10. This determination may be performed, for example, by clustering pixels according to the similarity of their distance values; the similarity of pixel colors may also be combined with the distance-value similarity for clustering. The size of each cluster obtained by the clustering is then calculated, for example its vertical dimension, horizontal dimension, and total area. The "size" here is the actual size; unlike the apparent size (angle of view, that is, the spread in pixels), the extent of a pixel cluster is judged according to the distance to the object. For example, the object position determination unit 305 determines whether the calculated size falls within a predetermined size threshold for identifying the objects to be extracted and analyzed. The size threshold can be set arbitrarily according to the measurement location, the behavior-analysis target, and so on. If the behavior of passing workers is to be tracked and analyzed, the minimum size of an ordinary person may be used as the clustering size threshold; if the environment in which the forklift travels is limited, for example to a specific warehouse, a size threshold suited to the objects present in that environment may be applied. The object position determination unit 305 also accumulates the generated three-dimensional distance map of the area in front of the forklift in the storage unit 24. This distance map includes size and position information for each object at or above a predetermined size. As noted above, only distance measurement to objects may be performed in order to reduce the processing load.
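A hedged sketch of this clustering step on back-projected 3D points; DBSCAN, its parameters, and the threshold direction are assumptions chosen for illustration, not details from the specification:

import numpy as np
from sklearn.cluster import DBSCAN

def find_objects(points_xyz: np.ndarray, min_size_m: float = 0.3):
    """points_xyz: (N, 3) array of 3D points in meters.

    Groups points whose positions (and hence distance values) are similar,
    then keeps clusters whose real-world extent passes a size threshold."""
    labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(points_xyz)
    objects = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        cluster = points_xyz[labels == label]
        extent = cluster.max(axis=0) - cluster.min(axis=0)  # actual size, not pixels
        if max(extent) >= min_size_m:
            objects.append({"center": cluster.mean(axis=0), "size": extent})
    return objects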
For discriminating between humans and other objects as object types, machine learning or proportion discrimination (aspect ratio) may be used. In machine learning, a computer repeats learning using the obtained three-dimensional distance map data. In a work space in particular, identifying workers is important for preventing serious accidents, and using these methods is expected to improve discrimination accuracy.
(Additional Image Generation Unit 306)
The additional image generation unit 306 generates an additional image (also called an annotation image) according to the distance, determined by the object position determination unit 305, to an object in front of the forklift 10, more specifically the distance to an object lying on the forward extension of the forks 15 and 16. Examples of additional images include a distance ladder indicating distance and a numerical display (see FIG. 10 described later). An additional image may also be a rectangular frame whose color or form changes according to the type and distance of an object, or a mark or text for alerting the driver when, for example, an object requiring caution is present. An additional image may further convey the tilt angle, opening angle, or inclination angle of the forks in the horizontal plane or in the X-Y plane facing the pallet, or their height above the ground.
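A minimal sketch of rendering such a distance ladder, assuming a hypothetical helper project() that maps a point on a fork's extension line at a given forward distance to pixel coordinates; the helper, rung spacing, and colors are illustrative assumptions:

import cv2

def draw_distance_ladder(frame, project, rungs_m=(1.0, 2.0, 3.0, 4.0, 5.0)):
    for d in rungs_m:
        (xl, y) = project(d, side="left")    # left fork extension at distance d (assumed helper)
        (xr, _) = project(d, side="right")   # right fork extension at distance d
        cv2.line(frame, (xl, y), (xr, y), (0, 255, 0), 2)
        cv2.putText(frame, f"{d:.0f} m", (xr + 8, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame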
Furthermore, load position information, load content information, empty-shelf information indicating shelf availability, cargo-handling procedure information indicating handling procedures, and the like may be acquired from the logistics system and generated as additional images (see FIG. 25 described later).
Additional images may also be generated for the distances relevant when raising a load: the vertical distance from the forks 15 and 16 to a truck bed, the distance from the mast 13 to the ceiling of the building or facility, and the vertical distance from the front upper edge of the load 92 to the ceiling, a truck canopy, or the upper level of a rack. Of these, additional images tied to the relative positions of the tips of the forks 15 and 16 are updated according to the posture information of the cameras 21 and 22 on the fork 15, that is, changes in the orientation (opening angle), height, and tilt angle of the fork 15 and in the spacing between the forks 15 and 16. The orientation, height, and tilt angle of the forks 15 and 16 and the spacing between them may be detected from the images captured by the cameras 21 and 22, or by various sensors attached to the main body 11 of the forklift 10.
An additional image corresponding to the amount of tilt of the pallet 91 may also be generated. Specifically, the object position determination unit 305 recognizes the shape of the insertion opening of the pallet 91 ahead and determines the amount of tilt by comparison with the shape and dimensions of the insertion opening registered in advance in the storage unit 24; the additional image generation unit 306 then generates an additional image corresponding to that amount of tilt. The amount of tilt is, for example, the tilt angle with respect to the horizontal plane that arises when the upper surface of a load 92 is not horizontal, such as when pallets 91 and loads 92 are stacked two or more levels high as in FIG. 1. This tilt angle may also be the angle relative to the fork 15, or the relative tilt angle (yaw angle) between the virtual extension line of the fork 15 and the pallet 91 in the horizontal plane (XY plane). A warning may be issued when these tilt amounts are at or above a predetermined value.
(Association Unit 307)
The association unit 307 associates the position of each object in the two-dimensional images captured by the cameras 21 and 22 with the position of that object in the distance map.
(Viewpoint Conversion Unit 308)
The viewpoint conversion unit 308 performs viewpoint conversion of the image by coordinate-transforming the spacing and position of each pixel point according to the angle and direction seen from a viewpoint position (virtual viewpoint position) specified in advance based on the eye height of the driver sitting in the cab 12. The driver can set the viewpoint position (virtual viewpoint position) with an input device (not shown) of the image processing apparatus 20, such as a keyboard, pointing device, or touch sensor. At this time, display processing is also performed for blind-spot regions that the cameras cannot capture: for example, a blind-spot region (data-NULL region) is replaced with the pixels corresponding to that region held in the storage unit 24. Blind-spot regions arise when the camera position differs from the driver's viewpoint position. This viewpoint conversion includes both conversion by keystone (trapezoidal) correction of the image according to the tilt angle or height of the cameras 21 and 22 relative to the floor, and conversion according to the distances to objects in the image using the three-dimensional distance map. For example, when a sufficient distance map has not been generated across the imaging area, simple keystone correction may be used.
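A hedged sketch of the simple two-dimensional keystone correction mentioned above, used when a full 3D distance map is unavailable. In practice the source-corner offsets would be derived from the camera's tilt angle and height; the fixed inset below is an illustrative assumption:

import cv2
import numpy as np

def keystone_correct(frame: np.ndarray, top_inset_px: int = 80) -> np.ndarray:
    """Map a trapezoidal source region back to a rectangle (keystone correction)."""
    h, w = frame.shape[:2]
    src = np.float32([[top_inset_px, 0], [w - top_inset_px, 0], [w, h], [0, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (w, h))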
The viewpoint conversion unit 308 may also determine, from the video acquired from the cameras 21 and 22, the tilt angle or height of the cameras 21 and 22 themselves on the fork 15 relative to the ground or the main body 11 as their posture information, and reflect that determination in the viewpoint conversion. For example, when the fork 15 tilts upward toward its tip or moves upward, viewpoint conversion is performed so as to cancel out the tilt angle or the amount of upward movement. Alternatively, the tilt angle or height of the fork 15 relative to the ground or the main body 11, and the posture information of the fork 15 and the cameras 21 and 22, may be detected by various sensors attached to the forklift 10 (see the modification of FIG. 22 described later).
(Modifications of Viewpoint Conversion)
Instead of viewpoint conversion that takes the driver's viewpoint position as the virtual viewpoint position, viewpoint conversion may be performed with a higher position, or a position away from the main body of the forklift 10, as the virtual viewpoint position. Examples include viewpoint conversion taking as the virtual viewpoint the height of the forks 15 and 16, or that height plus a distance corresponding to the heights of the pallet 91 and the load 92 on it, and viewpoint conversion to a virtual viewpoint above the forklift 10 (overhead view) or a virtual viewpoint from behind looking forward (third-person viewpoint). A bird's-eye image can thereby be obtained. With a bird's-eye image, the driver can easily check the area in front of the forklift 10 when, for example, lowering a load 92 onto a shelf higher than the driver's line of sight. These changes of virtual viewpoint position and the setting of the position can be made as appropriate from an input device provided on the forklift 10.
In viewpoint conversion with such a high virtual viewpoint, or one away from the main body, it is preferable to use, in addition to the distance map generated from the images of the first and second cameras 21 and 22, a distance map acquired by the external ranging sensor 80 and accumulated in the storage unit 24. The ranging sensor 80 is, for example, a laser lidar (Light Detection and Ranging) device and, as shown in FIG. 8, is provided on the ceiling of the building or facility in which the forklift 10 is used.
Furthermore, these viewpoint conversion processes may switch viewpoint conversion on or off, or change the amount of viewpoint conversion (its strength), according to the distance from the forklift 10 to an object. Specifically, for objects whose distance value is at or above a predetermined value, viewpoint conversion may be turned off, the viewpoint conversion amount (viewpoint displacement) reduced, or the processing performed by two-dimensional keystone correction. The two-dimensional video used for this viewpoint conversion may be an image cut out only from the overlapping region of the two cameras 21 and 22 and converted on that basis, or an image produced by stitch processing that joins the two images in a positional relationship according to the baseline length. In the latter case, viewpoint conversion may be applied to the central overlapping region using the three-dimensional distance map, while pixels in the non-overlapping regions are processed by two-dimensional keystone correction.
When performing viewpoint conversion, the viewpoint conversion amount may also be reduced if the blind-spot regions would become large. For example, when converting a captured image to a view from above, if the ratio of blind-spot pixels to the total number of displayed pixels would reach or exceed a predetermined value, the virtual viewpoint position is limited to below the set position, or the processing switches to two-dimensional keystone correction.
(Image Composition Unit 309)
The image composition unit 309 generates a composite image by superimposing the additional image generated by the additional image generation unit 306 on the image generated by the viewpoint conversion unit 308, at a display position corresponding to the position of the object detected by the object position determination unit 305.
When a HUD or a transparent (transmissive) screen is used as the display 25, the display position and content of the additional image are calculated and generated, taking the display direction and position into account, so that the image is superimposed on the object (real image) the driver is looking at. Furthermore, if the HUD is configured so that the virtual image distance can be changed, the additional image may be generated at a virtual image distance corresponding to the position and direction of the object.
(Image Output Unit 310)
The image output unit 310 outputs the composite image generated by the image composition unit 309, that is, the processed image (video) after viewpoint conversion and/or superimposition of additional images, to the display 25 in real time and presents it to the driver.
FIG. 10 is an example of a screen 250 displayed on the display 25. It shows the video of the area in front of the forklift 10 captured by the cameras 21 and 22 in a situation where a truck 95, a pallet 91 and load 92 on its bed, and a worker 96 are present in front of the forklift 10 and the forklift 10 is approaching the bed of the truck 95. As shown in the figure, additional images 401 to 407 are superimposed on the screen 250.
The additional image 401 is an illustration (animation) image corresponding to the forks 15 and 16. The additional images 402 and 403 show the lines along which the forks 15 and 16 would extend forward and the area around their contact position, allowing the driver to recognize where the forks 15 and 16 will strike (be inserted). The additional images 404 and 405 indicate the distance to objects in front of the forklift 10; together with the additional image 403, they are also called a distance ladder. The additional image 406 shows the vertical distance from the forks 15 and 16 to the top surface of the bed of the truck 95. The additional image 407 is a mark that alerts the driver when a person (worker 96) approaches the area in front of the forklift 10. When a person comes within a predetermined range and an approach is predicted, a warning sound is emitted from a speaker attached beside the display 25 as a notification process. At this time, the additional image 407 may change color or blink to indicate the warning.
If the forklift 10 allows the spacing between the pair of forks 15 and 16 and their tilt angle with respect to the ground to be changed, the additional images 401 to 406 may change in shape, size, and orientation according to changes in that spacing and tilt angle. For example, when the fork 15 tilts upward, the additional images 403 to 406 are changed according to the tilt angle. The spacing and tilt angle may be obtained by the processing unit 23 from the images acquired from the cameras 21 and 22, or detected by various sensors attached to the forklift 10 (see the modifications described later).
Thus, in this embodiment, a pair of cameras 21 and 22 provided on one fork 15 is used: based on the video acquired by their imaging elements, the distance to objects in front of the forklift 10 is detected, the acquired video is processed by superimposing additional images and/or converting the viewpoint, and the processed video is displayed on the display 25. Providing the pair of cameras 21 and 22 on a single fork 15 enables stable ranging. By displaying the processed video, the driver can check the area ahead on the screen of the display 25 even when loading on the forks 15 and 16 makes it hard to see forward. And by adding images related to the distance to objects, work support, safety warnings, and the like can be provided, so the forklift can be operated more safely.
(Second Embodiment)
FIG. 11 is a block diagram showing the hardware configuration of the image processing apparatus according to the second embodiment and the functional configuration of its processing unit. In the image processing apparatus of the first embodiment (FIG. 2, etc.), photographing and ranging of the area in front of the forklift 10 were performed with the two cameras 21 and 22. In contrast, the second embodiment performs forward photographing and ranging with one camera 21 and a ranging sensor 22b. In the second embodiment, this ranging sensor 22b functions as the "detection sensor" for detecting distance.
The ranging sensor 22b is an area range finder such as a laser lidar (Light Detection and Ranging) device, a TOF (Time of Flight) scanner, or a sonar. The area range finder emits energy such as visible light, infrared light, or sound waves from its output unit and measures the distance to an object ahead from the time difference until the energy reflected by the object reaches its input unit. The area range finder measures the distances to objects in front of the forklift 10 at multiple points and thereby acquires ranging point cloud data indicating the distribution of distance values.
For example, when a laser lidar is used as the ranging sensor 22b, pulsed laser light is emitted while scanning the measurement space ahead, and the reflected light is detected. Distance information at each irradiation position is then obtained from the time difference between the emission timing and the reception timing, yielding ranging point cloud data for the measurement space.
The acquired ranging point cloud data is sent sequentially to the distance map generation unit 304 and used in the various processes, and is also stored in the storage unit 24 as a distance map.
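A hedged sketch of one way such a point cloud could be projected into a camera-aligned distance image so that distance-value images can later be overlaid on the video; the intrinsics, sign conventions, and the assumption that the points are already in the camera frame (X forward, Y lateral, Z up, matching this document's axes) are all illustrative:

import numpy as np

FX, FY, CX, CY = 700.0, 700.0, 640.0, 360.0   # assumed camera intrinsics

def points_to_depth_image(points_cam: np.ndarray, w: int = 1280, h: int = 720):
    """points_cam: (N, 3) points already in the camera frame, X forward."""
    depth = np.full((h, w), np.nan, dtype=np.float32)
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    valid = x > 0.1                               # keep points in front of the sensor
    u = (FX * (-y[valid] / x[valid]) + CX).astype(int)
    v = (FY * (-z[valid] / x[valid]) + CY).astype(int)
    keep = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth[v[keep], u[keep]] = x[valid][keep]      # forward distance per pixel
    return depth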
As in the first embodiment, the ranging sensor 22b and the camera 21 are arranged at the tip portion of one fork 15, with the measurement space of the ranging sensor 22b and the imaging area of the camera 21 partly or wholly overlapping. Also as in the first embodiment, the ranging sensor 22b and the camera 21 are preferably arranged on the tapered portion s51 at the tip portion of the fork 15 (see FIGS. 4 to 9).
Thus, the second embodiment, which uses a ranging sensor and a camera, can obtain the same effects as the first embodiment. The second embodiment may also be provided with the position detection sensor 26, or may use a distance map acquired by the external ranging sensor 80 and stored in the storage unit 24.
 (Third Embodiment)
 FIG. 12 is a block diagram illustrating the hardware configuration of the image processing apparatus 20 according to the third embodiment. The third embodiment provides one camera 21 and a projector 22c. In this embodiment, the projector 22c and the processing unit 23 described below cooperate with the camera 21 to function as the detection sensor for detecting the distance to objects in front of the forklift.
 In response to a control signal from the processing unit 23, the projector 22c projects pulsed pattern light toward the front of the forklift. The pattern light is, for example, a set of vertical and horizontal line beams, or a lattice pattern composed of dots at predetermined vertical and horizontal intervals. A random dot pattern may also be projected. The irradiation area of the pattern light overlaps part or all of the imaging area of the camera 21. Instead of pattern light, a single spot beam may be scanned sequentially across the imaging area of the camera 21.
 As in the first embodiment, the projector 22c and the camera 21 are disposed at the tip of one fork 15, mounted a predetermined distance (base line length) apart in the width direction (Y direction) of the fork 15. They are preferably placed on the tapered portion s51 at the tip of the fork 15, as in the first embodiment (see FIGS. 4 to 6).
 When pattern light is emitted at a predetermined timing, the processing unit 23 (distance map generation unit 304) detects the spacing or position of each dot or line of the pattern in the image captured by the camera 21. Using the detected pattern positions and a conversion parameter based on the base line length, it calculates the distance at a plurality of pixels of the acquired image.
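 A minimal sketch of this triangulation step, assuming a rectified projector-camera pair and treating the dot displacement against a calibration reference as the disparity (the function name, focal length, and baseline are illustrative assumptions):

# Hedged sketch of structured-light triangulation: depth from the shift of a
# projected dot between a reference image (known geometry) and the live image.
import numpy as np

def depth_from_dot_shift(x_observed_px: np.ndarray,
                         x_reference_px: np.ndarray,
                         focal_px: float, baseline_m: float) -> np.ndarray:
    """Z = f * B / disparity, with disparity the dot displacement in pixels."""
    disparity = np.abs(x_observed_px - x_reference_px)
    disparity = np.where(disparity < 1e-6, np.nan, disparity)  # avoid /0
    return focal_px * baseline_m / disparity

# Dots shifted by 8 px with f = 800 px and B = 0.1 m lie at 10 m.
print(depth_from_dot_shift(np.array([108.0]), np.array([100.0]), 800.0, 0.1))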
 Based on the obtained distance values, the apparatus detects the distance to objects in front of the forklift, processes the acquired video by converting the viewpoint and superimposing additional images, and displays the processed video on the display 25.
 Thus, the third embodiment also uses the projector 22c and the camera 21 (image sensor) to detect, as in the first embodiment, the image in front of the forklift 10 and the distance to objects ahead, and displays on the display the video processed by adding an image of distance values based on the acquired ranging point cloud data. This provides the same effects as the first embodiment.
 Although FIG. 12 shows an example using one camera 21, the configuration is not limited to this; two cameras 21 and 22 arranged on one fork may be used. That is, the projector 22c is added to the configuration of the first embodiment (FIG. 2 and the like). In this case, for example, the projector 22c is placed at the same position as the camera 22 in the width direction of the fork 15, and the distance of each pixel is calculated from the image acquired by the camera 21, which is separated from the projector by the base line length. Combining distance measurement by two cameras with distance measurement using the projector 22c in this way enables more accurate ranging.
 (First Modification)
 FIG. 13 is a block diagram illustrating the hardware configuration and the functional configuration of the processing unit according to the first modification.
 The first modification adds to the configuration of the first embodiment a position detection sensor 26 that acquires the position state of the fork 15 on which the two cameras 21 and 22 are mounted.
 The position detection sensor 26 may be, for example, a sensor that detects the inclination angle (tilt) of the fork 15, or a sensor that detects the height of the fork 15 relative to the mast 13, that is, relative to the ground. Such sensors are composed of, for example, an actuator and an optical element, and can detect the position of the cameras 21 and 22 on the fork 15 relative to the main body 11. The position detection sensor 26 may also be an acceleration sensor or a gyro sensor, from which angular velocity and turning angular velocity information can be acquired to determine the position of the cameras 21 and 22 relative to the main body 11 or the surroundings. The relative position here includes the angle (horizontal or inclined) of the fork 15 (cameras 21 and 22) and the orientation of the horizontal plane.
 This modification provides the same effects as the first embodiment, and in addition the position detection sensor 26 reduces the processing load of the processing unit 23 when superimposing additional images or converting the viewpoint. Although the image processing apparatus 20 in this modification applies the position detection sensor 26 to the configuration of the first embodiment, the sensor may likewise be applied to the second and third embodiments.
 (Second Modification)
 FIGS. 14(a) and 14(b) are schematic views showing the first and second cameras attached to the tip of one sheath fork 19 in the second modification. When the sheath fork 19 is mounted over the tip side of the fork 15 (or fork 16) as in this modification, "the tip portion of the fork" or "the tip of the fork" refers to the tip portion or tip of the sheath fork 19. The sheath fork 19 is mounted so as to cover the fork 15 and is fixed by a holder (not shown) such as a screw.
 FIG. 14(a) is a side view of the sheath fork 19 mounted on the tip side of the fork 15, and FIG. 14(b) is a plan view.
 The sheath fork 19 has a tip s11, an upper surface s12, a lower surface s13, side surfaces s14, and a tapered portion s151 at the tip. The left and right tapered portions s151 of the sheath fork 19 are each provided with a cylindrical hole, and the cameras 21 and 22 are embedded in these holes. The placement of the cameras 21 and 22 and its effects are the same as in the first embodiment described with reference to FIGS. 4 to 9, so the description is omitted.
 The cameras 21 and 22 are connected to the main body 11 of the forklift 10 by cable or wirelessly, and the video signal is transmitted to the processing unit 23. With a cable connection, power is supplied from the main body 11; with a wireless connection, power is supplied by a battery mounted with the cameras 21 and 22.
 Providing the first and second cameras 21 and 22 on the sheath fork 19 in this way also yields the same effects as the first embodiment. The sheath fork 19 may further be used instead of the fork 15 in the second and third embodiments; that is, the camera 21 and the distance measuring sensor 22b or projector 22c are provided at the tip of the sheath fork 19. This arrangement yields the same effects as the second and third embodiments.
 (Third Modification)
 FIG. 15 is a block diagram illustrating the hardware configuration and the functional configuration of the processing unit according to the third modification. The third modification provides one (monocular) camera 21 and a position detection sensor 27, and the processing unit 23 includes an optical flow processing unit 320. In this modification, the image sensor 200 of the camera 21, the position detection sensor 27, and the processing unit 23 (optical flow processing unit 320) cooperate to function as the detection sensor for detecting the distance to objects.
 The position detection sensor 27 has the same configuration as the position detection sensor 26 of the first modification. From it, position data on the traveling direction and movement amount of the camera 21 as the forklift 10 moves is acquired.
 The optical flow processing unit 320 acquires ranging information using known optical flow processing. Specifically, it detects the differences between time-series frames from the camera 21 and, using the position data from the position detection sensor 27 at those times, obtains the distance value to each object in the image. This configuration also yields the same effects as the embodiments described above.
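 A minimal sketch of this idea, under the simplifying assumption that the camera translates parallel to the image plane by a known amount (from the position sensor) between frames; the names and values are illustrative, not the patent's algorithm:

# Hedged sketch of monocular depth from optical flow plus ego-motion.
import numpy as np

def depth_from_flow(flow_px: np.ndarray, translation_m: float,
                    focal_px: float) -> np.ndarray:
    """For a sideways camera translation T, a static point at depth Z moves
    by u = f * T / Z pixels between frames, so Z = f * T / u."""
    u = np.where(np.abs(flow_px) < 1e-6, np.nan, np.abs(flow_px))
    return focal_px * translation_m / u

# A point that shifts 4 px while the camera moves 0.05 m (f = 800 px)
# is about 10 m away.
print(depth_from_flow(np.array([4.0]), 0.05, 800.0))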
 (Fourth Modification)
 FIG. 16 is a block diagram illustrating the hardware configuration of the image processing apparatus, the functional configuration of the processing unit, and the configuration of the HUD according to the fourth modification. FIG. 17 is a schematic diagram showing the configuration of the HUD. The fourth modification has a HUD 25b as the display. The combiner 522 of the HUD 25b is placed at a position similar to that of the display 25 in FIG. 1, where the front side of the forklift can be seen through it. With the HUD 25b, the driver can see the real scene ahead through the combiner while images such as the additional images 401 to 407 shown in FIG. 10 are projected as virtual images. The HUD 25b of this fourth modification may also be applied to the first to third embodiments and to each of the modifications described above.
 As shown in FIG. 17, the HUD 25b has a display element 51 with a two-dimensional display surface; a virtual image projection optical system 52, including a mirror 521 and a combiner 522, that enlarges the image i formed on the display element 51, converts it into a virtual image f, and projects it; and a moving mechanism 53. The mirror 521 may be omitted. The display element 51 may be a liquid crystal panel, an OLED (Organic Light Emitting Diode) panel, or an intermediate screen. Light reflected by the combiner 522 is guided to the driver's pupil 910 and perceived as the virtual image f. The moving mechanism 53 moves the mirror 521 or the display element 51 to adjust the display to the driver's sitting height.
 Because work with the HUD 25b requires looking alternately at the load, the rack, and the fork tips, the virtual image distance is desirably set between 50 cm and 20 m from the display so that the fork tips, racks, and loads remain easy to see. With the display light from the display element 51 guided to the pupil 910 of the driver seated in the driver's seat, the driver observes the virtual image f as if it were displayed in front of the forklift body. The distance to this observed virtual image f is the virtual image distance, corresponding in FIG. 17 to the distance from the pupil 910 to the virtual image f. When the virtual image distance is close to the distance to the real object being viewed, the driver can see both the object and the virtual image with almost no change in the eye's focal distance; the information in the virtual image can be understood without refocusing, shortening recognition time, and the reduced strain on the eyes lessens fatigue. The virtual image distance is preferably set according to the size of the forklift 10 or the distance between the tip of the fork 15 and the driver's seat. For a small forklift of about 1.5 tons, for example, a projection distance in the range of 1 m to 3 m is desirable so that the virtual image can be observed without taking one's eyes off the fork tips.
 The HUD may be configured so that the virtual image projection distance can be changed. For example, a moving mechanism that changes the position of the display element 51 along the optical axis AX, or one that moves the mirror 521, may be provided to change the projection distance to the virtual image f, for example according to the distance to the load 92 or the pallet insertion opening 91a. As another form, an intermediate screen may be provided and the projection distance changed by shifting the intermediate screen's position along the optical axis. The intermediate screen is a member, such as ground glass, with a diffusing surface onto which the image formed on the display surface of the display element is projected; it is placed between the display element 51 and the mirror 521.
 The HUD may further be configured as a 3D-HUD that displays virtual images three-dimensionally, at virtual image distances matching the distances to real objects (loads, racks, workers, and the like) ahead. For example, the image plane (the display element 51 or the mirror 521) is moved by the moving mechanism at a period of several tens of Hz to change the virtual image distance, and the display control unit forms the image on the display element in synchronization with the movement timing. To the driver, multiple virtual images at different virtual image distances then appear to be displayed simultaneously.
 (Fifth Modification)
 FIG. 18 is a schematic diagram showing the first and second cameras 21 and 22 attached to the tip of one sheath fork 19 in the fifth modification. Like the second modification, the fifth modification uses the sheath fork 19, and additionally takes impact mitigation in a collision into account.
 FIG. 18 is a side sectional view of the sheath fork 19 mounted on the tip side of the fork 15. The sheath fork 19 has a main body 191, a lid 192, a transparent plate 193, an impact mitigation member 194, and a heat conduction member 195.
 The substrate unit 40 has an aluminum plate 41, and the first and second cameras 21 and 22, a camera substrate 42, and an IMU (Inertial Measurement Unit) substrate 43 arranged on the aluminum plate 41. Some or all of the functions of the processing unit 23 are implemented on the camera substrate 42, and the IMU substrate 43 corresponds to the position detection sensor 27 described above. With the lid 192 fixed to the main body 191 by bolts 90 or the like, the substrate unit 40 is housed in the interior space of the main body 191.
 The main body 191 and the lid 192 are made of steel. The transparent plate 193 is a light-transmitting member made of, for example, polycarbonate; the first and second cameras 21 and 22 photograph the outside through it. The impact mitigation member 194 is made of an elastic body such as silicone rubber, or of a gel material.
 The heat conduction member 195 is a flexible member made of a material with high thermal conductivity, such as aluminum, copper, or carbon, or a heat pipe. Attached to the aluminum plate 41 and the main body 191, it conducts the heat generated by the electronic components of the substrate unit 40 through the aluminum plate 41 to the main body 191, where it is dissipated. As the heat conduction member 195, for example, a carbon-based graphite sheet (or thermally conductive sheet) or a flexible heat pipe may be used.
 The impact mitigation member 194 is affixed to the inner surface of the lid 192, with its ends also in contact with the main body 191. Through it, the cameras and the electronic components constituting the detection sensor are connected to the main body 191 only indirectly. Specifically, in the example of FIG. 18 the entire substrate unit 40 is covered by the impact mitigation member 194, so that the shock of an object striking the sheath fork 19 is attenuated before it reaches the electronic components of the substrate unit 40.
 Thus, in the fifth modification the camera and the detection sensor are protected by the impact mitigation member 194, which lessens the effect on these components of impacts to the sheath fork. Furthermore, the flexible heat conduction member 195 both damps vibration and shock to the electronic components and carries away their heat.
 Although the fifth modification has been described as applying the impact mitigation and heat dissipation members to the sheath fork 19, it is not limited to this; the configuration shown in FIG. 18 may be applied to the fork 15 shown in FIGS. 3 to 5 and the like, and also to each of the embodiments, such as the first embodiment, described above.
 (Fourth Embodiment)
 (Image Display Processing)
 FIG. 19 is a flowchart showing the display processing performed by the image processing apparatus 20 according to the fourth embodiment. The display processing is described below with reference to FIG. 19.
 (Step S101)
 The processing unit 23 (image acquisition unit 301) of the image processing apparatus 20 first controls the first and second cameras 21 and 22 to acquire video captured at a predetermined frame rate.
 (Step S102)
 Next, the processing unit 23 acquires a distance map by processing each corresponding pair of images acquired from the cameras 21 and 22. This processing is performed by the preprocessing unit 302, feature point extraction unit 303, and distance map generation unit 304 described above. That is, after preprocessing the acquired images, the distance value of each pixel is calculated from the correspondence of feature points between the two images using the base line length, and the distance map is generated.
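 For illustration only, a dense distance map can be sketched with a stock block matcher; this stands in for, and does not reproduce, the feature-point pipeline of units 302 to 304, and the file names and calibration values are assumptions:

# Hedged sketch of step S102: distance map from a rectified stereo pair.
import cv2
import numpy as np

FOCAL_PX = 800.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.10   # spacing between the two cameras on the fork (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point

valid = disparity > 0
depth_m = np.full(disparity.shape, np.nan, dtype=np.float32)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # Z = f * B / d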
 (Step S103)
 The processing unit 23 then applies viewpoint conversion to the two-dimensional video acquired in step S101, using the generated distance map. This processing is performed by the object position determination unit 305, the association unit 307, and the viewpoint conversion unit 308. That is, the positions of objects ahead are determined in three-dimensional space, and each determined object is associated with the pixels of the two-dimensional video. The spacing and position of each pixel point are then coordinate-transformed to the angle and direction seen from a predesignated virtual viewpoint position, corresponding for example to the viewpoint of the driver seated in the cab 12, thereby performing viewpoint conversion of the image.
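 A minimal sketch of such a viewpoint conversion, assuming known camera intrinsics K and a virtual-viewpoint pose (R, t); it forward-warps each pixel using its depth, which is one simple way to realize the transformation described here:

# Hedged sketch of step S103: re-render from a higher virtual viewpoint.
import numpy as np

def change_viewpoint(image, depth_m, K, R, t):
    """image: HxWx3 array; depth_m: HxW depths; K: 3x3 intrinsics;
    R, t: rotation and translation from camera frame to virtual frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ (pix * depth_m.reshape(-1))   # back-project to 3D
    pts_v = R @ pts + t[:, None]                           # into virtual frame
    proj = K @ pts_v
    uv = (proj[:2] / proj[2]).round().astype(int)
    out = np.zeros_like(image)
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (proj[2] > 0)
    out[uv[1][ok], uv[0][ok]] = image.reshape(-1, image.shape[-1])[ok]
    return out  # unfilled pixels stay black: the blind spots of FIG. 21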
 (Step S104)
 The processing unit 23 creates additional images. Specifically, the additional image generation unit 306 generates additional images corresponding to the distance between objects in front of the forklift 10 and the tips of the forks 15 and 16. The processing unit 23 also performs approach prediction: when the fork 15 or the like comes too close to an object and the distance to it falls to a predetermined value or less, it determines that the alert condition is satisfied.
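 A minimal sketch of the approach-prediction check, with the threshold chosen arbitrarily for illustration:

# Hedged sketch of the alert condition in step S104.
import numpy as np

ALERT_DISTANCE_M = 2.0  # assumed "predetermined value"

def approach_alert(depth_map_m: np.ndarray) -> bool:
    """True when the nearest valid ranging point is within the threshold."""
    finite = depth_map_m[np.isfinite(depth_map_m)]
    return finite.size > 0 and float(finite.min()) <= ALERT_DISTANCE_M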
 (Step S105)
 The processing unit 23 performs processing that adds (superimposes) the additional images generated in step S104 onto the viewpoint-converted image from step S103. Specifically, the image composition unit 309 generates a composite image in which each additional image is superimposed at a display position corresponding to the position of its object. The viewpoint conversion of step S103 may be omitted, and the video processed only by the addition of the additional images.
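 For illustration, the superimposition might look like the following, with OpenCV drawing calls standing in for the image composition unit 309 (object coordinates and styling are assumptions):

# Hedged sketch of step S105: overlay a distance label per detected object.
import cv2

def compose(frame, objects):
    """objects: iterable of (u, v, distance_m), pixel coordinates as ints."""
    out = frame.copy()
    for u, v, d in objects:
        cv2.circle(out, (u, v), 6, (0, 255, 0), 2)            # mark the object
        cv2.putText(out, f"{d:.1f} m", (u + 10, v),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return out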
 (Step S106)
 The image output unit 310 of the processing unit 23 outputs the composite image generated in step S105 to the display 25 in real time, presenting it to the driver; for example, a screen 250 such as that shown in FIG. 10 is displayed. If the approach prediction in step S104 determined that the alert condition is satisfied, the processing unit 23 performs alert processing, such as outputting a warning sound from a speaker provided near the display 25 or showing a predetermined warning on the display. When a person approaches within the predetermined range and an approach is predicted, a warning sound is emitted from the speaker attached to the side of the display 25 as alert processing. The additional image 407 may also change color or blink at this time to indicate the warning.
 Information on the shortest distance from the tips of the forks 15 and 16 may also be output: for example, the shortest distance value may be superimposed on the forward video, distance information may be output by voice, or a warning lamp (not shown) provided on the forklift 10 may blink when the distance becomes dangerously small, at or below a predetermined value. Since not only the position of the pallet 91 but also the spatial information above, below, left, and right of it (and the pallet's position within that space) is needed, distance information may be displayed not for a single point but for a range of a certain extent (the camera's imaging range). Specifically, for example, distance information is displayed for a plurality of grid points arranged at predetermined intervals (in the angle of view) above, below, left, and right of the additional image 402, and the distance in the YZ plane is displayed as the position relative to the pallet 91.
 Thus, in the fourth embodiment, the pair of cameras 21 and 22 provided on the fork 15 is used to measure the distance to objects in front of the forklift 10 from the video acquired by their image sensors, yielding ranging point cloud data that indicates the distribution of distance values; the video acquired by the camera is processed by adding an image of distance values based on this data, and the processed video is displayed on the display. Because this processing lets the forward scene be grasped in terms of distance, work can be performed more safely than by simply watching the video. Even when loads on the forks 15 and 16 make it hard to see ahead, the driver can easily check the forward situation on the screen displayed on the display 25, and displaying the viewpoint-converted video on the display 25 makes checking ahead easier still.
 (Display Modification)
 FIG. 20 shows a modified screen 251 displayed on the display 25. This modification adds an overhead image: the screen 251 consists of a front-view image 2501 and an overhead image 2502. The front image 2501 is the same as the screen 250 of FIG. 10, shown reduced. For the overhead image 2502, the additional image generation unit 306 generates a directly-overhead (top) view according to the distance values to objects and displays it alongside the front image 2501. The overhead image 2502 shows an additional image 409 indicating distance and an additional image 410, an illustration corresponding to the forks 15 and 16. Normally, distance values are obtained for the front side of an object facing the forklift 10, or for its front, sides, and top, but no distance information is obtained for the back side, so there is no display data for it. For operating the forklift 10, however, it suffices to determine the front-side distance values even without the back-side ones. For the back side, as in the example of FIG. 20, the contours of objects in the blind spot area may be generated by referring to the distance map stored in the storage unit 24 and superimposed; the broken lines in FIG. 20 show the contours of objects (a truck and a worker) in the blind spot area. Displaying the overhead image 2502 in this way lets the driver grasp the surroundings more accurately and operate the forklift more safely.
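 A minimal sketch of how such an overhead image could be built from ranging points, with grid size and resolution chosen for illustration:

# Hedged sketch of the overhead image of FIG. 20: ranging points are mapped
# into a top-view grid using their lateral (X) and forward (Z) coordinates.
import numpy as np

def birds_eye(points_xyz: np.ndarray, cell_m: float = 0.05,
              width_m: float = 6.0, depth_m: float = 10.0) -> np.ndarray:
    """points_xyz: Nx3 array in the camera frame (X right, Z forward).
    Returns an occupancy image with the forklift at the bottom center."""
    h, w = int(depth_m / cell_m), int(width_m / cell_m)
    grid = np.zeros((h, w), dtype=np.uint8)
    col = ((points_xyz[:, 0] + width_m / 2) / cell_m).astype(int)
    row = (h - 1 - points_xyz[:, 2] / cell_m).astype(int)
    ok = (col >= 0) & (col < w) & (row >= 0) & (row < h)
    grid[row[ok], col[ok]] = 255  # mark cells containing measured points
    return grid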
 (Blind Spot Processing)
 Next, blind spot processing is described with reference to FIGS. 21(a) to 21(d). FIG. 21(a) is an image (original image) captured by the cameras 21 and 22, and FIG. 21(b) is a schematic diagram in which, for explanation, reference symbols are assigned to the objects in the image of FIG. 21(a). Objects 971 to 975 are rectangular parallelepipeds; object 971 is closest to the cameras 21 and 22, with objects 972, 973, 974, and 975 placed at increasing distances in that order.
 FIG. 21(c) is an image produced by the viewpoint conversion unit 308 from the original image of FIG. 21(a) using a virtual viewpoint higher than the cameras 21 and 22. The back side of each object is a blind spot area that the cameras 21 and 22 cannot capture, so in the upward viewpoint conversion the pixel information for the back side is missing (Null). FIG. 21(d) is a schematic diagram in which reference symbols are assigned to the image of FIG. 21(c); blind spot areas b1 to b3 appear behind the objects. Black pixels may simply be assigned to the blind spot areas b1 to b3, but for better viewability pixels may be assigned as follows.
 For example, (1) the texture of the surface of the object above the one casting the blind spot and farther away, that is, the color and pattern of the pixels making up that surface, is assigned to the blind spot area; even when a blind spot occurs, it then blends into the surface of the farther object, reducing the visual discomfort of the display. Alternatively, (2) using the contour information of objects in the three-dimensional distance map stored in the storage unit 24, the contours of objects present in the blind spot area are generated and superimposed on the image, which likewise reduces the discomfort caused by the blind spot area in the display.
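 A minimal sketch of fill option (1), propagating the last visible (farther) pixel down each image column into the Null region; this is one simple realization, not the patent's prescribed algorithm:

# Hedged sketch of blind-spot fill in the warped top view.
import numpy as np

def fill_blind_spots(image: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """image: HxWx3 warped view; valid: HxW bool mask (False = Null)."""
    out = image.copy()
    h, w = valid.shape
    for x in range(w):
        last = None
        for y in range(h):            # scan top (far) to bottom (near)
            if valid[y, x]:
                last = out[y, x].copy()
            elif last is not None:
                out[y, x] = last      # blend the hole into the farther surface
    return out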
 (Sixth Modification)
 FIG. 22 is a block diagram illustrating the hardware configuration and the functional configuration of the processing unit according to the sixth modification.
 The sixth modification adds to the configuration of the first embodiment a position detection sensor 26 that acquires posture information for the two cameras 21 and 22 provided on the fork 15. In addition, a ranging map built from ranging point cloud data acquired by an external distance measuring sensor 80 (see FIG. 8) installed in the building or facility is stored in the storage unit 24. The posture information here includes the inclination angle and height relative to the ground or the height relative to the main body 11 of the forklift 10, and the opening angle and spacing between the two forks 15 and 16.
 For example, the position detection sensor 26 is a sensor that detects the inclination angle (tilt) of the fork 15 relative to the ground, or a sensor that detects the height of the fork 15 relative to the mast 13, that is, relative to the ground; such sensors are composed of, for example, an actuator and an optical element. Sensors that detect the opening angle and spacing of the forks 15 and 16 may also be included; these are encoders on motors provided in the forklift 10. With these sensors, the position detection sensor 26 can detect the position of the cameras 21 and 22 on the fork 15 relative to the main body 11. The position detection sensor 26 may also be an acceleration sensor or a gyro sensor, from which angular velocity and turning angular velocity information can be acquired to determine the position of the cameras 21 and 22 relative to the main body 11 or the surroundings. The relative position here includes the angle (horizontal or inclined) of the fork 15 (cameras 21 and 22) and the orientation of the horizontal plane.
 The sixth modification also provides the same effects as the first embodiment and the others, and in addition the position detection sensor 26 reduces the processing load of the processing unit 23 when superimposing additional images or converting the viewpoint. Moreover, by accumulating in the storage unit 24 the ranging map obtained from the external distance measuring sensor 80, ranging accuracy can be further improved when measuring distances to objects in front of the forklift 10, and shape information that the cameras 21 and 22 cannot capture, such as the contours of the back sides of objects, can be obtained from this map.
 (Seventh Modification)
 FIG. 23 shows an example of the proximity screen displayed on the display in the seventh modification. In step S104 of FIG. 19 described above, when the distance to an object fell to the predetermined value or less, the alert condition was judged satisfied and alert processing such as a warning display was performed. In the seventh modification, when the distance from the tip of the fork 15 to the object becomes a predetermined distance or less, for example 2 m or less, the processing unit 23 instead generates a proximity screen 252 as shown in FIG. 23 and switches the display to it. The proximity screen 252 in FIG. 23 shows the state immediately before the tip of the fork 15 is inserted into the insertion opening of the pallet 91 carrying the load 92.
 Unlike the screen 250 of FIG. 10 displayed until then, the proximity screen 252 shows the object close to the fork 15 and adds an additional image indicating the virtual position of the fork 15 tip relative to that object. Specifically, the distance ladders of the additional images 403, 404, and 405 are removed, and an additional image 411 indicating the virtual tip of the fork 15 is displayed instead. When the fork 15 is close to the pallet 91, the positional relationship between the insertion opening 91a of the pallet 91 and the fork 15 tip matters more than the distance to the object ahead, so switching to the proximity screen 252 is effective. The example in FIG. 23 shows the fork 15 tip positioned slightly to the left of the center of the insertion opening 91a; by shifting the tip slightly to the right, the driver can insert the fork 15 into the center of the opening 91a. In this example a camera is mounted on only one fork 15, so only one virtual tip position is displayed; however, two virtual positions corresponding to the two forks may be displayed instead. By switching to the proximity screen 252 when the distance from the fork 15 tip falls to the predetermined value or less, the driver can easily grasp where the tip of the fork 15 is relative to the insertion opening 91a of the pallet 91.
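 A minimal sketch of this screen switching, with the 2 m threshold taken from the example above and the names assumed for illustration:

# Hedged sketch of the seventh modification's display switching.
PROXIMITY_THRESHOLD_M = 2.0  # assumed "predetermined distance"

def select_screen(fork_tip_distance_m: float) -> str:
    """Choose which screen layout to render for the current frame."""
    if fork_tip_distance_m <= PROXIMITY_THRESHOLD_M:
        return "proximity"   # virtual fork-tip position vs. pallet opening
    return "normal"          # distance ladders and additional images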
 (Eighth Modification)
 FIG. 24 is a block diagram illustrating the hardware configuration of the image processing apparatus, the functional configuration of the processing unit, and the configuration of the HUD according to the eighth modification. FIG. 25 is an example of virtual images of load content information, empty-shelf information, and handling procedure information displayed on the HUD.
 The eighth modification has a HUD 25b as the display, with the same configuration as the HUD 25b shown in FIG. 17; its description is omitted. The image processing apparatus 20 of the eighth modification also connects via a network to an external logistics system 60, in which are stored the position information of loads in the building, load content information, empty-shelf information indicating shelf availability, handling procedure information indicating the order of handling, and the like.
 FIG. 25 shows an example of the virtual images of load content information, empty-shelf information, and handling procedure information displayed on the HUD in the eighth modification. Through the combiner 522, the driver can directly see the real objects: the floor, shelves, load 92, and pallet 91. Virtual images f11 to f13 are projected onto the combiner 522 by the HUD 25b, and the driver sees them at predetermined virtual image distances, each set to a value close to the measured distance to the corresponding real object. In a configuration that displays virtual images three-dimensionally, different virtual image distances may be set to match the distance to each target object. The virtual image f11 is load content ("product number", "raw material"), the virtual image f12 is empty-shelf information ("space for raw material BB"), and the virtual image f13 is handling procedure information ("handling order"); all of this information is acquired from the external logistics system 60. Although FIG. 25 is a display example on the HUD, the same information may also be applied to the liquid crystal display 25 used in the first embodiment and the like; for example, additional images of the load content, empty-shelf, and handling procedure information corresponding to the virtual images f11 to f13 are added to the video displayed on that display.
 Thus, in the eighth modification, using a HUD as the display lets the driver see the real objects and the virtual images together, so the information in the virtual images can be understood without refocusing the eyes, shortening recognition time, and the reduced strain on the eyes lessens fatigue. Further, by using load content information, empty-shelf information, and handling procedure information as the additional information and projecting them as virtual images, the driver can carry out handling work more safely and smoothly without being confused about the task.
 The configuration of the forklift image processing apparatus 20 described above presents the main components for explaining the features of the embodiments; it is not limited to these configurations and can be variously modified within the scope of the claims. Configurations found in general image processing apparatuses are not excluded.
 For example, in the embodiments the display 25 is attached to the forklift 10, but the invention is not limited to this. Together with, or instead of, the display 25, a display may be provided in a management office in the work space where the forklift is used, and the video signal transmitted wirelessly or otherwise by the processing unit 23 may be shown on that display. This allows the management office to supervise the work and to perform operations for keeping work records.
 The means and methods for performing the various processes in the image processing apparatus according to the embodiments described above can be realized by either dedicated hardware circuits or a programmed computer. The program may be provided, for example, on a computer-readable recording medium such as a USB memory or a DVD (Digital Versatile Disc)-ROM, or online via a network such as the Internet. In that case, the program recorded on the computer-readable recording medium is usually transferred to and stored in a storage unit such as a hard disk. The program may be provided as standalone application software, or may be incorporated into the software of the image processing apparatus as one of its functions.
 This application is based on Japanese Patent Application No. 2018-030793 filed on February 23, 2018, and Japanese Patent Application No. 2018-42292 filed on March 8, 2018, the disclosures of which are incorporated herein by reference in their entirety.
10 Forklift
11 Main body
12 Cab
13 Mast
14 Finger bar
15, 16 Fork
19 Sheath fork
 191 Main body portion
 192 Lid
 193 Transparent plate
 194 Impact mitigation member
 195 Heat conduction member
20 Image processing apparatus
21, 22 Camera
23 Processing unit
 301 Image acquisition unit
 302 Preprocessing unit
 303 Feature point extraction unit
 304 Distance map generation unit
 305 Object position determination unit
 306 Additional image generation unit
 307 Association unit
 308 Viewpoint conversion unit
 309 Image composition unit
 310 Image output unit
24 Storage unit
25 Display
40 Substrate unit
 41 Aluminum plate
 42 Camera substrate
 43 IMU substrate
25b HUD
51 Display element
52 Virtual image projection optical system
 522 Combiner
53 Moving mechanism
91 Pallet
401-407, 409-411 Additional images

Claims (41)

  1.  フォークリフトに用いられる画像処理装置であって、
     前記フォークリフトの前方側に昇降可能に支持された複数のフォークのうちの1本のフォークの先端部分に設けられ、前記フォークリフトの前方を撮影するカメラと、
     前記カメラが設けられた前記フォークの前記先端部分に設けられ、前記フォークリフトの前方にある物体までの距離を検知するための検知センサーと、
     前記検知センサーの検知情報に基づいて、前記カメラが取得した映像を加工する処理部と、
     前記処理部が加工した加工後の映像を表示するディスプレイと、
     を備える画像処理装置。
    An image processing apparatus used for a forklift,
    A camera that is provided at a front end portion of one fork among a plurality of forks supported to be movable up and down on the front side of the forklift, and shoots the front of the forklift;
    A detection sensor for detecting a distance to an object in front of the forklift, provided at the tip portion of the fork provided with the camera;
    Based on detection information of the detection sensor, a processing unit that processes the video acquired by the camera;
    A display for displaying a processed image processed by the processing unit;
    An image processing apparatus comprising:
  2.  1本の前記フォークに、第1の撮像素子および第2の撮像素子が、それぞれの撮影領域の少なくとも一部が重なるように設けられ、
     前記第1、第2の撮像素子の少なくとも一方が、前記カメラの一部として機能するとともに、前記1、第2の撮像素子の両方が前記検知センサーとして機能し、
     前記処理部は、前記第1、第2の撮像素子の双方から取得した映像に基づいて、前記フォークの前方にある物体までの距離を検出し、検出した距離に基づいて前記映像を加工する、請求項1に記載の画像処理装置。
    A first image sensor and a second image sensor are provided on one fork so that at least a part of each imaging region overlaps,
    At least one of the first and second imaging elements functions as a part of the camera, and both the first and second imaging elements function as the detection sensor,
    The processing unit detects a distance to an object in front of the fork based on images acquired from both the first and second imaging elements, and processes the image based on the detected distance. The image processing apparatus according to claim 1.
  3.  前記フォークリフトの前方に向けて光を照射する、または、前記フォークリフトの前方に向けて2次元のパターン光を照射する投光器、
     を備え、
     前記投光器、および前記処理部は、前記検知センサーとしても機能し、前記処理部は、前記投光器による照射光、または前記パターン光を撮影した前記カメラからの映像に基づいて、前記フォークリフトの前方にある物体までの距離を検出し、検出した距離に基づいて前記映像を加工する、請求項1または請求項2に記載の画像処理装置。
    A projector that emits light toward the front of the forklift, or that emits a two-dimensional pattern light toward the front of the forklift,
    With
    The projector and the processing unit also function as the detection sensor, and the processing unit is in front of the forklift based on an image from the camera that has captured the light emitted from the projector or the pattern light. The image processing apparatus according to claim 1, wherein a distance to an object is detected, and the video is processed based on the detected distance.
  4.  前記検知センサーは、前記フォークリフトの前方にある物体までの距離を検知して複数点の測距点群データを取得する測距センサーである、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the detection sensor is a distance measurement sensor that detects a distance to an object in front of the forklift and acquires a plurality of distance measurement point group data.
  5.  前記フォークの先端のテーパー部であって、上面視において先端に向けて幅が徐々に狭くなり、および/または、側面視において下面が傾斜することで厚みが先端に向けて徐々に薄くなるテーパー部に、前記カメラと前記検知センサーが設けられている、請求項1から請求項4のいずれかに記載の画像処理装置。 A tapered portion at the tip of the fork, the width of which gradually decreases toward the tip when viewed from above and / or the thickness of the taper gradually decreases toward the tip when the bottom surface is inclined when viewed from side. The image processing apparatus according to claim 1, wherein the camera and the detection sensor are provided.
  6.  The image processing apparatus according to claim 1, wherein a first image sensor and a second image sensor are provided on one of the forks with the area in front of the forklift as their imaging area, such that their imaging areas are at least partially shared;
    at least one of the first and second image sensors functions as a part of the camera, while both of the first and second image sensors function as the detection sensor;
    the processing unit detects a distance to an object in front of the forklift on the basis of the videos acquired from both the first and second image sensors; and
    the first and second image sensors are arranged on the left and right sides, respectively, of a tapered portion at the tip of the fork, the tapered portion becoming gradually narrower toward the tip in top view and gradually thinner toward the tip, in side view, by an inclined lower surface.
  7.  The image processing apparatus according to any one of claims 1 to 6, further comprising a storage unit,
    wherein the processing unit generates, on the basis of the video from the camera and the detection information from the detection sensor, a distance map, which is a ranging point cloud indicating the positions or shapes of objects in the work space where the forklift operates, and accumulates the distance map in the storage unit.
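    To illustrate the accumulation in claim 7: each scan's points can be transformed into the work-space frame (using, for example, the fork position state of claim 9) and merged into a stored map. A minimal voxel-based sketch (class and parameter names are hypothetical):

        import numpy as np

        class DistanceMapStore:
            """Accumulates measured points into a work-space-frame distance map.
            A voxel grid keeps the stored point set bounded."""

            def __init__(self, voxel_size_m=0.05):
                self.voxel_size = voxel_size_m
                self.voxels = {}  # voxel index -> representative world point

            def add_scan(self, points_cam, cam_pose):
                """points_cam: (N, 3) points in the camera frame (meters).
                cam_pose: 4x4 camera-to-world homogeneous transform."""
                homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
                points_world = (cam_pose @ homog.T).T[:, :3]
                for p in points_world:
                    key = tuple(np.floor(p / self.voxel_size).astype(int))
                    self.voxels[key] = p  # the newest measurement wins in this sketch

            def points(self):
                return np.array(list(self.voxels.values()))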
  8.  The image processing apparatus according to claim 7, wherein the processing unit uses the distance map accumulated in the storage unit to correct a part of the data on the distance to an object detected by the detection sensor, or detected on the basis of the detection information of the detection sensor and the video from the camera.
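    A sketch of the correction in claim 8, assuming the stored distance map has already been rendered into the current camera view so that measured and map-predicted depth images can be compared pixel by pixel (the deviation threshold is hypothetical):

        import numpy as np

        def correct_depth(measured, predicted_from_map, max_dev_m=0.5):
            """Fill dropouts and reject outliers in the live depth image using
            the depth predicted from the accumulated distance map."""
            corrected = measured.copy()
            missing = ~np.isfinite(measured)                        # sensor dropouts
            outlier = np.abs(measured - predicted_from_map) > max_dev_m
            replace = missing | outlier
            corrected[replace] = predicted_from_map[replace]
            return corrected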
  9.  The image processing apparatus according to any one of claims 1 to 8, further comprising a position detection sensor that acquires the positional state of the fork,
    wherein the processing unit acquires, by means of the position detection sensor, the positional state of the fork on which the camera is provided.
  10.  The image processing apparatus according to any one of claims 1 to 9, wherein, as the processed video, the processing unit causes the display to show the video acquired by the camera with additional information corresponding to the distance to an object ahead added to it.
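    One plausible form of the additional information in claim 10 is a box and distance label per detected object. A minimal sketch (the detector supplying `box` is assumed, not specified here, and the 1 m color threshold is hypothetical):

        import cv2

        def annotate_distance(frame, box, distance_m):
            """Draw one object's bounding box and distance onto the camera frame.
            `box` is (x, y, w, h) in pixels from an assumed upstream detector."""
            x, y, w, h = box
            # Shift from green to red once the object is closer than 1 m (assumed).
            color = (0, 0, 255) if distance_m < 1.0 else (0, 200, 0)  # BGR
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
            cv2.putText(frame, f"{distance_m:.1f} m", (x, max(0, y - 8)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
            return frame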
  11.  The image processing apparatus according to any one of claims 1 to 10, wherein, as the processed video, the processing unit causes the display to show a viewpoint-converted version of the video acquired by the camera.
  12.  The image processing apparatus according to any one of claims 1 to 11, wherein the camera performs exposure metering using the central portion of its angle of view.
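    Center-weighted exposure as in claims 12 and 20 can be sketched as metering only a central crop and steering the exposure time toward a target brightness (the target, crop fraction, and limits below are hypothetical):

        import numpy as np

        def center_metered_exposure(gray, exposure_s, target=118.0, roi_frac=0.4):
            """Return an updated exposure time metered on the central portion
            of the angle of view only."""
            h, w = gray.shape
            ch, cw = int(h * roi_frac), int(w * roi_frac)
            y0, x0 = (h - ch) // 2, (w - cw) // 2
            mean_center = gray[y0:y0 + ch, x0:x0 + cw].mean()
            ratio = target / max(mean_center, 1.0)   # proportional correction
            return float(np.clip(exposure_s * ratio, 1e-4, 1e-1))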
  13.  The image processing apparatus according to any one of claims 1 to 12, wherein the display is a head-up display that projects a virtual image onto a combiner attached to the forklift.
  14.  The image processing apparatus according to claim 13, wherein the combiner is arranged at a position through which the front side of the forklift can be seen, and the head-up display has a virtual-image projection distance set in the range of 50 cm to 20 m.
  15.  The image processing apparatus according to any one of claims 1 to 14, wherein the camera and the detection sensor provided at the tip portion of the fork are attached to the main body of the fork via an impact-absorbing member.
  16.  The image processing apparatus according to claim 15, wherein at least some of the electronic components constituting the camera and the detection sensor are connected to the main body of the fork via a heat-conducting member made of a flexible, highly heat-conductive material.
  17.  An image processing apparatus used for a forklift, comprising:
    a camera that captures video of the area in front of the forklift;
    a detection sensor for measuring the distance to objects in front of the forklift and acquiring ranging point cloud data indicating the distribution of distance values;
    a processing unit that processes the video acquired by the camera by adding to it a distance-value image based on the acquired ranging point cloud data; and
    a display that displays the processed video produced by the processing unit.
  18.  The image processing apparatus according to claim 17, wherein the camera includes an image sensor having sensitivity in the visible light region.
  19.  The image processing apparatus according to claim 17 or 18, wherein the camera is installed, so as to capture the area ahead, on a fork supported on the front side of the forklift so as to be movable up and down.
  20.  The image processing apparatus according to claim 19, wherein the camera performs exposure metering using the central portion of its angle of view.
  21.  The image processing apparatus according to any one of claims 17 to 20, wherein, as part of the processing, the processing unit further performs viewpoint conversion on the video on the basis of the ranging point cloud data.
  22.  The image processing apparatus according to claim 21, further comprising a position detection sensor that acquires posture information of the camera,
    wherein the processing unit performs the viewpoint conversion using the posture information acquired from the position detection sensor.
  23.  The image processing apparatus according to claim 21 or 22, further comprising a storage unit,
    wherein the processing unit creates a three-dimensional distance map using the ranging point cloud data acquired by the detection sensor and stores it in the storage unit.
  24.  The image processing apparatus according to claim 23, wherein the three-dimensional distance map stored in the storage unit reflects drawing data on the building or equipment in which the forklift is used, ranging point cloud data obtained from sensors installed in the building, position information of other vehicles, and/or position information of loads acquired from a logistics information system used in the building.
  25.  The image processing apparatus according to claim 24, wherein the three-dimensional distance map includes position information of floor surfaces, wall surfaces, windows, or lighting equipment in the building or facility.
  26.  The image processing apparatus according to any one of claims 21 to 25, wherein the viewpoint conversion takes as its virtual viewpoint the viewpoint position of the driver sitting in the cab of the forklift, a position higher than the driver's viewpoint position, or a position away from the forklift.
  27.  The image processing apparatus according to claim 26, wherein the viewpoint conversion taking the driver's viewpoint position as the virtual viewpoint is a viewpoint conversion by keystone correction according to the angle or height of the camera with respect to the ground, or a viewpoint conversion using the ranging point cloud data or the three-dimensional distance map stored in the storage unit.
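    The keystone-correction variant in claim 27 can be approximated by a single perspective warp whose trapezoid would, in practice, be derived from the camera's tilt and height; in this sketch that derivation is replaced by a fixed, hypothetical `shrink_top` factor:

        import cv2
        import numpy as np

        def keystone_correct(frame, shrink_top=0.25):
            """Warp a trapezoid (narrower at the top) to a rectangle to undo the
            keystone distortion of a low, tilted fork camera."""
            h, w = frame.shape[:2]
            dx = shrink_top * w / 2.0
            src = np.float32([[dx, 0], [w - dx, 0], [w, h], [0, h]])
            dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            H = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(frame, H, (w, h))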
  28.  The image processing apparatus according to claim 26, wherein the viewpoint conversion taking a position higher than the driver's viewpoint position as the virtual viewpoint is a viewpoint conversion using the ranging point cloud data or the three-dimensional distance map stored in the storage unit.
  29.  The image processing apparatus according to claim 26 or 28, wherein, in the viewpoint conversion taking a position higher than the driver's viewpoint position as the virtual viewpoint or the viewpoint conversion taking a position away from the forklift as the virtual viewpoint, a blind-spot area of the camera is filled with the texture of the surface of an object that, within the camera's angle of view, lies above the object forming the blind-spot area and at a distance farther than that object.
  30.  The image processing apparatus according to claim 26 or 28, wherein, in the viewpoint conversion taking a position higher than the driver's viewpoint position as the virtual viewpoint or the viewpoint conversion taking a position away from the forklift as the virtual viewpoint, the outline of an object present in a blind-spot area of the camera is superimposed on the blind-spot area using outline information of the object in the three-dimensional distance map stored in the storage unit.
  31.  The image processing apparatus according to any one of claims 21 to 30, wherein the presence or strength of the viewpoint conversion is changed according to the distance to an object.
  32.  The image processing apparatus according to any one of claims 17 to 20, wherein the display is a transparent screen or a head-up display attached to the forklift so that the area in front of the forklift can be seen through it, and
    the processing unit recognizes objects in front of the forklift, generates additional images corresponding to the distance and/or direction to each recognized object, and causes the transparent screen or the head-up display to show the generated additional images superimposed on the respective objects.
  33.  The image processing apparatus according to any one of claims 17 to 32, wherein the processing unit recognizes an object in front of the forklift and, as part of the processing, generates an additional image corresponding to the type of the recognized object, or to the distance to and position of the object, and adds it to the video.
  34.  The image processing apparatus according to claim 33, wherein, when a pallet is recognized as the object, the processing unit determines the inclination with respect to the pallet from the shape of the pallet's insertion openings, and generates the additional image according to the determined amount of inclination of the pallet's horizontal plane.
  35.  The image processing apparatus according to any one of claims 32 to 34, wherein the additional image generated by the processing unit includes at least one of: content information of loads acquired from a logistics information system used in the building where the forklift is used, empty-shelf information indicating shelf availability, and cargo-handling procedure information indicating the handling procedure.
  36.  The image processing apparatus according to any one of claims 17 to 35, wherein the processing unit generates a bird's-eye image from an overhead viewpoint according to the distance to the object, and additionally shows the generated bird's-eye image on the display.
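    The bird's-eye image of claim 36 can be sketched by rasterizing the measured points into a top view; here brightness encodes obstacle height, and the image size and ranges are hypothetical:

        import numpy as np

        def overhead_image(points_xyz, size_px=400, range_m=10.0):
            """Rasterize 3D points (camera frame: x right, y up, z forward,
            in meters) into a square top-view image."""
            img = np.zeros((size_px, size_px), dtype=np.uint8)
            for x, y, z in points_xyz:
                if 0.0 <= z < range_m and -range_m / 2 <= x < range_m / 2:
                    col = int((x + range_m / 2) / range_m * size_px)
                    row = size_px - 1 - int(z / range_m * size_px)  # forward = up
                    value = np.uint8(np.clip(y * 50.0 + 128.0, 0, 255))
                    img[row, col] = max(img[row, col], value)
            return img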
  37.  The image processing apparatus according to any one of claims 17 to 36, wherein the processing unit recognizes an object in front of the forklift and, when the distance from the forklift or from the fork tips of the forklift falls below a predetermined value, issues a warning or switches the display to a close-range screen.
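    The warning/screen-switch behavior of claim 37 reduces to thresholding the shortest measured distance; the two thresholds below are hypothetical, not values from the specification:

        def proximity_state(min_distance_m, warn_at_m=1.5, switch_at_m=0.8):
            """Map the shortest fork-tip distance to a display behavior."""
            if min_distance_m <= switch_at_m:
                return "close_up_view"  # switch the display to the close-range screen
            if min_distance_m <= warn_at_m:
                return "warning"        # issue a warning, keep the normal view
            return "normal"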
  38.  The image processing apparatus according to any one of claims 17 to 37, wherein the processing unit recognizes an object in front of the forklift and outputs information on the shortest distance from the fork tips to the object recognized in the distance-value image.
  39.  A control program executed by a computer that controls an image processing apparatus used for a forklift, the apparatus comprising a camera that captures video of the area in front of the forklift and a detection sensor for measuring the distance to objects in front of the forklift and acquiring ranging point cloud data indicating the distribution of distance values, the control program causing the computer to execute a process including:
    a step (a) of acquiring video with the camera;
    a step (b) of acquiring ranging point cloud data with the detection sensor;
    a step (c) of processing the video acquired by the camera by adding to it a distance-value image based on the acquired ranging point cloud data; and
    a step (d) of displaying the processed video on a display.
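    Steps (a) to (d) of claim 39 form a simple acquire-process-display loop. A minimal sketch; `camera`, `range_sensor`, and `display` are hypothetical device wrappers, and `points` rows are assumed to be (u, v, distance_m) already projected into the image:

        import cv2

        def render_distances(frame, points):
            """Step (c): paint each ranging point as a distance-coded dot."""
            out = frame.copy()
            for u, v, d in points:
                color = (0, 0, 255) if d < 1.0 else (0, 200, 0)  # near = red (BGR)
                cv2.circle(out, (int(u), int(v)), 3, color, -1)
            return out

        def run_control_loop(camera, range_sensor, display):
            """Loop mirroring steps (a)-(d) of claim 39."""
            while True:
                frame = camera.read()            # (a) acquire video with the camera
                points = range_sensor.read()     # (b) acquire ranging point cloud data
                processed = render_distances(frame, points)  # (c) add distance image
                display.show(processed)          # (d) display the processed video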
  40.  The control program according to claim 39, wherein, in step (c), the processing further includes viewpoint conversion performed on the video on the basis of the ranging point cloud data.
  41.  The control program according to claim 39 or 40, wherein the process further includes a step (e) of recognizing an object in front of the forklift, and
    in step (c), the processing includes generating an additional image corresponding to the type of the recognized object, or to the distance to and position of the object, and adding it to the video.
PCT/JP2019/002062 2018-02-23 2019-01-23 Forklift image processing device and control program WO2019163378A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020502093A JP7259835B2 (en) 2018-02-23 2019-01-23 Image processor and control program for forklift

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018-030793 2018-02-23
JP2018030793 2018-02-23
JP2018-042292 2018-03-08
JP2018042292 2018-03-08

Publications (1)

Publication Number Publication Date
WO2019163378A1 true WO2019163378A1 (en) 2019-08-29

Family

ID=67686835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/002062 WO2019163378A1 (en) 2018-02-23 2019-01-23 Forklift image processing device and control program

Country Status (2)

Country Link
JP (1) JP7259835B2 (en)
WO (1) WO2019163378A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6542574B2 (en) * 2015-05-12 2019-07-10 株式会社豊田中央研究所 forklift

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09309696A (en) * 1996-05-21 1997-12-02 Toyo Umpanki Co Ltd Forklift
JP2002087793A (en) * 2000-09-07 2002-03-27 Toshiba Fa Syst Eng Corp Pallet carrying device
JP2003063793A (en) * 2001-08-30 2003-03-05 Nippon Yusoki Co Ltd Contact detecting sensor for fork
JP2003246597A (en) * 2002-02-20 2003-09-02 Fujitsu General Ltd Camera system for fork lift truck
JP2006096457A (en) * 2004-09-28 2006-04-13 Toyota Industries Corp Forklift work assisting device
JP2007084162A (en) * 2005-09-20 2007-04-05 Toyota Industries Corp Cargo handling supporting device of fork lift
JP2009111946A (en) * 2007-11-01 2009-05-21 Alpine Electronics Inc Vehicle surrounding image providing apparatus
JP2011162189A (en) * 2010-02-12 2011-08-25 Robert Bosch Gmbh Dynamic range display for automotive rear-view and parking systems
JP2011195334A (en) * 2010-09-03 2011-10-06 Shinmei Ind Co Ltd Safety device of forklift
JP2013086959A (en) * 2011-10-21 2013-05-13 Sumitomo Heavy Ind Ltd Support device and method for positioning fork of forklift
JP2014064192A (en) * 2012-09-21 2014-04-10 Komatsu Ltd Periphery monitoring system for work vehicle and work vehicle
JP2016161340A (en) * 2015-02-27 2016-09-05 株式会社デンソー Noise reduction method and object recognition device
WO2017090568A1 (en) * 2015-11-26 2017-06-01 京セラ株式会社 Display device, moving body, and light source device
JP2017145586A (en) * 2016-02-16 2017-08-24 株式会社小松製作所 Work vehicle
JP2017151652A (en) * 2016-02-23 2017-08-31 村田機械株式会社 Object state specification method, object state specification apparatus, and conveyance vehicle
JP2017174197A (en) * 2016-03-24 2017-09-28 日産自動車株式会社 Three-dimensional object detection method and three-dimensional object detection device
JP3205015U (en) * 2016-04-18 2016-06-30 株式会社豊田自動織機 Manned and unmanned forklift

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7272197B2 (en) 2019-09-13 2023-05-12 株式会社豊田自動織機 Position and orientation estimation device
JP2021042070A (en) * 2019-09-13 2021-03-18 株式会社豊田自動織機 Position and attitude estimation device
CN110733424B (en) * 2019-10-18 2022-03-15 深圳市麦道微电子技术有限公司 Method for calculating horizontal distance between ground position and vehicle body in driving video system
CN110733424A (en) * 2019-10-18 2020-01-31 深圳市麦道微电子技术有限公司 Calculation method for horizontal distance between ground position and vehicle body in driving video systems
CN113387302A (en) * 2020-02-27 2021-09-14 三菱物捷仕株式会社 Arithmetic device, movement control system, control device, mobile body, arithmetic method, and computer-readable storage medium
EP3882574A3 (en) * 2020-02-27 2022-01-26 Mitsubishi Logisnext Co., Ltd. Arithmetic device, control device, calculation method, and computer readable storage medium
CN113387302B (en) * 2020-02-27 2023-11-24 三菱物捷仕株式会社 Mobile control system, mobile body, control method, and storage medium
KR20220010965A (en) * 2020-07-20 2022-01-27 세메스 주식회사 Gas cylinder transfer apparatus and gas cylinder logistics system including the same
KR102606604B1 (en) * 2020-07-20 2023-11-27 세메스 주식회사 Gas cylinder transfer apparatus and gas cylinder logistics system including the same
CN113280855B (en) * 2021-04-30 2022-07-26 中国船舶重工集团公司第七一三研究所 Intelligent sensing system and method for multi-source sensing pallet fork
CN113280855A (en) * 2021-04-30 2021-08-20 中国船舶重工集团公司第七一三研究所 Intelligent sensing system and method for multi-source sensing pallet fork
CN115848878A (en) * 2023-02-28 2023-03-28 云南烟叶复烤有限责任公司 AGV-based cigarette frame identification and stacking method and system
CN115848878B (en) * 2023-02-28 2023-05-26 云南烟叶复烤有限责任公司 AGV-based tobacco frame identification and stacking method and system

Also Published As

Publication number Publication date
JPWO2019163378A1 (en) 2021-03-04
JP7259835B2 (en) 2023-04-18

Similar Documents

Publication Publication Date Title
JP2019156641A (en) Image processing device for fork lift and control program
WO2019163378A1 (en) Forklift image processing device and control program
CN110073659B (en) Image projection apparatus
JP6827073B2 (en) Depth mapping with structured light and time of flight
US10768416B2 (en) Projection type display device, projection display method, and projection display program
CN107944422B (en) Three-dimensional camera device, three-dimensional camera method and face recognition method
US9440591B2 (en) Enhanced visibility system
JP6036065B2 (en) Gaze position detection device and gaze position detection method
EP3288259A1 (en) Array detector for depth mapping
KR102059244B1 (en) Apparatus for Light Detection and Ranging
JP2008143701A (en) View improving system and method for industrial vehicle
EP3351958A1 (en) Object position detection apparatus
EP3689810B1 (en) Multi-function camera system
JP2019142714A (en) Image processing device for fork lift
JPWO2019163378A5 (en)
WO2018047400A1 (en) Vehicle display control device, vehicle display system, vehicle display control method, and program
JP6353989B2 (en) Projection display apparatus and projection control method
JP2013057541A (en) Method and device for measuring relative position to object
KR102536448B1 (en) Monitoring device to detect the risk of forward collision of forklift
JP2013024735A (en) Mobile body and movement surface detection system
JP2013009267A (en) Periphery monitoring device
EP3817373B1 (en) Overhead image generation apparatus, overhead image generation method, and program
JP2020040769A (en) Forklift image processing device and control program
JP2020142903A (en) Image processing apparatus and control program
US20130241882A1 (en) Optical touch system and optical touch position detecting method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19757936; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020502093; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19757936; Country of ref document: EP; Kind code of ref document: A1)