CN113631779A - Excavator and construction system - Google Patents


Info

Publication number
CN113631779A
Authority
CN
China
Prior art keywords
information
image
shovel
construction
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080024901.8A
Other languages
Chinese (zh)
Inventor
西贵志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sumitomo SHI Construction Machinery Co Ltd
Original Assignee
Sumitomo SHI Construction Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sumitomo SHI Construction Machinery Co Ltd filed Critical Sumitomo SHI Construction Machinery Co Ltd
Publication of CN113631779A

Classifications

    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/26 Indicating devices
    • E02F9/261 Surveying the work-site to be treated
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20 Drives; Control devices
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F3/00 Dredgers; Soil-shifting machines
    • E02F3/04 Dredgers; Soil-shifting machines mechanically-driven
    • E02F3/28 Dredgers; Soil-shifting machines mechanically-driven with digging tools mounted on a dipper- or bucket-arm, i.e. there is either one arm or a pair of arms, e.g. dippers, buckets
    • E02F3/36 Component parts
    • E02F3/42 Drives for dippers, buckets, dipper-arms or bucket-arms
    • E02F3/43 Control of dipper or bucket position; Control of sequence of drive operations
    • E02F3/435 Control of dipper or bucket position; Control of sequence of drive operations for dipper-arms, backhoes or the like
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/08 Superstructures; Supports for superstructures
    • E02F9/10 Supports for movable superstructures mounted on travelling or walking gears or on other superstructures
    • E02F9/12 Slewing or traversing gears
    • E02F9/121 Turntables, i.e. structure rotatable about 360°
    • E02F9/123 Drives or control devices specially adapted therefor
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20 Drives; Control devices
    • E02F9/2025 Particular purposes of control systems not otherwise provided for
    • E02F9/2033 Limiting the movement of frames or implements, e.g. to avoid collision between implements and the cabin
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20 Drives; Control devices
    • E02F9/22 Hydraulic or pneumatic drives
    • E02F9/226 Safety arrangements, e.g. hydraulic driven fans, preventing cavitation, leakage, overheating
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/24 Safety devices, e.g. for preventing overload
    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/26 Indicating devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Mining & Mineral Resources (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Mechanical Engineering (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Component Parts Of Construction Machinery (AREA)
  • Operation Control Of Excavators (AREA)

Abstract

A shovel (100) includes: a lower traveling body (1); an upper slewing body (3); a nonvolatile storage device (NM); an information acquisition device (E1) that acquires information relating to construction; and a controller (30) that controls a sound output device (D2). The controller (30) is configured to determine, based on the information acquired by the information acquisition device (E1), whether a dangerous situation has occurred. The shovel (100) may be configured to display information relating to the dangerous situation determined to have occurred on a display device.

Description

Excavator and construction system
Technical Field
The invention relates to an excavator and a construction system.
Background
A shovel configured to detect a person present in its surroundings from an image acquired by a camera attached to the upper slewing body is conventionally known (see Patent Document 1).
Prior art documents
Patent document
Patent Document 1: Japanese Patent Laid-Open No. 2014-183500
Disclosure of Invention
Technical problem to be solved by the invention
However, although this shovel can detect a person who enters a predetermined range set around it, it merely compares the relative positional relationship between the shovel and the entering person; it does not grasp the state of the work site as a whole.
It is therefore desirable to provide a machine or system capable of grasping the state of the work site.
Means for solving the technical problem
An excavator according to an embodiment of the present invention includes: a lower traveling body; an upper slewing body rotatably mounted on the lower traveling body; a storage device provided on the upper slewing body; an information acquisition device that acquires information relating to construction; and a control device that controls a notification device, wherein the control device determines whether a dangerous situation has occurred based on the information acquired by the information acquisition device.
Effects of the invention
The above excavator makes it possible to prevent a dangerous situation before it occurs.
Drawings
Fig. 1 is a side view of a shovel according to an embodiment of the present invention.
Fig. 2 is a top view of the excavator of fig. 1.
Fig. 3 is a diagram showing a configuration example of a basic system mounted on the shovel of fig. 1.
Fig. 4 is a schematic diagram showing an example of the relationship between the risk determination unit and the risk information database.
Fig. 5 is a diagram showing an example of display of an input image.
Fig. 6 is a diagram showing another display example of an input image.
Fig. 7 is a top view of an excavator excavating a hole.
Fig. 8 is a diagram showing still another display example of an input image.
Fig. 9 is a diagram showing still another display example of an input image.
Fig. 10 is a diagram showing a configuration example of the shovel support system.
Fig. 11 is a diagram showing a configuration example of a construction system.
Fig. 12 is a schematic diagram showing another example of the relationship between the risk determination unit and the risk information database.
Fig. 13 is a diagram showing a configuration example of the shovel support system.
Fig. 14 is a schematic diagram showing an example of the determination process by the determination unit.
Fig. 15 is a timing chart showing an example of the operation of the shovel support system.
Fig. 16 is a schematic diagram showing another example of the determination process by the determination unit.
Fig. 17 is a schematic diagram showing another example of the determination process by the determination unit.
Fig. 18 is a diagram showing another configuration example of the shovel support system.
Fig. 19 is a diagram showing a configuration example of an image display unit and an operation unit of the display device.
Fig. 20 is a schematic diagram showing an example of a construction system.
Fig. 21 is a schematic view showing another example of the construction system.
Detailed Description
First, a shovel 100 as an excavator according to an embodiment of the present invention will be described with reference to fig. 1 to 3. Fig. 1 is a side view of an excavator 100. Fig. 2 is a top view of the shovel 100. Fig. 3 shows a configuration example of a basic system mounted on the shovel 100 shown in fig. 1.
In the present embodiment, the lower traveling body 1 of the shovel 100 includes a crawler belt 1C. The crawler belt 1C is driven by a traveling hydraulic motor 2M as a traveling actuator mounted on the lower traveling body 1. Specifically, as shown in fig. 2, the crawler belt 1C includes a left crawler belt 1CL and a right crawler belt 1CR, and the traveling hydraulic motor 2M includes a left traveling hydraulic motor 2ML and a right traveling hydraulic motor 2MR. The left crawler belt 1CL is driven by the left traveling hydraulic motor 2ML, and the right crawler belt 1CR is driven by the right traveling hydraulic motor 2MR.
The upper slewing body 3 is rotatably mounted on the lower traveling body 1 via a slewing mechanism 2. The slewing mechanism 2 is driven by a slewing hydraulic motor 2A serving as a slewing actuator mounted on the upper slewing body 3. However, the slewing actuator may instead be a slewing motor generator serving as an electric actuator.
A boom 4 is attached to the upper slewing body 3. An arm 5 is attached to a front end of the boom 4, and a bucket 6 as a terminal attachment is attached to a front end of the arm 5. The boom 4, the arm 5, and the bucket 6 constitute an excavation attachment AT as an example of an attachment. Boom 4 is driven by boom cylinder 7, arm 5 is driven by arm cylinder 8, and bucket 6 is driven by bucket cylinder 9.
The boom 4 is supported so as to be vertically rotatable with respect to the upper slewing body 3. A boom angle sensor S1 is attached to the boom 4. The boom angle sensor S1 can detect a boom angle θ1, which is the turning angle of the boom 4. The boom angle θ1 is, for example, a rising angle measured from the state in which the boom 4 is lowered to its lowest position. Therefore, the boom angle θ1 is at its maximum when the boom 4 is raised to its highest position.
The arm 5 is supported so as to be rotatable with respect to the boom 4. An arm angle sensor S2 is attached to the arm 5. The arm angle sensor S2 can detect an arm angle θ2, which is the rotation angle of the arm 5. The arm angle θ2 is, for example, an opening angle measured from the state in which the arm 5 is maximally closed. Therefore, the arm angle θ2 is at its maximum when the arm 5 is maximally opened.
The bucket 6 is supported so as to be rotatable with respect to the arm 5. A bucket angle sensor S3 is attached to the bucket 6. The bucket angle sensor S3 can detect a bucket angle θ3, which is the rotation angle of the bucket 6. The bucket angle θ3 is an opening angle measured from the state in which the bucket 6 is maximally closed. Therefore, the bucket angle θ3 is at its maximum when the bucket 6 is maximally opened.
In the embodiment of fig. 1, the boom angle sensor S1, the arm angle sensor S2, and the bucket angle sensor S3 are each configured as a combination of an acceleration sensor and a gyro sensor. However, each of them may be configured as an acceleration sensor alone. The boom angle sensor S1 may also be a stroke sensor attached to the boom cylinder 7, or a rotary encoder, a potentiometer, an inertial measurement unit, or the like. The same applies to the arm angle sensor S2 and the bucket angle sensor S3.
The upper slewing body 3 is provided with a cab 10 serving as the operator's cabin and is mounted with a power source such as an engine 11. The upper slewing body 3 is also provided with a space recognition device 70, a direction detection device 71, a positioning device 73, a body inclination sensor S4, a slewing angular velocity sensor S5, and the like. The cab 10 is provided therein with an operation device 26, an operation pressure sensor 29, a controller 30, an information input device 72, a display device D1, a sound output device D2, and the like. In this specification, for convenience, the side of the upper slewing body 3 to which the excavation attachment AT is attached (+X side) is referred to as the front side, and the side to which the counterweight is attached (-X side) is referred to as the rear side.
The operation device 26 is a device used by the operator to operate the actuators. The operation device 26 includes, for example, an operation lever and an operation pedal. The actuators include at least one of a hydraulic actuator and an electric actuator. In the present embodiment, as shown in fig. 3, the operation device 26 is configured to supply the hydraulic oil discharged by the pilot pump 15 to the pilot port of the corresponding control valve within the control valve 17 via a pilot line. The pressure (pilot pressure) of the hydraulic oil supplied to each pilot port corresponds to the operation direction and operation amount of the operation device 26 for the corresponding hydraulic actuator. However, the operation device 26 may be of an electrically controlled type instead of such a pilot-pressure type. In that case, the control valves within the control valve 17 may be electromagnetic-solenoid-operated spool valves.
Specifically, as shown in fig. 2, the operation device 26 includes a left operation lever and a right operation lever. The left operation lever is used for the slewing operation and the operation of the arm 5. The right operation lever is used for the operation of the boom 4 and the operation of the bucket 6.
The operation pressure sensor 29 is configured to be able to detect the content of an operation performed by the operator on the operation device 26. In the present embodiment, the operation pressure sensor 29 detects the operation direction and the operation amount of the operation device 26 corresponding to each actuator as a pressure (operation pressure), and outputs the detected values to the controller 30. The operation content of the operation device 26 may be detected by a sensor other than the operation pressure sensor.
Specifically, the operation pressure sensor 29 includes a left operation pressure sensor and a right operation pressure sensor. The left operation pressure sensor detects, in the form of pressures, the content of the operator's operation of the left operation lever in the front-rear direction and the content of the operation in the left-right direction, and outputs the detected values to the controller 30. The operation content includes, for example, the lever operation direction and the lever operation amount (lever operation angle). The same applies to the right operation lever.
The space recognition device 70 is configured to acquire information relating to the three-dimensional space around the shovel 100. The space recognition device 70 may also be configured to calculate the distance from the space recognition device 70 or the shovel 100 to an object recognized by the space recognition device 70. The space recognition device 70 is, for example, an ultrasonic sensor, a millimeter-wave radar, a monocular camera, a stereo camera, a LIDAR, a range image sensor, an infrared sensor, or the like. In the present embodiment, the space recognition device 70 includes a front camera 70F attached to the front end of the upper surface of the cab 10, a rear camera 70B attached to the rear end of the upper surface of the upper slewing body 3, a left camera 70L attached to the left end of the upper surface of the upper slewing body 3, and a right camera 70R attached to the right end of the upper surface of the upper slewing body 3. The front camera 70F may be omitted.
When a camera is used as the space recognition device 70, it is, for example, a monocular camera having an imaging element such as a CCD or CMOS, and it outputs the captured image to the display device D1. When a LIDAR, millimeter-wave radar, ultrasonic sensor, laser radar, or the like is used as the space recognition device 70, the device may, instead of using a captured image, transmit a large number of signals (laser light or the like) toward an object and detect the distance and direction of the object from the reflected signals.
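For the active sensors mentioned above, the distance to an object typically follows from the round-trip time of the reflected signal. The patent does not give the computation; the following is a minimal time-of-flight sketch, with the function name and timing values invented for illustration.

```python
def time_of_flight_distance(round_trip_s: float, wave_speed_m_s: float) -> float:
    """Distance to the reflecting object: the signal travels out and back,
    so the one-way distance is half the round-trip path length."""
    return wave_speed_m_s * round_trip_s / 2.0

# A laser pulse returning after 100 ns corresponds to 15 m;
# an ultrasonic echo returning after 20 ms corresponds to about 3.4 m.
print(time_of_flight_distance(1.0e-7, 3.0e8))  # -> 15.0
print(time_of_flight_distance(0.02, 343.0))    # -> 3.43
```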
The space recognition device 70 may be configured to detect an object present around the shovel 100. Examples of such objects include topographical features (such as slopes and holes), electric wires, utility poles, people, animals, vehicles, construction machines, buildings, walls, helmets, safety vests, work clothes, and predetermined marks on helmets. The space recognition device 70 may be configured to identify at least one of the type, position, shape, and the like of the object. The space recognition device 70 may also be configured to distinguish between a person and an object other than a person.
The direction detection device 71 is configured to detect information relating to the relative relationship between the orientation of the upper slewing body 3 and the orientation of the lower traveling body 1. The direction detection device 71 may be constituted by, for example, a combination of a geomagnetic sensor attached to the lower traveling body 1 and a geomagnetic sensor attached to the upper slewing body 3. Alternatively, the direction detection device 71 may be constituted by a combination of a GNSS receiver mounted on the lower traveling body 1 and a GNSS receiver mounted on the upper slewing body 3. The direction detection device 71 may also be a rotary encoder, a rotary position sensor, or the like. In a configuration in which the upper slewing body 3 is rotationally driven by a slewing motor generator, the direction detection device 71 may be constituted by a resolver. The direction detection device 71 may be attached to, for example, a center joint provided in association with the slewing mechanism 2, which realizes the relative rotation between the lower traveling body 1 and the upper slewing body 3.
The direction detection device 71 may also be constituted by a camera attached to the upper slewing body 3. In this case, the direction detection device 71 applies known image processing to an image captured by the camera (input image) to detect the image of the lower traveling body 1 included in the input image. By detecting the image of the lower traveling body 1 with a known image recognition technique, the direction detection device 71 determines the longitudinal direction of the lower traveling body 1. It then derives the angle formed between the direction of the front-rear axis of the upper slewing body 3 and the longitudinal direction of the lower traveling body 1. The direction of the front-rear axis of the upper slewing body 3 can be derived from the input image because the relationship between the optical axis of the camera and the front-rear axis of the upper slewing body 3 is known. Since the crawler belts 1C protrude from the upper slewing body 3, the direction detection device 71 can determine the longitudinal direction of the lower traveling body 1 by detecting the image of the crawler belts 1C. The direction detection device 71 may be integrated into the controller 30.
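The patent does not detail the final angle computation; as a rough sketch, once image recognition yields a direction vector for the crawler tracks in camera coordinates (with the optical axis, and hence the body's front-rear axis, taken as +y), the relative angle can be derived as below. The function name, axis convention, and input values are assumptions for illustration.

```python
import math

def body_to_track_angle(track_dir_x: float, track_dir_y: float) -> float:
    """Angle (radians) between the upper slewing body's front-rear axis and
    the detected longitudinal direction of the crawler tracks, assuming the
    camera optical axis is aligned with the body's front-rear axis (+y)."""
    angle = math.atan2(track_dir_x, track_dir_y)  # 0 when the track points straight ahead
    # Tracks have no inherent 'forward' end, so fold into [-90, +90] degrees.
    if angle > math.pi / 2:
        angle -= math.pi
    elif angle < -math.pi / 2:
        angle += math.pi
    return angle

print(math.degrees(body_to_track_angle(0.26, 0.97)))  # ~15 degrees of relative slew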
The information input device 72 is configured to allow the operator of the shovel to input information to the controller 30. In the present embodiment, the information input device 72 is a switch panel provided near the image display unit 41 of the display device D1. However, the information input device 72 may be a touch panel disposed on the image display unit 41 of the display device D1, a switch such as a cross-shaped button provided at the tip of an operation lever, or a voice input device such as a microphone disposed in the cab 10. The information input device 72 may also be a communication device. In that case, the operator can input information to the controller 30 via a communication terminal such as a smartphone.
The positioning device 73 is configured to measure the current position. In the present embodiment, the positioning device 73 is a GNSS receiver that detects the position of the upper slewing body 3 and outputs the detected value to the controller 30. The positioning device 73 may also be a GNSS compass. In that case, the positioning device 73 can detect both the position and the orientation of the upper slewing body 3.
The body inclination sensor S4 detects the inclination of the upper slewing body 3 with respect to a predetermined plane. In the present embodiment, the body inclination sensor S4 is an acceleration sensor that detects the inclination angle about the front-rear axis (roll angle) and the inclination angle about the left-right axis (pitch angle) of the upper slewing body 3 with respect to the horizontal plane. The front-rear axis and the left-right axis of the upper slewing body 3 are orthogonal to each other and pass through the shovel center point, which is a point on the slewing axis of the shovel 100.
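The patent does not spell out how the acceleration readings are converted into the two angles. The sketch below shows the standard quasi-static tilt-sensing approximation; the function name and axis assignment are illustrative assumptions, and the result is valid only while the body is not otherwise accelerating.

```python
import math

def roll_pitch_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate roll and pitch (radians) from a static 3-axis accelerometer
    reading (x: front-rear, y: left-right, z: vertical, gravity ~9.81)."""
    roll = math.atan2(ay, az)                    # inclination about the front-rear axis
    pitch = math.atan2(-ax, math.hypot(ay, az))  # inclination about the left-right axis
    return roll, pitch

# Example: body tilted slightly to the right and nose-up.
r, p = roll_pitch_from_accel(ax=-0.8, ay=0.5, az=9.7)
print(f"roll = {math.degrees(r):.1f} deg, pitch = {math.degrees(p):.1f} deg")
```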
The slewing angular velocity sensor S5 detects the slewing angular velocity of the upper slewing body 3. In the present embodiment, the slewing angular velocity sensor S5 is a gyro sensor. It may also be a resolver, a rotary encoder, or the like. The slewing angular velocity sensor S5 may also detect the slewing speed, and the slewing speed may be calculated from the slewing angular velocity.
Hereinafter, at least one of the boom angle sensor S1, the arm angle sensor S2, the bucket angle sensor S3, the body inclination sensor S4, and the slewing angular velocity sensor S5 is also referred to as a posture detection device. The posture of the excavation attachment AT is detected from the outputs of, for example, the boom angle sensor S1, the arm angle sensor S2, and the bucket angle sensor S3.
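To make concrete how the three joint angles determine the attachment posture, the sketch below computes the bucket tip position by planar forward kinematics. The link lengths are invented placeholders, and the angle convention (all angles measured from the horizontal) is a simplification; the patent's θ1, θ2, and θ3 use different zero references.

```python
import math

# Illustrative link lengths (m); actual values are machine-specific.
BOOM_LEN, ARM_LEN, BUCKET_LEN = 5.7, 2.9, 1.5

def bucket_tip_xy(theta1: float, theta2: float, theta3: float) -> tuple[float, float]:
    """Bucket tip position relative to the boom foot pin, in the vertical
    plane of the attachment, given joint angles in radians."""
    x = BOOM_LEN * math.cos(theta1)
    y = BOOM_LEN * math.sin(theta1)
    x += ARM_LEN * math.cos(theta1 + theta2)
    y += ARM_LEN * math.sin(theta1 + theta2)
    x += BUCKET_LEN * math.cos(theta1 + theta2 + theta3)
    y += BUCKET_LEN * math.sin(theta1 + theta2 + theta3)
    return x, y

x, y = bucket_tip_xy(math.radians(30), math.radians(-100), math.radians(-60))
print(f"bucket tip at x={x:.2f} m, y={y:.2f} m")  # a typical digging pose
```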
The display device D1 is an example of a notification device and is configured to display various kinds of information. In the present embodiment, the display device D1 is a liquid crystal display provided in the cab 10. However, the display device D1 may be the display of a communication terminal such as a smartphone.
The sound output device D2 is another example of a notification device and is configured to output sound. The sound output device D2 includes at least one of a device that outputs sound to the operator inside the cab 10 and a device that outputs sound to workers outside the cab 10. The sound output device D2 may also be a speaker attached to a communication terminal.
The controller 30 is a control device for controlling the shovel 100. In the present embodiment, the controller 30 is configured as a computer including a CPU, a volatile storage device VM (see fig. 3), a nonvolatile storage device NM (see fig. 3), and the like. The controller 30 reads the program corresponding to each function from the nonvolatile storage device NM, loads it into the volatile storage device VM, and causes the CPU to execute the corresponding processing. The functions include, for example, a machine guidance function that guides the operator's manual operation of the shovel 100, and a machine control function that supports the operator's manual operation or operates the shovel 100 automatically or autonomously.
The controller 30 may have a contact avoidance function of automatically or autonomously operating or stopping the shovel 100 in order to avoid contact between an object present in a monitoring area around the shovel 100 and the shovel 100. The monitoring of objects around the excavator 100 may be performed not only within the monitoring area but also outside the monitoring area. In this case, the controller 30 may be configured to detect the type of the object and the position of the object.
Next, a basic system mounted on the shovel 100 shown in fig. 1 will be described with reference to fig. 3. In fig. 3, the mechanical power transmission line is indicated by a double line, the working oil line by a thick solid line, the pilot line by a broken line, the electric power line by a thin solid line, and the electric control line by a one-dot chain line.
The basic system mainly includes the engine 11, a main pump 14, the pilot pump 15, the control valve 17, the operation device 26, the operation pressure sensor 29, the controller 30, a switching valve 35, an engine control device 74, an engine speed adjustment dial 75, a battery 80, the display device D1, the sound output device D2, an information acquisition device E1, and the like.
The engine 11 is a diesel engine employing isochronous control, in which the engine speed is kept constant regardless of increases or decreases in load. The fuel injection amount, fuel injection timing, boost pressure, and the like of the engine 11 are controlled by the engine control device 74.
The rotary shaft of the engine 11 is connected to the rotary shafts of a main pump 14 and a pilot pump 15, which are hydraulic pumps, respectively. Main pump 14 is connected to control valve 17 via a working oil line. The pilot pump 15 is connected to the operation device 26 via a pilot line. However, the pilot pump 15 may be omitted. In this case, the function of the pilot pump 15 can be realized by the main pump 14. That is, in addition to the function of supplying the hydraulic oil to the control valve 17, the main pump 14 may also have a function of supplying the hydraulic oil to the operation device 26 and the like after reducing the pressure of the hydraulic oil by an orifice and the like.
The control valve 17 is a hydraulic control device that controls the hydraulic system of the shovel 100. The control valve 17 is connected to hydraulic actuators such as the left traveling hydraulic motor 2ML, the right traveling hydraulic motor 2MR, the boom cylinder 7, the arm cylinder 8, the bucket cylinder 9, and the turning hydraulic motor 2A.
Specifically, the control valve 17 includes a plurality of spool valves corresponding to the respective hydraulic actuators. Each spool is configured to be displaceable in accordance with the pilot pressure so as to be able to increase or decrease the opening area of the PC port and the opening area of the CT port. The PC port is a port that constitutes a part of an oil passage that connects the main pump 14 and the hydraulic actuator. The CT port is a port constituting a part of an oil passage connecting the hydraulic actuator and the operating oil tank.
The switching valve 35 is configured to switch the operation device 26 between an active state and an inactive state. The active state of the operation device 26 is a state in which the operator can operate the hydraulic actuators using the operation device 26. The inactive state is a state in which the operator cannot operate the hydraulic actuators using the operation device 26. In the present embodiment, the switching valve 35 is a gate lock valve, an electromagnetic valve configured to operate in response to commands from the controller 30. Specifically, the switching valve 35 is disposed in the pilot line connecting the pilot pump 15 and the operation device 26, and is configured to open and close the pilot line in response to a command from the controller 30. The operation device 26 is in the active state when, for example, a gate lock lever (not shown) is pulled up and the gate lock valve is thereby opened, and in the inactive state when the gate lock lever is pushed down and the gate lock valve is thereby closed.
The display device D1 includes a control unit 40, an image display unit 41, and an operation unit 42 as an input unit. The control unit 40 is configured to be able to control the image displayed on the image display unit 41. In the present embodiment, the control unit 40 is configured by a computer including a CPU, a volatile memory device, a nonvolatile memory device, and the like. At this time, the control unit 40 reads the programs corresponding to the respective functional elements from the nonvolatile storage device, loads the programs into the volatile storage device, and causes the CPU to execute the corresponding processes. However, each functional element may be constituted by hardware, or may be constituted by a combination of software and hardware. The image displayed on the image display unit 41 may be controlled by the controller 30 or the space recognition device 70.
In the example shown in fig. 3, the operation unit 42 is a panel including a hardware switch. The operation unit 42 may be a touch panel. The display device D1 operates upon receiving power supply from the battery 80. The battery 80 is charged with electric power generated by the alternator 11a, for example. The electric power of the battery 80 is also supplied to the controller 30 and the like. For example, the starter 11b of the engine 11 is driven by electric power from the battery 80 to start the engine 11.
The engine control device 74 transmits data on the state of the engine 11, such as the coolant temperature, to the controller 30. The regulator 14a of the main pump 14 transmits data on the swash-plate tilt angle to the controller 30. The discharge pressure sensor 14b transmits data on the discharge pressure of the main pump 14 to the controller 30. The oil temperature sensor 14c, provided in the oil passage between the hydraulic oil tank and the main pump 14, transmits data on the temperature of the hydraulic oil flowing through that oil passage to the controller 30. The controller 30 can store these data in the volatile storage device VM and transmit them to the display device D1 as necessary.
The engine speed adjustment dial 75 is a dial for adjusting the speed of the engine 11, and it transmits data on the set state of the engine speed to the controller 30. The engine speed adjustment dial 75 is configured to switch the engine speed among four modes: SP mode, H mode, A mode, and IDLE mode. The SP mode is selected when priority is given to the amount of work, and it uses the highest engine speed. The H mode is selected to balance the amount of work and fuel efficiency, and it uses the second-highest engine speed. The A mode is selected when operating the shovel 100 with low noise while giving priority to fuel efficiency, and it uses the third-highest engine speed. The IDLE mode is selected when the engine 11 is to idle, and it uses the lowest engine speed. The engine 11 is controlled so that its speed is held constant at the engine speed corresponding to the mode set by the engine speed adjustment dial 75.
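As a compact illustration of the four-mode behavior just described, the sketch below maps each dial mode to a constant target speed. The rpm values are invented placeholders; the patent specifies only the ordering (SP highest, then H, then A, then IDLE).

```python
from enum import Enum

class SpeedMode(Enum):
    SP = "work-amount priority"
    H = "balance of work amount and fuel efficiency"
    A = "fuel-efficiency priority, low noise"
    IDLE = "idling"

# Hypothetical target speeds (rpm); actual values are machine-specific.
TARGET_RPM = {SpeedMode.SP: 2000, SpeedMode.H: 1800,
              SpeedMode.A: 1600, SpeedMode.IDLE: 1000}

# The only constraint the text gives is the ordering of the four modes.
assert (TARGET_RPM[SpeedMode.SP] > TARGET_RPM[SpeedMode.H]
        > TARGET_RPM[SpeedMode.A] > TARGET_RPM[SpeedMode.IDLE])

def target_speed(mode: SpeedMode) -> int:
    """Constant engine speed the controller holds for the selected mode."""
    return TARGET_RPM[mode]

print(target_speed(SpeedMode.SP))  # -> 2000
```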
The sound output device D2 is configured to alert people involved in the work of the shovel 100. The sound output device D2 may be constituted by, for example, a combination of an indoor alarm device and an outdoor alarm device. The indoor alarm device is a device for calling the attention of the operator of the shovel 100 in the cab 10 and includes, for example, at least one of a speaker, a vibration generator, and a light-emitting device provided in the cab 10. The indoor alarm device may also be the display device D1, which is an example of a notification device. The outdoor alarm device is a device for calling the attention of workers around the shovel 100 and includes, for example, at least one of a speaker and a light-emitting device provided outside the cab 10. A speaker serving as an outdoor alarm device includes, for example, a travel alarm attached to the bottom surface of the upper slewing body 3. The outdoor alarm device may also be a light-emitting device provided on the upper slewing body 3. However, the outdoor alarm device may be omitted. The sound output device D2 may, for example, notify people involved in the work of the shovel 100 when the space recognition device 70, functioning as an object detection device, detects a predetermined object. The outdoor alarm device may also be a portable information terminal carried by a worker outside the cab 10; examples include a smartphone, a tablet terminal, a smartwatch, and a helmet with a built-in speaker.
The notification device may be provided outside the shovel 100. The notification device may be attached to a column or a tower installed in a work site, for example.
In the example of fig. 3, the controller 30 is configured to receive signals output from at least one component of the information acquisition device E1, perform various calculations, and output control commands to at least one of the display device D1 and the sound output device D2.
The information acquisition device E1 is configured to acquire information relating to construction. In the present embodiment, the information acquisition device E1 includes at least one of the boom angle sensor S1, the arm angle sensor S2, the bucket angle sensor S3, the body inclination sensor S4, the slewing angular velocity sensor S5, a boom rod pressure sensor, a boom bottom pressure sensor, an arm rod pressure sensor, an arm bottom pressure sensor, a bucket rod pressure sensor, a bucket bottom pressure sensor, a boom cylinder stroke sensor, an arm cylinder stroke sensor, a bucket cylinder stroke sensor, the discharge pressure sensor 14b, the operation pressure sensor 29, the space recognition device 70, the direction detection device 71, the information input device 72, the positioning device 73, and a communication device. As information relating to the shovel 100, the information acquisition device E1 acquires, for example, at least one of the boom angle, the arm angle, the bucket angle, the body inclination angle, the slewing angular velocity, the boom rod pressure, the boom bottom pressure, the arm rod pressure, the arm bottom pressure, the bucket rod pressure, the bucket bottom pressure, the boom stroke amount, the arm stroke amount, the bucket stroke amount, the discharge pressure of the main pump 14, the operation pressure of the operation device 26, information on the three-dimensional space around the shovel 100, information on the relative relationship between the orientation of the upper slewing body 3 and the orientation of the lower traveling body 1, information input to the controller 30, and information on the current position. The information acquisition device E1 may also acquire information from other construction machines, aircraft, and the like. The aircraft is, for example, a multicopter or an airship that acquires information on the work site. The information acquisition device E1 may further acquire work environment information, which is, for example, information on at least one of soil characteristics, weather, altitude, and the like.
The controller 30 has a risk determination unit 30A as its main functional element. The risk determination unit 30A may be implemented in hardware or in software. Specifically, the risk determination unit 30A is configured to determine whether a dangerous situation has occurred based on the information acquired by the information acquisition device E1 and the information stored in a risk information database DB. The risk information database DB is stored, for example, in the nonvolatile storage device NM in the controller 30. As another example, the risk information database DB may be provided in a management device 200, described later, that is configured to communicate with the shovel 100 via a communication network.
The risk information database DB is a systematically organized collection of information that allows information on dangerous situations that may occur at a work site to be searched. The risk information database DB stores, for example, information on dangerous situations caused by the positional relationship between a hole excavated by the shovel 100 and the temporary placement position of a gutter block to be buried in the hole. Specifically, the risk information database DB defines at least one of the conditions of a dangerous situation, its risk level, and the like, using the depth of the hole excavated by the shovel 100, the volume of the gutter block, the distance from the edge of the hole to the gutter block, and so on.
Specifically, for example, as shown in fig. 4, the risk determination unit 30A derives, as input information, the relative positional relationship between a plurality of objects, such as the hole excavated by the shovel 100 and a gutter block. Fig. 4 is a schematic diagram showing an example of the relationship between the risk determination unit 30A and the risk information database DB. The risk determination unit 30A checks the derived input information against the reference information corresponding to that input information stored in the risk information database DB. In this example, the corresponding reference information is, among the plurality of pieces of reference information, the reference information associated with a hole excavated by the shovel 100 and a gutter block. When it determines that the situation indicated by the input information matches or is similar to the situation indicated by the reference information, the risk determination unit 30A determines that a dangerous situation has occurred.
More specifically, the risk determination unit 30A derives, as input information, the depth of the hole excavated by the shovel 100, the volume of the gutter block, the distance from the edge of the hole to the gutter block, and the like, based on the information acquired by the information acquisition device E1. The risk determination unit 30A then checks the derived input information against reference information representing dangerous situations stored in the risk information database DB. When it determines that the situation indicated by the input information matches or is similar to the situation indicated by the reference information, the risk determination unit 30A determines that a dangerous situation has occurred. Conversely, the risk determination unit 30A may check the input information against reference information representing safe situations, and determine that a dangerous situation has occurred when the situation indicated by the input information neither matches nor is similar to the situation indicated by that reference information. The risk determination unit 30A may also use work environment information, such as information on soil characteristics or weather, in determining whether a dangerous situation has occurred.
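The patent does not define the record layout of the risk information database DB or the exact matching rule. The following sketch is one minimal reading of the check described above, with an invented schema (`ReferenceInfo`), invented threshold values, and a simple all-conditions-met notion of "matching".

```python
from dataclasses import dataclass

@dataclass
class ReferenceInfo:
    """One entry of the risk information database (illustrative schema)."""
    object_pair: tuple[str, str]  # objects the entry applies to
    min_hole_depth_m: float       # danger if the hole is at least this deep ...
    min_edge_distance_m: float    # ... and the block is closer than this to the edge
    risk_level: int

DB = [
    ReferenceInfo(("hole", "gutter_block"), 1.0, 1.5, 4),
    ReferenceInfo(("hole", "gutter_block"), 2.0, 2.5, 5),
]

def judge(object_pair: tuple[str, str],
          hole_depth_m: float, edge_distance_m: float):
    """Return the highest matching risk level, or None if no reference
    situation matches the derived input information."""
    levels = [ref.risk_level for ref in DB
              if ref.object_pair == object_pair
              and hole_depth_m >= ref.min_hole_depth_m
              and edge_distance_m < ref.min_edge_distance_m]
    return max(levels) if levels else None

print(judge(("hole", "gutter_block"), hole_depth_m=2.2, edge_distance_m=1.0))  # -> 5
```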
For example, when it recognizes the positional relationship shown in fig. 5 from an input image acquired by the front camera 70F, which is an example of the information acquisition device E1, the risk determination unit 30A determines that a dangerous situation has occurred.
Fig. 5 shows an example of an input image acquired by the front camera 70F and displayed on the display device D1. The displayed input image includes a message window G0, an image G1 of the arm 5, an image G2 of the bucket 6, an image G3 of a hole excavated by the shovel 100, an image G4 of a gutter block temporarily placed near the hole, and a frame image G4F surrounding the image G4. The message window G0 indicates that the current risk level is level 4 because of the "risk of the block toppling over".
The risk determination unit 30A recognizes the presence of the gutter block and of the hole excavated by the shovel 100 by applying image processing to the input image, and derives the distance between the gutter block and the edge of the hole. When it determines that the distance between the gutter block and the edge of the hole is smaller than a threshold value stored in the risk information database DB, the risk determination unit 30A determines that a dangerous situation has occurred.
When it determines that a dangerous situation has occurred, the risk determination unit 30A operates the notification device to announce that the dangerous situation may materialize. In the present embodiment, the risk determination unit 30A operates the display device D1 and the indoor alarm device to inform the operator of the shovel 100. The risk determination unit 30A may also operate the outdoor alarm device to inform workers around the shovel 100. The determination of whether a dangerous situation has occurred may change depending on at least one of the position of the center of gravity of the gutter block, the size (width, height, length) of the hole, and so on. The risk determination unit 30A may therefore vary the risk level (the degree of the unsafe situation) in stages.
The risk determination unit 30A may also announce the content of the dangerous situation. For example, the risk determination unit 30A may cause the sound output device D2 to output a voice message conveying the situation that may occur, such as "the edge of the hole may collapse", or may cause the display device D1 to display a text message conveying that situation.
Fig. 6 shows another example of an input image acquired by the front camera 70F and displayed on the display device D1. The displayed input image includes a message window G0, an image G1 of the arm 5, an image G2 of the bucket 6, an image G3 of a hole excavated by the shovel 100, an image G4 of a gutter block temporarily placed near the hole, a frame image G4F surrounding the image G4, an image G5 of a worker who has entered the hole, and a frame image G5F surrounding the image G5. The message window G0 indicates that the current risk level is level 5 because of the "risk of a serious accident".
The risk determination unit 30A recognizes the presence of the gutter block, of the hole excavated by the shovel 100, and of the worker in the hole by applying image processing to the input image. The risk determination unit 30A then derives the distance between the gutter block and the edge of the hole and the distance between the gutter block and the worker. The risk determination unit 30A determines that a dangerous situation has occurred when the distance between the gutter block and the edge of the hole is smaller than a first threshold value stored in the risk information database DB and the distance between the gutter block and the worker is smaller than a second threshold value stored in the risk information database DB. Even with the same positional relationship, the determination may change if the size of the gutter block or the size of the hole differs. The risk determination unit 30A may therefore change at least one of the first threshold value and the second threshold value according to the size of the gutter block, the size of the hole, and the work environment information.
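A minimal sketch of the two-threshold test just described follows. Scaling the thresholds with hole depth and block mass is only one plausible reading of "according to the size of the gutter block, the size of the hole, and the work environment information"; all numbers are illustrative, not from the patent.

```python
def dangerous(edge_dist_m: float, worker_dist_m: float,
              block_mass_kg: float, hole_depth_m: float) -> bool:
    """Danger when the block is both near the hole edge (first threshold)
    and near a worker (second threshold)."""
    t1 = 1.0 + 0.5 * hole_depth_m        # first threshold grows with hole depth
    t2 = 2.0 + block_mass_kg / 1000.0    # second threshold grows with block mass
    return edge_dist_m < t1 and worker_dist_m < t2

print(dangerous(edge_dist_m=1.2, worker_dist_m=2.1,
                block_mass_kg=800, hole_depth_m=1.5))  # -> True
```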
When it determines that a dangerous situation has occurred, the risk determination unit 30A operates the notification device differently from the way it does in the situation shown in fig. 5. This is because no worker is involved in the dangerous situation shown in fig. 5, whereas a worker is involved in the dangerous situation shown in fig. 6. Specifically, the risk determination unit 30A operates the notification device so that the attention of both the operator of the shovel 100 and the worker can be called more reliably.
The risk determination unit 30A may also be configured to estimate the construction situation after a predetermined time has elapsed based on the information acquired by the information acquisition device E1, and to determine whether a dangerous situation will occur after the predetermined time based on the information on the estimated construction situation and the information stored in the risk information database DB.
Specifically, as shown in fig. 7, the risk determination unit 30A estimates the shape of the hole TR after a predetermined time has elapsed from the current shape of the hole TR excavated by the shovel 100. Fig. 7 is a plan view of the work site where the shovel 100 is located. The imaginary broken line in fig. 7 indicates the shape of the hole TR after the predetermined time has elapsed, as estimated by the risk determination unit 30A, that is, the shape of the not-yet-excavated hole TRx.
The risk determination unit 30A then derives, as input information, the relative positional relationship between the not-yet-excavated hole TRx and the gutter block BL. The risk determination unit 30A recognizes the position of the gutter block BL from an input image acquired by the left camera 70L. The risk determination unit 30A then checks the derived input information against the reference information corresponding to that input information stored in the risk information database DB. When it determines that the situation indicated by the input information matches or is similar to the situation indicated by the reference information, the risk determination unit 30A determines that a dangerous situation is likely to occur after the predetermined time has elapsed.
More specifically, the risk determination unit 30A derives the current shape of the hole TR excavated by the shovel 100 based on the information acquired by the information acquisition device E1. The risk determination unit 30A then estimates the shape of the hole TRx after the predetermined time has elapsed from the current shape of the hole TR, and derives as input information the distance X1 from the edge of the hole TRx to the gutter block BL after the predetermined time has elapsed. The risk determination unit 30A then checks the derived input information against reference information representing dangerous situations stored in the risk information database DB. When it determines that the situation indicated by the input information matches or is similar to the situation indicated by the reference information, the risk determination unit 30A determines that a dangerous situation is likely to occur after the predetermined time has elapsed.
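The patent leaves the prediction model open. As a rough illustration, the sketch below linearly extrapolates the hole edge toward the gutter block to obtain the future distance X1 and compares it with an assumed database threshold; the function name, rates, and threshold are invented.

```python
def predicted_edge_distance(current_edge_to_block_m: float,
                            dig_rate_m_per_h: float,
                            horizon_h: float) -> float:
    """Distance X1 from the hole edge to the gutter block after the
    prediction horizon, assuming the edge advances linearly toward it."""
    return current_edge_to_block_m - dig_rate_m_per_h * horizon_h

x1 = predicted_edge_distance(current_edge_to_block_m=3.0,
                             dig_rate_m_per_h=1.2, horizon_h=2.0)
THRESHOLD_M = 1.5  # assumed reference value from the risk information database
print(f"X1 = {x1:.1f} m -> {'danger expected' if x1 < THRESHOLD_M else 'ok'}")
```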
Alternatively, the risk determination unit 30A may be configured to determine whether a dangerous situation will occur in the future even before the hole is excavated by the shovel 100.
Specifically, the risk determination unit 30A may determine whether a dangerous situation is likely to occur in the future at the point in time when the gutter block BL is temporarily placed as shown in fig. 8. Alternatively, the risk determination unit 30A may determine that a dangerous situation is likely to occur in the future at the point in time when excavation of a hole is started near the temporarily placed gutter block BL shown in fig. 8.
Fig. 8 shows still another example of an input image acquired by the front camera 70F and displayed on the display device D1. The displayed input image includes a message window G0, an image G1 of the arm 5, an image G2 of the bucket 6, an image G4 of the temporarily placed gutter block BL, a frame image G4F surrounding the image G4, and an image G6 representing the shape of the not-yet-excavated hole to be dug by the shovel 100. The message window G0 indicates that the current risk level is level 4 because of the "risk of the block toppling over".
In the example of fig. 8, the image G6 is generated based on information related to the construction plan, such as design data stored in advance in the nonvolatile storage device NM of the controller 30. However, the image G6 may be generated from data relating to the posture of the excavation attachment at the current time, data relating to the orientation of the upper slewing body 3, and the like.
The risk determination unit 30A recognizes the presence of the gutter block BL by applying image processing to the input image and derives the distance between the gutter block BL and the edge of the hole to be excavated in the future. When it determines that this distance is smaller than a threshold value stored in the risk information database DB, the risk determination unit 30A determines that a dangerous situation is likely to occur in the future.
Alternatively, the risk determination unit 30A may recognize, by applying image processing to the input image, that the gutter block BL is located outside the area set as its temporary placement position. In this case, the risk determination unit 30A may identify the area set as the temporary placement position of the gutter block BL based on the design data. The risk determination unit 30A may then determine that a dangerous situation is likely to occur in the future based on the fact that the gutter block BL has been temporarily placed outside the designated area. In this way, the risk determination unit 30A may determine whether a dangerous situation is likely to occur in the future based on information on the arrangement of materials such as the gutter block BL.
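In the simplest planar reading, checking whether the recognized gutter block lies outside its designated temporary placement area reduces to a point-in-polygon test against the area taken from the design data. The sketch below uses the standard ray-casting algorithm; the coordinates and the area polygon are hypothetical.

```python
def point_in_polygon(p: tuple[float, float],
                     polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: is point p inside the polygon (site coordinates)?"""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray through p
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Designated temporary placement area from the design data (hypothetical).
AREA = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
block_pos = (5.2, 1.0)  # block position recognized from the input image
if not point_in_polygon(block_pos, AREA):
    print("gutter block outside its placement area -> possible future danger")
```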
Alternatively, the risk determination unit 30A may recognize the presence of a hole excavated by the shovel 100 by applying image processing to the input image, and derive the distance between the planned temporary placement site for a material such as a gutter block and the edge of the hole. The risk determination unit 30A may then determine that a dangerous situation is likely to occur in the future when it determines that the distance between the planned temporary placement site, where the material has not yet been placed, and the edge of the hole is smaller than a threshold value stored in the risk information database DB. This is because, if the material is temporarily placed at that site in accordance with the construction plan after the hole has been excavated, the edge of the hole may collapse.
The risk determination unit 30A may also determine that a dangerous situation has occurred when it recognizes the positional relationship shown in fig. 9 from an input image acquired by the front camera 70F, which is an example of the information acquisition device E1.
Fig. 9 shows still another example of an input image acquired by the front camera 70F and displayed on the display device D1. The displayed input image includes a message window G0, an image G1 of the arm 5, an image G2 of the bucket 6, an image G7 of a dump truck, an image G8 of an iron plate loaded on the bed of the dump truck, a frame image G8F surrounding the image G8, and an image G9 of a crane cable (wire rope) for hoisting the iron plate as a suspended load. The message window G0 indicates that the current risk level is level 4 because of the "risk of load imbalance".
By applying image processing to the input image, the risk determination unit 30A recognizes the presence of the dump truck loaded with iron plates and of the iron plate being lifted by the shovel 100 operating in crane mode, and derives the shape of the lifted iron plate, the number and positions of the hanging points, and the horizontal distance between the center of gravity of the iron plate and the center of the hanging points. The risk determination unit 30A determines that a dangerous situation has occurred, for example, when it determines that the relationship between the shape of the lifted iron plate and the number and positions of the hanging points matches or is similar to a relationship stored in the risk information database DB. Alternatively, the risk determination unit 30A may determine that a dangerous situation has occurred when it determines that the horizontal distance between the center of gravity of the iron plate and the center of the hanging points is equal to or greater than a threshold value stored in the risk information database DB.
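The horizontal-offset criterion just described can be illustrated as follows: approximate the center of gravity of a uniform iron plate by the mean of its corner coordinates, take the center of the hanging points as their mean, and compare the horizontal distance between the two with a threshold. The geometry and the threshold value below are invented for illustration.

```python
def hoist_imbalance(plate_corners: list[tuple[float, float]],
                    hang_points: list[tuple[float, float]]) -> float:
    """Horizontal distance between the plate's centroid (mean of corner
    coordinates, assuming a uniform plate) and the center of the hanging
    points, in plan-view coordinates."""
    cx = sum(x for x, _ in plate_corners) / len(plate_corners)
    cy = sum(y for _, y in plate_corners) / len(plate_corners)
    hx = sum(x for x, _ in hang_points) / len(hang_points)
    hy = sum(y for _, y in hang_points) / len(hang_points)
    return ((cx - hx) ** 2 + (cy - hy) ** 2) ** 0.5

plate = [(0, 0), (3, 0), (3, 1.5), (0, 1.5)]  # 3 m x 1.5 m iron plate
points = [(0.3, 0.2), (0.3, 1.3)]             # both hooks near one end
THRESHOLD_M = 0.5                             # assumed database value
d = hoist_imbalance(plate, points)
print(f"offset = {d:.2f} m -> {'danger' if d >= THRESHOLD_M else 'ok'}")
```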
When it determines that a dangerous situation has occurred, the risk determination unit 30A operates the notification device to announce that the dangerous situation may materialize. In the present embodiment, the risk determination unit 30A operates the display device D1 and the indoor alarm device to inform the operator of the shovel 100, and may also operate the outdoor alarm device to inform workers around the shovel 100.
The risk determination unit 30A may also notify what kind of dangerous situation is likely to occur. For example, it may output a voice message or a text message conveying the situation that may arise, such as "the suspended load may swing".
In the above embodiment, the risk determination unit 30A is implemented as a functional element of the controller 30 mounted on the shovel 100, but it may be provided outside the shovel 100. When a tilt of the iron plate is predicted to occur due to improper lifting point positions, the risk determination unit 30A may also raise the risk level if a worker enters the place where the tilt is predicted to occur.
Specifically, as shown in fig. 10, the risk determination unit 30A may be implemented as a functional element of a management device 200 installed in a management center or the like outside the shovel 100. Fig. 10 is a diagram showing a configuration example of the shovel support system. The shovel support system mainly includes one or more shovels 100, one or more management devices 200, one or more support devices 300, and one or more fixed point cameras 70X. The shovel support system of fig. 10 includes one shovel 100, one management device 200, one support device 300, and three fixed point cameras 70X. The support device 300 is a mobile terminal, such as a smartphone or tablet PC, carried by the worker WK.
The shovel 100, the management device 200, the support device 300, and the fixed point cameras 70X are communicably connected to one another via a mobile phone communication network, a satellite communication network, a wireless LAN communication network, or the like.
The three fixed point cameras 70X are mounted on structures PL, such as columns and towers, provided at the work site, and are spaced apart from one another so that their combined imaging range covers the entire work site.
In the example of fig. 10, the risk determination unit 30A is configured to be able to determine whether or not a dangerous situation has occurred based on information acquired by the information acquisition devices E1 attached to the shovel 100, the structures PL, and the like, and on information stored in the risk information database DB. The information acquisition devices E1 include the fixed point cameras 70X. The risk information database DB is stored in a nonvolatile storage device NM of the management device 200.
Specifically, the risk determination unit 30A determines that a dangerous situation is likely to occur when, for example, it recognizes one of the positional relationships shown in figs. 5 to 8 from the input image acquired by the fixed point camera 70X as an example of the information acquisition device E1.
The risk determination unit 30A and the risk information database DB may instead be mounted on the support device 300, or may be distributed across two of the shovel 100, the management device 200, and the support device 300.
The risk determination unit 30A may also be configured to determine, at the construction planning stage, whether or not a dangerous situation is likely to occur. In this case, the risk determination unit 30A is typically mounted on the management device 200 or the support device 300 and forms part of a construction system that supports the creation of construction plans.
Fig. 11 is a diagram showing a configuration example of the construction system. The construction system is, for example, a computer system installed in a management center or the like, and is mainly composed of a display device MD1, a sound output device MD2, an information input device MD3, and a controller MD4.
The display device MD1 is an example of a notification device and is configured to be capable of displaying various kinds of information. In the example of fig. 11, the display device MD1 is a liquid crystal display installed in the management center.
The sound output device MD2 is another example of a notification device and is configured to be capable of outputting sound. In the example of fig. 11, the sound output device MD2 is a speaker that outputs sound to the administrator using the construction system.
The information input device MD3 is configured so that the administrator creating a construction plan can input information to the controller MD4. In the example of fig. 11, the information input device MD3 is a touch panel disposed on the image display portion of the display device MD1. However, the information input device MD3 may instead be a digitizer, a stylus, a mouse, a trackball, or the like.
The controller MD4 is a control device that controls the construction system. In the example of fig. 11, the controller MD4 is implemented by a computer including a CPU, a volatile storage device VM, a nonvolatile storage device NM, and the like. The controller MD4 reads the program corresponding to each function from the nonvolatile storage device NM, loads it into the volatile storage device VM, and causes the CPU to execute the corresponding processing. The risk determination unit 30A is implemented as a functional element of the controller MD4.
The image display portion of the display device MD1 in fig. 11 shows a screen displayed while the administrator creates a construction plan for burying side gutter blocks. Specifically, the displayed images include an image G10 showing the range of the hole to be dug for burying the side gutter blocks, an image G11 representing a normal side gutter block, an image G12 representing a corner side gutter block, an image G13 representing a cursor, an image G14 representing a selected side gutter block (being dragged), and an image G15 representing a pop-up window containing a text message.
The administrator can determine the range of the hole for burying the side gutter blocks by, for example, placing the image G10 at a desired position with a desired size and shape. The range shown by the image G10 indicates the range to be excavated by the shovel 100. For example, the administrator can trace a desired range on the image display portion with a digitizer or the like to determine the shape and size of the image G10.
Further, the administrator can specify the temporary placement position of a normal side gutter block by dragging and dropping the image G11, or a copy of it, from the material display region R1 to a desired position in the work site display region R2. The same applies to the corner side gutter block. The material display region R1 is a region that displays, in a manner selectable by the administrator, images representing the respective types of materials whose temporary placement positions can be specified in the construction system. The work site display region R2 is a region that displays a plan view of the work site.
Fig. 11 shows a state in which, after the range of the hole for burying the side gutter blocks has been set as shown by the image G10, a copy of the image G11 has been dragged to the vicinity of the image G10 and placed there by a drop operation. The administrator may create the construction plan (material temporary placement plan) so that the side gutter blocks are temporarily placed at the desired positions either before or after the shovel 100 actually excavates the hole.
The risk determination unit 30A derives, as input information, the distance from the edge of the hole planned to be excavated to the temporarily placed side gutter block, and the like, based on the information acquired by the information input device MD3 serving as the information acquisition device E1.
The information acquired by the information input device MD3 includes, for example, information on the position of the hole planned to be excavated, represented by the image G10, and information on the position of the temporarily placed side gutter block, represented by the image G14. The information on the position of the hole planned to be excavated is an example of information on a state planned for a point after a predetermined time.
The risk determination unit 30A then checks the derived input information against the reference information indicating dangerous situations stored in the risk information database DB. When it determines that the situation indicated by the input information matches or is similar to a situation indicated by the reference information, the risk determination unit 30A determines that a dangerous situation is likely to occur in the future.
When determining that a dangerous situation is likely to occur in the future, the risk determination unit 30A operates the notification device to notify the administrator accordingly. In the example of fig. 11, the risk determination unit 30A displays the image G15, containing the warning message "too close to the gutter", on the image display portion of the display device MD1 to draw the administrator's attention. This is because, if work proceeded according to such a construction plan, the edge of the hole could collapse under the weight of the side gutter block. The risk determination unit 30A may also output a voice message from the sound output device MD2 to draw the administrator's attention.
With this configuration, the construction system can prevent the administrator from creating a construction plan that could cause a dangerous situation in the future.
The risk determination unit 30A may also be configured to identify an input scene, characterized by the presence or absence of one or more specific objects and the like, and then determine whether the identified input scene represents a dangerous situation, without quantitatively deriving the relative positional relationship between specific objects such as a hole excavated by the shovel 100 and side gutter blocks.
The input scene is, for example, a scene in which only a hole excavated by the shovel 100 exists, a scene in which a hole excavated by the shovel 100 and a side gutter block exist, or a scene in which a hole excavated by the shovel 100, a side gutter block, and a worker exist.
Fig. 12 is a schematic diagram showing another example of the relationship between the risk determination unit 30A and the risk information database DB. In the example of fig. 12, the risk determination unit 30A checks the identified input scene against the reference scenes indicating dangerous situations stored in the risk information database DB. When it determines that the input scene matches or is similar to a reference scene, the risk determination unit 30A determines that a dangerous situation has occurred.
A reference scene representing a dangerous situation is, for example, information generated from accumulated past accident cases and the like, and includes, for example, information based on an image of a work site immediately before an accident.
Specifically, the risk determination unit 30A identifies the input scene by recognizing one or more objects using a neural network, without deriving numerical values such as the depth of the hole excavated by the shovel 100, the volume of the side gutter blocks, or the distance from the edge of the hole to the side gutter blocks. Further, the risk determination unit 30A determines, using a neural network, whether or not the identified input scene is a reference scene indicating a dangerous situation. The risk determination unit 30A may determine, using an image classification technique based on a neural network, whether the input scene matches or is similar to any of a plurality of reference scenes having different degrees of risk.
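One way to picture this two-step judgment (object recognition, then scene matching) is the following sketch; the object labels, scene table, and risk levels are invented for illustration, and the neural-network recognition step is abstracted away.

```python
# Hypothetical table mapping a recognised object set to a reference scene
# and the risk level stored for it in the risk information database DB.
REFERENCE_SCENES = {
    frozenset({"hole"}): ("hole only", 1),
    frozenset({"hole", "gutter_block"}): ("block near hole", 3),
    frozenset({"hole", "gutter_block", "worker"}): ("worker near block and hole", 5),
}

def classify_scene(detected_objects):
    """Return (scene label, risk level) for the recognised object set,
    or a default when no reference scene matches or is similar."""
    return REFERENCE_SCENES.get(frozenset(detected_objects), ("unknown", 0))

print(classify_scene({"hole", "gutter_block", "worker"}))  # level-5 scene
```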
Here, a shovel support system using a neural network will be described with reference to figs. 13 to 15. Fig. 13 is a diagram showing a configuration example of the shovel support system. In the example of fig. 13, the shovel 100 includes a controller 30, a recording device 32, and a determination device 34.
When the determination device 34 detects an object to be monitored (e.g., a person, a truck, another construction machine, a utility pole, a crane, a tower, or a building) in a predetermined monitoring area around the shovel 100 (e.g., a working area within 5 meters of the shovel 100), the controller 30 determines the type of the object and performs control, according to the type, for avoiding contact between the object and the shovel 100 (hereinafter referred to as "contact avoidance control"). The controller 30 includes a notification unit 302 and an operation control unit 304 as functional units related to the contact avoidance control, realized, for example, by executing one or more programs stored in a ROM, an auxiliary storage device, or the like on a CPU.
Even when an object is present in the monitoring area of the shovel 100, the contact avoidance control may not be executed, depending on the type of the object. For example, in the crane mode, even if the wire rope is present near the back surface of the bucket 6, the avoidance control is not executed for the wire rope because the wire rope is part of the work tool. In this way, whether or not to execute the avoidance control is determined according to the type of the object as well as its position and location.
Further, even when the controller 30 detects a temporarily placed sand pile, if the sand pile is one to be loaded, the avoidance control is not executed for it during loading work, and the excavation operation is permitted. During traveling work, however, riding over the sand pile would make the shovel unstable, so the avoidance control is executed for the sand pile. In this way, whether or not to execute the avoidance control (avoidance operation), and the content of the permitted operation, can be determined based on the position and location of the object, the work content, and the like.
The recording device 32 records images (input images) acquired by the cameras serving as the space recognition device 70 at predetermined timings. The recording device 32 may be implemented by any hardware, or by any combination of hardware and software. For example, the recording device 32 may be configured around a computer similar to the controller 30. The recording device 32 includes a recording control unit 322 as a functional unit realized, for example, by executing one or more programs stored in the ROM or the auxiliary storage device on the CPU. The recording device 32 also includes a storage unit 324 as a storage area defined in its internal memory.
The determination device 34 performs determinations regarding objects around the shovel 100 (for example, detection of an object, classification of an object, and the like) based on the input images. The determination device 34 may be implemented by any hardware, or by any combination of hardware and software. For example, the determination device 34 may be configured around a computer that, in addition to a CPU, RAM, ROM, auxiliary storage device, and various input/output interfaces like the controller 30, includes an image processing arithmetic device that performs high-speed computation by parallel processing in cooperation with the CPU. The control device 210 of the management device 200 described later has the same configuration. The image processing arithmetic device may include a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit). The determination device 34 includes, for example, a display control unit 342 and a determination unit 344 as functional units realized by executing one or more programs stored in a ROM, an auxiliary storage device, or the like on a CPU. The determination device 34 also includes a storage unit 346 as a storage area defined in nonvolatile internal memory. Part or all of the controller 30, the recording device 32, and the determination device 34 may be integrated into a single unit.
The display device D1 displays an image representing the surroundings of the shovel 100 based on the input image, under the control of the determination device 34 (display control unit 342). Specifically, the display device D1 displays the input image itself, or a converted image generated by the determination device 34 (display control unit 342) by applying a predetermined conversion process (for example, a viewpoint conversion process) to the input image. The converted image may be, for example, a viewpoint-converted image combining an overhead image viewed from directly above the shovel 100 and a horizontal image viewed from the shovel 100 in the horizontal direction. It may also be a composite image obtained by converting the images acquired by the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R into such viewpoint-converted images and then combining them.
The communication device 90 is an arbitrary device that connects to a communication network and communicates with external devices such as the management device 200. The communication device 90 may be, for example, a mobile communication module conforming to a mobile communication standard such as LTE (Long Term Evolution), 4G (4th Generation), or 5G (5th Generation).
When the determination device 34 (determination unit 344) detects an object to be monitored in the monitoring area around the shovel 100, the notification unit 302 notifies the operator or the like of the detection. Thus, when an object to be monitored intrudes into a relatively close range around the shovel 100, the operator or the like can recognize the intrusion even if the object is in a blind spot when viewed from the cab 10, and can ensure safety, for example by suspending operation of the operating device 26.
For example, the notification unit 302 outputs a control signal to the sound output device D2 to notify the operator or the like that an object to be monitored has been detected in the monitoring area around the shovel 100. The display device D1 may also be used to notify that the determination device 34 has detected an object to be monitored in the monitoring area around the shovel 100.
When the determination device 34 (determination unit 344) detects an object to be monitored in the monitoring area around the shovel 100, the operation control unit 304 restricts the operation of the shovel 100. This reduces the possibility that the shovel 100 comes into contact with an object to be monitored that has intruded into the monitoring area. Restricting the operation of the shovel 100 may include slowing the motions of the various operating elements of the shovel 100 relative to the operation content (operation amount) input by the operator or the like through the operating device 26. It may also include stopping the operating elements of the shovel 100 regardless of the operation content of the operating device 26. The operating elements subject to this restriction may be all operating elements that can be operated through the operating device 26, or only those operating elements needed to avoid contact between the shovel 100 and the object to be monitored.
For example, the operation control unit 304 may output a control signal to a pressure reducing valve provided in a pilot line on the secondary side of the operating device 26 to reduce the pilot pressure corresponding to the content of the operation performed on the operating device 26 by the operator or the like. When the operating device 26 is of an electric type, the operation control unit 304 may transmit to the solenoid valve a signal restricted to an operation amount smaller than the operation content (operation amount) corresponding to the signal input from the operating device 26, thereby reducing the pilot pressure applied from the solenoid valve to the control valve. As a result, the pilot pressure acting on the control valve that controls the hydraulic oil supplied to the hydraulic actuators is reduced below the level corresponding to the operation content of the operating device 26, and the motions of the various operating elements can be restricted.
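As a simplified illustration of this restriction (the actual valve control is hydraulic and machine-specific), the sketch below scales a lever command by a limit factor and maps it to a hypothetical solenoid current; the 700 mA figure is purely illustrative.

```python
def limited_command(operator_input, limit_factor):
    """Scale the operator's lever input (-1.0..1.0) when an object to be
    monitored is detected; limit_factor = 0.0 stops the actuator."""
    clamped = max(-1.0, min(1.0, operator_input))
    return clamped * max(0.0, min(1.0, limit_factor))

def solenoid_current_ma(command, max_current_ma=700.0):
    """Map the limited command to a solenoid valve current so the pilot
    pressure applied to the control valve is reduced accordingly."""
    return abs(command) * max_current_ma
```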
The recording control unit 322 (an example of a recording unit) records the images acquired by the cameras (the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R) in the storage unit 324 at predetermined timings (hereinafter referred to as "recording timings"). Thus, even when the capacity of the storage unit 324 is limited, the necessary timings can be defined in advance and only the relevant input images recorded. As described later, this also reduces the amount of data transmitted from the storage unit 324 to the management device 200, and hence the communication cost. Specifically, when a recording timing arrives, the recording control unit 322 extracts the input images corresponding to that timing, including the portion captured beforehand, from a ring buffer defined in the RAM or the like, and records them in the storage unit 324.
The recording timing may be, for example, a predefined periodic timing. It may also be when the shovel 100 is in a state in which the determination device 34 (determination unit 344) is likely to make an erroneous determination regarding objects around the shovel 100 based on the input image; specifically, when the shovel 100 is traveling or turning. The recording timing may also be when the determination unit 344 determines that an object has been detected in the monitoring area around the shovel 100. Recording may start when the controller is turned on, when the gate lock lever is released, or when an operation lever is operated.
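The ring-buffer behaviour described here can be sketched as follows; the buffer size and the trigger interface are assumptions, not values from the patent.

```python
from collections import deque

class RecordingController:
    """Keeps the most recent frames in a ring buffer; on a recording
    timing (periodic timer, travel/turn state, or object detection) it
    copies the buffered frames, past portion included, to storage."""

    def __init__(self, size=120):
        self.ring = deque(maxlen=size)  # stand-in for the RAM ring buffer
        self.storage = []               # stand-in for storage unit 324

    def on_frame(self, frame):
        self.ring.append(frame)

    def on_recording_timing(self):
        self.storage.extend(self.ring)  # record frames around the trigger
        self.ring.clear()
```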
In fig. 13, the determination result of the determination unit 344 is input to the recording device 32 (recording control unit 322); however, when the recording timings do not depend on the determination result of the determination unit 344, the determination result need not be input to the recording device 32.
Input images IM1 are recorded in the storage unit 324 under the control of the recording control unit 322 during the period from completion of the initial processing after the shovel 100 is started until the shovel 100 is stopped. The one or more input images IM1 recorded in the storage unit 324 are transmitted to the management device 200 via the communication device 90 (an example of an environment information transmitting unit) at predetermined timings (hereinafter referred to as "image transmission timings").
The image transmission timing may be, for example, when the shovel 100 stops operating. It may also be when the free capacity of the storage unit 324 falls below a predetermined threshold, since the total volume of input images IM1 recorded in the storage unit 324 can grow considerably between start and stop of the shovel 100. The image transmission timing may also be, for example, just after the initial processing following start-up of the shovel 100 is completed; in this case, the storage unit 324 is a storage area defined in nonvolatile internal memory, and the input images IM1 recorded between the previous start and stop of the shovel 100 may be transmitted to the management device 200.
Alternatively, each input image IM1 may be transmitted to the management device 200 via the communication device 90 as soon as it is recorded in the storage unit 324.
The display control unit 342 causes the display device D1 to display an image indicating the state of the surroundings of the shovel 100 (hereinafter, referred to as "shovel surroundings image").
For example, the display control unit 342 causes the display device D1 to display an input image as the shovel surroundings image. Specifically, the display control unit 342 may display on the display device D1 the input images of some of the cameras selected from the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R. In this case, the display control unit 342 may switch the camera whose input image is displayed on the display device D1 in response to a predetermined operation by the operator or the like. The display control unit 342 may also display the input images of all of the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R on the display device D1.
The display control unit 342 may also generate, as the shovel surroundings image, a converted image obtained by applying a predetermined conversion process to the input images, and display the generated converted image on the display device D1. The converted image may be, for example, a viewpoint-converted image combining an overhead image viewed from directly above the shovel 100 and a horizontal image viewed from the shovel 100 in the horizontal direction. It may also be a composite image (hereinafter referred to as a "viewpoint-converted composite image") obtained by converting the input images of the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R into viewpoint-converted images combining the overhead image and the horizontal image and then compositing them by a predetermined method.
When the determination unit 344 detects an object to be monitored in the predetermined monitoring area around the shovel 100, the display control unit 342 superimposes on the shovel surroundings image a highlight of the region corresponding to the detected object (hereinafter referred to as the "detected object region"). This allows the operator or the like to easily confirm the detected object on the shovel surroundings image.
The determination unit 344 performs determinations regarding objects around the shovel 100 from the input images, using the machine-learned model LM stored in the storage unit 346. Specifically, the determination unit 344 loads the learned model LM from the storage unit 346 into a main storage device such as the RAM (path 344A) and causes the CPU to execute it, thereby performing determinations regarding objects around the shovel 100 based on the input images. For example, as described above, the determination unit 344 determines the presence or absence of an object to be monitored and detects such objects within the monitoring area around the shovel 100. The determination unit 344 also determines (specifies) the type of a detected object, that is, classifies it within a predefined classification list of objects to be monitored (hereinafter referred to as the "monitoring target list"). The monitoring target list may include people, trucks, other construction machines, utility poles, suspended loads, towers, buildings, and the like.
For example, as shown in fig. 14, the learned model LM is configured around a neural network 401.
In this example, the neural network 401 is a so-called deep neural network having one or more intermediate layers (hidden layers) between the input layer and the output layer. In the neural network 401, a weighting parameter representing the connection strength is defined for each connection of the neurons constituting each intermediate layer. Each neuron of each layer receives input values from the neurons of the preceding layer, multiplies each input value by the corresponding weighting parameter, passes the sum of the products through a threshold function, and outputs the result to the neurons of the following layer.
As described later, the neural network 401 undergoes machine learning, specifically deep learning, in the management device 200 (learning unit 2103), whereby its weighting parameters are optimized. Thus, for example, as shown in fig. 14, the neural network 401 can take an input image as the input signal x and output, as the output signal y, the probability (predicted probability) that an object of each type in the predefined monitoring target list is present, the scene (situation) implied by the positional relationship between the objects, and the risk level of that scene (in this example, "scene 1 (e.g., excavating near the blocks) and its risk level", "scene 2 (e.g., a person entering the hole while excavating near the blocks) and its risk level", and so on).

The neural network 401 is, for example, a convolutional neural network (CNN). A CNN is a neural network that incorporates conventional image processing techniques (convolution processing and pooling processing). Specifically, the CNN extracts feature data (a feature map) smaller in size than the input image by repeating combinations of convolution and pooling on the input image. The pixel values of the extracted feature map are input to a neural network composed of fully connected layers, whose output layer can output, for example, the predicted probability of the presence of an object (including a topographic shape or the like) for each object type. The neural network 401 can also output, for each object type, the predicted probability of a scene inferred from the positional relationship between objects and changes in that relationship, and can then output the scene with the highest predicted probability together with its risk level.
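A minimal PyTorch sketch of such a CNN (convolution and pooling followed by fully connected layers emitting per-class probabilities); the layer sizes, input resolution, and four-class output are arbitrary illustrative choices, not parameters from the patent.

```python
import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    """Convolution + pooling extracts a feature map; fully connected
    layers then output per-class probabilities, as described above."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                           # convolution + pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),    # fully connected layers
            nn.Linear(64, num_classes),
        )

    def forward(self, x):                  # x: (batch, 3, 64, 64) image
        return torch.softmax(self.classifier(self.features(x)), dim=1)

probs = SceneCNN()(torch.randn(1, 3, 64, 64))  # per-class predicted probability
```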
The neural network 401 may also be configured to take an input image as the input signal x and output, as the output signal y, the position and size of an object in the input image (that is, the area of the input image occupied by the object) and the type of the object. That is, the neural network 401 may be configured to detect an object on the input image (determine the occupied area of the object) and classify the object. In this case, the output signal y may take the form of image data in which information on the occupied area of the object and its classification is superimposed on the input image serving as the input signal x. The determination unit 344 can then determine the relative position (distance and direction) of the object with respect to the shovel 100 from the position and size of the occupied area of the object output by the learned model LM (neural network 401). This is possible because the cameras (the front camera 70F, the rear camera 70B, the left camera 70L, and the right camera 70R) are fixed to the upper revolving structure 3 and their imaging ranges (angles of view) are predetermined (fixed). The determination unit 344 can also identify the scene in which the object is present; the scene may be determined from changes in the position and size of objects. The determination unit 344 can determine that an object to be monitored has been detected in the monitoring area when the position of an object detected by the learned model LM is within the monitoring area and the object is classified into the monitoring target list.

For example, the neural network 401 may comprise neural networks corresponding to a process of extracting occupied regions (windows) of objects in the input image and a process of determining the type of object in each extracted region; that is, it may perform object detection and object classification in stages. Alternatively, the neural network 401 may comprise neural networks corresponding to a process of determining, for each grid cell obtained by dividing the whole input image into a predetermined number of partial regions, the classification of objects and their occupied areas (bounding boxes), and a process of combining the occupied areas of objects of each type according to the per-cell classifications to determine the final occupied area of each object; that is, it may perform object detection and object classification in parallel.
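Because the angle of view is fixed, a detected bounding box can be turned into a rough relative position as sketched below; the field of view and the size-to-distance constant are hypothetical placeholders for camera calibration data.

```python
def relative_position(bbox, image_w, hfov_deg=120.0, ref_height_px_at_1m=900.0):
    """Rough direction/distance estimate from a detection box (x, y, w, h).
    Direction follows from the box centre's horizontal offset; distance
    from apparent size, assuming a known real-world object height."""
    x, y, w, h = bbox
    cx = x + w / 2.0
    direction_deg = (cx / image_w - 0.5) * hfov_deg  # 0 = camera axis
    distance_m = ref_height_px_at_1m / max(h, 1.0)   # pinhole approximation
    return direction_deg, distance_m
```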
The result of the determination by the determination unit 344 is displayed on the display device D1, for example, by the display control unit 342.
For example, as shown in fig. 5, a main screen 41V is displayed on the display device D1, and an input image is displayed in the camera image display area within the main screen 41V. In this example, the camera image display area shows an input image of the rear camera 70B capturing the side gutter block provided in front of the shovel 100 and the side gutter that has already been excavated.
As described above, the determination unit 344 inputs the image data of the input image of the rear camera 70B to the learned model LM (neural network 401), acquires the occupied areas of the objects in the input image output by the learned model LM, and determines the type and positional relationship of the objects in those areas. From the determined types and positional relationship, the type of scene can be derived, and from the derived scene type, the degree of risk is calculated.

Accordingly, in this example, a frame image 501 surrounding the occupied area of the object (block) classified as a "side gutter block" and a character information icon 502 indicating that the detected (classified) object is a side gutter block are superimposed on the input image, both based on the output of the learned model LM. Likewise, a frame image 503 surrounding the occupied area of the object (groove) classified as an "excavated groove" and a character information icon 504 indicating that the detected (classified) object is a groove, one of the topographic shapes, are superimposed on the input image. This allows the operator or the like to easily notice the detected objects and their types. The predicted probabilities used by the determination unit 344, specifically the predicted probability that a "side gutter block" is present and the predicted probability that an "excavated groove" is present, may also be displayed in the camera image display area of the display device D1.

The determination unit 344 classifies the scene in which the shovel 100 finds itself as a "scene of excavating near the blocks" based on the object types, positional relationship, and scene obtained from the learned model LM. The predicted probability of this classification may likewise be displayed in the camera image display area of the display device D1, along with a level indicator (for example, five levels) showing the degree of risk. This allows the operator of the shovel 100 to easily confirm the classification of the dangerous scene and its cause, and to quickly take action to reduce the degree of risk.

The determination unit 344 can also determine the work content within the scene type. For example, when it recognizes an image of a dump truck at one position in the image and an image of temporarily placed earth and sand at another position, the determination unit 344 can determine, from the dump truck and its position together with the temporarily placed earth and sand and their position, that the work content in the scene is loading work. In fig. 9, the determination unit 344 can determine from the recognized positions of the wire rope image and the bucket 6 image that the work content in the scene is crane work. In this way, the determination unit 344 can determine the work content from the recognized objects and their positions by means of the learned model. As the learned model LM, a support vector machine (SVM) or the like may also be applied instead of the neural network 401.
Further, a converted image based on the input images (for example, the viewpoint-converted composite image described above) may be displayed on the display device D1. In that case, the frame images and character information icons may be superimposed on the portions of the converted image corresponding to the occupied areas of the objects.
The learned model LM is stored in the storage unit 346. When an updated version of the learned model, that is, a learned model subjected to additional learning (hereinafter referred to as an "additionally learned model"), is received from the management device 200 via the communication device 90, the learned model LM in the storage unit 346 is updated to the received additionally learned model. This lets the determination unit 344 use the model additionally learned in the management device 200, so updating the learned model improves the accuracy of determinations regarding objects around the shovel 100.
The management device 200 includes a control device 210, a communication device 220, a display device 230, an input device 240, and a computer graphics image generating device (hereinafter referred to as the "CG image generating device") 250.
The control device 210 controls the various operations of the management device 200. The control device 210 includes, for example, a determination unit 2101, a teaching data generation unit 2102, and a learning unit 2103 as functional units realized by executing one or more programs stored in a ROM or a nonvolatile auxiliary storage device on a CPU. It also includes storage units 2104 and 2105 as storage areas defined in a nonvolatile internal memory such as the auxiliary storage device.
The communication device 220 is an arbitrary device that connects to a communication network and communicates with external devices such as the plurality of shovels 100.
The display device 230 is, for example, a liquid crystal display or an organic EL display, and displays various kinds of information under the control of the control device 210.
The input device 240 receives operation inputs from the user. The input device 240 includes, for example, a touch panel mounted on the liquid crystal display or organic EL display. It may also include a keyboard, a mouse, a trackball, and the like. Information on the operating state of the input device 240 is captured by the control device 210.
The determination unit 2101 performs determinations regarding objects around the shovels 100 based on the input images IM1 received from the plurality of shovels 100, that is, the input images IM1 read from the storage unit 2104 (path 2101A), using the learned model LM stored in the storage unit 2105 and machine-learned by the learning unit 2103. Specifically, the determination unit 2101 loads the learned model LM from the storage unit 2105 into a main storage device such as the RAM (path 2101B) and causes the CPU to execute it, thereby performing determinations regarding objects around the shovels 100 based on the input images IM1 read from the storage unit 2104. More specifically, the determination unit 2101 sequentially inputs the plurality of input images IM1 stored in the storage unit 2104 to the learned model LM. The determination result 2101D of the determination unit 2101 is input to the teaching data generation unit 2102, either sequentially for each input image IM1 or collectively after tabulation or the like.
The teaching data generation unit 2102 (an example of a teaching information generation unit) generates teaching data (an example of teaching information) with which the learning unit 2103 performs machine learning of a learning model, based on the plurality of input images IM1 received from the plurality of shovels 100. The teaching data represents combinations of an input image IM1 and the correct answer that the learning model should output when that input image IM1 is given as input. The learning model is the target of the machine learning and naturally has the same basic configuration as the learned model LM, for example, centered on the neural network 401.
For example, the teaching data generation unit 2102 reads the input images IM1 received from the plurality of shovels 100 from the storage unit 2104 (path 2102A), displays each input image IM1 on the display device 230, and displays a GUI (Graphical User Interface) with which an administrator or operator of the management device 200 creates teaching data (hereinafter referred to as the "teaching data creation GUI"). The administrator or operator then operates the teaching data creation GUI through the input device 240 to designate the correct answer corresponding to each input image IM1, thereby creating teaching data in a format matching the learning model's algorithm. In other words, the teaching data generation unit 2102 can generate a plurality of teaching data (a teaching data set) based on the operations (work) that the administrator or operator performs on the plurality of input images IM1.
The teaching data generation unit 2102 also generates teaching data with which the learning unit 2103 additionally trains the learned model LM, based on the plurality of input images IM1 received from the plurality of shovels 100.
For example, the teaching data generation unit 2102 reads the plurality of input images IM1 from the storage unit 2104 (path 2102A) and displays each input image IM1 side by side with the corresponding determination result (output result) 2101D of the determination unit 2101 on the display device 230. The administrator or operator of the management device 200 can then use the input device 240 to select, from the combinations of input image IM1 and determination result displayed on the display device 230, the combinations corresponding to erroneous determinations. Further, by operating the teaching data creation GUI through the input device 240, the administrator or operator can create teaching data for additional learning, each representing a combination of an input image IM1 that caused the learned model LM to make an erroneous determination and the correct answer that the learned model LM should output when that input image IM1 is given as input. In other words, the teaching data generation unit 2102 can generate a plurality of teaching data (a teaching data set) for additional learning based on the operations (work) with which the administrator or operator selects, from the plurality of input images IM1, those corresponding to erroneous determinations by the learned model LM.
That is, the teaching data generation unit 2102 generates teaching data for generating the initial learned model LM based on the plurality of input images IM1 received from the plurality of shovels 100. Then, at predetermined timings (hereinafter referred to as "additional learning timings"), the teaching data generation unit 2102 generates teaching data for additional learning from those input images IM1, among the input images IM1 received from the plurality of shovels 100, for which the learned model LM most recently deployed to the shovels 100 made erroneous determinations.
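The selection of misjudged images into an additional-learning set can be sketched as follows; the record shapes and the `reviewer` callback stand in for the GUI operation and are not from the patent.

```python
def build_additional_teaching_set(results, reviewer):
    """results: iterable of (input_image, model_output) pairs.
    reviewer(input_image, model_output) models the administrator's GUI
    work: it returns the correct answer when the output is wrong, or
    None when the model was right."""
    teaching_set = []
    for image, model_output in results:
        correct = reviewer(image, model_output)
        if correct is not None:              # misjudged -> teaching data
            teaching_set.append({"input": image, "answer": correct})
    return teaching_set
```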
A portion of the input images IM1 received from the shovels 100 may be used as the basis of a verification data set for the learned model LM. That is, the input images IM1 received from the shovels 100 may be divided into input images IM1 for teaching data generation and input images IM1 for verification data set generation.
The additional learning timing may be a periodically defined timing, for example, when one month has elapsed since the previous machine learning (additional learning). It may also be, for example, when the number of accumulated input images IM1 exceeds a predetermined threshold, that is, when the number of input images IM1 needed for additional learning by the learning unit 2103 has been collected.
The learning unit 2103 generates the learned model LM by machine-learning the learning model based on the teaching data 2102B (teaching data set) generated by the teaching data generation unit 2102. The generated learned model LM is stored in the storage unit 2105 (path 2103B) after its accuracy has been verified using a verification data set prepared in advance.
The learning unit 2103 also generates an additionally learned model by additionally training the learned model LM read from the storage unit 2105 (path 2103A), based on the teaching data (teaching data set) generated by the teaching data generation unit 2102. After the accuracy of the additionally learned model has been verified using a verification data set prepared in advance, the learned model LM in the storage unit 2105 is updated with the verified additionally learned model (path 2103B).
For example, when the learning model is configured around the neural network 401 as described above, the learning unit 2103 generates the learned model LM by optimizing the weighting parameters with a known algorithm such as error backpropagation so as to reduce the error between the output of the learning model and the teaching data. The same applies to the generation of the additionally learned model.
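A generic PyTorch training loop of this kind might look as follows; the loss, optimizer, epoch count, and learning rate are illustrative assumptions, and the model is any classifier producing logits.

```python
import torch.nn as nn
import torch.optim as optim

def additional_learning(model, teaching_loader, epochs=5, lr=1e-4):
    """Minimise the error between model outputs and teaching data by
    error backpropagation, updating the weighting parameters."""
    loss_fn = nn.CrossEntropyLoss()
    opt = optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, answers in teaching_loader:  # teaching data set
            opt.zero_grad()
            loss = loss_fn(model(images), answers)
            loss.backward()                      # backpropagate the error
            opt.step()                           # update weighting parameters
    return model  # verify against the verification data set before use
```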
The initial learned model LM may also be generated from the learning model by an external device other than the management device 200. In that case, the teaching data generation unit 2102 need only generate teaching data for additional learning, and the learning unit 2103 need only generate the additionally learned model.
The storage unit 2104 stores (saves) the input images IM1 received from the plurality of shovels 100 via the communication device 220.
The input images IM1 used by the teaching data generation unit 2102 to generate teaching data may instead be stored in a storage device other than the storage unit 2104.
The learned model LM is stored (saved) in the storage unit 2105. The learned model LM updated with the additionally learned model generated by the learning unit 2103 is transmitted to each of the plurality of shovels 100 via the communication device 220 (an example of a model transmitting unit) at predetermined timings (hereinafter referred to as "model transmission timings"). This makes it possible to share the same updated learned model LM, that is, the same additionally learned model, among the plurality of shovels 100.
The model transmission timing may be immediately after the learned model LM in the storage unit 2105 is updated, or when a predetermined time has elapsed after the update. It may also be, for example, when the communication device 220 receives, after the update, a reply to an update notification transmitted to the plurality of shovels 100 via the communication device 220.
Next, a specific operation of the shovel support system will be described with reference to fig. 15. Fig. 15 is a timing chart showing an example of the operation of the shovel support system.
In step S10, the communication devices 90 of the plurality of shovels 100 transmit input images IM1 to the management device 200 at their respective image transmission timings. The management device 200 receives the input images IM1 from each shovel 100 via the communication device 220 and accumulates them in the storage unit 2104.
In step S12, the determination unit 2101 of the management device 200 inputs the plurality of input images IM1 received from the plurality of shovels 100 and stored in the storage unit 2104 to the learned model LM, and performs the determination process.
In step S14, the administrator or operator of the management device 200 verifies the determination results of the learned model LM and, through the input device 240, specifies (selects) the input images IM1 that the learned model LM determined erroneously.
In step S16, the teaching data generation unit 2102 of the management device 200 generates a teaching data set for additional learning based on the operations the administrator or operator performs on the teaching data creation GUI through the input device 240.
In step S18, the learning unit 2103 of the management device 200 performs additional learning of the learned model LM using the teaching data set for additional learning, generates an additionally learned model, and updates the learned model LM in the storage unit 2105 with it.
In step S20, the communication device 220 of the management device 200 transmits the updated learned model LM to each shovel 100.
As described above, the timing at which the updated learned model LM is transmitted to each shovel 100 (the model transmission timing) may differ among the plurality of shovels 100.
In step S22, each shovel 100 updates the learned model LM in its storage unit 346 to the updated version received from the management device 200.
The CG image generating device 250 generates a computer graphics image (hereinafter referred to as a "CG image") IM3 representing the surroundings of a shovel 100 at a work site, in accordance with operations by an operator or the like of the management device 200. For example, the CG image generating device 250 is configured around a computer including a CPU, RAM, ROM, auxiliary storage device, and various input/output interfaces, on which application software enabling the operator or the like to create CG images IM3 is installed in advance. The operator or the like then creates a CG image IM3 on the display screen of the CG image generating device 250 using a predetermined input device. In this way, the CG image generating device 250 can generate CG images IM3 representing the surroundings of a shovel 100 at a work site in accordance with the work (operations) of the operator or the like of the management device 200. The CG image generating device 250 can also generate, from an image of the actual surroundings of the shovel 100 (for example, an input image IM1), CG images IM3 corresponding to different weather conditions, different sunshine conditions, or other work environments. The CG images IM3 generated by the CG image generating device 250 are captured by the control device 210.
The CG image IM3 may be generated (created) outside the management apparatus 200.
The control device 210 includes a determination unit 2101, a teaching data generation unit 2102, a learning unit 2103, and storage units 2104 and 2105, as in the above example.
The determination unit 2101 performs determinations regarding objects around the shovels 100 based on the plurality of input images IM1 (path 2101A) and CG images IM3 (path 2101C) read from the storage unit 2104, using the learned model LM stored in the storage unit 2105 and machine-learned by the learning unit 2103. Specifically, the determination unit 2101 loads the learned model LM from the storage unit 2105 into a main storage device such as the RAM (path 2101B) and causes the CPU to execute it, thereby performing determinations based on the input images IM1 and CG images IM3 read from the storage unit 2104. More specifically, the determination unit 2101 sequentially inputs the plurality of input images IM1 and CG images IM3 stored in the storage unit 2104 to the learned model LM. The determination result 2101D of the determination unit 2101 is input to the teaching data generation unit 2102, either sequentially for each input image IM1 or CG image IM3, or collectively after tabulation or the like.
The teaching data generation unit 2102 generates teaching data, with which the learning unit 2103 machine-learns the learning model, based on the plurality of input images IM1 received from the plurality of shovels 100 and the CG images IM3 generated by the CG image generating device 250 (and stored in the storage unit 2104).
For example, the teaching data generation unit 2102 reads the input images IM1 received from the plurality of shovels 100 and the CG images IM3 generated by the CG image generating device 250 from the storage unit 2104 (paths 2102A and 2102C), displays them on the display device 230, and displays the teaching data creation GUI. The administrator or operator then operates the teaching data creation GUI through the input device 240 to designate the correct answer corresponding to each input image IM1 or CG image IM3, thereby creating teaching data in a format matching the learning model's algorithm. In other words, the teaching data generation unit 2102 can generate a plurality of teaching data (a teaching data set) based on the operations (work) the administrator or operator performs on the plurality of input images IM1 and CG images IM3.
The teaching data generation unit 2102 likewise generates teaching data, with which the learning unit 2103 additionally trains the learned model LM, based on the plurality of input images IM1 received from the plurality of shovels 100 and the CG images IM3 generated by the CG image generating device 250 (and stored in the storage unit 2104).
The teaching data generation unit 2102 reads the plurality of input images IM1 and CG images IM3 from the storage unit 2104 (paths 2102A and 2102C) and displays each input image IM1 or CG image IM3 side by side with the corresponding determination result (output result) of the determination unit 2101 (learned model LM) on the display device 230. The administrator or operator of the management device 200 can then use the input device 240 to select, from the displayed combinations of input image IM1 or CG image IM3 and determination result, the combinations corresponding to erroneous determinations. Further, by operating the teaching data creation GUI through the input device 240, the administrator or operator can create teaching data for additional learning, each representing a combination of the input image IM1 or CG image IM3 involved in an erroneous determination and the correct answer that the learned model LM should output when that image is given as input. In other words, the teaching data generation unit 2102 can generate a plurality of teaching data (a teaching data set) for additional learning based on the operations (work) with which the administrator or operator selects, from the plurality of input images IM1 and CG images IM3, those corresponding to erroneous determinations by the learned model LM.

In this way, teaching data can be generated using the CG images IM3 in addition to the input images IM1 collected from the plurality of shovels 100, which enriches the teaching data. In particular, CG images IM3 can virtually and freely reproduce a wide variety of work site situations, that is, a wide variety of environmental conditions. Therefore, by generating teaching data sets using CG images IM3, the learned model LM can reach relatively high determination accuracy for diverse work sites at an earlier stage.
Since the CG image IM3 generated by the CG image generating device 250 is created artificially, the presence or absence, and the positions, of persons and of monitoring targets such as trucks, towers, and utility poles in the CG image IM3 are known in advance; in other words, the correct answer that the learned model LM should output when the CG image IM3 is input is known. The CG image generating device 250 can therefore output to the control device 210, together with the CG image IM3, data on that correct answer (hereinafter, "correct answer data"). Based on the correct answer data input from the CG image generating device 250, the control device 210 (teaching data generation unit 2102) can automatically extract erroneous determinations made by the learned model LM (determination unit 2101) when the CG image IM3 is given as input, and can automatically generate a plurality of teaching data (a teaching data set) for additional learning, each pairing a CG image IM3 that produced an erroneous determination with the correct answer the learned model LM should output for that image. The learning unit 2103 can then perform additional learning of the learned model LM based on the teaching data automatically generated by the teaching data generation unit 2102, using, for example, the error backpropagation method (backpropagation) described above. That is, the control device 210 can automatically generate an additionally learned model from the CG images IM3 generated by the CG image generating device 250 and the corresponding correct answer data.
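As an illustration of this automatic generation of additional-learning teaching data, the following Python sketch pairs each CG image with its correct answer known at generation time and keeps only the cases the learned model misjudges. All identifiers (`learned_model`, `cg_samples`, `predict`, and so on) are hypothetical stand-ins, not names from the embodiment.

```python
# Hypothetical sketch: auto-generating additional teaching data from CG
# images whose correct answers are known when the images are generated.

def generate_additional_teaching_data(learned_model, cg_samples):
    """cg_samples: iterable of (cg_image, correct_answer) pairs supplied
    by the CG image generation device together with each image."""
    teaching_data = []
    for cg_image, correct_answer in cg_samples:
        prediction = learned_model.predict(cg_image)
        # An erroneous determination is any prediction that differs
        # from the known correct answer for that CG image.
        if prediction != correct_answer:
            # Pair the misjudged image with the answer the model should
            # have produced; the pair becomes additional teaching data.
            teaching_data.append((cg_image, correct_answer))
    return teaching_data

# The learning unit would then retrain the model on the returned set,
# e.g. by backpropagation:  learning_unit.fit(teaching_data)
```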
Next, another example of the determination process by the determination unit 2101 will be described with reference to fig. 16. Fig. 16 is a schematic diagram showing another example of the determination process by the determination unit 2101. In the example shown in fig. 16, the learned model LM is mainly composed of a 1st neural network 401A and a 2nd neural network 401B.
The 1st neural network 401A receives an input image as an input signal x and can output, as an output signal y, the probability (predicted probability) that an object of each object type in a predefined monitoring target list exists, together with position information of that object. In the example shown in fig. 16, the input image is a captured image captured by the front camera 70F, and the objects in the monitoring target list include a clay pipe, a hole, and the like.
For example, when a captured image as shown in fig. 16 is input as the input signal x, the 1st neural network 401A estimates with high probability that a clay pipe is present. The 1st neural network 401A derives the position (for example, latitude, longitude, and altitude) of the clay pipe from information on the position of the front camera 70F. The information on the position of the front camera 70F is, for example, the latitude, longitude, and altitude of the front camera 70F, derived from the output of the positioning device 73. Specifically, the 1st neural network 401A can derive the position of the clay pipe from the position, size, and the like of the clay pipe's image within the captured image. In the example shown in fig. 16, the 1st neural network 401A outputs, as the output signal y, an estimation result indicating that a clay pipe is present at east longitude e1, north latitude n1, and altitude h1.
The 1st neural network 401A can also output the probability (predicted probability) that an object of each object type in the predefined monitoring target list exists, and the position information of that object, based on information related to the construction plan. In the example shown in fig. 16, the 1st neural network 401A derives the position (for example, latitude, longitude, and altitude) of a hole for burying a side block from information on the range to be excavated for the hole, as shown in fig. 11. Specifically, the 1st neural network 401A can derive the position of the hole from the position-related information contained in the design data. In the example shown in fig. 16, the 1st neural network 401A outputs, as the output signal y, a recognition result indicating a hole planned to be formed at east longitude e2, north latitude n2, and altitude h2.
The 2nd neural network 401B receives the output signal y of the 1st neural network 401A as its input and can output, as an output signal z, the risk level at that point in time for each scene (situation), based on the positional relationship and the like of the objects whose presence has been estimated or recognized by the 1st neural network 401A.
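The cascade of the two networks can be pictured, purely as a sketch, as follows; `detector`, `positions_from_plan`, and `risk_net` are assumed stand-ins for the 1st network, the construction-plan lookup, and the 2nd network, and do not appear in the embodiment.

```python
# Hypothetical sketch of the two-stage cascade: the 1st network detects
# objects and positions, the 2nd network maps the detections to a risk
# level for each predefined scene (situation).

def estimate_risk(input_image, camera_position, design_data,
                  detector, risk_net):
    # Output signal y: per-object-type existence probabilities plus
    # positions (latitude, longitude, altitude), derived both from the
    # captured image and from construction-plan information.
    detections = detector(input_image, camera_position)
    planned = detector.positions_from_plan(design_data)  # e.g. planned holes
    y = detections + planned
    # Output signal z: risk level per scene, computed from the
    # positional relationship of the detected and planned objects.
    z = risk_net(y)
    return z
```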
Next, another example of the determination process by the determination unit 2101 will be described with reference to fig. 17. Fig. 17 is a schematic diagram showing another example of the determination process by the determination unit 2101. In the example shown in fig. 17, the learned model LM is mainly composed of the 1st neural network 401A and a 3rd neural network 401C.
In the example shown in fig. 17, the degree of risk when a dump truck is stopped in front of the shovel 100 is determined. The dump truck stops in front of the shovel 100 so that the shovel 100 can load earth and sand into its bed. In the example shown in fig. 17, however, the side brake of the dump truck has not been properly applied, so the dump truck moves away from the shovel 100 contrary to the intention of its driver. In addition, the bed of the dump truck is not loaded with earth and sand.
In the example shown in fig. 17, when the captured image shown in fig. 17 is input as the input signal x at time t1, the 1st neural network 401A recognizes the dump truck. The 1st neural network 401A derives the position (for example, latitude, longitude, and altitude) of the dump truck from information on the position of the front camera 70F. In the example shown in fig. 17, the 1st neural network 401A outputs, as the output signal y at time t1, a recognition result indicating the dump truck at east longitude e1, north latitude n1, and altitude h1.

When the captured image shown in fig. 17 is input as the input signal x at time t2, the 1st neural network 401A recognizes the dump truck at a position farther from the shovel 100 than at time t1. The 1st neural network 401A derives the position (for example, latitude, longitude, and altitude) of the dump truck from information on the position of the front camera 70F. In the example shown in fig. 17, the 1st neural network 401A outputs, as the output signal y at time t2, a recognition result indicating the dump truck at east longitude e2, north latitude n2, and altitude h2.

When the captured image shown in fig. 17 is input as the input signal x at time t3, the 1st neural network 401A recognizes the dump truck at a position farther from the shovel 100 than at time t2. The 1st neural network 401A derives the position (for example, latitude, longitude, and altitude) of the dump truck from information on the position of the front camera 70F. In the example shown in fig. 17, the 1st neural network 401A outputs, as the output signal y at time t3, a recognition result indicating the dump truck at east longitude e3, north latitude n3, and altitude h3.
The 3rd neural network 401C receives, as its input, the output signal y of the 1st neural network 401A at a predetermined past time together with the output signal y at the current time, and can output, as an output signal z, the risk level at the current time for each scene (situation), based on the positional relationship and the like of the objects recognized by the 1st neural network 401A at each time.
In the example shown in fig. 17, at time t2 the 3rd neural network 401C receives the output signal y of the 1st neural network 401A at time t1 and the output signal y at time t2. The 3rd neural network 401C can then output the risk level at time t2 for each scene (situation), based on the position of the dump truck at time t1 and its position at time t2 as recognized by the 1st neural network 401A.
In the example shown in fig. 17, scene 1 is, for example, a scene (situation) in which a dump truck whose side brake has not been properly applied moves forward, and scene 2 is, for example, a scene (situation) in which such a dump truck moves backward. The 3rd neural network 401C determines, from the position of the dump truck at time t1 and its position at time t2 as recognized by the 1st neural network 401A, that the dump truck is moving forward, and can output a high risk level for scene 1.
Likewise, at time t3 the 3rd neural network 401C receives the output signal y of the 1st neural network 401A at time t2 and the output signal y at time t3. The 3rd neural network 401C can then output the risk level at time t3 for each scene (situation), based on the position of the dump truck at time t2 and its position at time t3 as recognized by the 1st neural network 401A.
The 3rd neural network 401C determines, from the position of the dump truck at time t2 and its position at time t3 as recognized by the 1st neural network 401A, that the dump truck is still moving forward, and can output an even higher risk level for scene 1.
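The temporal comparison underlying this judgment can be sketched as follows; `temporal_net` and the helper `moved_away` are illustrative assumptions, not names from the embodiment.

```python
# Hypothetical sketch: the 3rd network receives the detections at the
# previous and current times, and raises the risk of the "unbraked
# dump truck moving" scenes when the truck's position drifts.

def temporal_risk(y_prev, y_curr, temporal_net):
    # y_prev, y_curr: output signals of the 1st network at successive
    # times (object type plus latitude/longitude/altitude).
    return temporal_net(y_prev, y_curr)

def moved_away(pos_prev, pos_curr, shovel_pos):
    """A simple geometric cue such a network can learn: the truck's
    distance from the shovel increased between the two observations."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return dist(pos_curr, shovel_pos) > dist(pos_prev, shovel_pos)
```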
Next, another configuration example of a shovel support system using a neural network will be described with reference to fig. 18. Fig. 18 is a diagram showing another configuration example of the shovel support system and corresponds to fig. 13.
Fig. 18 shows a configuration in which three excavators 100 (excavator 100A, excavator 100B, and excavator 100C) are each wirelessly connected, via their communication devices 90, to the communication device 220 of the management device 200. Fig. 18 also shows a configuration in which the support device 300, which includes a display unit 310, an input unit 320, and a communication unit 330, is wirelessly connected to the communication device 220 of the management device 200 via the communication unit 330.
The management device 200 constituting the shovel support system shown in fig. 18 differs from the management device 200 shown in fig. 13 mainly in that its control device 210 includes an operation control command generation unit 2106. Each shovel 100 shown in fig. 18 differs from the shovel 100 shown in fig. 13 mainly in that the determination device 34 is omitted.
In the example shown in fig. 18, the operation control command generation unit 2106, a function of the control device 210 in the management device 200, plays the same role as the determination unit 344, a function of the determination device 34 in the shovel 100 shown in fig. 13. Specifically, the operation control command generation unit 2106 can generate an operation control command for the operation control unit 304, a function of the controller 30 mounted on each shovel 100, based on the determination result 2101E of the determination unit 2101. The determination result 2101E is, for example, the same as the determination result 2101D.
Therefore, in the example shown in fig. 18, the operation control command generation unit 2106 of the management device 200 can, via wireless communication, individually operate the operation control units 304 in the controllers 30 mounted on the plurality of excavators 100 (excavator 100A, excavator 100B, and excavator 100C).
The operation control command generation unit 2106 of the management device 200 can also display an image of the shovel's surroundings on the display unit 310 of the support device 300, for example in response to an input made by an operator through the input unit 320 of the support device 300. The operation control command generation unit 2106 can likewise display the determination result of the determination unit 2101 on the display unit 310.
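As a rough sketch of this command flow, with `determination_result`, `comm_device`, and all command fields being illustrative assumptions rather than elements of the embodiment, the management-device side could look like this:

```python
# Hypothetical sketch: the management device turns a determination
# result into an operation control command and sends it, over wireless
# communication, to the controller of the shovel concerned.

def on_determination(determination_result, comm_device):
    if determination_result.risk_level > determination_result.threshold:
        command = {
            "type": "operation_control",
            "action": "decelerate_or_stop",      # illustrative action
            "target": determination_result.shovel_id,
        }
        # Delivered via the communication device 220 to the
        # communication device 90 of the addressed shovel.
        comm_device.send(determination_result.shovel_id, command)
```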
Next, another configuration example of the image display unit 41 and the operation unit 42 of the display device D1 will be described with reference to fig. 19. Fig. 19 is a diagram showing another configuration example of the image display unit 41 and the operation unit 42 of the display device D1. In the example shown in fig. 19, the input image of fig. 5 is displayed on the image display unit 41.
First, the image display unit 41 will be described. As shown in fig. 19, the image display unit 41 includes a date and time display area 41a, a travel mode display area 41b, an attachment display area 41c, a fuel consumption rate display area 41d, an engine control state display area 41e, an engine operating time display area 41f, a cooling water temperature display area 41g, a remaining fuel amount display area 41h, a rotation speed mode display area 41i, a remaining urea water amount display area 41j, a hydraulic oil temperature display area 41k, an air-conditioning operating state display area 41m, an image display area 41n, and a menu display area 41p.
The travel mode display area 41b, the attachment display area 41c, the engine control state display area 41e, the rotation speed mode display area 41i, and the air-conditioning operating state display area 41m display setting state information, that is, information related to the setting state of the shovel 100. The fuel consumption rate display area 41d, the engine operating time display area 41f, the cooling water temperature display area 41g, the remaining fuel amount display area 41h, the remaining urea water amount display area 41j, and the hydraulic oil temperature display area 41k display operating state information, that is, information related to the operating state of the shovel 100.
Specifically, the date and time display area 41a displays the current date and time. The travel mode display area 41b displays the current travel mode. The attachment display area 41c displays an image representing the currently mounted attachment. The fuel consumption rate display area 41d displays fuel consumption rate information calculated by the controller 30, and includes an average fuel consumption rate display area 41d1 for displaying the lifetime average or section average fuel consumption rate and an instantaneous fuel consumption rate display area 41d2 for displaying the instantaneous fuel consumption rate.
The engine control state display area 41e displays the control state of the engine 11. The engine operating time display area 41f displays the cumulative operating time of the engine 11. The cooling water temperature display area 41g displays the current temperature state of the engine cooling water. The remaining fuel amount display area 41h displays the remaining amount of fuel stored in the fuel tank. The rotation speed mode display area 41i displays, as an image, the current rotation speed mode set by the engine rotation speed dial 75. The remaining urea water amount display area 41j displays, as an image, the remaining amount of urea water stored in the urea water tank. The hydraulic oil temperature display area 41k displays the temperature state of the hydraulic oil in the hydraulic oil tank.
The air-conditioning operating state display area 41m includes an outlet display area 41m1 for displaying the current position of the air outlet, an operation mode display area 41m2 for displaying the current operation mode, a temperature display area 41m3 for displaying the current set temperature, and an air volume display area 41m4 for displaying the current set air volume.
The image display area 41n displays an image output by the space recognition device 70 or the like. In the example shown in fig. 19, the image display area 41n displays an image captured by the front camera. The image display area 41n may also display an overhead image or a rearward image. The overhead image is, for example, a virtual viewpoint image generated by the control unit 40 from the images acquired by the rear camera 70B, the left camera 70L, and the right camera 70R. A shovel graphic corresponding to the shovel 100 may be placed in the center of the overhead image so that the operator can intuitively grasp the positional relationship between the shovel 100 and the objects existing around it. The rearward image is an image of the space behind the shovel 100 and includes an image of the counterweight; it is, for example, an actual viewpoint image generated by the control unit 40 from the image acquired by the rear camera 70B. In the example shown in fig. 19, the image display area 41n is a vertically long area, but it may instead be a horizontally long area.
The menu display area 41p has tabs 41p1 to 41p7. In the example shown in fig. 19, the tabs 41p1 to 41p7 are arranged at intervals in the left-right direction at the lowermost portion of the image display unit 41. Icons for displaying various kinds of information are displayed on the tabs 41p1 to 41p7.
A menu detail item icon for displaying detailed menu items is displayed on the tab 41p1. When the tab 41p1 is selected by the operator, the icons displayed on the tabs 41p2 to 41p7 are switched to icons associated with the detailed menu items.
An icon for displaying information related to the digital level is displayed on the tab 41p4. When the tab 41p4 is selected by the operator, the currently displayed image is switched to a screen showing information related to the digital level. Alternatively, the screen showing information related to the digital level may be superimposed on the currently displayed image, or the currently displayed image may be reduced in size to make room for it.
An icon for displaying information related to information-oriented construction is displayed on the tab 41p6. When the tab 41p6 is selected by the operator, the currently displayed image is switched to a screen showing information related to information-oriented construction. Alternatively, the screen may be superimposed on the currently displayed image, or the currently displayed image may be reduced in size.
An icon for displaying information related to the crane mode is displayed on the tab 41p7. When the tab 41p7 is selected by the operator, the currently displayed image is switched to a screen showing information related to the crane mode. Alternatively, the screen may be superimposed on the currently displayed image, or the currently displayed image may be reduced in size.
No icons are displayed on the tabs 41p2, 41p3, and 41p5, so operating these tabs does not change the image displayed on the image display unit 41.
The icons displayed on the tabs 41p1 to 41p7 are not limited to the above examples; icons for displaying other information may be displayed.
Next, the operation unit 42 will be described. As shown in fig. 19, the operation unit 42 is composed of one or more push-button switches with which the operator selects the tabs 41p1 to 41p7, enters settings, and so on. In the example shown in fig. 19, the operation unit 42 includes seven switches 42a1 to 42a7 arranged in an upper row and seven switches 42b1 to 42b7 arranged in a lower row, with each of the switches 42b1 to 42b7 disposed below the corresponding one of the switches 42a1 to 42a7. However, the number, form, and arrangement of the switches of the operation unit 42 are not limited to this example; for example, the functions of a plurality of push-button switches may be combined into one by means of a wheel, a jog switch, or the like, or the operation unit 42 may be provided separately from the display device D1. The tabs 41p1 to 41p7 may also be operated directly on a touch panel in which the image display unit 41 and the operation unit 42 are integrated.
The switches 42a1 to 42a7 are disposed below the tabs 41p1 to 41p7, in correspondence with them, and function as switches for selecting the respective tabs; because of this correspondence, the operator can select the tabs 41p1 to 41p7 intuitively. In the example shown in fig. 19, when the switch 42a1 is operated, the tab 41p1 is selected, the menu display area 41p changes from a one-tier display to a two-tier display, and icons corresponding to the 1st menu are displayed on the tabs 41p2 to 41p7. The currently displayed image is reduced in size to accommodate this change of the menu display area 41p. The size of the overhead image, however, remains unchanged, so the operator's visibility when checking the surroundings of the shovel 100 does not deteriorate.
The switch 42b1 switches the captured image displayed in the image display area 41n. The captured image displayed in the image display area 41n is switched among, for example, the rearward image, the left image, the right image, and the overhead image each time the switch 42b1 is operated.
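The cyclic switching bound to the switch 42b1 can be sketched as follows; the view names and class structure are illustrative assumptions only.

```python
# Hypothetical sketch of the view cycling bound to switch 42b1: each
# operation advances to the next view in a fixed order.

VIEWS = ["rearward", "left", "right", "overhead"]

class ImageDisplayArea:
    def __init__(self):
        self.index = 0

    def on_switch_42b1(self):
        # Advance cyclically through the available captured images.
        self.index = (self.index + 1) % len(VIEWS)
        return VIEWS[self.index]   # the view now shown in area 41n
```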
The switches 42b2 and 42b3 adjust the air volume of the air conditioner. In the example of fig. 4, the air volume of the air conditioner decreases when the switch 42b2 is operated and increases when the switch 42b3 is operated.
The switch 42b4 switches the cooling/heating function between ON and OFF. In the example of fig. 4, the cooling/heating function is switched between ON and OFF each time the switch 42b4 is operated.
The switches 42b5 and 42b6 adjust the set temperature of the air conditioner. In the example of fig. 4, the set temperature is lowered when the switch 42b5 is operated and raised when the switch 42b6 is operated.
The switch 42b7 switches the display of the engine operating time display area 41f.
The switches 42a2 to 42a6 and 42b2 to 42b6 allow input of the numbers displayed on or near the respective switches. When a cursor is displayed on the menu screen, the switches 42a3, 42a4, 42a5, and 42b4 move the cursor left, up, right, and down, respectively.
The functions assigned to the switches 42a1 to 42a7 and 42b1 to 42b7 are examples; the switches may execute other functions.
As described above, when the tab 41p1 is selected while a given image is displayed in the image display area 41n, the detailed items of the 1st menu are displayed on the tabs 41p2 to 41p7 with that image still displayed. The operator can therefore check the detailed items of the 1st menu while continuing to view the image.
In the image display area 41n, the overhead image is displayed at the same size before and after the tab 41p1 is selected, so the operator's visibility when checking the surroundings of the shovel 100 does not deteriorate.
Next, the construction system SYS will be described with reference to fig. 20. Fig. 20 is a schematic diagram showing an example of the construction system SYS. As shown in fig. 20, the construction system SYS includes the shovel 100, the management device 200, and the support device 300, and is configured to support construction performed by one or more excavators 100.
Information acquired by the shovel 100 can be shared, through the construction system SYS, with managers, other shovel operators, and the like. The construction system SYS may include one or more of each of the shovel 100, the management device 200, and the support device 300; in the example shown in fig. 20, it includes one shovel 100, one management device 200, and one support device 300.
The management device 200 is typically a fixed terminal device, and is, for example, a server computer (so-called cloud server) installed in a management center or the like outside a construction site. The management device 200 may be an edge server installed at a construction site, for example. The management device 200 may be a portable terminal device (e.g., a mobile terminal such as a laptop computer terminal, a tablet terminal, or a smartphone).
The support device 300 is typically a mobile terminal device, such as a laptop computer terminal, a tablet terminal, or a smartphone carried by a worker or the like at the construction site. The support device 300 may also be a mobile terminal carried by the operator of the shovel 100, or a fixed terminal device.
At least one of the management device 200 and the support device 300 may include a monitor and a remote operation device. In that case, an operator using the management device 200 or the support device 300 may operate the shovel 100 with the remote operation device. The remote operation device is communicably connected to the controller 30 mounted on the shovel 100 via a wireless communication network such as a short-range wireless communication network, a mobile phone communication network, or a satellite communication network.
The various information images displayed on the display device D1 in the cab 10 (for example, image information showing the surroundings of the shovel 100 and various setting screens) may also be displayed on a display device connected to at least one of the management device 200 and the support device 300. The image information showing the surroundings of the shovel 100 may be generated from images captured by an imaging device (for example, the imaging device serving as the space recognition device 70). An administrator using the management device 200 or a worker using the support device 300 can thereby remotely operate the shovel 100, or make various settings related to the shovel 100, while checking the state of its surroundings.
For example, in the construction system SYS, the controller 30 of the shovel 100 may transmit, to at least one of the management device 200 and the support device 300, information on at least one of the time and place at which a predetermined switch for starting autonomous operation was pressed, the target trajectory used while the shovel 100 operates autonomously, the trajectory actually followed by a predetermined part during autonomous operation, and the like. The controller 30 may also transmit images captured by the space recognition device 70, which may include a plurality of images captured during autonomous operation, and may further transmit information on at least one of data on the operation content of the shovel 100 during autonomous operation, data on the posture of the shovel 100, data on the posture of the excavation attachment, and the like. A manager using the management device 200 or a worker using the support device 300 can thus obtain information on the shovel 100 during autonomous operation.
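A telemetry record of the kind described above might, purely as an assumed illustration, be structured as follows; every field name here is hypothetical and not taken from the embodiment.

```python
# Hypothetical sketch of a record the controller 30 might send to the
# management device 200 or support device 300 during autonomous operation.

autonomous_operation_report = {
    "start_time": "2020-03-30T09:15:00",   # when the start switch was pressed
    "start_place": {"lat": 35.0, "lon": 139.0},
    "target_trajectory": [...],    # trajectory planned for the working part
    "actual_trajectory": [...],    # trajectory actually followed
    "captured_images": [...],      # images from the space recognition device 70
    "operation_content": {...},    # data on the operations performed
    "machine_attitude": {...},     # posture of the shovel body
    "attachment_attitude": {...},  # posture of the excavation attachment
}
```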
The management device 200 or the support device 300 thus stores, in its storage unit, the type and position of monitoring targets outside the monitoring area of the shovel 100 in chronological order.
In this way, the construction system SYS enables information related to the shovel 100 to be shared with managers, other shovel operators, and the like.
As shown in fig. 20, the communication device mounted on the shovel 100 may transmit and receive information to and from the communication device T2 installed in the remote control room RC via wireless communication. In the example shown in fig. 20, the two communication devices exchange information via a 5th-generation mobile communication line (5G line), an LTE line, a satellite line, or the like.
The remote control room RC is provided with the remote controller 30R, a sound output device A2, an indoor imaging device C2, a display device RP, the communication device T2, and the like. The remote control room RC also contains a driver seat DS on which the operator OP who remotely operates the shovel 100 sits.
The remote controller 30R is an arithmetic device that performs various computations. In the example shown in fig. 20, the remote controller 30R, like the controller 30, is composed of a microcomputer including a CPU and a memory, and its various functions are realized by the CPU executing programs stored in the memory.
The sound output device A2 is configured to output sound. In the example shown in fig. 20, the sound output device A2 is a speaker and plays back sound collected by a sound collecting device (not shown) attached to the shovel 100.
The indoor imaging device C2 is configured to image the interior of the remote control room RC. In the example shown in fig. 20, the indoor imaging device C2 is a camera installed inside the remote control room RC and images the operator OP seated on the driver seat DS.
The communication device T2 is configured to control wireless communication with a communication device mounted on the shovel 100.
In the example shown in fig. 20, the driver seat DS has the same structure as the operator's seat installed in the cab of an ordinary shovel. Specifically, a left console box is disposed on the left side of the driver seat DS, and a right console box on its right side. A left operating lever is disposed at the front end of the top surface of the left console box, and a right operating lever at the front end of the top surface of the right console box. A travel lever and a travel pedal are disposed in front of the driver seat DS, and the engine rotation speed dial 75 is disposed in the center of the top surface of the right console box. The left operating lever, the right operating lever, the travel lever, the travel pedal, and the engine rotation speed dial 75 each constitute an operation device 26A.
The operation device 26A is provided with an operation sensor 29A that detects the operation content of the operation device 26A. The operation sensor 29A is, for example, a tilt sensor that detects the tilt angle of an operating lever, or an angle sensor that detects the swing angle of an operating lever about its swing axis, and may also be composed of other sensors such as pressure sensors, current sensors, voltage sensors, or distance sensors. The operation sensor 29A outputs the detected operation content of the operation device 26A to the remote controller 30R, which generates an operation signal from the received information and transmits the generated operation signal to the shovel 100. The operation sensor 29A may itself generate the operation signal; in that case, the operation sensor 29A may output the operation signal to the communication device T2 without going through the remote controller 30R.
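The path from lever operation to transmitted operation signal can be sketched as follows. The mapping of lever axes to boom, arm, swing, and bucket is an assumption made only for illustration (the actual assignment depends on the machine's control pattern), and none of these identifiers come from the embodiment.

```python
# Hypothetical sketch: the operation sensor 29A reports lever state,
# the remote controller 30R converts it into an operation signal and
# hands it to communication device T2 for transmission to the shovel.

def poll_and_transmit(operation_sensor, communication_device):
    reading = operation_sensor.read()   # e.g. tilt angle of each lever
    operation_signal = {
        # Assumed axis-to-actuator mapping, for illustration only.
        "boom": reading["right_lever_pitch"],
        "arm": reading["left_lever_pitch"],
        "swing": reading["left_lever_roll"],
        "bucket": reading["right_lever_roll"],
    }
    communication_device.transmit(operation_signal)
```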
The display device RP is configured to display information on the situation around the shovel 100. In the example shown in fig. 20, the display device RP is a multi-display composed of nine monitors, arranged three rows high and three columns wide, and can display the state of the spaces in front of, to the left of, and to the right of the shovel 100. Each monitor is a liquid crystal monitor, an organic EL monitor, or the like. However, the display device RP may instead be composed of one or more curved monitors, or of a projector.
The display device RP may also be a display device wearable by the operator OP, for example a head-mounted display that exchanges information with the remote controller 30R by wireless communication; the head-mounted display may also be wired to the remote controller 30R. The head-mounted display may be transmissive or non-transmissive, and may be of a monocular or binocular type.
The display device RP displays images that allow the operator OP in the remote control room RC to check the surroundings of the shovel 100. That is, even though the operator is in the remote control room RC, the displayed images allow the operator to confirm the situation around the shovel 100 as if in the cab 10 of the shovel 100.
Next, another configuration example of the construction system SYS will be described with reference to fig. 21. In the example shown in fig. 21, the construction system SYS is configured to support construction by the shovel 100. Specifically, the construction system SYS includes a communication device CD that communicates with the shovel 100 and a control device CTR. The control device CTR is configured to determine a dangerous situation based on the information acquired by the information acquisition device E1.
Alternatively, the control device CTR may be configured to estimate the construction situation after a predetermined time has elapsed from the information acquired by the information acquisition device E1, and to determine the dangerous situation from information on the estimated construction situation. It may also be configured to determine a risk level from the estimated construction situation and to determine that a dangerous situation will occur when the risk level exceeds a predetermined value.
Alternatively, the control device CTR may be configured to determine the scene of the construction site based on the information acquired by the information acquisition device E1, or to estimate the scene of the construction site after a predetermined time based on predetermined information.
As described above, the shovel 100 according to the embodiment of the present invention includes: the lower traveling body 1; the upper revolving structure 3 rotatably mounted on the lower traveling body 1; the nonvolatile storage device NM provided on the upper revolving structure 3; the information acquisition device E1 that acquires information related to construction; and the controller 30 as a control device that controls a notification device, the notification device being at least one of the display device D1 and the audio output device D2. The controller 30 is configured to operate the notification device when it determines, based on the information acquired by the information acquisition device E1 and the danger information database DB stored in the nonvolatile storage device NM, that a dangerous situation will occur. Alternatively, the controller 30 may estimate the construction situation after a predetermined time has elapsed from the information acquired by the information acquisition device E1, and operate the notification device when it determines, based on information on the estimated construction situation and the danger information database DB stored in the nonvolatile storage device NM, that a dangerous situation will occur. With this configuration, the shovel 100 can prevent a dangerous situation from actually occurring.
The controller 30 may also be configured to determine a risk level based on the estimated construction situation and the danger information database DB stored in the nonvolatile storage device NM, and to determine that a dangerous situation will occur when the risk level exceeds a predetermined value.
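The threshold-based determination described above can be summarized as a minimal sketch under assumed interfaces; `danger_db.score`, `notifier.notify`, `estimate_situation`, and the threshold value are all illustrative assumptions, not elements of the embodiment.

```python
# Hypothetical sketch of the determination flow: estimate the
# construction situation a predetermined time ahead, score it against
# the danger information database DB, and operate the notification
# device when the risk level exceeds a predetermined value.

PREDETERMINED_RISK = 0.7   # illustrative threshold

def check_and_notify(acquired_info, danger_db, notifier, horizon_s=30):
    future_situation = estimate_situation(acquired_info, horizon_s)
    risk_level = danger_db.score(future_situation)
    if risk_level > PREDETERMINED_RISK:
        notifier.notify(f"Dangerous situation predicted (risk={risk_level:.2f})")

def estimate_situation(acquired_info, horizon_s):
    """Stand-in for the controller's estimation of the work-site state
    after horizon_s seconds have elapsed."""
    return {"objects": acquired_info.get("objects", []), "horizon_s": horizon_s}
```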
The shovel 100 may display information related to the dangerous situation determined to occur on the display device D1, so as to convey more accurately to the operator the content of the dangerous situation that may arise.
The information related to construction may include an image of the periphery of the excavator 100, information related to a construction plan, or information related to material arrangement.
A construction system according to an embodiment of the present invention supports the creation of a construction plan and includes, for example, as shown in fig. 11: the nonvolatile storage device NM; the information input device MD3 serving as the information acquisition device E1 that acquires information related to construction; and the controller MD4 as a control device that controls a notification device, the notification device being at least one of the display device MD1 and the audio output device MD2. The controller MD4 is configured to operate the notification device when it determines, based on the information acquired by the information input device MD3 and the danger information database DB stored in the nonvolatile storage device NM, that a dangerous situation will occur. With this configuration, the construction system can determine whether a dangerous situation would occur at the stage of creating the construction plan, and can thus prevent the dangerous situation from actually occurring.
The preferred embodiments of the present invention have been described in detail above. However, the present invention is not limited to these embodiments, and various modifications, substitutions, and the like can be applied to them without departing from the scope of the present invention. Features described separately can also be combined as long as no technical contradiction results.
This application claims priority based on Japanese Patent Application No. 2019-069472, filed on March 30, 2019, the entire contents of which are incorporated herein by reference.
Description of the symbols
1-lower traveling body, 1C-track, 1CL-left track, 1CR-right track, 2-swing mechanism, 2A-swing hydraulic motor, 2M-travel hydraulic motor, 2ML-left travel hydraulic motor, 2MR-right travel hydraulic motor, 3-upper swing body, 4-boom, 5-arm, 6-bucket, 7-boom cylinder, 8-arm cylinder, 9-bucket cylinder, 10-cab, 11-engine, 11a-alternator, 11b-starting device, 14-main pump, 14a-regulator, 14b-discharge pressure sensor, 14c-oil temperature sensor, 15-pilot pump, 17-control valve, 26-operation device, 29-operation pressure sensor, 30-controller, 30A-danger determination unit, 35-switching valve, 40-control unit, 41-image display unit, 42-operation unit, 70-space recognition device, 70B-rear camera, 70F-front camera, 70L-left camera, 70R-right camera, 71-orientation detection device, 72-information input device, 73-positioning device, 74-engine control device, 75-engine rotation speed dial, 80-battery, 100-excavator, AT-excavation attachment, D1-display device, D2-sound output device, DB-danger information database, E1-information acquisition device, S1-boom angle sensor, S2-arm angle sensor, S3-bucket angle sensor, S4-body inclination sensor, S5-swing angular velocity sensor.

Claims (17)

1. A shovel comprising:
a lower traveling body;
an upper revolving body which is rotatably mounted on the lower traveling body;
a storage device provided on the upper revolving body;
an information acquisition device that acquires information related to construction; and
a control device, wherein
the control device determines a dangerous situation based on the information acquired by the information acquisition device.
2. A shovel comprising:
a lower traveling body;
an upper revolving body which is rotatably mounted on the lower traveling body;
a storage device provided on the upper revolving body;
an information acquisition device that acquires information related to construction; and
a control device, wherein
the control device estimates a construction situation after a predetermined time has elapsed based on the information acquired by the information acquisition device, and determines a dangerous situation based on information on the estimated construction situation.
3. The shovel according to claim 2, wherein
the control device determines a risk level based on the estimated construction situation, and determines that a dangerous situation occurs when the risk level exceeds a predetermined value.
4. The shovel according to claim 1, wherein information related to the dangerous situation determined to occur is displayed on a display device.
5. The shovel according to claim 1, wherein
the information related to construction includes an image of the surroundings of the shovel.
6. The shovel according to claim 1, wherein
the information related to construction includes information related to a construction plan.
7. The shovel according to claim 1, wherein
the information related to construction includes information related to material arrangement.
8. A construction system comprising:
a storage device;
an information acquisition device that acquires information related to construction; and
a control device, wherein
the control device determines a dangerous situation based on the information acquired by the information acquisition device.
9. The construction system according to claim 8, wherein
information related to the dangerous situation determined to occur is displayed on a display device.
10. The construction system according to claim 8, wherein
the information related to construction includes an image of the surroundings of an excavator.
11. The construction system according to claim 8, wherein
the information related to construction includes information related to a construction plan.
12. The construction system according to claim 8, wherein
the information related to construction includes information related to material arrangement.
13. A shovel comprising:
a lower traveling body;
an upper revolving body which is rotatably mounted on the lower traveling body;
a storage device provided on the upper revolving body;
an information acquisition device that acquires information related to construction; and
a control device, wherein
the control device determines a scene of a construction site based on the information acquired by the information acquisition device.
14. The shovel according to claim 13, wherein
the control device estimates the scene of the construction site after a predetermined time based on predetermined information.
15. A construction system comprising:
a storage device;
an information acquisition device that acquires information related to construction; and
a control device, wherein
the control device determines a scene of a construction site based on the information acquired by the information acquisition device.
16. The construction system according to claim 15, wherein
the control device estimates the scene of the construction site after a predetermined time based on predetermined information.
17. A shovel comprising:
a lower traveling body;
an upper revolving body which is rotatably mounted on the lower traveling body;
a storage device provided on the upper revolving body; and
a control device, wherein
the control device determines whether or not an operation content can be executed, based on a type and a position of an object recognized from an output of a space recognition device.
CN202080024901.8A 2019-03-30 2020-03-30 Excavator and construction system Pending CN113631779A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019-069472 2019-03-30
JP2019069472 2019-03-30
PCT/JP2020/014696 WO2020204007A1 (en) 2019-03-30 2020-03-30 Shovel excavator and construction system

Publications (1)

Publication Number Publication Date
CN113631779A true CN113631779A (en) 2021-11-09

Family

ID=72667854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080024901.8A Pending CN113631779A (en) 2019-03-30 2020-03-30 Excavator and construction system

Country Status (6)

Country Link
US (1) US20220018096A1 (en)
EP (1) EP3951089A4 (en)
JP (1) JPWO2020204007A1 (en)
KR (1) KR20210140737A (en)
CN (1) CN113631779A (en)
WO (1) WO2020204007A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113646487B (en) * 2019-04-04 2023-07-07 株式会社小松制作所 System including work machine, method executed by computer, method for manufacturing learned posture estimation model, and data for learning
US11278361B2 (en) 2019-05-21 2022-03-22 Verb Surgical Inc. Sensors for touch-free control of surgical robotic systems
CN114080481B (en) * 2019-07-17 2024-01-16 住友建机株式会社 Construction machine and support device for supporting work by construction machine
JP2023050799A (en) * 2021-09-30 2023-04-11 株式会社小松製作所 Display system for work machine and display method for work machine
WO2023153722A1 (en) * 2022-02-08 2023-08-17 현대두산인프라코어(주) Transparent display-based work assistance method and device for construction machinery
US20240018746A1 (en) * 2022-07-12 2024-01-18 Caterpillar Inc. Industrial machine remote operation systems, and associated devices and methods
WO2024075670A1 (en) * 2022-10-03 2024-04-11 日立建機株式会社 Working machine
JP7365738B1 (en) * 2023-04-12 2023-10-20 サン・シールド株式会社 Crane operation simulation system and crane operation simulation method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008248613A (en) * 2007-03-30 2008-10-16 Hitachi Constr Mach Co Ltd Work machine periphery monitoring device
CN103180522A (en) * 2010-10-22 2013-06-26 日立建机株式会社 Work machine peripheral monitoring device
US20140343820A1 (en) * 2011-12-13 2014-11-20 Volvo Construction Equipment Ab All-round hazard sensing device for construction apparatus
JP2019002242A (en) * 2017-06-19 2019-01-10 株式会社神戸製鋼所 Overturn preventing device and work machine

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6662880B2 (en) * 2000-06-09 2003-12-16 Gedalyahu Manor Traveling rolling digger for sequential hole drilling and for producing sequential cultivated spots in soil
JP5755576B2 (en) * 2012-01-25 2015-07-29 住友重機械工業株式会社 Driving assistance device
JP6545430B2 (en) * 2013-03-19 2019-07-17 住友重機械工業株式会社 Shovel
US10294636B2 (en) * 2014-12-24 2019-05-21 Cqms Pty Ltd System and method of estimating fatigue in a lifting member
KR101762498B1 (en) 2015-06-16 2017-07-27 백주혁 Smart Seeder
US10344450B2 (en) * 2015-12-01 2019-07-09 The Charles Machine Works, Inc. Object detection system and method
JP6819462B2 (en) * 2017-05-30 2021-01-27 コベルコ建機株式会社 Work machine

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008248613A (en) * 2007-03-30 2008-10-16 Hitachi Constr Mach Co Ltd Work machine periphery monitoring device
CN103180522A (en) * 2010-10-22 2013-06-26 日立建机株式会社 Work machine peripheral monitoring device
US20140343820A1 (en) * 2011-12-13 2014-11-20 Volvo Construction Equipment Ab All-round hazard sensing device for construction apparatus
JP2019002242A (en) * 2017-06-19 2019-01-10 株式会社神戸製鋼所 Overturn preventing device and work machine

Also Published As

Publication number Publication date
JPWO2020204007A1 (en) 2020-10-08
US20220018096A1 (en) 2022-01-20
EP3951089A4 (en) 2022-09-14
KR20210140737A (en) 2021-11-23
EP3951089A1 (en) 2022-02-09
WO2020204007A1 (en) 2020-10-08

Similar Documents

Publication Publication Date Title
CN113631779A (en) Excavator and construction system
CN112996963B (en) Shovel and shovel support system
WO2019189203A1 (en) Shovel
WO2020196874A1 (en) Construction machine and assistance system
CN112955610A (en) Shovel, information processing device, information processing method, information processing program, terminal device, display method, and display program
EP4012120A1 (en) Excavator and information processing device
US20220002979A1 (en) Shovel and shovel management apparatus
US20220136215A1 (en) Work machine and assist device to assist in work with work machine
US20210270013A1 (en) Shovel, controller for shovel, and method of managing worksite
US20240026654A1 (en) Construction machine and support system of construction machine
US20230008338A1 (en) Construction machine, construction machine management system, and machine learning apparatus
US20230009234A1 (en) Information communications system for construction machine and machine learning apparatus
JP2021095718A (en) Shovel and information processor
KR20210141950A (en) shovel
EP4368781A2 (en) Work machine and information processing device
JP2024068678A (en) Work machine and information processing device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination