WO2019215924A1 - Operation data classification system, operation data classification method, and program - Google Patents

Operation data classification system, operation data classification method, and program Download PDF

Info

Publication number
WO2019215924A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
work
basic data
feature
timeline
Prior art date
Application number
PCT/JP2018/018380
Other languages
French (fr)
Japanese (ja)
Inventor
俊二 菅谷
Original Assignee
株式会社オプティム
Priority date
Filing date
Publication date
Application filed by 株式会社オプティム
Priority to PCT/JP2018/018380
Publication of WO2019215924A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 - Operations research, analysis or management
    • G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/906 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2433 - Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 - Recognition of whole body movements, e.g. for sport training

Definitions

  • The present invention relates to a work data classification system, work data classification method, and program that acquire data relating to various works, such as image data from cameras, logs from machines, and logs from users' smartphone terminals, classify which work unknown data relates to, and generate a timeline for that work.
  • A method has been proposed that allows each piece of sensor data to be properly classified using only sensor data collected in time series from a large number of sensors, even in an environment where noise exists and cannot be separated and measured (Patent Document 1).
  • An object of the present invention is to provide a work data classification system, work data classification method, and program that can acquire a plurality of basic data relating to various works, classify from the feature quantities which work unknown basic data relates to, and further generate a timeline for the classified work.
  • the present invention provides the following solutions.
  • The invention according to the first feature provides a work data classification system comprising: acquisition means for acquiring a plurality of basic data relating to various works performed by a person or device; extraction means for extracting feature quantities of the plurality of basic data; storage means for storing the feature quantities in association with the works; determination means for determining which of the stored feature quantities of the basic data the feature quantity of unknown basic data is similar to; classification means for classifying, based on the result of the determination, what work by the person or device the unknown basic data relates to; and timeline generation means for determining the content of the work from the classified data and generating a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
  • According to the invention of the first feature, the work data classification system comprises acquisition means that acquires a plurality of basic data relating to various works performed by a person or device, extraction means that extracts feature quantities of the plurality of basic data, storage means that stores the feature quantities in association with the works, determination means that determines which of the stored feature quantities the feature quantity of unknown basic data is similar to, classification means that classifies, based on the determination result, what work by the person or device the unknown basic data relates to, and timeline generation means that determines the content of the work from the classified data and generates a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
  • The invention according to the first feature is in the category of a work data classification system, but a work data classification method and a program have the same operations and effects.
  • The invention according to the second feature provides the work data classification system according to the first feature, further comprising: schedule storage means for storing a schedule of the work performed by the person or device; and schedule comparison means for comparing the timeline generated by the timeline generation means with the stored schedule.
  • According to the invention of the second feature, the work data classification system according to the first feature comprises schedule storage means that stores a schedule of the work performed by the person or device, and schedule comparison means that compares the timeline generated by the timeline generation means with the stored schedule.
  • The invention according to the third feature provides the work data classification system according to the first or second feature, wherein the determination means performs machine learning on past feature quantities to determine which of the stored feature quantities of the basic data the feature quantity of the unknown basic data is similar to.
  • According to the invention of the third feature, in the work data classification system according to the first or second feature, the determination means performs machine learning on past feature quantities and determines which of the stored feature quantities of the basic data the feature quantity of the unknown basic data is similar to.
  • The invention according to the fourth feature provides the work data classification system according to any one of the first to third features, wherein the schedule comparison means outputs with emphasis when the generated timeline is delayed with respect to the schedule or when the difference is large.
  • According to the invention of the fourth feature, in the work data classification system according to any one of the first to third features, the schedule comparison means outputs with emphasis when the generated timeline is delayed with respect to the schedule or when the difference is large.
  • The invention according to the fifth feature provides a work data classification method comprising the steps of: acquiring a plurality of basic data relating to various works performed by a person or device; extracting feature quantities of the plurality of basic data; storing the feature quantities in association with the works; determining which of the stored feature quantities of the basic data the feature quantity of unknown basic data is similar to; classifying, based on the result of the determination, what work by the person or device the unknown basic data relates to; and determining the content of the work from the classified data and generating a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
  • The invention according to the sixth feature provides a program that causes a work data classification system to execute the steps of: acquiring a plurality of basic data relating to various works performed by a person or device; extracting feature quantities of the plurality of basic data; storing the feature quantities in association with the works; determining which of the stored feature quantities of the basic data the feature quantity of unknown basic data is similar to; classifying, based on the result of the determination, what work by the person or device the unknown basic data relates to; and determining the content of the work from the classified data and generating a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
  • According to the present invention, it is possible to provide a work data classification system, work data classification method, and program that can acquire a plurality of basic data relating to various works, classify from the feature quantities which work unknown basic data relates to, and further generate a timeline for the classified work.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the functional blocks of the apparatus 100 and the computer 200 and the relationship between the functions.
  • FIG. 3 is a flowchart of the timeline generation process.
  • FIG. 4 is a flowchart when the computer 200 classifies the unknown basic data according to the number of determinations.
  • FIG. 5 is an example of a flowchart for determining whether or not the feature amount of unknown basic data is similar to the feature amount of stored work data in the computer 200.
  • FIG. 6 is a flowchart in the case of performing determination processing by machine learning of a combination of past work and feature values of basic data.
  • FIG. 7 is a diagram illustrating the functional blocks of the apparatus 100 and the computer 200 and the relationship between the functions when performing schedule comparison processing.
  • FIG. 8 is a flowchart for performing the schedule comparison process.
  • FIG. 9 is an example of a table showing the data structure of work and basic data stored in the storage unit 230.
  • FIG. 10 is an example of a table showing the data structure of unknown basic data.
  • FIG. 11 is an example of a table including determination results of stored basic data having a feature quantity similar to unknown basic data.
  • FIG. 12 is an example of the timeline output display of the present invention.
  • FIG. 13 is an example of a detailed display of work status.
  • FIG. 14 is an example of a work time ratio display.
  • FIG. 15 is an example of the schedule comparison display of the present invention.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described with reference to FIG.
  • the work data classification system includes an apparatus 100, a computer 200, and a communication network 300.
  • the number of devices 100 is not limited to one and may be plural.
  • a WEB camera is illustrated as the device 100A
  • a wearable device is illustrated as the device 100B.
  • the computer 200 is not limited to a real device, and may be a virtual device. The computer 200 may be the same device as the device 100.
  • the apparatus 100 includes a sensor unit 10, a control unit 110, a communication unit 120, and a storage unit 130 as shown in FIG.
  • the computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 implements the acquisition module 211 in cooperation with the communication unit 220 and the storage unit 230. Further, the control unit 210 implements an extraction module 212, a determination module 213, and a classification module 214 in cooperation with the storage unit 230. In addition, the control unit 210 implements the timeline generation module 215 in cooperation with the storage unit 230 and the input / output unit 240.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the apparatus 100 and the computer 200.
  • the apparatus 100 includes the sensor unit 10 capable of acquiring images, machine logs, and data of various sensors, and is an apparatus capable of data communication with the computer 200.
  • a WEB camera is illustrated as the apparatus 100A
  • a wearable device is illustrated as the apparatus 100B.
  • Various work data may be stored in the storage unit 130.
  • the computer 200 is a computing device capable of data communication with the device 100.
  • A desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • the storage module 231 of the computer 200 stores in the storage unit 230 the work associated with the features of a plurality of basic data (step S01).
  • the work associated with the feature quantities of the plurality of basic data may be acquired from another computer or a storage medium, or may be created by the computer 200. Further, a dedicated database may be provided in the storage unit 230.
  • the basic data is a generic term that represents images, machine logs, and data of various sensors acquired from the apparatus 100. The basic data always includes information for specifying the time of work.
  • FIG. 9 is an example of a table showing the data structure of the work stored in the storage unit 230 and the basic data.
  • For work X, two basic data are stored: data A, whose feature quantity is v and whose data type is moving image, and data B, whose feature quantity is w and whose data type is acceleration sensor.
  • For work Y, three basic data are stored: data C, whose feature quantity is x and whose data type is moving image; data D, whose feature quantity is y and whose data type is machine log; and data E, whose feature quantity is z and whose data type is GPS.
  • Each data itself may be stored together with these data.
  • FIG. 9 shows an example in which the storage destination of each data itself is described in the rightmost column of the table.
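  • As a concrete illustration (a sketch, not part of the patent text), the stored association between works and their basic data shown in FIG. 9 could be represented as follows; the `StoredBasicData` class, its field names, and the feature values are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class StoredBasicData:
    name: str        # e.g. "data A"
    feature: list    # feature quantity extracted from the data
    data_type: str   # "moving image", "acceleration sensor", "machine log", "GPS", ...
    location: str    # where the data itself is stored (rightmost column of FIG. 9)

# Works X and Y, each associated with the feature quantities of their basic data
stored_works = {
    "work X": [
        StoredBasicData("data A", [0.9, 0.1], "moving image", "/store/a"),
        StoredBasicData("data B", [0.2, 0.8], "acceleration sensor", "/store/b"),
    ],
    "work Y": [
        StoredBasicData("data C", [0.7, 0.3], "moving image", "/store/c"),
        StoredBasicData("data D", [0.4, 0.6], "machine log", "/store/d"),
        StoredBasicData("data E", [0.1, 0.9], "GPS", "/store/e"),
    ],
}
```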
  • the apparatus 100 transmits unknown basic data to the computer 200 (step S02), and the acquisition module 211 of the computer 200 acquires unknown basic data (step S03).
  • When there are a plurality of unknown basic data, all of them are acquired as data relating to one work.
  • The acquisition module 211 may instruct the device 100 to transmit unknown basic data, and the device 100 may transmit the unknown basic data in response to the instruction.
  • The acquisition module 211 may acquire not only basic data that the apparatus 100 has acquired in real time, but also basic data that the apparatus 100 acquired in the past and stored in the storage unit 130.
  • FIG. 10 is an example of a table showing the data structure of unknown basic data.
  • The work consists of three basic data: data F, whose feature quantity is s and whose data type is GPS; data G, whose feature quantity is t and whose data type is machine log; and data H, whose feature quantity is u and whose data type is moving image.
  • Each data itself may be stored together with these data.
  • FIG. 10 shows an example in which the storage destination of each data itself is described in the rightmost column of the table.
  • the extraction module 212 of the computer 200 extracts the feature quantity of the unknown basic data acquired in step S03 (step S04).
  • The feature quantity here may be chosen according to the type of data, or may be determined by analyzing the content of the data, and how it is defined may depend on the system. For example, if the data type is a moving image, the result of analyzing the images in the moving image may be used as the feature quantity; if the data type is an acceleration sensor, the amount of motion may be used as the feature quantity; in the case of a machine log, an analysis of the machine operations and their times may be used as the feature quantity; and in the case of GPS data, the location, altitude, and weather may be used as the feature quantity with reference to map data, weather data, and the like.
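  • A minimal sketch of this per-type feature extraction (the helper functions and their names are hypothetical stand-ins for real image analysis, motion estimation, log parsing, and map/weather lookups):

```python
def extract_feature(data_type: str, raw_data) -> list:
    """Return a feature quantity chosen according to the data type."""
    if data_type == "moving image":
        return analyze_frames(raw_data)       # what is seen in the video
    if data_type == "acceleration sensor":
        return motion_amount(raw_data)        # amount/pattern of movement
    if data_type == "machine log":
        return operation_and_time(raw_data)   # which operations ran, and when
    if data_type == "GPS":
        return location_context(raw_data)     # place, altitude, weather via external data
    raise ValueError(f"unsupported data type: {data_type}")

# Placeholder implementations so the sketch runs end to end
def analyze_frames(video):   return [1.0, 0.0, 0.0]
def motion_amount(samples):  return [0.0, 1.0, 0.0]
def operation_and_time(log): return [0.0, 0.0, 1.0]
def location_context(gps):   return [0.5, 0.5, 0.0]

print(extract_feature("GPS", {"lat": 35.0, "lon": 139.0}))
```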
  • The determination module 213 of the computer 200 determines which feature quantity of the basic data stored in the storage unit 230 the feature quantity of the unknown basic data is similar to (step S05). Details of the determination method will be described later.
  • FIG. 11 is an example of a table including determination results of stored basic data having a feature quantity similar to unknown basic data.
  • As an example, it shows that data F, whose feature quantity is s, is similar to data E; data G, whose feature quantity is t, is similar to data D; and data H, whose feature quantity is u, is similar to data A.
  • The classification module 214 of the computer 200 classifies which work the acquired unknown basic data relates to (step S06). From FIGS. 9 and 11, it can be seen that data F is similar to data E of work Y, data G is similar to data D of work Y, and data H is similar to data A of work X.
  • As a classification method, for example, when classification gives priority to the number of determinations, the work represented by data F, data G, and data H can be classified as work Y, based on the result that two of the three data are similar to work Y and one is similar to work X.
  • If the determinations do not allow an appropriate classification, the classification module 214 may treat the data as unclassifiable or leave it unclassified.
  • the timeline generation module 215 of the computer 200 determines the content of the work based on the result classified in step S06, and generates a timeline based on the work time associated with the acquired basic data ( Step S07).
  • A timeline, or alternatively the ratio of work time, is generated for each person or device performing the work X.
  • The timeline includes the work content and the work time.
  • To determine the work time, the information for specifying the work time included in the basic data is used, together with information on which device acquired the basic data.
  • the generated timeline information may be output by display or voice.
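  • The following sketch illustrates this step under simplifying assumptions (classified records are given as tuples of person/device, work, start time, and end time; the function names are hypothetical): records are grouped into a per-person timeline, and the share of each work in the working day is computed.

```python
from collections import defaultdict
from datetime import datetime

# (person or device, classified work, start, end) -- the times come from the basic data
records = [
    ("excavator A", "slope formation", datetime(2018, 2, 2, 8, 0),  datetime(2018, 2, 2, 12, 0)),
    ("excavator A", "loading",         datetime(2018, 2, 2, 13, 0), datetime(2018, 2, 2, 18, 0)),
    ("worker A",    "inspection",      datetime(2018, 2, 2, 9, 0),  datetime(2018, 2, 2, 11, 0)),
]

def build_timeline(records):
    """Group classified records into a timeline for each person or device."""
    timeline = defaultdict(list)
    for who, work, start, end in sorted(records, key=lambda r: (r[0], r[2])):
        timeline[who].append({"work": work, "start": start, "end": end})
    return dict(timeline)

def work_time_ratio(records, total_hours=9.0):
    """Share of each work in the working day; the total may be less than 100%."""
    ratio = defaultdict(float)
    for who, work, start, end in records:
        ratio[(who, work)] += (end - start).total_seconds() / 3600.0 / total_hours
    return {key: round(share * 100, 1) for key, share in ratio.items()}

print(build_timeline(records))
print(work_time_ratio(records))
```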
  • FIG. 12 is an example of the timeline output display of the present invention.
  • the work status of a certain construction site from 8:00 to 18:00 on February 2, 2018 is displayed as a timeline.
  • the timeline for each work is displayed.
  • FIG. 13 is an example of detailed display of work status.
  • It shows an example in which the user of the work data classification system has selected the slope formation performed by excavator A from 8:00 to 12:00.
  • As in balloon 1301 in FIG. 13, an image, a schematic diagram, or the like based on information included in the acquired basic data may also be displayed so that the work position and work status at each time from 8:00 to 12:00 can be understood.
  • FIG. 14 is an example of a work time ratio display. Instead of timeline generation, work time ratio generation may be performed.
  • the window 1401 shows the ratio of work during the work time of February 2, 2018, 9 hours.
  • the percentage of each work during the work time is displayed as a percentage for each of the excavator A, the excavator B, the crawler dumper A, and the worker A as the person or equipment that performed the work. Since not all work of a person or equipment can be specified, the total percentage of work time does not necessarily need to be 100%.
  • the operations of all the people or devices are collectively displayed. However, individual display may be performed for each person or for each device.
  • In this way, a plurality of basic data relating to various works is acquired, the work that unknown basic data relates to is classified from the feature quantities, and a timeline for the classified work can further be generated.
  • FIG. 2 is a diagram illustrating the functional blocks of the apparatus 100 and the computer 200 and the relationship between the functions.
  • the apparatus 100 includes a sensor unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 implements the acquisition module 211 in cooperation with the communication unit 220 and the storage unit 230. Further, the control unit 210 implements an extraction module 212, a determination module 213, and a classification module 214 in cooperation with the storage unit 230.
  • control unit 210 implements a timeline generation module 215 in cooperation with the storage unit 230 and the input / output unit 240.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the apparatus 100 and the computer 200.
  • the apparatus 100 includes the sensor unit 10 capable of acquiring images, machine logs, and data of various sensors, and is an apparatus capable of data communication with the computer 200.
  • a WEB camera is illustrated as the apparatus 100A
  • a wearable device is illustrated as the apparatus 100B.
  • Various basic data may be stored in the storage unit 130.
  • the apparatus 100 includes a sensor capable of acquiring data of images, machine logs, and various sensors as the sensor unit 10. Further, the obtained data is assumed to have the accuracy necessary for extracting the feature amount.
  • the control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
  • As the communication unit 120, the apparatus 100 includes a device for enabling communication with other devices, for example, a WiFi (Wireless Fidelity) device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third-generation or fourth-generation mobile communication system. A wired LAN connection may also be used.
  • the storage unit 130 includes a data storage unit such as a hard disk or a semiconductor memory, and stores captured images, necessary data such as imaging conditions, and the like.
  • the computer 200 is a computing device capable of data communication with the device 100.
  • A desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • the computer 200 is not limited to a real device, and may be a virtual device.
  • the computer 200 may be the same device as the device 100.
  • the control unit 210 includes a CPU, RAM, ROM, and the like.
  • the control unit 210 implements the acquisition module 211 in cooperation with the communication unit 220 and the storage unit 230. Further, the control unit 210 implements an extraction module 212, a determination module 213, and a classification module 214 in cooperation with the storage unit 230. Further, the control unit 210 implements a timeline generation module 215 in cooperation with the storage unit 230 and the input / output unit 240.
  • As the communication unit 220, the computer 200 includes a device for enabling communication with other devices, for example, a WiFi device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third-generation or fourth-generation mobile communication system. A wired LAN connection may also be used.
  • The storage unit 230 includes a data storage unit such as a hard disk or semiconductor memory, and stores the works associated with the feature quantities of a plurality of basic data, teacher data, determination results, classification results, data necessary for processing, and the like.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the storage unit 230 may include a database for storing a work associated with feature quantities of a plurality of basic data.
  • The input/output unit 240 has the functions necessary for using the work data classification system.
  • As input, it may take the form of a liquid crystal display with a touch-panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the apparatus, a microphone for voice recognition, and the like.
  • As output, forms such as a liquid crystal display, a PC display, projection onto a projector, and audio output are conceivable.
  • The functions of the present invention are not particularly limited by the input/output method.
  • FIG. 3 is a flowchart of the timeline generation process. Processing executed by each module described above will be described in accordance with this processing.
  • the storage module 231 of the computer 200 stores in the storage unit 230 the work associated with the feature quantities of a plurality of basic data (step S301).
  • the basic data is a generic term that represents images, machine logs, and data of various sensors acquired from the apparatus 100.
  • the basic data always includes information for specifying the time of work.
  • the work associated with the feature quantities of the plurality of basic data may be acquired from another computer or a storage medium, or may be created by the computer 200. Further, a dedicated database may be provided in the storage unit 230.
  • The process of step S301 may be skipped when an association between works and the feature quantities of a plurality of basic data is already stored, or when there is no new association between a work and feature quantities of basic data to be stored.
  • FIG. 9 is an example of a table showing the data structure of the work stored in the storage unit 230 and the basic data.
  • For work X, two basic data are stored: data A, whose feature quantity is v and whose data type is moving image, and data B, whose feature quantity is w and whose data type is acceleration sensor.
  • For work Y, three basic data are stored: data C, whose feature quantity is x and whose data type is moving image; data D, whose feature quantity is y and whose data type is machine log; and data E, whose feature quantity is z and whose data type is GPS.
  • Each data itself may be stored together with these data.
  • FIG. 9 shows an example in which the storage destination of each data itself is described in the rightmost column of the table.
  • the acquisition module 211 of the computer 200 requests the apparatus 100 to transmit basic data (step 302).
  • When there are a plurality of apparatuses 100, it is assumed that transmission of basic data is requested from all of them.
  • control unit 110 of the apparatus 100 stores the basic data in the storage unit 130 (step S303).
  • the device 100 transmits unknown basic data to the computer 200 via the communication unit 120 (step S304).
  • the acquisition module 211 of the computer 200 acquires unknown basic data (step S305).
  • When there are a plurality of devices 100, that is, when there are a plurality of unknown basic data, all of them are acquired as basic data relating to one work.
  • the acquisition module 211 may not only acquire basic data acquired in real time by the device 100 but also acquire basic data acquired by the device 100 in the past and stored in the storage unit 130.
  • FIG. 10 is an example of a table showing the data structure of unknown basic data.
  • The work consists of three basic data: data F, whose feature quantity is s and whose data type is GPS; data G, whose feature quantity is t and whose data type is machine log; and data H, whose feature quantity is u and whose data type is moving image.
  • Each data itself may be stored together with these data.
  • FIG. 10 shows an example in which the storage destination of each data itself is described in the rightmost column of the table.
  • the extraction module 212 of the computer 200 extracts the feature quantity of the unknown basic data acquired in step S305 (step S306).
  • The feature quantity here may be chosen according to the type of data, or may be determined by analyzing the content of the data, and how it is defined may depend on the system. For example, if the data type is a moving image, the result of analyzing the images in the moving image may be used as the feature quantity; if the data type is an acceleration sensor, the amount of motion may be used as the feature quantity; in the case of a machine log, an analysis of the machine operations and their times may be used as the feature quantity; and in the case of GPS data, the location, altitude, and weather may be used as the feature quantity with reference to map data, weather data, and the like.
  • The determination module 213 of the computer 200 determines which feature quantity of the basic data stored in the storage unit 230 the feature quantity of the unknown basic data is similar to (step S307). Details of the determination method will be described later.
  • FIG. 11 is an example of a table including determination results of stored basic data having a feature quantity similar to unknown basic data.
  • As an example, it shows that data F, whose feature quantity is s, is similar to data E; data G, whose feature quantity is t, is similar to data D; and data H, whose feature quantity is u, is similar to data A.
  • the classification module 214 of the computer 200 classifies which work the acquired unknown basic data is related to (step S308).
  • From FIGS. 9 and 11, it can be seen that data F is similar to data E of work Y, data G is similar to data D of work Y, and data H is similar to data A of work X.
  • As a classification method, for example, when classification gives priority to the number of determinations, the work represented by data F, data G, and data H can be classified as work Y, based on the result that two of the three data are similar to work Y and one is similar to work X.
  • If the determinations do not allow an appropriate classification, the classification module 214 may treat the data as unclassifiable or leave it unclassified.
  • the timeline generation module 215 of the computer 200 determines the content of the work based on the result classified in step S308, and generates a timeline based on the work time associated with the acquired basic data ( Step S309).
  • A timeline, or alternatively the ratio of work time, is generated for each person or device performing the work X.
  • The timeline includes the work content and the work time.
  • To determine the work time, the information for specifying the work time included in the basic data is used, together with information on which device acquired the basic data.
  • the generated timeline information may be output by display or voice.
  • FIG. 12 is an example of the timeline output display of the present invention.
  • the work status of a certain construction site from 8:00 to 18:00 on February 2, 2018 is displayed as a timeline.
  • the timeline for each work is displayed.
  • FIG. 13 is an example of detailed display of work status.
  • It shows an example in which the user of the work data classification system has selected the slope formation performed by excavator A from 8:00 to 12:00.
  • As in balloon 1301 in FIG. 13, an image, a schematic diagram, or the like based on information included in the acquired basic data may also be displayed so that the work position and work status at each time from 8:00 to 12:00 can be understood.
  • FIG. 14 is an example of a work time ratio display. Instead of timeline generation, work time ratio generation may be performed.
  • the window 1401 shows the ratio of work during the work time of February 2, 2018, 9 hours.
  • the percentage of each work during the work time is displayed as a percentage for each of the excavator A, the excavator B, the crawler dumper A, and the worker A as the person or equipment that performed the work. Since not all work of a person or equipment can be specified, the total percentage of work time does not necessarily need to be 100%.
  • the operations of all the people or devices are collectively displayed. However, individual display may be performed for each person or for each device.
  • In this way, a plurality of basic data relating to various works is acquired, the work that unknown basic data relates to is classified from the feature quantities, and a timeline for the classified work can further be generated.
  • FIG. 4 is a flowchart when the computer 200 classifies the unknown basic data according to the number of determinations.
  • The configuration is assumed to be the same as that of the apparatus 100 and the computer 200 of FIG. 2. This process corresponds to steps S307 and S308 in the flowchart of FIG. 3.
  • The classification of basic data according to the number of determinations of unknown basic data will be described as the processing that follows the flow up to step S306 in FIG. 3.
  • the data examples shown in FIGS. 9, 10, and 11 are used.
  • the feature amount of each basic data has been extracted in step S306.
  • the number of acquired unknown basic data is counted (step S401). In the example of FIG. 10 described above, the number of unknown basic data is three.
  • step S402 one unknown basic data is selected.
  • data F in FIG. 10 is selected.
  • the determination module 213 of the computer 200 determines which feature value of the basic data stored in the storage unit 230 is similar to the feature value of the unknown basic data (step S403).
  • the feature amount of the data F is determined to be similar to the data E as shown in FIG.
  • the determination module 213 confirms whether determination of all unknown basic data has been completed (step S404).
  • Since determination of all unknown basic data has not yet been completed, the process returns to step S402 and one unknown basic data is selected.
  • data G in FIG. 10 is selected.
  • step S403 the determination module 213 determines that the feature amount of the data G is similar to the data D as shown in FIG.
  • the determination module 213 confirms again whether or not determination of all unknown basic data has been completed in step S404.
  • step S402 determination of all basic data has not been completed yet, so the process returns to step S402 and one unknown basic data is selected.
  • the data H in FIG. 10 is selected.
  • step S403 the determination module 213 determines that the feature amount of the data H is similar to the data A as shown in FIG.
  • the determination module 213 confirms whether or not determination of all unknown basic data has been completed in step S404.
  • In step S405, the unknown basic data are classified into the work for which the largest number of the related basic data were determined to be similar in feature quantity.
  • Here, data F is similar to data E of work Y, data G is similar to data D of work Y, and data H is similar to data A of work X. Among the three unknown basic data, the number determined to be similar to feature quantities of work Y is therefore two, and the number determined to be similar to feature quantities of work X is one. The classification module 214 accordingly classifies data F, data G, and data H as basic data relating to work Y.
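  • A minimal sketch of this vote-counting classification (the per-data determination results are assumed to be already available as a mapping to the similar stored data and its work; the names are hypothetical):

```python
from collections import Counter

# Result of step S403 for each unknown basic data: the stored data it resembles and that data's work
similar_to = {
    "data F": ("data E", "work Y"),
    "data G": ("data D", "work Y"),
    "data H": ("data A", "work X"),
}

def classify_by_votes(similar_to):
    """Classify the set of unknown basic data into the work with the most similarity votes (step S405)."""
    votes = Counter(work for _stored, work in similar_to.values())
    if not votes:
        return None  # no similar data found: treat as unclassifiable
    work, _count = votes.most_common(1)[0]
    return work

print(classify_by_votes(similar_to))  # -> "work Y" (two votes against one for work X)
```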
  • In this way, a plurality of basic data relating to various works is acquired, a certain regularity or correlation in the series of works is determined as feature quantities, and the feature quantities are stored in association with the works.
  • FIG. 5 is an example of a flowchart for determining whether or not the feature quantity of the unknown basic data is similar to the feature quantity of the stored basic data in the computer 200.
  • The configuration is assumed to be the same as that of the apparatus 100 and the computer 200 of FIG. 2.
  • The feature quantity determination process will be described as the processing that follows the flow up to step S306 in FIG. 3.
  • the data examples shown in FIGS. 9, 10, and 11 are used.
  • For the unknown basic data acquired in step S305, the feature quantity of each basic data has already been extracted in step S306.
  • one unknown basic data is selected (step S501).
  • data F in FIG. 10 is selected.
  • the determination module 213 of the computer 200 selects stored basic data to be compared (step S502). Here, it is assumed that data A in FIG. 9 is selected.
  • the determination module 213 obtains the inner product of the feature quantity s of the data F that is unknown basic data and the feature quantity v of the data A that is stored basic data to be compared (step S503).
  • the determination module 213 obtains the product of the absolute value of the feature value s of the data F, which is unknown basic data, and the absolute value of the feature value v of the data A, which is stored basic data to be compared (step S504).
  • the determination module 213 obtains a difference between the inner product obtained in step S503 and the product obtained in step S504 (step S505).
  • If the difference obtained in step S505 is within a predetermined range, the determination module 213 determines that data F is similar to data A (step S506); if the difference is outside the predetermined range, it determines that data F is not similar to data A (step S507). Here, it is assumed that data F is determined not to be similar to data A.
  • the determination module 213 confirms whether the determination of all stored basic data has been completed (step S508). If the determination has not ended, the process returns to step S502 to continue the process. If completed, the process proceeds to the next step S509. That is, when all the determinations as to whether the data F, which is unknown basic data, are similar to the data A, data B, data C, data D, data E, which are stored basic data, have been completed, Proceed to the next Step S509. Here, it is assumed that data F is determined to be similar to data E only.
  • the determination module 213 confirms whether the determination of all unknown basic data has been completed (step S509). If the determination has not ended, the process returns to step S501 to continue the process. If it has ended, the feature amount determination processing ends. That is, when it is determined that the stored basic data is similar to the data F, data G, and data H, which are unknown basic data, the process ends. Here, it is assumed that the data F is similar to only the data E, the data G is similar to only the data D, and the data H is similar to only the data A.
  • In step S308, classification is performed based on the determination results that data F is similar to data E of work Y, data G is similar to data D of work Y, and data H is similar to data A of work X.
  • Here, priority is given to the number of determinations: based on the result that two of the three data are similar to work Y, the work represented by data F, data G, and data H is classified as work Y.
  • Alternatively, it may be possible to classify the work represented by data F, data G, and data H as work X by placing emphasis on the determination result that data H is similar to data A of work X.
  • In this example, data F was similar only to data E, data G only to data D, and data H only to data A; however, a plurality of stored basic data may be retained as similar candidates, and when the determination of all stored basic data is completed in step S508, a determination may then be made using the differences obtained in step S505.
  • If an appropriate classification cannot be made, the classification module 214 may determine that classification is not possible, or may leave the data unclassified.
  • The feature quantity determination method in the flowchart of FIG. 5 takes the feature quantity vector of unknown basic data as a and the feature quantity vector of stored basic data as b, and determines that b is similar when the inner product a·b is close to the product of the magnitudes |a| × |b|, that is, when the difference between them is within a predetermined range.
  • However, this is merely an example of a feature quantity determination method, and the determination method in step S308 is not limited to this method.
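  • In vector terms, steps S503 to S507 check whether the inner product a·b is close to |a| × |b|, i.e. whether the two feature vectors point in nearly the same direction. A sketch with NumPy, where the tolerance value and the example vectors are arbitrary assumptions:

```python
import numpy as np

def is_similar(a, b, tolerance=0.05):
    """Similar if |a|*|b| and the inner product a.b differ by at most `tolerance` (steps S503-S507)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    inner = np.dot(a, b)                           # step S503: inner product
    norms = np.linalg.norm(a) * np.linalg.norm(b)  # step S504: product of the magnitudes
    return abs(norms - inner) <= tolerance         # steps S505-S507: compare the difference to a range

s = [0.11, 0.88]  # feature quantity of unknown data F
v = [0.90, 0.15]  # feature quantity of stored data A
z = [0.10, 0.90]  # feature quantity of stored data E
print(is_similar(s, v))  # False: data F is not similar to data A
print(is_similar(s, z))  # True:  data F is similar to data E
```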
  • In this way, a plurality of basic data relating to various works is acquired, a certain regularity or correlation in the series of works is determined as feature quantities, and the feature quantities are stored in association with the works.
  • FIG. 6 is a flowchart for the case of performing the determination process by machine learning on combinations of past works and feature quantities of basic data. The configuration is assumed to be the same as that of the apparatus 100 and the computer 200 of FIG. 2.
  • the storage module 231 of the computer 200 stores a combination of work and feature quantities of a plurality of basic data in the storage unit 230 as teacher data (step S601).
  • a combination of work and feature quantities of a plurality of basic data may be acquired from another computer or a storage medium, or may be created by the computer 200.
  • the feature quantity of the unknown basic data classified in the past and the classified work may be used as the teacher data.
  • a database dedicated to teacher data may be provided in the storage unit 230.
  • The determination module 213 of the computer 200 performs machine learning of the determination method using the teacher data (step S602). Supervised learning is assumed as the machine learning method here. Based on the large amount of teacher data stored in the storage unit 230 by the storage module 231, the module learns which feature quantities of basic data should be determined to be similar to the feature quantities of the basic data of each work. The processing in steps S601 and S602 may be skipped when machine learning of the determination method is unnecessary. Since accuracy is expected to decrease when the amount of teacher data is small, it is desirable in that case to use the flow of FIG. 5. The advantage of training the determination module 213 by supervised learning is that, by using the feature quantities of unknown basic data classified in the past together with the resulting classifications as teacher data, the determination accuracy can be improved as the system is used. However, since machine learning is assumed to take time, the processing of step S602 may be performed in a time zone in which the load on the work data classification system is low.
  • step S603 to step S609 corresponds to the processing from step S302 to step S308 in FIG. 3, description thereof is omitted here.
  • In this way, by using combinations of feature quantities and works as teacher data and having the determination module 213 perform supervised learning, it is possible to improve the accuracy of determining which stored feature quantity the feature quantity of unknown basic data is similar to, and thereby to improve the accuracy of classifying which work the unknown basic data relates to.
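  • As one possible concrete form of this supervised learning (the patent does not fix a particular algorithm; a nearest-neighbour classifier from scikit-learn is used here purely as an illustration, and the feature vectors are placeholders):

```python
from sklearn.neighbors import KNeighborsClassifier

# Teacher data: combinations of feature quantities of past basic data and the work they belonged to
X_train = [
    [0.9, 0.1], [0.2, 0.8],              # data A, data B -> work X
    [0.7, 0.3], [0.4, 0.6], [0.1, 0.9],  # data C, data D, data E -> work Y
]
y_train = ["work X", "work X", "work Y", "work Y", "work Y"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)  # step S602: learn the determination method from the teacher data

# Determination for the feature quantities of unknown basic data (data F, G, H)
X_unknown = [[0.1, 0.9], [0.4, 0.6], [0.9, 0.1]]
print(clf.predict(X_unknown))  # -> ['work Y' 'work Y' 'work X']
```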
  • FIG. 7 is a diagram illustrating the functional blocks of the apparatus 100 and the computer 200 and the relationship between the functions when performing schedule comparison processing.
  • the control unit 210 implements the schedule storage module 216 in cooperation with the storage unit 230. Further, the control unit 210 realizes a schedule comparison module 217 in cooperation with the storage unit 230 and the input / output unit 240.
  • FIG. 8 is a flowchart for performing the schedule comparison process. Processing executed by each module described above will be described in accordance with this processing.
  • the storage module 231 of the computer 200 stores the work associated with the feature quantities of the plurality of basic data in the storage unit 230 (step S801).
  • the basic data is a generic term that represents images, machine logs, and data of various sensors acquired from the apparatus 100.
  • the basic data always includes information for specifying the time of work.
  • the work associated with the feature quantities of the plurality of basic data may be acquired from another computer or a storage medium, or may be created by the computer 200. Further, a dedicated database may be provided in the storage unit 230.
  • The process of step S801 may be skipped when an association between works and the feature quantities of a plurality of basic data is already stored, or when there is no new association between a work and feature quantities of basic data to be stored.
  • the schedule storage module 216 of the computer 200 stores the work schedule of the person or device in the storage unit 230 (step S802).
  • the user of the work data classification system may input a schedule via the input / output unit 240 or may receive schedule data via the communication unit 220.
  • Since steps S803 to S810 in FIG. 8 correspond to the processing from step S302 to step S309 in FIG. 3, description thereof is omitted here.
  • The schedule comparison module 217 of the computer 200 compares the schedule stored in step S802 with the timeline generated in step S810 (step S811). The timelines are compared for each person or device.
  • FIG. 15 is an example of the schedule comparison display of the present invention.
  • a schedule stored in advance and a timeline generated based on the actual work situation shown in FIG. 12 are compared and displayed.
  • a balloon 1501 indicates that “schedule exceeded 30 minutes”.
  • the portion from 11:00 to 12:00 of the bulldozer A is displayed with a dotted line, and a balloon 1502 indicates that “the schedule has been shortened by 60 minutes”.
  • When the generated timeline is delayed or differs greatly from the stored schedule, it is desirable to emphasize the output so that the user can easily notice it.
  • Examples of emphasized output include displaying in an attention-drawing color such as red, blinking, displaying in bold, displaying at a larger size, and sounding a warning sound.
  • In this way, the generated timeline is compared with the schedule stored in advance, and when the timeline is delayed or differs greatly, it is output with emphasis so that the user can easily notice it, which makes it easy to manage the progress of the work.
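  • A minimal sketch of such a comparison (planned and actual work times in minutes are assumed to be already aligned per person/device and work; the 30-minute threshold is an arbitrary assumption):

```python
def compare_with_schedule(planned_min, actual_min, threshold_min=30):
    """Flag works whose actual time deviates from the schedule by at least the threshold."""
    alerts = []
    for key, planned in planned_min.items():
        delta = actual_min.get(key, 0) - planned
        if abs(delta) >= threshold_min:
            kind = "delayed" if delta > 0 else "shortened"
            alerts.append(f"EMPHASIZE: {key[0]} / {key[1]} {kind} by {abs(delta)} min")
    return alerts

planned = {("excavator A", "slope formation"): 240, ("bulldozer A", "grading"): 120}
actual  = {("excavator A", "slope formation"): 270, ("bulldozer A", "grading"): 60}
for alert in compare_with_schedule(planned, actual):
    print(alert)  # e.g. "EMPHASIZE: excavator A / slope formation delayed by 30 min"
```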
  • the means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program.
  • The program may be provided, for example, in a form (SaaS: Software as a Service) supplied from a computer via a network, or may be provided recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), a DVD (DVD-ROM, DVD-RAM, etc.), or a compact memory.
  • the computer reads the program from the recording medium, transfers it to the internal storage device or the external storage device, stores it, and executes it.
  • the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to a computer via a communication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

[Problem] To provide an operation data classification system, an operation data classification method, and a program with which it is possible to classify to which operation newly acquired data relates, and generate a timeline for a person or device that performs the operation. [Solution] This operation data classification system comprises: an acquisition module 211 which acquires a plurality of sets of basic data relating to various operations; an extraction module 212 which extracts feature quantities of the plurality of sets of basic data; a storage module 231 which associates and stores the feature quantities with the operations; a determination module 213 which determines which feature quantity of the stored operation data is similar to a feature quantity of unknown basic data; a classification module 214 which, on the basis of the determination result, classifies to which operation the unknown basic data relates; and a timeline generation module 215 which generates a timeline for each person or device on the basis of the operation times associated with the basic data.

Description

Work data classification system, work data classification method, and program
The present invention relates to a work data classification system, work data classification method, and program that acquire data relating to various works, such as image data from cameras, logs from machines, and logs from users' smartphone terminals, classify which work unknown data relates to, and generate a timeline for that work.
A method has been proposed that allows each piece of sensor data to be properly classified using only sensor data collected in time series from a large number of sensors, even in an environment where noise exists and cannot be separated and measured (Patent Document 1).
JP2016-099888
However, although the method of Patent Document 1 makes it possible to appropriately classify each piece of sensor data using only sensor data collected in time series from many sensors, even in an environment where noise exists and cannot be separated and measured, it cannot automatically classify what kind of work each piece of sensor data relates to. Consequently, it also cannot generate a timeline for the classified work.
When actually considering construction sites, civil-engineering worksites, store sites, production sites, farm work, office work, and the like, it is desirable to automatically classify the work content of a target person or device from images captured by cameras and from data acquired from the sensors of each device, and further to automatically generate a timeline consisting of the work content and work time.
An object of the present invention is to provide a work data classification system, work data classification method, and program that can acquire a plurality of basic data relating to various works, classify from the feature quantities which work unknown basic data relates to, and further generate a timeline for the classified work.
The present invention provides the following solutions.
The invention according to the first feature provides a work data classification system comprising:
acquisition means for acquiring a plurality of basic data relating to various works performed by a person or device;
extraction means for extracting feature quantities of the plurality of basic data;
storage means for storing the feature quantities in association with the works;
determination means for determining which of the stored feature quantities of the basic data the feature quantity of unknown basic data is similar to;
classification means for classifying, based on the result of the determination, what work by the person or device the unknown basic data relates to; and
timeline generation means for determining the content of the work from the classified data and generating a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
According to the invention of the first feature, the work data classification system comprises acquisition means that acquires a plurality of basic data relating to various works performed by a person or device, extraction means that extracts feature quantities of the plurality of basic data, storage means that stores the feature quantities in association with the works, determination means that determines which of the stored feature quantities the feature quantity of unknown basic data is similar to, classification means that classifies, based on the determination result, what work by the person or device the unknown basic data relates to, and timeline generation means that determines the content of the work from the classified data and generates a timeline or a work-time ratio for each person or device based on the work times associated with the basic data.
The invention according to the first feature is in the category of a work data classification system, but a work data classification method and a program have the same operations and effects.
The invention according to the second feature provides the work data classification system according to the first feature, further comprising:
schedule storage means for storing a schedule of the work performed by the person or device; and
schedule comparison means for comparing the timeline generated by the timeline generation means with the stored schedule.
According to the invention of the second feature, the work data classification system according to the first feature comprises schedule storage means that stores a schedule of the work performed by the person or device, and schedule comparison means that compares the timeline generated by the timeline generation means with the stored schedule.
 The invention according to a third feature provides the work data classification system according to the first or second feature, wherein the determination means performs machine learning on past feature quantities to determine which of the feature quantities of the stored basic data the feature quantity of the unknown basic data resembles.
 According to the invention of the third feature, in the work data classification system according to the first or second feature, the determination means performs machine learning on past feature quantities to determine which of the feature quantities of the stored basic data the feature quantity of the unknown basic data resembles.
 The invention according to a fourth feature provides the work data classification system according to any one of the first to third features, wherein the schedule comparison means outputs the generated timeline with emphasis when the generated timeline is delayed relative to the schedule or when the difference from the schedule is large.
 According to the invention of the fourth feature, in the work data classification system according to any one of the first to third features, the schedule comparison means outputs the generated timeline with emphasis when it is delayed relative to the schedule or when the difference from the schedule is large.
 The invention according to a fifth feature provides a work data classification method comprising the steps of:
 acquiring a plurality of pieces of basic data relating to various kinds of work performed by a person or device;
 extracting feature quantities from the plurality of pieces of basic data;
 storing the feature quantities in association with the work;
 determining which of the stored feature quantities of the basic data a feature quantity of unknown basic data resembles;
 classifying, based on the result of the determination, which work by the person or device the unknown basic data relates to; and
 determining the content of the work from the classified data and generating, based on the work times associated with the basic data, a timeline or a ratio of work time for each person or device.
 The invention according to a sixth feature provides a program for causing a work data classification system to execute the steps of:
 acquiring a plurality of pieces of basic data relating to various kinds of work performed by a person or device;
 extracting feature quantities from the plurality of pieces of basic data;
 storing the feature quantities in association with the work;
 determining which of the stored feature quantities of the basic data a feature quantity of unknown basic data resembles;
 classifying, based on the result of the determination, which work by the person or device the unknown basic data relates to; and
 determining the content of the work from the classified data and generating, based on the work times associated with the basic data, a timeline or a ratio of work time for each person or device.
 According to the present invention, it is possible to provide a work data classification system, a work data classification method, and a program capable of acquiring a plurality of pieces of basic data relating to various kinds of work, classifying, from their feature quantities, which work an unknown piece of basic data relates to, and generating a timeline for the classified work.
 FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
 FIG. 2 is a diagram showing the functional blocks of the apparatus 100 and the computer 200 and the relationships between the functions.
 FIG. 3 is a flowchart of the timeline generation process.
 FIG. 4 is a flowchart for the case where the computer 200 classifies unknown basic data according to the number of similarity determinations.
 FIG. 5 is an example of a flowchart for determining, on the computer 200, whether the feature quantity of unknown basic data resembles the feature quantity of stored work data.
 FIG. 6 is a flowchart for the case where determination is performed by machine learning on combinations of past work and feature quantities of basic data.
 FIG. 7 is a diagram showing the functional blocks of the apparatus 100 and the computer 200 and the relationships between the functions when schedule comparison is performed.
 FIG. 8 is a flowchart for the case where schedule comparison is performed.
 FIG. 9 is an example of a table showing the data structure of work and basic data stored in the storage unit 230.
 FIG. 10 is an example of a table showing the data structure of unknown basic data.
 FIG. 11 is an example of a table including determination results for stored basic data having feature quantities similar to those of unknown basic data.
 FIG. 12 is an example of the timeline output display of the present invention.
 FIG. 13 is an example of a detailed display of work status.
 FIG. 14 is an example of a display of work time ratios.
 FIG. 15 is an example of the schedule comparison display of the present invention.
 Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. Note that this is merely an example, and the technical scope of the present invention is not limited thereto.
 [Overview of the Work Data Classification System]
 FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. An overview of the present invention is described with reference to FIG. 1. The work data classification system comprises an apparatus 100, a computer 200, and a communication network 300.
 In FIG. 1, the number of apparatuses 100 is not limited to one and may be plural. Here, a web camera is illustrated as the apparatus 100A and a wearable device as the apparatus 100B. The computer 200 is not limited to a physical device and may be a virtual device, and may also be the same device as the apparatus 100.
 As shown in FIG. 2, the apparatus 100 comprises a sensor unit 10, a control unit 110, a communication unit 120, and a storage unit 130. As also shown in FIG. 2, the computer 200 comprises a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize an acquisition module 211, cooperates with the storage unit 230 to realize an extraction module 212, a determination module 213, and a classification module 214, and cooperates with the storage unit 230 and the input/output unit 240 to realize a timeline generation module 215. The storage unit 230 cooperates with the control unit 210 to realize a storage module 231. The communication network 300 may be a public network such as the Internet or a dedicated network, and enables communication between the apparatus 100 and the computer 200.
 The apparatus 100 includes the sensor unit 10, which can acquire images, machine logs, and data from various sensors, and is capable of data communication with the computer 200. A web camera is illustrated here as the apparatus 100A and a wearable device as the apparatus 100B by way of example, but the apparatus may be any device having the necessary functions, such as a digital camera, digital video camera, security camera, in-vehicle camera, 360-degree camera, industrial machine, agricultural machine, drone, or wearable device. Various work data may also be stored in the storage unit 130.
 The computer 200 is a computing device capable of data communication with the apparatus 100. A desktop computer is illustrated here as an example, but the computer 200 may be a mobile phone, personal digital assistant, tablet terminal, or personal computer, as well as an appliance such as a netbook terminal, slate terminal, electronic book terminal, or portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
 In the work data classification system of FIG. 1, the storage module 231 of the computer 200 first stores, in the storage unit 230, work associated with the feature quantities of a plurality of pieces of basic data (step S01). The association between work and the feature quantities of the basic data may be obtained from another computer or a storage medium, or may be created on the computer 200. A dedicated database may also be provided in the storage unit 230. In the present invention, "basic data" collectively refers to the images, machine logs, and various sensor data acquired from the apparatus 100. The basic data always includes information for identifying the time of the work.
 FIG. 9 is an example of a table showing the data structure of the work and basic data stored in the storage unit 230. Here, two pieces of basic data are stored for work X: data A has feature quantity v and its data type is video, and data B has feature quantity w and its data type is acceleration sensor. For work Y, three pieces of basic data are stored: data C has feature quantity x and its type is video, data D has feature quantity y and its type is machine log, and data E has feature quantity z and its type is GPS. Each piece of data itself may be stored together with this information. FIG. 9 shows an example in which the storage location of each piece of data is listed in the rightmost column of the table.
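 As a non-limiting illustration only, the stored association of FIG. 9 could be held in a structure such as the following sketch; the field names, the two-dimensional feature vectors, and the file paths are hypothetical and are not part of the original disclosure.

```python
# Hedged sketch of the FIG. 9 data structure (field names and values are illustrative).
stored_basic_data = [
    {"work": "X", "data_id": "A", "feature": [0.2, 0.9], "type": "video", "path": "/data/A.mp4"},
    {"work": "X", "data_id": "B", "feature": [0.8, 0.1], "type": "accel", "path": "/data/B.csv"},
    {"work": "Y", "data_id": "C", "feature": [0.4, 0.5], "type": "video", "path": "/data/C.mp4"},
    {"work": "Y", "data_id": "D", "feature": [0.7, 0.3], "type": "log",   "path": "/data/D.log"},
    {"work": "Y", "data_id": "E", "feature": [0.1, 0.6], "type": "gps",   "path": "/data/E.gpx"},
]
```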
 Returning to FIG. 1, the apparatus 100 transmits unknown basic data to the computer 200 (step S02), and the acquisition module 211 of the computer 200 acquires the unknown basic data (step S03). When there are multiple pieces of unknown basic data, all of them are acquired as work data relating to a single piece of work. Although a flow in which the apparatus 100 transmits the unknown basic data is described here, the acquisition module 211 may instead instruct the apparatus 100 to transmit the unknown basic data, and the apparatus 100 may transmit it in response. The acquisition module 211 may acquire not only work data that the apparatus 100 is acquiring in real time, but also work data that the apparatus 100 acquired in the past and stored in the storage unit 130.
 FIG. 10 is an example of a table showing the data structure of unknown basic data. Here, the work consists of three pieces of basic data: data F has feature quantity s and its data type is GPS, data G has feature quantity t and its type is machine log, and data H has feature quantity u and its type is video. Each piece of data itself may be stored together with this information. FIG. 10 shows an example in which the storage location of each piece of data is listed in the rightmost column of the table.
 Returning again to FIG. 1, the extraction module 212 of the computer 200 extracts feature quantities from the unknown basic data acquired in step S03 (step S04). The feature quantity may, for example, be chosen according to the type of data, or be derived by analyzing the content of the data; whatever is appropriate for the system may be used. For example, if the data type is video, the result of image analysis of the video may be used as the feature quantity; if the data type is an acceleration sensor, the result of analyzing the motion may be used; for a machine log, the result of analyzing the machine's operation and timing may be used; and for GPS data, the location, altitude, or weather obtained by referring to map data, weather data, and the like may be used.
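 A minimal sketch of such type-dependent feature extraction is shown below. The per-type extractors are trivial placeholders standing in for the real analyses described above, and the returned vectors are illustrative assumptions.

```python
# Hedged sketch: dispatch feature extraction by data type.
# Each extractor is a placeholder for real video / motion / log / location analysis.
def analyze_video(path):   return [0.4, 0.5]   # placeholder for image analysis of video
def analyze_motion(path):  return [0.8, 0.1]   # placeholder for acceleration-sensor analysis
def analyze_log(path):     return [0.7, 0.3]   # placeholder for machine-log analysis
def analyze_gps(path):     return [0.1, 0.6]   # placeholder for location/altitude/weather lookup

EXTRACTORS = {"video": analyze_video, "accel": analyze_motion,
              "log": analyze_log, "gps": analyze_gps}

def extract_feature(record):
    try:
        return EXTRACTORS[record["type"]](record["path"])
    except KeyError:
        raise ValueError(f"unsupported data type: {record['type']}")
```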
 Next, the determination module 213 of the computer 200 determines which feature quantity of the basic data stored in the storage unit 230 the feature quantity of the unknown basic data resembles (step S05). The determination method is described in detail later.
 FIG. 11 is an example of a table including determination results for stored basic data whose feature quantities are similar to those of the unknown basic data. In this example, data F has feature quantity s and resembles data E, data G has feature quantity t and resembles data D, and data H has feature quantity u and resembles data A.
 Next, the classification module 214 of the computer 200 classifies which work the acquired unknown basic data relates to (step S06). From FIGS. 9 and 11, data F resembles data E of work Y, data G resembles data D of work Y, and data H resembles data A of work X. As one classification method, when priority is given to the number of matching pieces of basic data, the work represented by data F, data G, and data H can be classified as work Y, based on the result that two of the three sensors resemble work Y and one resembles work X. Alternatively, when priority is given to the type of data, for example under a setting that prioritizes the classification result from video, the determination that data H resembles data A of work X is given greater weight, and the work represented by data F, data G, and data H can be classified as work X. The above describes the case where stored basic data with similar feature quantities is found; if it is determined that there is no similar stored basic data, the classification module 214 may treat the data as unclassifiable or unclassified.
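 As an illustration of the two strategies described above, the following hedged sketch prioritizes a chosen data type when one is configured and otherwise falls back to a simple majority vote; the priority setting, record layout, and tie-breaking are assumptions made for the example, not part of the original disclosure.

```python
from collections import Counter

# Hedged sketch: classify unknown data either by a prioritized data type or by majority vote.
def classify(matches, priority_type=None):
    """matches: one dict per unknown datum, e.g. {"type": "video", "work": "X"}."""
    if priority_type is not None:
        prioritized = [m["work"] for m in matches if m["type"] == priority_type]
        if prioritized:
            return Counter(prioritized).most_common(1)[0][0]
    votes = Counter(m["work"] for m in matches)
    return votes.most_common(1)[0][0] if votes else None  # None: unclassifiable

matches = [{"type": "gps", "work": "Y"}, {"type": "log", "work": "Y"}, {"type": "video", "work": "X"}]
print(classify(matches))                          # -> "Y" (majority of matches)
print(classify(matches, priority_type="video"))   # -> "X" (video result prioritized)
```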
 Finally, the timeline generation module 215 of the computer 200 determines the content of the work based on the classification result of step S06, and generates a timeline based on the work times associated with the acquired basic data (step S07). When generating a timeline, a timeline is generated for each person or device performing work X; alternatively, the ratio of work time for each person or device performing work X is generated. In the present invention, a timeline includes the work content and the work time. When generating the timeline or the work time ratio, the information for identifying the time of the work contained in the basic data is used. To generate a per-person or per-device timeline of work time, information on which device each piece of basic data was acquired from is also used. The generated timeline information may be output by display, audio, or the like.
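 The following minimal sketch shows how classified records carrying a device identifier, a classified work, and start/end times might be folded into a per-device timeline; the record layout is a hypothetical one chosen purely for illustration.

```python
from collections import defaultdict

# Hedged sketch: build a per-device timeline from classified records.
def build_timeline(classified_records):
    timeline = defaultdict(list)
    for rec in sorted(classified_records, key=lambda r: r["start"]):
        timeline[rec["device"]].append((rec["start"], rec["end"], rec["work"]))
    return dict(timeline)

records = [
    {"device": "excavator A", "work": "slope forming", "start": "08:00", "end": "12:00"},
    {"device": "dump truck A", "work": "road travel",  "start": "13:00", "end": "17:00"},
]
print(build_timeline(records))
```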
 FIG. 12 is an example of the timeline output display of the present invention. It shows the work status of a construction site from 8:00 to 18:00 on February 2, 2018 displayed as a timeline. The persons or machines that performed work are hydraulic excavator A, hydraulic excavator B, crawler dump A, bulldozer A, vibration roller A, dump truck A, dump truck B, worker A, worker B, and worker C, and a timeline of the work content of each is displayed. Here, for example, when a user of the work classification system selects the public road travel performed by dump truck A from 13:00 to 17:00, an output showing the detailed history within that work, such as "departed 13:00, point Z 15:30, arrived 17:00", may be provided as shown in balloon 1201.
 FIG. 13 is an example of a detailed display of work status. Assume that, on the screen of FIG. 12, a user of the work classification system has selected the slope forming performed by hydraulic excavator A from 8:00 to 12:00. As shown in balloon 1301 of FIG. 13, a display using images, schematic diagrams, or the like, based on the information contained in the acquired basic data, may be provided so that the work position and work status at each time from 8:00 to 12:00 can be understood.
 FIG. 14 is an example of a display of work time ratios. Instead of generating a timeline, a ratio of work time may be generated. Here, window 1401 shows the proportion of each kind of work within the nine working hours of February 2, 2018. The proportion of each kind of work during the working hours is displayed as a percentage for each of hydraulic excavator A, hydraulic excavator B, crawler dump A, and worker A as the persons or machines that performed work. Since not all work of a person or machine can necessarily be identified, the percentages of work time do not have to add up to 100 percent. Although FIG. 14 displays the work of all persons and machines together, an individual display may be provided for each person or each machine.
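 A hedged sketch of how such percentages might be derived from the same per-device records follows; the nine-hour working day is taken from the example above, and the record layout and rounding are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

# Hedged sketch: compute per-device work-time ratios over a working day.
def work_time_ratios(records, total_hours=9.0):
    hours = defaultdict(lambda: defaultdict(float))
    for rec in records:
        start = datetime.strptime(rec["start"], "%H:%M")
        end = datetime.strptime(rec["end"], "%H:%M")
        hours[rec["device"]][rec["work"]] += (end - start).total_seconds() / 3600
    # Percentages need not sum to 100: some time may remain unclassified.
    return {dev: {work: round(100 * h / total_hours, 1) for work, h in works.items()}
            for dev, works in hours.items()}

records = [{"device": "excavator A", "work": "slope forming", "start": "08:00", "end": "12:00"}]
print(work_time_ratios(records))  # {'excavator A': {'slope forming': 44.4}}
```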
 Thus, according to the present invention, it is possible to provide a work data classification system, a work data classification method, and a program capable of acquiring a plurality of pieces of basic data relating to various kinds of work, classifying, from their feature quantities, which work an unknown piece of basic data relates to, and generating a timeline or work time ratio for the classified work.
 [Description of Each Function]
 FIG. 2 is a diagram showing the functional blocks of the apparatus 100 and the computer 200 and the relationships between the functions. The apparatus 100 comprises a sensor unit 10, a control unit 110, a communication unit 120, and a storage unit 130. The computer 200 comprises a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211, cooperates with the storage unit 230 to realize the extraction module 212, the determination module 213, and the classification module 214, and cooperates with the storage unit 230 and the input/output unit 240 to realize the timeline generation module 215. The storage unit 230 cooperates with the control unit 210 to realize the storage module 231. The communication network 300 may be a public network such as the Internet or a dedicated network, and enables communication between the apparatus 100 and the computer 200.
 The apparatus 100 includes the sensor unit 10, which can acquire images, machine logs, and data from various sensors, and is capable of data communication with the computer 200. A web camera is illustrated here as the apparatus 100A and a wearable device as the apparatus 100B by way of example, but the apparatus may be any device having the necessary functions, such as a digital camera, digital video camera, security camera, in-vehicle camera, 360-degree camera, industrial machine, agricultural machine, drone, or wearable device. Various basic data may also be stored in the storage unit 130.
 The apparatus 100 includes, as the sensor unit 10, sensors capable of acquiring images, machine logs, and various sensor data. The data obtained is assumed to have the accuracy required for extracting feature quantities.
 The control unit 110 includes a CPU (Central Processing Unit), RAM (Random Access Memory), ROM (Read Only Memory), and the like.
 The communication unit 120 includes a device for enabling communication with other equipment, for example a WiFi (Wireless Fidelity) device conforming to IEEE 802.11 or a wireless device conforming to an IMT-2000 standard such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used.
 The storage unit 130 includes a data storage section such as a hard disk or semiconductor memory, and stores captured images and other necessary data such as imaging conditions.
 The computer 200 is a computing device capable of data communication with the apparatus 100. A desktop computer is illustrated here as an example, but the computer 200 may be a mobile phone, personal digital assistant, tablet terminal, or personal computer, as well as an appliance such as a netbook terminal, slate terminal, electronic book terminal, or portable music player, or a wearable terminal such as smart glasses or a head-mounted display. The computer 200 is not limited to a physical device and may be a virtual device, and may also be the same device as the apparatus 100.
 The control unit 210 includes a CPU, RAM, ROM, and the like. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211, cooperates with the storage unit 230 to realize the extraction module 212, the determination module 213, and the classification module 214, and cooperates with the storage unit 230 and the input/output unit 240 to realize the timeline generation module 215.
 The communication unit 220 includes a device for enabling communication with other equipment, for example a WiFi device conforming to IEEE 802.11 or a wireless device conforming to an IMT-2000 standard such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used.
 The storage unit 230 includes a data storage section such as a hard disk or semiconductor memory, and stores data required for processing, such as the associations between work and the feature quantities of the basic data, teacher data, determination results, and classification results. The storage unit 230 cooperates with the control unit 210 to realize the storage module 231. The storage unit 230 may also include a database for storing the associations between work and the feature quantities of the basic data.
 The input/output unit 240 has the functions necessary for using the work data classification system. Examples of input devices include a liquid crystal display with a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, and a microphone for voice recognition. Examples of output forms include display on a liquid crystal display, PC display, or projector, as well as audio output. The functions of the present invention are not particularly limited by the input/output method.
 [Timeline Generation Process]
 FIG. 3 is a flowchart of the timeline generation process. The processing executed by each of the modules described above is explained along with this process.
 First, the storage module 231 of the computer 200 stores, in the storage unit 230, work associated with the feature quantities of a plurality of pieces of basic data (step S301). Basic data collectively refers to the images, machine logs, and various sensor data acquired from the apparatus 100, and always includes information for identifying the time of the work. The association between work and the feature quantities of the basic data may be obtained from another computer or a storage medium, or may be created on the computer 200, and a dedicated database may be provided in the storage unit 230. Step S301 may be skipped when such an association has already been stored or when there is no new association between work and feature quantities to be stored.
 FIG. 9 is an example of a table showing the data structure of the work and basic data stored in the storage unit 230. Here, two pieces of basic data are stored for work X: data A has feature quantity v and its data type is video, and data B has feature quantity w and its data type is acceleration sensor. For work Y, three pieces of basic data are stored: data C has feature quantity x and its type is video, data D has feature quantity y and its type is machine log, and data E has feature quantity z and its type is GPS. Each piece of data itself may be stored together with this information. FIG. 9 shows an example in which the storage location of each piece of data is listed in the rightmost column of the table.
 Returning to FIG. 3, the acquisition module 211 of the computer 200 requests the apparatus 100 to transmit basic data (step S302). When there are multiple apparatuses 100, the transmission of basic data is requested from all of them.
 On receiving the basic data transmission request from the computer 200, the control unit 110 of the apparatus 100 saves the basic data in the storage unit 130 (step S303).
 The apparatus 100 then transmits the unknown basic data to the computer 200 via the communication unit 120 (step S304).
 The acquisition module 211 of the computer 200 acquires the unknown basic data (step S305). When there are multiple apparatuses 100, that is, when there are multiple pieces of unknown basic data, all of them are acquired as basic data relating to a single piece of work. The acquisition module 211 may acquire not only basic data that the apparatus 100 is acquiring in real time, but also basic data that the apparatus 100 acquired in the past and stored in the storage unit 130.
 FIG. 10 is an example of a table showing the data structure of unknown basic data. Here, the work consists of three pieces of basic data: data F has feature quantity s and its data type is GPS, data G has feature quantity t and its type is machine log, and data H has feature quantity u and its type is video. Each piece of data itself may be stored together with this information. FIG. 10 shows an example in which the storage location of each piece of data is listed in the rightmost column of the table.
 Returning again to FIG. 3, the extraction module 212 of the computer 200 extracts feature quantities from the unknown basic data acquired in step S305 (step S306). The feature quantity may, for example, be chosen according to the type of data, or be derived by analyzing the content of the data; whatever is appropriate for the system may be used. For example, if the data type is video, the result of image analysis of the video may be used as the feature quantity; if the data type is an acceleration sensor, the result of analyzing the motion may be used; for a machine log, the result of analyzing the machine's operation and timing may be used; and for GPS data, the location, altitude, or weather obtained by referring to map data, weather data, and the like may be used.
 Next, the determination module 213 of the computer 200 determines which feature quantity of the basic data stored in the storage unit 230 the feature quantity of the unknown basic data resembles (step S307). The determination method is described in detail later.
 FIG. 11 is an example of a table including determination results for stored basic data whose feature quantities are similar to those of the unknown basic data. In this example, data F has feature quantity s and resembles data E, data G has feature quantity t and resembles data D, and data H has feature quantity u and resembles data A.
 Next, the classification module 214 of the computer 200 classifies which work the acquired unknown basic data relates to (step S308). From FIGS. 9 and 11, data F resembles data E of work Y, data G resembles data D of work Y, and data H resembles data A of work X. As one classification method, when priority is given to the number of matching pieces of basic data, the work represented by data F, data G, and data H can be classified as work Y, based on the result that two of the three sensors resemble work Y and one resembles work X. Alternatively, when priority is given to the type of data, for example under a setting that prioritizes the classification result from video, the determination that data H resembles data A of work X is given greater weight, and the work represented by data F, data G, and data H can be classified as work X. The above describes the case where stored basic data with similar feature quantities is found; if it is determined that there is no similar stored basic data, the classification module 214 may treat the data as unclassifiable or unclassified.
 Finally, the timeline generation module 215 of the computer 200 determines the content of the work based on the classification result of step S308, and generates a timeline based on the work times associated with the acquired basic data (step S309). When generating a timeline, a timeline is generated for each person or device performing work X; alternatively, the ratio of work time for each person or device performing work X is generated. In the present invention, a timeline includes the work content and the work time. When generating the timeline or the work time ratio, the information for identifying the time of the work contained in the basic data is used. To generate a per-person or per-device timeline of work time, information on which device each piece of basic data was acquired from is also used. The generated timeline information may be output by display, audio, or the like.
 FIG. 12 is an example of the timeline output display of the present invention. It shows the work status of a construction site from 8:00 to 18:00 on February 2, 2018 displayed as a timeline. The persons or machines that performed work are hydraulic excavator A, hydraulic excavator B, crawler dump A, bulldozer A, vibration roller A, dump truck A, dump truck B, worker A, worker B, and worker C, and a timeline of the work content of each is displayed. Here, for example, when a user of the work classification system selects the public road travel performed by dump truck A from 13:00 to 17:00, an output showing the detailed history within that work, such as "departed 13:00, point Z 15:30, arrived 17:00", may be provided as shown in balloon 1201.
 FIG. 13 is an example of a detailed display of work status. Assume that, on the screen of FIG. 12, a user of the work classification system has selected the slope forming performed by hydraulic excavator A from 8:00 to 12:00. As shown in balloon 1301 of FIG. 13, a display using images, schematic diagrams, or the like, based on the information contained in the acquired basic data, may be provided so that the work position and work status at each time from 8:00 to 12:00 can be understood.
 FIG. 14 is an example of a display of work time ratios. Instead of generating a timeline, a ratio of work time may be generated. Here, window 1401 shows the proportion of each kind of work within the nine working hours of February 2, 2018. The proportion of each kind of work during the working hours is displayed as a percentage for each of hydraulic excavator A, hydraulic excavator B, crawler dump A, and worker A as the persons or machines that performed work. Since not all work of a person or machine can necessarily be identified, the percentages of work time do not have to add up to 100 percent. Although FIG. 14 displays the work of all persons and machines together, an individual display may be provided for each person or each machine.
 Thus, according to the present invention, it is possible to provide a work data classification system, a work data classification method, and a program capable of acquiring a plurality of pieces of basic data relating to various kinds of work, classifying, from their feature quantities, which work an unknown piece of basic data relates to, and generating a timeline or work time ratio for the classified work.
 [Classification According to the Number of Determinations for Unknown Basic Data]
 FIG. 4 is a flowchart for the case where the computer 200 classifies unknown basic data according to the number of similarity determinations. The configuration is equivalent to that of the apparatus 100 and the computer 200 of FIG. 2, and this processing corresponds to steps S307 and S308 of the flowchart of FIG. 3. The following describes the basic data classification process according to the number of determinations as processing performed after the flow up to step S306 of FIG. 3. For explanation, the data examples of FIGS. 9, 10, and 11 described above are used.
 For the unknown basic data acquired in step S305, the feature quantities of each piece of basic data have already been extracted in step S306. First, the number of acquired pieces of unknown basic data is counted (step S401). In the example of FIG. 10 described above, there are three pieces of unknown basic data.
 Next, one piece of unknown basic data is selected (step S402). Here, data F of FIG. 10 is assumed to be selected.
 The determination module 213 of the computer 200 determines which feature quantity of the basic data stored in the storage unit 230 the feature quantity of the unknown basic data resembles (step S403). Here, as shown in FIG. 11, data F is determined to resemble data E.
 Next, the determination module 213 checks whether all pieces of unknown basic data have been processed (step S404).
 At this point they have not, so the process returns to step S402 and selects another piece of unknown basic data; here, data G of FIG. 10 is assumed to be selected.
 In step S403, the determination module 213 determines that data G resembles data D, as shown in FIG. 11.
 In step S404, the determination module 213 again checks whether all pieces of unknown basic data have been processed.
 Since the determination of all basic data is still not complete at this point, the process returns to step S402 and selects another piece of unknown basic data; here, data H of FIG. 10 is assumed to be selected.
 In step S403, the determination module 213 determines that data H resembles data A, as shown in FIG. 11.
 In step S404, the determination module 213 checks whether all pieces of unknown basic data have been processed.
 Since the determination of all basic data is now complete, the process proceeds to step S405 and classifies the unknown basic data into the work for which the largest number of associated pieces of basic data were judged similar. Here, data F resembles data E of work Y, data G resembles data D of work Y, and data H resembles data A of work X, so of the three pieces of unknown basic data, two were judged similar to feature quantities of work Y and one to a feature quantity of work X. The classification module 214 therefore classifies data F, data G, and data H as basic data relating to work Y.
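 A hedged sketch of this count-based classification loop (steps S401 to S405) follows; the per-datum similarity judgment is represented by a placeholder that simply returns the pre-computed matches of FIG. 11, and the identifiers are illustrative.

```python
from collections import Counter

# Hedged sketch of steps S401-S405: judge each unknown datum, then pick the most-voted work.
def judge_similar(unknown_id):
    # Placeholder: returns the work of the most similar stored datum (FIG. 11 results, hard-coded).
    precomputed = {"F": "Y", "G": "Y", "H": "X"}
    return precomputed[unknown_id]

def classify_by_count(unknown_ids):
    votes = Counter(judge_similar(uid) for uid in unknown_ids)  # S402-S404 loop over all unknown data
    return votes.most_common(1)[0][0]                           # S405: work with the most matches

print(classify_by_count(["F", "G", "H"]))  # -> "Y"
```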
 Thus, according to the present invention, by acquiring a plurality of pieces of basic data relating to various kinds of work, treating the regularities and correlations inherent in a series of work as feature quantities, storing the feature quantities in association with the work, and performing the basic data classification process according to the number of determinations for the unknown basic data, it becomes possible to classify appropriately which work the unknown basic data relates to.
 [Feature Quantity Determination Process]
 FIG. 5 is an example of a flowchart for determining, on the computer 200, whether the feature quantity of unknown basic data resembles the feature quantity of stored basic data. The configuration is equivalent to that of the apparatus 100 and the computer 200 of FIG. 2, and this is an example of processing corresponding to step S307 of the flowchart of FIG. 3. The following describes the feature quantity determination process as processing performed after the flow up to step S306 of FIG. 3. For explanation, the data examples of FIGS. 9, 10, and 11 described above are used.
 For the unknown basic data acquired in step S305, the feature quantities of each piece of basic data have already been extracted in step S306. First, one piece of unknown basic data is selected (step S501); here, data F of FIG. 10 is assumed to be selected.
 Next, the determination module 213 of the computer 200 selects the stored basic data to be compared (step S502); here, data A of FIG. 9 is assumed to be selected.
 The determination module 213 computes the inner product of the feature quantity s of data F, the unknown basic data, and the feature quantity v of data A, the stored basic data to be compared (step S503).
 The determination module 213 also computes the product of the absolute value of the feature quantity s of data F and the absolute value of the feature quantity v of data A (step S504).
 The determination module 213 then computes the difference between the inner product obtained in step S503 and the product obtained in step S504 (step S505).
 If the difference obtained in step S505 is smaller than a predetermined range, the determination module 213 determines that data F resembles data A (step S506); if the difference is equal to or greater than the predetermined range, it determines that data F does not resemble data A (step S507). Here, data F is assumed to be determined not to resemble data A.
 Next, the determination module 213 checks whether all stored basic data have been compared (step S508). If not, the process returns to step S502 and continues; if so, it proceeds to step S509. That is, the process proceeds to step S509 once it has been determined, for each of the stored basic data A, B, C, D, and E, whether data F resembles it. Here, data F is assumed to be determined to resemble only data E.
 Finally, the determination module 213 checks whether all pieces of unknown basic data have been processed (step S509). If not, the process returns to step S501 and continues; if so, the feature quantity determination process ends. That is, the process ends once it has been determined which stored basic data each of the unknown data F, G, and H resembles. Here, data F is assumed to resemble only data E, data G only data D, and data H only data A.
 After the above feature quantity determination process, the flow returns to step S308 of the flowchart of FIG. 3, and the classification module 214 of the computer 200 classifies which work the acquired unknown basic data relates to. In step S308, classification is performed based on the determination results that data F resembles data E of work Y, data G resembles data D of work Y, and data H resembles data A of work X. One classification method, as described above for FIG. 4, gives priority to the number of matching pieces of basic data and classifies the work represented by data F, data G, and data H as work Y, based on the result that two of the three sensors resemble work Y. Another classification method gives priority to the type of data; for example, under a setting that prioritizes classification results from video, the determination that data H resembles data A of work X is given greater weight, and the work represented by data F, data G, and data H is classified as work X. The above describes the case where data F was judged to resemble only data E, data G only data D, and data H only data A; when there are multiple pieces of stored basic data similar to a piece of unknown basic data, multiple candidates may be retained as similar sensors, or, when the comparison of all stored basic data is completed in step S508, only the one with the smallest difference between the inner product and the product obtained in step S505 may be retained as the similar sensor. If it is determined that there is no stored basic data similar to the unknown basic data, the classification module 214 may treat the data as unclassifiable or unclassified.
 The feature quantity determination method in the flowchart of FIG. 5 exploits the fact that, when the feature vector of the unknown basic data is a and the feature vector of the stored basic data is b, their inner product |a| × |b| × cos θ indicates that the smaller θ is, the more similar the directions of feature vectors a and b can be considered to be. That is, as the angle θ approaches 0, the value of cos θ approaches 1, so the inner product |a| × |b| × cos θ approaches |a| × |b| × 1 = |a| × |b|. Accordingly, when the difference between |a| × |b| × cos θ and |a| × |b| is within a predetermined range, that is, the closer it is to 0, the more similar feature vectors a and b can be said to be. Note, however, that this is merely one example of a feature quantity determination method, and the determination method of step S308 is not limited to it.
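 Read as code, the criterion amounts to computing the inner product and the product of the magnitudes and checking that their difference falls within a predetermined range; the following sketch assumes a small threshold and invented vector values, neither of which is specified in the description.

    import math

    def is_similar(a, b, threshold=0.05):
        """True when |a|*|b| - (a . b) = |a|*|b|*(1 - cos θ) lies within the predetermined range,
        i.e. when the feature vectors a and b point in nearly the same direction."""
        dot = sum(x * y for x, y in zip(a, b))                                        # |a|*|b|*cos θ
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))   # |a|*|b|
        return abs(norms - dot) <= threshold

    # Feature vector of an unknown basic datum vs. that of a stored basic datum (illustrative values).
    unknown_f = [0.80, 0.10, 0.30]
    stored_e  = [0.79, 0.12, 0.31]
    print(is_similar(unknown_f, stored_e))  # True: the directions nearly coincide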
 As described above, according to the present invention, a plurality of basic data relating to various work is acquired, the regularity and correlation inherent in the series of work are determined as feature quantities, and the feature quantities are stored in association with the work; this makes it possible to determine which stored sensor's feature quantity the feature quantity of unknown basic data resembles.
 [Feature quantity determination processing using machine learning]
 FIG. 6 is a flowchart of the case in which the determination processing is performed by machine learning of combinations of past work and feature quantities of the basic data. The configuration is assumed to be equivalent to the apparatus 100 and the computer 200 of FIG. 2.
 First, the storage module 231 of the computer 200 stores combinations of work and the feature quantities of a plurality of basic data in the storage unit 230 as teacher data (step S601). The combinations of work and feature quantities of the basic data may be acquired from another computer or a storage medium, or may be created by the computer 200. Alternatively, the feature quantities of unknown basic data classified in the past, together with the work into which they were classified, may be used as teacher data. A database dedicated to teacher data may also be provided in the storage unit 230.
 Next, the determination module 213 of the computer 200 performs machine learning of the determination method using the teacher data (step S602). Supervised learning is assumed as the machine learning method here. Based on the large amount of teacher data stored in the storage unit 230 by the storage module 231, the determination module 213 learns, for a given set of basic data feature quantities, which work's basic data feature quantities it should judge them to be similar to. The processing of steps S601 and S602 may be skipped when machine learning of the determination method is unnecessary. Also, while the amount of teacher data is still small, accuracy is expected to be low, so it is preferable to use the flow of FIG. 3. The advantage of training the determination module 213 by supervised learning is that, by using the feature quantities of previously classified unknown basic data and the work into which they were classified as teacher data, the determination accuracy can be improved without manual effort. However, since machine learning is expected to take time, the processing of step S602 may be performed during a time period when the load on the work data classification system is low.
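 By way of illustration only, the supervised learning of step S602 could be sketched as follows; the use of scikit-learn, the nearest-neighbour model, and the feature values are assumptions, since the description does not prescribe any particular learning algorithm or library.

    # Hypothetical sketch of steps S601/S602: store (feature quantity, work) pairs as teacher data,
    # train a supervised classifier on them, and apply it to an unknown basic datum.
    from sklearn.neighbors import KNeighborsClassifier

    # Teacher data: feature quantities of stored basic data with their associated work.
    teacher_features = [
        [0.9, 0.1, 0.2],   # data A, work X
        [0.8, 0.2, 0.1],   # data B, work X
        [0.1, 0.9, 0.7],   # data D, work Y
        [0.2, 0.8, 0.6],   # data E, work Y
    ]
    teacher_labels = ["X", "X", "Y", "Y"]

    model = KNeighborsClassifier(n_neighbors=1)
    model.fit(teacher_features, teacher_labels)

    # Feature quantity of an unknown basic datum (e.g. data F).
    print(model.predict([[0.15, 0.85, 0.65]]))  # -> ['Y']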
 The subsequent processing from step S603 to step S609 corresponds to the processing from step S302 to step S308 in FIG. 3, so a description thereof is omitted here.
 As described above, according to the present invention, by using combinations of feature quantities and work as teacher data and having the determination module 213 perform supervised learning, the accuracy of determining which stored sensor's feature quantity the feature quantity of unknown basic data resembles can be improved, and thereby the accuracy of classifying which work the unknown basic data relates to can also be improved.
 [Schedule comparison processing]
 FIG. 7 is a diagram showing the functional blocks of the apparatus 100 and the computer 200 and the relationships among the functions when schedule comparison processing is performed. In addition to the configuration of FIG. 2, the control unit 210 cooperates with the storage unit 230 to implement a schedule storage module 216, and cooperates with the storage unit 230 and the input/output unit 240 to implement a schedule comparison module 217.
 FIG. 8 is a flowchart of the schedule comparison processing. The processing executed by each of the modules described above will be explained along with this flow.
 First, the storage module 231 of the computer 200 stores, in the storage unit 230, work associated with the feature quantities of a plurality of basic data (step S801). Basic data collectively refers to the images, machine logs, and various sensor data acquired from the apparatus 100, and always includes information for specifying the time of the work. The association between work and the feature quantities of the basic data may be acquired from another computer or a storage medium, or may be created by the computer 200, and a dedicated database may be provided in the storage unit 230. The processing of step S801 may be skipped when such associations are already stored and there are no new associations between work and feature quantities of basic data to be stored.
 Next, the schedule storage module 216 of the computer 200 stores the work schedule of the person or device in the storage unit 230 (step S802). Here, the user of the work data classification system may be prompted to enter the schedule via the input/output unit 240, or the schedule data may be received via the communication unit 220.
 The processing from step S803 to step S810 in FIG. 8 corresponds to the processing from step S302 to step S309 in FIG. 3, so a description thereof is omitted here.
 Finally, the schedule comparison module 217 of the computer 200 compares the schedule stored in step S802 with the timeline generated in step S810 (step S309). The timeline comparison is performed for each person or device.
 FIG. 15 is an example of the schedule comparison display of the present invention. Here, a schedule stored in advance is displayed in comparison with a timeline generated from the actual work situation shown in FIG. 12. For example, the 11:30–12:00 portion for hydraulic excavator A is highlighted, and a balloon 1501 indicates that the schedule has been exceeded by 30 minutes. The 11:00–12:00 portion for bulldozer A is displayed with a dotted line, and a balloon 1502 indicates that the schedule has been shortened by 60 minutes. When the generated timeline is delayed relative to the stored schedule, or when the difference is large, it is desirable to emphasize the output so that the user can easily notice it. Examples of emphasized output include displaying in an attention-getting color such as red, blinking, displaying in bold, displaying at a larger size, and sounding a warning tone.
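 The comparison behind balloons 1501 and 1502 can be pictured with the short sketch below, which computes the difference between scheduled and actual end times per machine and flags overruns for emphasized output; the machine names and times mirror the example above, while the function and variable names are assumptions.

    from datetime import datetime

    def delta_minutes(scheduled_end, actual_end):
        """Positive: the work overran the schedule; negative: it finished early."""
        fmt = "%H:%M"
        return (datetime.strptime(actual_end, fmt) - datetime.strptime(scheduled_end, fmt)).total_seconds() / 60

    # Scheduled vs. actual end times per machine (values taken from the example above).
    entries = {
        "hydraulic excavator A": ("11:30", "12:00"),
        "bulldozer A":           ("12:00", "11:00"),
    }

    for machine, (scheduled, actual) in entries.items():
        d = delta_minutes(scheduled, actual)
        if d > 0:
            print(f"{machine}: schedule exceeded by {int(d)} minutes")    # candidate for emphasized output
        elif d < 0:
            print(f"{machine}: schedule shortened by {int(-d)} minutes")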
 As described above, according to the present invention, the generated timeline is compared with the schedule stored in advance, and when the generated timeline is delayed or the difference is large, the result is output with emphasis so that the user can easily understand it, which makes it easier to manage the progress of the work.
 The means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program. The program may be provided, for example, in a form delivered from a computer via a network (SaaS: Software as a Service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM or the like), a DVD (DVD-ROM, DVD-RAM, or the like), or a compact memory. In this case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. The program may also be recorded in advance on a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from that storage device to the computer via a communication line.
 Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. Furthermore, the effects described in the embodiments of the present invention merely list the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
100 apparatus, 200 computer, 300 communication network

Claims (6)

  1.  A work data classification system comprising:
     acquisition means for acquiring a plurality of basic data relating to various work performed by a person or device;
     extraction means for extracting feature quantities of the plurality of basic data;
     storage means for storing the feature quantities and the work in association with each other;
     determination means for determining which of the stored feature quantities of the basic data a feature quantity of unknown basic data resembles;
     classification means for classifying, based on a result of the determination, what work by the person or device the unknown basic data relates to; and
     timeline generation means for determining the content of the work from the classified data and generating, based on a work time associated with the basic data, a timeline or a proportion of work time for each person or device.
  2.  The work data classification system according to claim 1, further comprising:
     schedule storage means for storing a schedule of work performed by the person or device; and
     schedule comparison means for comparing the timeline generated by the timeline generation means with the stored schedule.
  3.  The work data classification system according to claim 1 or claim 2, wherein the determination means performs machine learning on past feature quantities to determine which of the feature quantities of the basic data a feature quantity of the unknown basic data resembles.
  4.  The work data classification system according to any one of claims 1 to 3, wherein the schedule comparison means outputs with emphasis when the generated timeline is delayed relative to the schedule or when the difference from the schedule is large.
  5.  A work data classification method comprising the steps of:
     acquiring a plurality of basic data relating to various work performed by a person or device;
     extracting feature quantities of the plurality of basic data;
     storing the feature quantities and the work in association with each other;
     determining which of the stored feature quantities of the basic data a feature quantity of unknown basic data resembles;
     classifying, based on a result of the determination, what work by the person or device the unknown basic data relates to; and
     determining the content of the work from the classified data and generating, based on a work time associated with the basic data, a timeline or a proportion of work time for each person or device.
  6.  A program for causing a work data classification system to execute the steps of:
     acquiring a plurality of basic data relating to various work performed by a person or device;
     extracting feature quantities of the plurality of basic data;
     storing the feature quantities and the work in association with each other;
     determining which of the stored feature quantities of the basic data a feature quantity of unknown basic data resembles;
     classifying, based on a result of the determination, what work by the person or device the unknown basic data relates to; and
     determining the content of the work from the classified data and generating, based on a work time associated with the basic data, a timeline or a proportion of work time for each person or device.
PCT/JP2018/018380 2018-05-11 2018-05-11 Operation data classification system, operation data classification method, and program WO2019215924A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/018380 WO2019215924A1 (en) 2018-05-11 2018-05-11 Operation data classification system, operation data classification method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/018380 WO2019215924A1 (en) 2018-05-11 2018-05-11 Operation data classification system, operation data classification method, and program

Publications (1)

Publication Number Publication Date
WO2019215924A1 true WO2019215924A1 (en) 2019-11-14

Family

ID=68467367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/018380 WO2019215924A1 (en) 2018-05-11 2018-05-11 Operation data classification system, operation data classification method, and program

Country Status (1)

Country Link
WO (1) WO2019215924A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021131998A1 (en) * 2018-12-27 2021-07-01 株式会社クボタ Waterworks management system, notification device, waterworks management method, and program
US20220215570A1 (en) * 2021-01-04 2022-07-07 Kabushiki Kaisha Toshiba Progress determination system, progress determination method, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003296782A (en) * 2002-03-29 2003-10-17 Casio Comput Co Ltd Device and program for recording action
JP2010161991A (en) * 2009-01-16 2010-07-29 Fujitsu Ltd Work recording device, work recording system, and work recording program
JP2011085990A (en) * 2009-10-13 2011-04-28 Fujitsu Ltd Program, device, and method for managing work
US20150262467A1 (en) * 2010-09-30 2015-09-17 Fitbit, Inc. Methods and Systems for Generation and Rendering Interactive Events Having Combined Activity and Location Information
JP2016058029A (en) * 2014-09-12 2016-04-21 株式会社東芝 Behavior analyzing apparatus, behavior analyzing method and program


Similar Documents

Publication Publication Date Title
US11483268B2 (en) Content navigation with automated curation
US20210200423A1 (en) Information processing apparatus, method, and non-transitory computer readable medium that controls a representation of a user object in a virtual space
US9582719B2 (en) Geographical area condition determination
JP6300792B2 (en) Enhancing captured data
US11430211B1 (en) Method for creating and displaying social media content associated with real-world objects or phenomena using augmented reality
JP6474946B1 (en) Image analysis result providing system, image analysis result providing method, and program
US11954536B2 (en) Data engine
US20230146563A1 (en) Automated image processing and insight presentation
WO2019215924A1 (en) Operation data classification system, operation data classification method, and program
US11086928B2 (en) Composable templates for managing disturbing image and sounds
JP6447992B2 (en) Image management apparatus and control method thereof
WO2019026166A1 (en) Work data classification system, work data classification method, and program
US20210240925A1 (en) Electronic device and operation method thereof
KR20230056498A (en) Apparatus and method for managing defects of apartment houses
WO2024089999A1 (en) Program, method, information processing device, and system
CN111428613A (en) Data processing method, device, equipment and storage medium
US20180188912A1 (en) Information processing apparatus and information processing method
KR102647904B1 (en) Method, system, and computer program for classify place review images based on deep learning
JP2023158433A (en) Analysis apparatus, information provision system, information processing system, program, and information provision method
US9201954B1 (en) Machine-assisted publisher classification

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 18918213

Country of ref document: EP

Kind code of ref document: A1