WO2023008540A1 - Wearable information processing device and program - Google Patents


Info

Publication number
WO2023008540A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
situation
map
accident
report
Prior art date
Application number
PCT/JP2022/029209
Other languages
French (fr)
Japanese (ja)
Inventor
晶 鈴木
洋平 鈴木
Original Assignee
晶 鈴木
洋平 鈴木
Priority date
Filing date
Publication date
Application filed by 晶 鈴木, 洋平 鈴木
Priority to JP2022576445A (patent JP7239126B1)
Publication of WO2023008540A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 Alarms for ensuring the safety of persons
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01 Alarm systems characterised by the transmission medium
    • G08B 25/04 Alarm systems using a single signalling line, e.g. in a closed loop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 11/00 Telephonic communication systems specially adapted for combination with other electrical systems

Definitions

  • The present invention relates to a wearable information processing device and a program that generate a situation report of an accident or incident using information such as camera video from a wearable device.
  • In Patent Literature 1, by storing camera video from a drive recorder, the video can be used for post-event analysis of the cause of an accident and the like.
  • An object of the present invention is to provide a wearable information processing device and a program that use wearable information, such as camera video from a wearable device worn by a person, to automatically generate a situation report of an accident or incident in which the person is involved.
  • A device of the present invention is a wearable information processing device that processes information including camera video from a wearable device worn by a person. It is connected via a network to a video processing device that stores the camera video of the wearable device as video data, a map processing device that stores map data, and a mobile terminal device that can collect situation data including time data and GPS data at the time of occurrence of an accident or incident.
  • The wearable information processing device includes a situation data acquisition unit that acquires the situation data from the mobile terminal device, a video data acquisition unit that acquires video data based on the time data from the video processing device, a map data acquisition unit that acquires map data based on the GPS data from the map processing device, a schematic situation diagram generation unit that generates a schematic situation diagram based on at least the video data and the map data, and a situation report generation unit that generates a situation report of the accident or incident including the schematic situation diagram.
  • According to the wearable information processing device of the present invention, it is possible to automatically generate a situation report of an accident or incident based on camera video from a wearable device worn by a person.
  • Not only a situation report of an accident in which the person is involved but also a situation report of an incident such as a bag snatching can be generated automatically.
  • Since video data can be obtained from the time data at which the accident or incident occurred and map data can be obtained from the GPS data, even a schematic situation diagram can be generated automatically, and an on-the-spot situation report that accurately conveys the situation at the scene can be generated.
  • The camera video is video of the surroundings of the person wearing the wearable device.
  • Since video of the surroundings of the person wearing the wearable device is used as the camera video, it is possible to know, for example, from which direction a bicycle collided in the case of a collision accident, or from which direction a snatcher approached in the case of an incident.
  • The schematic situation diagram generation unit detects an image of the target that caused the accident or incident from the video data acquired by the video data acquisition unit, replaces the target image with a predetermined symbol, and generates a schematic site situation diagram in which the symbol is shown at its position on the map. A map of the vicinity of the site including the position of the GPS data is generated from the map data acquired by the map data acquisition unit, and a schematic situation diagram is generated from the schematic site situation diagram and the map of the vicinity of the site.
  • The target that caused the accident or incident is, for example, a vehicle such as a bicycle, a motorcycle, or a car, or a person such as a pedestrian or a runner.
  • Since the target image is replaced with a predetermined symbol and the schematic situation diagram can be generated automatically from the schematic site situation diagram showing the target's position on the map and the map of the vicinity of the site, the work of editing video images into a schematic situation diagram can be saved compared with simply pasting the video image into the situation report.
  • A situation report including a schematic situation diagram that can be sent to the insurance company or the police without editing is generated automatically, so even a person who is upset about being involved in an accident or incident can accurately report the situation.
  • Since a schematic situation diagram can be generated automatically by replacing the images of the person or vehicle that caused the accident or incident with predetermined symbols, the diagram can be directly collated with the schematic situation diagrams of court decisions and the like, and collation accuracy can be greatly improved compared with attaching the images of the person or vehicle as they are.
  • The wearable device is connected to the mobile terminal device, and the video processing device receives the camera video from the wearable device via the mobile terminal device and stores it as a plurality of video data, one for each predetermined video time.
  • The video data acquisition unit acquires, from among the plurality of video data, video data including at least the video at the time indicated by the time data.
  • Since the camera video from the wearable device is divided and stored as a plurality of video data for each predetermined video time, it is easy to identify the video before and after the time of occurrence.
  • The schematic situation diagram generation unit determines whether a plurality of images of persons or vehicles that are target candidates have been detected from the video data acquired by the video data acquisition unit. If it determines that only one image of a person or vehicle that is a target candidate has been detected, it specifies that image as the target that caused the accident or incident and generates the schematic site situation diagram, the map of the vicinity of the site, and the schematic situation diagram.
  • When it determines that a plurality of images of persons or vehicles that are target candidates have been detected, it compares the video data with earlier video data to detect the position and movement of each target candidate, generates a target candidate diagram, identifies the target that caused the accident or incident from among the plurality of images based on the positions and movements in the target candidate diagram, and then generates the schematic site situation diagram, the map of the vicinity of the site, and the schematic situation diagram.
  • In this way, the position and movement of each person or vehicle that is a target candidate can be detected, and the target can be specified based on that position and movement, so the target can be identified more accurately than when it is specified simply from the difference in size of the candidates in the image.
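A minimal sketch of how such target identification might work, assuming hypothetical helper types: each candidate detected in the current and previous frames carries an image position and an estimated distance, and the candidate closing in on the wearer fastest is chosen. None of the names below come from the patent; they are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A person or vehicle detected in a frame (hypothetical structure)."""
    label: str          # e.g. "bicycle", "pedestrian"
    x: float            # horizontal position in the frame (pixels)
    distance: float     # estimated distance from the wearer (metres)

def identify_target(current: list[Candidate], previous: list[Candidate]) -> Candidate:
    """Pick the candidate most likely to have caused the accident or incident.

    If only one candidate is detected, it is the target. Otherwise candidates
    in the current frame are matched to the previous frame by label and
    proximity, and the one approaching the wearer fastest (largest decrease
    in estimated distance) is selected.
    """
    if len(current) == 1:
        return current[0]

    def closing_speed(cand: Candidate) -> float:
        # Match against the nearest previous candidate with the same label.
        matches = [p for p in previous if p.label == cand.label]
        if not matches:
            return 0.0
        prev = min(matches, key=lambda p: abs(p.x - cand.x))
        return prev.distance - cand.distance  # positive = approaching

    return max(current, key=closing_speed)
```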
  • The portable terminal device is provided with an impact sensor for detecting an impact, and when the impact sensor detects an impact, an accident case selection screen on which the user can select whether an accident has occurred, an incident has occurred, or neither is displayed on the display unit. The situation data acquisition unit acquires from the portable terminal device situation data including the information that the occurrence of an accident or the occurrence of an incident has been selected.
  • Even if an impact is detected, there are cases where no accident or incident has occurred, such as when the portable terminal device is simply dropped. Since the situation data acquired from the portable terminal device includes the information that the occurrence of an accident or an incident was selected, a situation report is not generated automatically when neither has occurred, and wasteful generation of situation reports can be suppressed.
  • The situation report generation unit determines whether the event is an accident or an incident based on the situation data from the portable terminal device. If it determines that the event is an accident, it generates a situation report from a predetermined accident occurrence situation report form, and if it determines that the event is an incident, it generates a situation report from a predetermined incident occurrence situation report form. According to this aspect, a situation report can be generated automatically in the form appropriate to each accident or incident, which saves the trouble of recreating the situation report later in a predetermined form.
  • A judicial precedent database that stores precedent data including schematic situation diagrams is provided, and the schematic situation diagram generated by the schematic situation diagram generation unit is compared with the schematic situation diagrams included in the precedents to select a precedent with a similar schematic situation diagram.
  • A percentage-of-fault report generation unit extracts the percentage of fault from the selected precedent and generates a percentage-of-fault report including the schematic situation diagram of the selected precedent and the percentage of fault.
  • The precedent database is divided into an accident precedent database and an incident precedent database. If the event is determined to be an accident, a precedent with a similar schematic situation diagram is selected from the accident precedent database, and if it is determined to be an incident, a precedent with a similar schematic situation diagram is selected from the incident precedent database.
  • In this way, appropriate precedents, and in the case of an incident appropriate crime types (articles, etc.), can be selected according to the case.
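A hedged sketch of selecting a precedent with a similar schematic situation diagram, assuming each diagram has already been reduced to a numeric feature vector (for example, relative positions and headings of the symbols). The database layout and field names are assumptions, not taken from the patent.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_precedent(diagram_features: list[float], precedent_db: list[dict]) -> dict:
    """Return the precedent whose schematic diagram is most similar.

    `precedent_db` is assumed to hold records such as
    {"case_id": ..., "features": [...], "fault_percentage": (80, 20)}.
    """
    return max(precedent_db,
               key=lambda rec: cosine_similarity(diagram_features, rec["features"]))

def generate_fault_report(diagram_features, accident_db, incident_db, is_accident):
    """Pick the database by the accident/incident determination, then match."""
    db = accident_db if is_accident else incident_db
    best = select_precedent(diagram_features, db)
    return {"matched_case": best["case_id"],
            "fault_percentage": best["fault_percentage"]}
```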
  • A program of the present invention is a program for causing a computer to execute the situation report generation processing of an accident or incident performed by the wearable information processing device.
  • The wearable information processing device is connected via a network to a video processing device that stores the camera video of a wearable device worn by a person as video data, a map processing device that stores map data, and a mobile terminal device that can collect situation data including time data and GPS data when an accident or incident occurs.
  • The situation report generation processing includes a step of acquiring situation data from the mobile terminal device, a step of acquiring video data based on the time data from the video processing device, a step of acquiring map data based on the GPS data from the map processing device, a step of generating a schematic situation diagram based on at least the video data and the map data, and a step of generating a situation report including the schematic situation diagram.
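An illustrative skeleton of the claimed processing steps, with placeholder functions and method names that are assumptions rather than the patent's actual interfaces.

```python
def build_schematic_situation_diagram(video, site_map):
    """Placeholder: the real device overlays symbols for the detected target on the map."""
    return {"map": site_map, "frame": video}

def build_report(situation, diagram):
    """Placeholder: fills a report form with the situation data and the diagram."""
    return {"situation": situation, "schematic_diagram": diagram}

def generate_situation_report(mobile_terminal, video_processor, map_processor):
    """Run the claimed steps in order (sketch only)."""
    # Step 1: situation data (time data, GPS data, user selections) from the mobile terminal device
    situation = mobile_terminal.get_situation_data()
    # Step 2: video data covering the time of occurrence, based on the time data
    video = video_processor.get_video_data(situation["time"])
    # Step 3: map data around the position indicated by the GPS data
    site_map = map_processor.get_map_data(situation["gps"])
    # Step 4: schematic situation diagram from at least the video data and the map data
    diagram = build_schematic_situation_diagram(video, site_map)
    # Step 5: situation report including the schematic situation diagram
    return build_report(situation, diagram)
```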
  • According to the present invention, a situation report of an accident or incident can be automatically generated based on wearable information such as camera video from a wearable device worn by a person.
  • FIG. 3 is a data table showing a specific example of video data, and FIG. 4 is a data table showing a specific example of user data. FIG. 5 is a flowchart for explaining the processing of the entire system of the first embodiment. FIG. 6 is a diagram showing a specific example of the accident case selection screen, and FIG. 7 is a diagram showing a specific example of the report instruction screen.
  • FIG. 10 is a flowchart showing a specific example of the situation report generation processing of FIG. 5, and FIG. 11 is a flowchart showing a specific example of the schematic situation diagram generation processing of FIG. 10. FIG. 12 is a diagram showing a specific example of the video data selected in the schematic situation diagram generation processing, FIG. 13 is a diagram showing a specific example of the schematic site situation diagram, FIG. 14 is a diagram showing a specific example of the near-site map, and FIG. 15 is a diagram showing a specific example of the schematic situation diagram.
  • Further drawings include: a flowchart for explaining the processing of the entire system of the second embodiment; a flowchart showing a specific example of the fault-percentage report generation processing; diagrams showing specific examples of the forms of the accident fault-percentage report, the incident fault-percentage report, and the case constituent element report; diagrams showing other specific examples of the accident case selection screen and the report instruction screen; a flowchart showing a specific example of the target specifying processing according to a modification; a diagram showing an example of images in the target specifying processing, where (a) is an omnidirectional image and (b) is an image generated from the omnidirectional image; a diagram showing a specific example of a target candidate diagram; and a diagram showing a specific example of a target candidate verification diagram.
  • FIG. 1 is a diagram showing the configuration of a wearable information processing system 100 according to the first embodiment.
  • a wearable information processing system 100 of FIG. 1 includes a wearable information processing device 10 , a mobile terminal device 20 , a video processing device 40 and a map processing device 50 .
  • The wearable information processing device 10 is configured to be able to communicate with the mobile terminal device 20, the video processing device 40, and the map processing device 50 via a network N such as the Internet.
  • the wearable information processing device 10, the video processing device 40, and the map processing device 50 may be composed of a personal computer or a cloud server.
  • Each of the wearable information processing device 10, the image processing device 40, and the map processing device 50 may be configured to perform distributed processing by a plurality of devices, or may be configured by a plurality of virtual machines provided in one server device.
  • the mobile terminal device 20 is a portable information processing device used by the user.
  • the mobile terminal device 20 is, for example, a smart phone, a tablet, a PDA (Personal Digital Assistant), or the like. Although the case where one mobile terminal device 20 is connected to the network N is illustrated, two or more mobile terminal devices 20 may be connected.
  • a wearable device 30 can be connected to the mobile terminal device 20 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark).
  • the wearable device 30 is equipped with a camera 34 and can be worn on a person's body to take camera images of the person's surroundings.
  • a wearable device 30 exemplified in the illustration of FIG. 1 is an ear hook type that is worn on the ear, and is provided with a camera 34 capable of capturing omnidirectional camera images.
  • the wearable information processing device 10 automatically generates a status report of an accident or incident based on wearable information (camera image around the body, etc.).
  • the wearable information processing device 10 of the first embodiment is configured by a server computer having the mobile terminal device 20 as a client.
  • the video processing device 40 receives the camera video from the wearable device 30 via the mobile terminal device 20 and stores it as video data.
  • the map processing device 50 stores map data, and transmits map data of the vicinity of the site including the latitude and longitude of the received GPS data to the wearable information processing device 10 .
  • FIG. 2 is a block diagram of the wearable information processing system 100 of FIG.
  • the mobile terminal device 20 shown in FIG. 2 includes a communication section 21, a control section 22, a storage section 23, a camera 24, a microphone 25, a sensor section 26, an input section 27, a display section 28, and the like.
  • The control unit 22, the communication unit 21, the storage unit 23, the camera 24, the microphone 25, the sensor unit 26, the input unit 27, and the display unit 28 are each connected to the bus line 20L and can exchange data (information) with each other.
  • the communication unit 21 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 and the video processing device 40 .
  • the communication unit 21 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark).
  • the communication unit 21 also transmits and receives data (information) to and from the wearable device 30 .
  • the communication unit 21 can be connected to the wearable device 30 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark).
  • the control unit 22 comprehensively controls the mobile terminal device 20 as a whole.
  • the control unit 22 is composed of an integrated circuit such as an MPU (Micro Processing Unit).
  • the control unit 22 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), and a ROM (Read Only Memory).
  • The control unit 22 reads necessary programs stored in the ROM and executes them using the RAM as a work area, thereby performing various processes.
  • the storage unit 23 is an example of a storage medium (computer-readable tangible storage medium: a tangible storage medium) that stores various programs executed by the control unit 22 and data used by these programs.
  • the storage unit 23 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk.
  • the configuration of the storage unit 23 is not limited to these, and the storage unit 23 may be configured by a semiconductor memory such as RAM or flash memory.
  • the storage unit 23 can be configured with an SSD (Solid State Drive).
  • the storage unit 23 includes a program storage unit 231, a data storage unit 232, and the like.
  • a plurality of application programs to be executed by the control unit 22 are stored in the program storage unit 231 .
  • Program storage unit 231 stores a dedicated application program for performing wearable information processing in cooperation with wearable device 30 , wearable information processing device 10 , and video processing device 40 .
  • This dedicated application program is a program that can be downloaded to the mobile terminal device 20 via the network N.
  • the data storage unit 232 stores various data used by application programs.
  • the control unit 22 includes a sensor data acquisition unit 221, an impact detection unit 222, an impact data collection unit 223, a situation data collection unit 224, and the like.
  • the sensor data acquisition unit 221 acquires sensor data (such as acceleration data) from the sensor unit 26 .
  • the sensor unit 26 includes an impact sensor (for example, an acceleration sensor) that detects impact of the mobile terminal device 20 .
  • the impact detection unit 222 detects impact of the mobile terminal device 20 based on sensor data (acceleration data, etc.) acquired by the sensor data acquisition unit 221 . For example, acceleration data is monitored based on sensor data, and impact is detected when the acceleration exceeds a predetermined value. This predetermined value of acceleration may be changeable by user input.
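A minimal sketch of the threshold-based impact detection described above. The 3-axis sample format and the default threshold value are assumptions for illustration; the patent only states that an impact is detected when the acceleration exceeds a user-changeable predetermined value.

```python
import math

def detect_impact(sample: tuple[float, float, float],
                  threshold_g: float = 3.0) -> bool:
    """Return True when the magnitude of a 3-axis acceleration sample
    (in units of g) exceeds the predetermined threshold.

    threshold_g is a placeholder default; the patent allows the user to
    change this value from the mobile terminal device.
    """
    ax, ay, az = sample
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude > threshold_g

# Example: a sharp spike well above normal gravity (about 1 g) triggers detection.
assert detect_impact((0.1, 4.8, 0.3)) is True
assert detect_impact((0.0, 1.0, 0.1)) is False
```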
  • the impact data collection unit 223 collects time data at the time of the impact (when the impact is detected), GPS data at the time of impact, etc. as impact data.
  • the control unit 22 transmits the impact data collected by the impact data collection unit 223 to the wearable information processing device 10 through the communication unit 21 .
  • When the wearable information processing device 10 receives the impact data, it transmits a push notification to the mobile terminal device 20.
  • the situation data collection unit 224 can collect situation data by displaying an accident case selection screen, a situation input screen, and the like on the display unit 28 by push notification.
  • The accident case selection screen is a screen on which the user can select, for example, "occurrence of an accident", "occurrence of an incident", or "neither".
  • the situation input screen is a screen on which situation explanations and detailed information can be input, such as the appearance of the other party, the signal situation, and what kind of accident or incident has occurred. This situation input screen may be configured to allow input in the form of a questionnaire.
  • the data (information) input from the input unit 27 through the accident case selection screen, the situation input screen, etc. are collected as situation data.
  • the situation data includes impact data such as acceleration data, time data, and GPS data at the time of impact.
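To make the contents of the situation data concrete, here is a sketch of how it might be packaged on the mobile terminal before being sent to the wearable information processing device. The field names are assumptions; the patent only lists the kinds of data included (time, GPS, accident/incident selection, questionnaire input, sensor and weather data).

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SituationData:
    """Situation data collected at the time of impact (illustrative field names)."""
    impact_time: datetime                 # time data at the time of impact
    gps: tuple[float, float]              # (latitude, longitude) at the time of impact
    selection: str                        # "accident", "incident", or "neither"
    questionnaire: dict[str, str] = field(default_factory=dict)  # user-entered details
    acceleration: tuple[float, float, float] | None = None       # sensor data at impact
    weather: str | None = None            # weather data at the time of impact

example = SituationData(
    impact_time=datetime(2022, 7, 28, 10, 3, 12),
    gps=(35.6812, 139.7671),
    selection="accident",
    questionnaire={"signal": "green", "other_party": "bicycle"},
)
```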
  • the input unit 27 is, for example, a touch panel or buttons.
  • the input unit 27 may be an external input device (keyboard, mouse, etc.) connected to the mobile terminal device 20 via Bluetooth (registered trademark).
  • the display unit 28 is a liquid crystal display, an organic EL display, or the like, and displays various information according to instructions from the control unit 22 .
  • the display unit 28 here exemplifies a liquid crystal display with a touch panel.
  • the buttons of the input section 27 may be buttons displayed on the display section 28 .
  • a wearable device 30 in FIG. 2 includes a communication unit 31, a camera 34, and the like.
  • the communication unit 31 transmits and receives data (information) to and from the mobile terminal device 20 .
  • the communication unit 31 can be connected to the mobile terminal device 20 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark).
  • the camera 34 is attached to the human body and can take camera images of the surroundings (360 degrees horizontally).
  • the camera 34 is composed of a CCD (Charge Coupled Device) camera or the like capable of taking an omnidirectional camera image.
  • the camera 34 may not necessarily be an omnidirectional camera.
  • Any wearable device 30 may be used as long as it has a camera 34 capable of capturing camera images of the surroundings including the front, rear, left, and right of the human body.
  • a plurality of cameras 34 may be used to capture camera images around the body.
  • the type of wearable device 30 is not limited to the ear hook type shown in the figure.
  • As long as a camera 34 capable of capturing images of the surroundings of the human body is attached, the wearable device may be of any type, such as a glasses type, a headphone type, or a neck type.
  • the camera 34 is not limited to a CCD camera, and may be a web camera, an IoT camera, or the like.
  • When the wearable device 30 is connected to the mobile terminal device 20, it activates the camera 34 and transmits the camera video to the mobile terminal device 20.
  • the mobile terminal device 20 transmits camera video from the wearable device 30 to the video processing device 40 .
  • the video processing device 40 includes a communication section 41, a control section 42, a storage section 43, and the like.
  • the control unit 42, the communication unit 41, and the storage unit 43 are each connected to the bus line 40L, and can exchange data (information) with each other.
  • the communication unit 41 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 and the mobile terminal device 20 .
  • the communication unit 41 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark).
  • the control unit 42 comprehensively controls the entire video processing device 40 .
  • the control unit 42 is composed of an integrated circuit such as an MPU.
  • the control unit 42 includes a CPU, RAM, and ROM.
  • The control unit 42 reads a necessary program stored in the ROM and executes it using the RAM as a work area, thereby performing various processes.
  • the storage unit 43 is an example of a storage medium that stores various programs executed by the control unit 42 and data used by these programs.
  • the storage unit 43 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk.
  • the configuration of the storage unit 43 is not limited to these, and the storage unit 43 may be configured by a semiconductor memory such as RAM or flash memory.
  • the storage unit 43 can be configured with an SSD (Solid State Drive).
  • The storage unit 43 stores the camera video from the wearable device 30, received via the mobile terminal device 20, as a plurality of video data 432, one for each predetermined video time. Specifically, the storage unit 43 stores the plurality of video data as in the data table of FIG. 3, in which a user ID, video data, video date and time, video time, location, and the like are associated and stored.
  • The video date and time in FIG. 3 are the shooting date and video start time of the video data 432. Since FIG. 3 illustrates a case where the video time is set to 5 minutes, video data is stored for every 5 minutes.
  • the video time can be set, and is not limited to 5 minutes, and may be set any time such as every minute or every few seconds.
  • the video time can be freely changed by user input from the mobile terminal device 20 .
  • the video time may be changed according to the user's speed.
  • the user's speed can be detected by the acceleration sensor of the mobile terminal device 20 .
  • For example, the video time can be changed depending on whether the user is walking or riding a bicycle, and the faster the user moves, the shorter the video time can be made. In this way, the video data at the time of impact does not become too long or, conversely, too short, and video data of the optimum video time corresponding to the user's speed can be obtained.
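A sketch of choosing the segment (video time) length from the user's estimated speed, under the assumption that faster movement warrants shorter segments. The speed bands and durations are illustrative only; the patent just states that the video time can be shortened as the user's speed increases.

```python
def segment_length_seconds(speed_m_per_s: float) -> int:
    """Map an estimated speed to a video segment length (assumed values).

    Walking keeps the default 5-minute segments from FIG. 3; cycling and
    faster movement use progressively shorter segments so the stored
    segment around the impact is neither too long nor too short.
    """
    if speed_m_per_s < 2.5:      # roughly walking pace
        return 300               # 5 minutes
    if speed_m_per_s < 8.0:      # roughly cycling pace
        return 120               # 2 minutes
    return 60                    # 1 minute for faster movement
```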
  • the location is the shooting location of the video data, and stores, for example, the location specified from the GPS data at the start of the video.
  • the video data 432 is not limited to the data table shown in FIG. 3, and the items of the data table are not limited to those shown in FIG.
  • the storage unit 43 includes a program storage unit, a data storage unit, and the like (not shown). Programs executed by the control unit 42 are stored in the program storage unit. Various data used by the program are stored in the data storage unit.
  • the storage unit 43 may be composed of a hard disk, or may be composed of a semiconductor memory such as a flash memory.
  • the storage unit 43 can be configured with an SSD.
  • the control unit 42 includes a time data acquisition unit 421 and a video data selection unit 422 .
  • the time data acquisition unit 421 acquires time data at the time of impact from the wearable information processing device 10 .
  • The video data selection unit 422 selects, from among the plurality of video data as shown in FIG. 3, video data including at least the video at the time of impact, based on the time data at the time of impact.
  • the acquired video data is transmitted to the wearable information processing device 10 via the communication unit 41 .
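A sketch of how the video data selection unit might pick the stored segment that contains the impact time, plus its neighbours, from a table like FIG. 3. The record layout is an assumption.

```python
from datetime import datetime, timedelta

def select_segments(segments: list[dict], impact_time: datetime,
                    include_neighbours: bool = True) -> list[dict]:
    """Return the segment whose [start, start + duration) interval contains
    the impact time and, optionally, the segments just before and after.

    Each record is assumed to look like
    {"video_id": ..., "start": datetime, "duration_s": 300}.
    """
    hits = []
    for i, seg in enumerate(segments):
        end = seg["start"] + timedelta(seconds=seg["duration_s"])
        if seg["start"] <= impact_time < end:
            hits.append(seg)
            if include_neighbours:
                if i > 0:
                    hits.insert(0, segments[i - 1])
                if i + 1 < len(segments):
                    hits.append(segments[i + 1])
            break
    return hits
```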
  • the map processing device 50 includes a communication section 51, a control section 52, a storage section 53, and the like.
  • the control unit 52, the communication unit 51, and the storage unit 53 are each connected to the bus line 50L, and can exchange data (information) with each other.
  • the communication unit 51 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 .
  • the communication unit 51 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark).
  • the control unit 52 comprehensively controls the map processing device 50 as a whole.
  • the control unit 52 is composed of an integrated circuit such as an MPU.
  • the control unit 52 includes a CPU, RAM, and ROM.
  • The control unit 52 reads a necessary program stored in the ROM and executes it using the RAM as a work area, thereby performing various processes.
  • the storage unit 53 is an example of a storage medium that stores various programs executed by the control unit 52 and data used by these programs.
  • the storage unit 53 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk.
  • the configuration of the storage unit 53 is not limited to these, and the storage unit 53 may be configured by semiconductor memory such as RAM and flash memory.
  • the storage unit 53 can be configured with an SSD (Solid State Drive).
  • the storage unit 53 stores map data 532 and the like.
  • the storage unit 53 includes a program storage unit, a data storage unit, and the like (not shown).
  • Programs executed by the control unit 52 are stored in the program storage unit.
  • Various data used by the program are stored in the data storage unit.
  • the storage unit 53 may be composed of a hard disk, or may be composed of a semiconductor memory such as a flash memory.
  • the storage unit 53 can be configured with an SSD.
  • the control unit 52 includes a GPS data acquisition unit 521 and a map data selection unit 522.
  • the GPS data acquisition unit 521 acquires GPS data at the time of impact from the wearable information processing device 10 .
  • The map data selection unit 522 acquires map data including at least a map of the position indicated by the latitude and longitude of the acquired GPS data. The acquired map data is transmitted to the wearable information processing device 10 via the communication unit 51.
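A sketch of selecting map data for a predetermined range around the impact coordinates. The conversion from metres to degrees uses the standard approximation of about 111 km per degree of latitude; the radius value is an assumption (the patent says the range is preset and user-changeable).

```python
import math

def bounding_box(lat: float, lon: float, radius_m: float = 100.0):
    """Return (min_lat, min_lon, max_lat, max_lon) for a square area of
    roughly `radius_m` metres around the impact position."""
    dlat = radius_m / 111_000.0                                   # ~111 km per degree of latitude
    dlon = radius_m / (111_000.0 * math.cos(math.radians(lat)))   # longitude shrinks with latitude
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)

# Example: a ~100 m box around a GPS fix.
print(bounding_box(35.6812, 139.7671))
```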
  • the wearable information processing device 10 in FIG. 2 includes a communication unit 11, a control unit 12, and a storage unit 14.
  • The communication unit 11, the control unit 12, and the storage unit 14 are each connected to the bus line 10L and can exchange data (information) with each other.
  • the communication unit 11 is connected to the network N by wire or wirelessly, and transmits and receives data (information) among the mobile terminal device 20, the video processing device 40, and the map processing device 50.
  • the communication unit 11 functions as a communication interface for the Internet or an intranet, and is capable of communication using TCP/IP, Wi-Fi (registered trademark), Bluetooth (registered trademark), or the like.
  • the control unit 12 comprehensively controls the wearable information processing apparatus 10 as a whole.
  • the control unit 12 is composed of an integrated circuit such as an MPU.
  • the control unit 12 includes a CPU, RAM, and ROM.
  • The control unit 12 reads a necessary program stored in the ROM and executes it using the RAM as a work area, thereby performing various processes.
  • the storage unit 14 is a computer-readable storage medium that stores various programs executed by the control unit 12 and data used by these programs.
  • the storage unit 14 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk.
  • the configuration of the storage unit 14 is not limited to these, and the storage unit 14 may be configured by a semiconductor memory such as RAM or flash memory.
  • the storage unit 14 can be configured with an SSD (Solid State Drive).
  • The storage unit 14 includes a program storage unit 15 and a data storage unit 16. Programs to be executed by the control unit 12 are stored in the program storage unit 15.
  • The data storage unit 16 stores various data used by the programs.
  • the control unit 12 reads necessary programs from the program storage unit 15 and executes various processes.
  • the data storage unit 16 stores user data 161, acquired data 162, generated data 163, and the like. In addition to the above, the data storage unit 16 stores a form of an accident occurrence status report 126A illustrated in FIG. 8, a form of an incident occurrence status report 126B illustrated in FIG. 9, and the like.
  • the user data 161 stores user ID, name, telephone number, address, type of wearable device, report destination, etc., as in the data table of FIG. 4, for example. These user data 161 are pre-stored as previously input by the user. “Type of wearable device” is the type of wearable device 30 connected to mobile terminal device 20 .
  • In the "wearable device type" item in FIG. 4, types such as ear hook type, glasses type, headphone type, and neck type are stored.
  • The "report destination" is an insurance company, the police, or the like to which the occurrence of an accident or incident is reported.
  • the situation report and data automatically generated by the wearable information processing device 10 can be automatically sent from the mobile terminal device 20 according to the user's instruction (for example, clicking a button on the application).
  • the user data 161 is not limited to the data table shown in FIG. 4, and the items of the data table are not limited to those shown in FIG.
  • the acquired data 162 is various data acquired by the wearable information processing device 10 via the communication unit 11 .
  • the acquired data 162 includes video data 432 acquired from the video processing device 40, situation data acquired from the portable terminal device 20, map data 532 acquired from the map processing device 50, and the like.
  • the generated data 163 is data generated by the wearable information processing device 10 .
  • the generated data 163 includes situation schematic data and situation report data generated by the wearable information processing device 10 .
  • the control unit 12 includes a video data acquisition unit 121, a situation data acquisition unit 122, a map data acquisition unit 123, a schematic situation diagram generation unit 125, and a situation report generation unit 126.
  • the video data acquisition unit 121 acquires video data (for example, video data at the time of impact) from the video processing device 40 via the communication unit 11 .
  • the situation data acquisition unit 122 acquires situation data (for example, situation data including time data at the time of impact and GPS data) from the mobile terminal device 20 via the communication unit 11 .
  • the map data acquisition unit 123 acquires map data (for example, map data at the time of impact) from the map processing device 50 via the communication unit 11 .
  • the schematic situation diagram generation unit 125 generates a schematic situation diagram based on at least the video data acquired by the video data acquisition unit 121 and the map data by the map data acquisition unit 123 .
  • the schematic situation map includes, for example, a schematic accident situation map in the case of an accident and a schematic incident situation map in the case of an incident.
  • the schematic situation diagram generation unit 125 automatically generates a schematic accident situation diagram in the case of an accident, and automatically generates a schematic incident situation diagram in the case of an incident.
  • the schematic situation diagram generated by the schematic situation diagram generation unit 125 is stored as generated data 163 in the data storage unit 16 .
  • the situation report generation unit 126 generates a situation report including a schematic diagram of the situation.
  • the situation report includes, for example, an accident occurrence situation report in the case of an accident and an incident occurrence situation report in the case of an incident.
  • The situation report generation unit 126 automatically enters the schematic accident situation diagram into the "schematic diagram of the accident situation" column in the form of the accident occurrence situation report 126A in FIG. 8.
  • Based on the situation data from the mobile terminal device 20, the situation report generation unit 126 also automatically enters the relevant data in the corresponding columns (the columns for the detailed accident information and the accident situation explanation) of the form of the accident occurrence situation report 126A in FIG. 8.
  • In this way, the situation report generation unit 126 automatically generates the accident occurrence situation report 126A as shown in FIG. 8.
  • Similarly, the situation report generation unit 126 automatically enters the schematic incident situation diagram into the incident situation diagram column of the form of the incident occurrence situation report 126B in FIG. 9. Based on the situation data from the mobile terminal device 20, the situation report generation unit 126 automatically enters the relevant data in the corresponding columns (the columns for the detailed incident information and the incident situation explanation) of the form of the incident occurrence situation report 126B shown in FIG. 9. In this way, the situation report generation unit 126 automatically generates an incident occurrence situation report 126B as shown in FIG. 9.
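A sketch of this form-filling step: the form is chosen by the accident/incident determination and its fields are populated from the situation data and the generated diagram. The field and form names below are illustrative assumptions, not the actual forms 126A/126B.

```python
def fill_situation_report(situation: dict, diagram_path: str) -> dict:
    """Choose a report form based on the accident/incident selection and
    fill in the fields that can be derived automatically (sketch only)."""
    is_accident = situation.get("selection") == "accident"
    report = {
        "form": "accident_occurrence_report" if is_accident else "incident_occurrence_report",
        "schematic_diagram": diagram_path,              # goes into the diagram column of the form
        "occurred_at": situation.get("impact_time"),
        "location": situation.get("gps"),
        "details": situation.get("questionnaire", {}),  # questionnaire answers fill the detail columns
    }
    return report
```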
  • In step S10, the mobile terminal device 20 determines whether or not the dedicated application program has been started.
  • This dedicated application program is a dedicated application program for portable terminal device 20 to perform wearable information processing in cooperation with wearable device 30 , wearable information processing device 10 and video processing device 40 .
  • When the portable terminal device 20 determines in step S10 that the dedicated application program has been activated, it asks the user to confirm the connection with the wearable device 30.
  • When the user activates the wearable device 30 worn on the body (for example, on the ear in the case of the ear hook type), the wearable device 30 is connected to the mobile terminal device 20 via Bluetooth (registered trademark) or the like.
  • When connected to the mobile terminal device 20, the wearable device 30 activates the camera 34 and transmits the camera video 34a to the video processing device 40. The video processing device 40 then stores the camera video 34a as a plurality of video data 432 for each predetermined video time (for example, every five minutes in FIG. 3).
  • In step S11, the control unit 22 of the mobile terminal device 20 determines whether or not an impact has been detected.
  • If no impact is detected, the impact detection determination is continued. Specifically, the control unit 22 monitors the sensor data (acceleration data) acquired by the sensor data acquisition unit 221, and the impact detection unit 222 detects an impact when, for example, the acceleration exceeds a predetermined value.
  • When the control unit 22 of the mobile terminal device 20 determines in step S11 that an impact has been detected, it collects the impact data 223a and transmits the impact data 223a to the wearable information processing device 10.
  • the impact data 223 a is collected by the impact data collection section 223 of the control section 22 .
  • the impact data 223a includes time data at the time of impact, GPS data at the time of impact, acceleration data at the time of impact, and the like.
  • Upon receiving the impact data 223a, the wearable information processing device 10 transmits a push notification P to the mobile terminal device 20.
  • the mobile terminal device 20 determines whether it is an accident or an incident in step S12. Specifically, the mobile terminal device 20 displays an accident case selection screen as shown in FIG. 6 on the display unit 28, for example.
  • the accident case selection screen shown in FIG. 6 is a display for informing the user that there was an impact and asking the user to select "occurrence of accident", "occurrence of incident", or "neither".
  • If "neither" is selected in step S12, the mobile terminal device 20 returns to the impact detection determination, regarding this impact as neither an accident nor an incident.
  • When the user selects "occurrence of an accident" or "occurrence of an incident" on the display screen of FIG. 6, an accident questionnaire is displayed on the display unit 28 if an accident has occurred, and an incident questionnaire is displayed on the display unit 28 if an incident has occurred.
  • The accident questionnaire is a questionnaire for obtaining data to be entered in the form of the accident occurrence situation report 126A illustrated in FIG. 8. Of the data to be entered in that form, unknown data that cannot be obtained from the data the mobile terminal device 20 can acquire (sensor data, weather data, time data, etc.) can be supplemented with data entered by the user in the accident questionnaire.
  • The unknown data here includes, for example, the traffic and signal conditions at the time of the accident, the other party's data (address, name, telephone number, etc.), and the explanation of the accident situation, among the detailed accident information shown in FIG. 8.
  • The incident questionnaire is a questionnaire for acquiring data to be entered in the form of the incident occurrence situation report 126B illustrated in FIG. 9. The unknown data here includes, for example, the site situation and crowding at the time of the incident, the other party's data (age, gender, appearance, etc.), and the explanation of the incident situation, among the detailed incident information shown in FIG. 9.
  • the mobile terminal device 20 transmits the situation data 122a at the time of impact to the wearable information processing device 10.
  • the situation data 122a at the time of impact includes time data 224a at the time of impact, GPS data at the time of impact, accident case selection data (data selected from the accident case selection screen in FIG. 6), questionnaire input data, sensor data ( Acceleration data from impact sensors, etc.), weather data at the time of impact, and other data necessary for generating a situation report.
  • When the wearable information processing device 10 receives the situation data 122a at the time of impact via the communication unit 11, it transmits the time data 224a at the time of impact from the situation data 122a to the video processing device 40.
  • When the video processing device 40 receives the time data 224a at the time of impact via the communication unit 41, it selects the video data 422a at the time of impact, including the video at the time of the time data 224a, from the video data 432 stored in the storage unit 43, and transmits it to the wearable information processing device 10.
  • the wearable information processing device 10 receives the image data 422 a at the time of impact via the communication unit 11 .
  • In this way, the video data acquisition unit 121 of the wearable information processing device 10 acquires the video data 422a at the time of impact, including the video at the time of the time data 224a, from among the plurality of video data 432 (camera video of the wearable device 30) stored in the storage unit 43 of the video processing device 40.
  • the image data acquiring unit 121 may acquire image data including the image at the time of the time data 224a at the time of impact and image data before and after that as the image data at the time of impact 422a.
  • When the wearable information processing device 10 receives the situation data 122a at the time of impact via the communication unit 11, it also transmits the GPS data 224b at the time of impact from the situation data 122a to the map processing device 50.
  • When the map processing device 50 receives the GPS data 224b at the time of impact via the communication unit 51, it acquires, from the map data 532 stored in the storage unit 53, map data 522a at the time of impact covering a range that includes the latitude and longitude of the GPS data 224b, and transmits it to the wearable information processing device 10. The wearable information processing device 10 receives the map data 522a at the time of impact via the communication unit 11.
  • the map data acquisition unit 123 of the wearable information processing device 10 acquires map data including a map of latitude and longitude of the GPS data 224b at the time of impact from the map processing device 50.
  • the map data acquisition unit 123 acquires a map of a predetermined range around the latitude and longitude of the GPS data 224b at the time of impact as map data 522a at the time of impact.
  • the predetermined range is set in advance, and the predetermined setting may be changed by the user's input from the mobile terminal device 20 .
  • the wearable information processing device 10 executes a situation report generation process in step S20.
  • The situation report generation process is a process of automatically generating a situation report (accident occurrence situation report or incident occurrence situation report) based on the situation data 122a at the time of impact, the video data 422a at the time of impact, and the map data 522a at the time of impact. The details of this situation report generation process will be described later.
  • the situation report data 126 a (accident occurrence situation report or incident occurrence situation report data) generated by the wearable information processing device 10 is transmitted to the mobile terminal device 20 via the communication unit 11 .
  • In step S14, the mobile terminal device 20 displays the situation report (accident occurrence situation report or incident occurrence situation report) on the display unit 28, and in step S15 it determines whether or not to report.
  • Specifically, the mobile terminal device 20 displays the report instruction screen shown in FIG. 7 on the display unit 28.
  • the report instruction screen shown in FIG. 7 displays pre-registered insurance companies, etc., and is a display for allowing the user to select "report" or "not report” to the insurance company.
  • If the user selects "report" on the report instruction screen, the mobile terminal device 20 reports to the pre-registered insurance company or the like in step S16.
  • the situation report data 126a (accident occurrence situation report or incident occurrence situation report data) may be transmitted together with the report.
  • the report destination is not limited to the pre-stored insurance company or the like.
  • The police and the fire department may also be included. By reporting to the police and transmitting the situation report data 126a at the same time, the police can also grasp the accident at an early stage. The series of processing is terminated by the report in step S16, and the transmission of the camera video is stopped.
  • If "do not report" is selected in step S15, the mobile terminal device 20 returns to the impact detection determination without reporting. This series of processes by the wearable information processing system 100 is executed until the dedicated application program of the mobile terminal device 20 is terminated.
  • The transmission of the camera video by the camera 34 of the wearable device 30 may also be stopped at a predetermined timing. For example, transmission of the camera video may be stopped when the camera 34 of the wearable device 30 is turned off, when the power of the wearable device 30 is turned off, or when the connection with the wearable device 30 is disconnected.
  • the timing of stopping transmission of camera images is not limited to this.
  • the video processing device 40 may delete the video data 432 stored in the storage unit 43 at a predetermined timing.
  • the video data 432 may be deleted at the timing when the dedicated application program of the mobile terminal device 20 is terminated or the power of the wearable device 30 is turned off.
  • the timing of deleting the video data 432 is not limited to this.
  • the video data 432 may be deleted after being saved for a certain period of time. The retention period of the video data 432 may be freely changed by user input from the mobile terminal device 20 .
  • FIG. 10 is a flow chart showing a specific example of the status report generation process.
  • The situation report generation process of FIG. 10 is performed by the control unit 12 of the wearable information processing device 10 reading out and executing the necessary program.
  • In step S210 of FIG. 10, the control unit 12 determines whether the situation data acquisition unit 122 has acquired the situation data 122a at the time of impact from the portable terminal device 20. If the control unit 12 determines that the situation data 122a has not been acquired, it returns to the process of step S210.
  • If it determines that the situation data 122a has been acquired, the control unit 12 next determines whether the video data acquisition unit 121 has acquired the video data 422a at the time of impact from the video processing device 40. If the video data 422a has not been acquired, the process returns to step S210.
  • In step S220, it is then determined whether the map data acquisition unit 123 has acquired the map data 522a at the time of impact from the map processing device 50. If the map data 522a has not been acquired, the determination of step S220 is repeated.
  • Once the data have been acquired, the control unit 12 determines in step S240 whether the event is an accident or an incident. Specifically, the control unit 12 makes this determination based on the accident case selection data (the data selected from the accident case selection screen in FIG. 6) among the situation data from the portable terminal device 20.
  • If the control unit 12 determines in step S240 that there is an accident, it acquires the form of the accident occurrence situation report 126A of FIG. 8 from the data storage unit 16 in step S242. If the control unit 12 determines in step S240 that there is no accident, it determines in step S250 whether there is an incident.
  • If the control unit 12 determines in step S250 that there is no incident, it returns to the process of step S220; if it determines in step S250 that there is an incident, it acquires the form of the incident occurrence situation report 126B of FIG. 9 from the data storage unit 16 in step S252. A schematic situation diagram generation process is then performed in step S260.
  • FIG. 11 is a flow chart showing a specific example of the schematic situation diagram generation process.
  • FIG. 12 is a diagram showing a specific example of video data selected in the schematic situation diagram generation process.
  • FIG. 13 is a diagram showing a specific example of the schematic site situation map generated by the schematic situation map generation process.
  • FIG. 14 is a diagram showing a specific example of the near-site map generated by the schematic situation diagram generation process.
  • FIG. 15 is a diagram showing a specific example of the schematic situation diagram generated by the schematic situation diagram generation process.
  • The schematic situation diagram generation process of FIG. 11 is a process for automatically generating a schematic situation diagram (a schematic accident situation diagram or a schematic incident situation diagram) based on the video data 422a at the time of impact, the map data 522a at the time of impact, and the situation data 122a at the time of impact. This process is performed by the schematic situation diagram generation unit 125 reading out and executing a predetermined program.
  • the schematic situation diagram generation unit 125 selects a video for schematic diagram generation in step S261. Specifically, an image (still image) for schematic drawing generation as shown in FIG. 12 is selected from the image data 422a at the time of impact.
  • FIG. 12 shows a video (still image) of a predetermined time (for example, several seconds) before the time of impact.
  • In FIG. 12, the schematic diagram generation image 422b shows the moment when B's bicycle is about to enter the pedestrian crossing. In this way, it is preferable to select the image from several seconds before the impact, rather than the image at the moment of impact, as the schematic diagram generation image 422b.
  • Specifically, the schematic situation diagram generation unit 125 detects B and the bicycle appearing in the video at the time of impact from the video data 422a at the time of impact, and selects, as the schematic diagram generation image, the image from several seconds earlier in which B appears.
  • Persons and vehicles, such as B and the bicycle driven by B, are detected from the video using a trained model trained by machine learning or AI (artificial intelligence), for example.
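The patent leaves the detector unspecified beyond "a trained model trained by machine learning or AI". As one concrete possibility (an assumption, not the patent's method), OpenCV's built-in HOG pedestrian detector can flag people in a still image; a comparable model would be needed for bicycles and other vehicle classes.

```python
import cv2  # OpenCV, used here only as one example of an off-the-shelf detector

def detect_people(image_path: str):
    """Detect pedestrians in a still image taken from the video data.

    Returns a list of (x, y, w, h) bounding boxes. Bicycles, cars, etc.
    would require a model trained for those classes, which the patent
    leaves open.
    """
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    image = cv2.imread(image_path)
    boxes, _weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
    return [tuple(box) for box in boxes]
```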
  • the schematic situation diagram generation unit 125 generates a schematic site situation diagram from the selected schematic diagram generation image 422b.
  • The schematic site situation diagram 422c is a schematic diagram of a predetermined range centered on user A (the wearer), including B and the bicycle driven by B several seconds before the impact, in which the detected images are replaced with symbols.
  • A is replaced with a predetermined symbol representing a person, and B and the bicycle are replaced with a predetermined symbol representing a bicycle.
  • As for the predetermined symbols, if symbols for people, bicycles, motorbikes and so on are prescribed for accident occurrence situation reports, those symbols are stored in advance in the data storage unit 16, and A and B are replaced with them. Since the schematic situation diagram is generated using predetermined symbols in this way, the situation report automatically generated in this embodiment can be submitted to various places as a formal document as it is.
  • The positions of A and B can be determined by analyzing the GPS data 224b at the time of impact and the video data. Since the video data of this embodiment is a 360-degree camera image, the approximate direction and distance of B can be obtained from the image several seconds before the impact. Since the latitude and longitude of A's position can be obtained from the GPS data, the approximate GPS position of B can be calculated from B's direction and distance. In this way, the positions of A and B on the map (GPS data) can be determined. Note that the distance between A and B may be calculated from the size of B's image recognized in the image several seconds before the impact.
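A sketch of the position calculation described above: user B's approximate latitude and longitude are derived from user A's GPS fix plus the bearing and distance estimated from the omnidirectional image. The flat-earth approximation below is adequate for the few tens of metres involved; the bearing/distance estimation itself is assumed to come from the image analysis step.

```python
import math

def offset_position(lat: float, lon: float,
                    bearing_deg: float, distance_m: float) -> tuple[float, float]:
    """Return the approximate (lat, lon) of a point `distance_m` metres away
    from (lat, lon) in the direction `bearing_deg` (0 = north, 90 = east).

    Uses a local flat-earth approximation (~111 km per degree of latitude),
    which is sufficient for the short distances seen in a collision scene.
    """
    bearing = math.radians(bearing_deg)
    dnorth = distance_m * math.cos(bearing)
    deast = distance_m * math.sin(bearing)
    dlat = dnorth / 111_000.0
    dlon = deast / (111_000.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Example: B estimated 12 m away, roughly north-east of A's GPS fix.
user_a = (35.6812, 139.7671)
user_b = offset_position(*user_a, bearing_deg=45.0, distance_m=12.0)
```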
  • In step S263, the schematic situation diagram generation unit 125 selects a map for schematic diagram generation, and in step S264 it generates a near-site map from that map. Specifically, a map range for schematic diagram generation, as shown in FIG. 14, is selected from the map data 522a at the time of impact, and the near-site map 522b is generated.
  • FIG. 14 shows the near-site map 522b, which includes the map positions of users A and B from FIG. 13. In FIG. 14, the dotted line indicates the range overlapping the schematic site situation diagram 422c, and the position of A and the position of B several seconds before the impact are marked with x.
  • In step S265, the schematic situation diagram generation unit 125 generates the schematic situation diagram 125a from the schematic site situation diagram 422c and the near-site map 522b. Specifically, the schematic situation diagram generation unit 125 superimposes the schematic site situation diagram 422c shown in FIG. 13 on the near-site map 522b shown in FIG. 14 to generate the schematic situation diagram 125a shown in FIG. 15.
  • In step S266, the schematic situation diagram generation unit 125 stores the schematic situation diagram 125a in the data storage unit 16 as the generated data 163, ends the series of schematic situation diagram generation processing, and proceeds to step S270 in FIG. 10.
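A toy illustration of the superimposition in step S265 is given below, under the assumption that the near-site map's GPS bounding box is known so A's latitude/longitude can be mapped to a pixel position; the paths, parameter names and bounding-box format are hypothetical.

```python
# Sketch: paste the schematic site situation diagram onto the near-site map.
from PIL import Image

def latlon_to_pixel(lat, lon, bbox, map_size):
    """bbox = (lat_top, lon_left, lat_bottom, lon_right); map_size = (w, h)."""
    lat_top, lon_left, lat_bottom, lon_right = bbox
    w, h = map_size
    x = int((lon - lon_left) / (lon_right - lon_left) * w)
    y = int((lat_top - lat) / (lat_top - lat_bottom) * h)
    return x, y

def compose_situation_diagram(site_diagram_path, map_path, lat_a, lon_a, bbox):
    site = Image.open(site_diagram_path).convert("RGBA")
    base = Image.open(map_path).convert("RGBA")
    x, y = latlon_to_pixel(lat_a, lon_a, bbox, base.size)
    # center the site situation diagram on A's position, keeping transparency
    base.alpha_composite(site, dest=(max(0, x - site.width // 2),
                                     max(0, y - site.height // 2)))
    return base
```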
  • In step S270, the control unit 12 generates a situation report. Specifically, when it is determined in step S240 that an accident has occurred, the situation report generation unit 126 enters the schematic accident situation diagram and the corresponding data into the form of the accident occurrence situation report 126A shown in FIG. 8, and automatically generates the accident occurrence situation report 126A. On the other hand, when it is determined in step S240 that an incident has occurred, the situation report generation unit 126 enters the schematic incident situation diagram and the relevant data into the form of the incident occurrence situation report 126B shown in FIG. 9, and automatically generates the incident occurrence situation report 126B.
  • In step S280, the control unit 12 stores the generated situation report (the accident occurrence situation report 126A or the incident occurrence situation report 126B) in the data storage unit 16 as the generated data 163, and ends the series of situation report generation processing.
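The following is only a schematic of how the collected items might be poured into the selected form; the field names and the two one-line form templates are invented placeholders, not the actual report forms of FIGS. 8 and 9.

```python
# Sketch: fill the accident or incident report form with the collected data.
from dataclasses import dataclass

@dataclass
class CollectedData:
    is_accident: bool      # selection made on the accident/incident screen
    occurred_at: str       # time data
    latitude: float        # GPS data
    longitude: float
    diagram_path: str      # generated schematic situation diagram

ACCIDENT_FORM = ("Accident Occurrence Situation Report\n"
                 "Time: {occurred_at}\nPlace: {latitude}, {longitude}\n"
                 "Situation diagram: {diagram_path}\n")
INCIDENT_FORM = ("Incident Occurrence Situation Report\n"
                 "Time: {occurred_at}\nPlace: {latitude}, {longitude}\n"
                 "Situation diagram: {diagram_path}\n")

def generate_report(data: CollectedData) -> str:
    form = ACCIDENT_FORM if data.is_accident else INCIDENT_FORM
    return form.format(occurred_at=data.occurred_at, latitude=data.latitude,
                       longitude=data.longitude, diagram_path=data.diagram_path)
```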
  • According to the first embodiment, it is possible to automatically generate a situation report of an accident or incident based on the camera video from the wearable device 30 worn by the user.
  • Since the video of the user's surroundings can be stored not only while the user is driving a car but also while walking or riding a bicycle or motorcycle, it is possible to automatically generate not only a situation report of an accident in which the user is involved, but also a situation report of an incident such as a purse snatching.
  • Moreover, since video data can be obtained from the time data at which the accident or incident occurred (the time of impact in the first embodiment) and map data can be obtained from the GPS data, a schematic situation diagram can be generated automatically. Even a user who is caught up in an accident or incident and upset can generate, on the spot, a situation report that accurately conveys the situation at the scene. As a result, the circumstances of accidents and incidents can be reported quickly and accurately to insurance companies and the police, and early resolution of accidents and incidents can be expected.
  • Since the video around the user wearing the wearable device 30 is used as the camera video, in the case of an accident in which a bicycle collides with the user, for example, it is possible to tell from which direction the bicycle came. It is likewise possible to tell from which direction a purse snatcher approached.
  • In addition, at least an image of the object that caused the accident or incident (for example, a vehicle such as a bicycle, motorcycle or car, or a person such as a pedestrian or runner) is detected from the video data around the user, the object's image is replaced with a predetermined symbol, and the schematic situation diagram is generated automatically from the schematic site situation diagram, which shows the object's position on the map, and the near-site map. Compared with simply pasting an image from the video data into the situation report, this eliminates the work of manually turning the image into a schematic situation diagram.
  • In this way, a situation report including a schematic situation diagram that can be sent to an insurance company or the police without further editing is generated automatically, so even a user who is upset after being involved in an accident or incident can convey the situation accurately.
  • Furthermore, since the camera video from the wearable device 30 is divided and stored as a plurality of video data segments of a predetermined video length, it is easier to identify the video before and after the time at which the accident or incident occurred than when the video is stored without being divided.
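For illustration only, a segment lookup could work as sketched below, assuming the video is stored as fixed-length files indexed by their start time; the one-minute segment length and indexing scheme are assumptions, not taken from the disclosure.

```python
# Sketch: find the stored video segments that cover the impact time.
from datetime import datetime, timedelta

SEGMENT = timedelta(minutes=1)  # assumed "predetermined video time" per segment

def segment_start(impact_time: datetime) -> datetime:
    """Start time of the segment that contains impact_time."""
    epoch = datetime(1970, 1, 1)
    return epoch + SEGMENT * ((impact_time - epoch) // SEGMENT)

def segments_around_impact(impact_time: datetime):
    """Segments just before, at and after the impact, so that video shortly
    before and after the occurrence can also be retrieved."""
    start = segment_start(impact_time)
    return [start - SEGMENT, start, start + SEGMENT]
```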
  • Further, even if an impact is detected, there are cases in which no accident or incident has actually occurred, such as when the mobile terminal device 20 is simply dropped. In this embodiment, the situation data acquired from the mobile terminal device 20 by the situation data acquisition unit 122 includes information indicating whether the occurrence of an accident or incident was selected, so the situation report is not generated automatically when neither has occurred, and wasteful generation of situation reports can be suppressed.
  • The situation report generation unit 126 of this embodiment determines whether the event is an accident or an incident based on the situation data from the mobile terminal device 20; if it determines that it is an accident, it generates a situation report from the predetermined accident occurrence situation report form, and if it determines that it is an incident, it generates a situation report from the predetermined incident occurrence situation report form. In this way, a situation report matching the form for each accident or incident can be generated automatically, saving the trouble of recreating the report later to fit the prescribed form.
  • FIG. 16 is a diagram showing the schematic configuration of the wearable information processing system 100 of the second embodiment, and corresponds to FIG. 2.
  • The storage unit 14 of FIG. 16 includes an accident precedent database 17 and an incident precedent database 18.
  • The data storage unit 16 also stores the form of the accident fault rate report 127A illustrated in FIG. 19 and the form of the incident fault rate report 127B illustrated in FIG. 20.
  • The control unit 12 of FIG. 16 includes a fault rate report generation unit 127.
  • The fault rate report generation unit 127 selects judicial precedents similar to the situation report generated by the situation report generation unit 126 and automatically generates a fault rate report based on the selected precedents. Specifically, in the case of an accident, the fault rate report generation unit 127 selects from the accident precedent database 17 a precedent similar to the accident situation report data of the accident occurrence situation report 126A, and, based on the selected precedent, automatically generates the accident fault rate report 127A illustrated in FIG. 19. In the case of an incident, the fault rate report generation unit 127 selects from the incident precedent database 18 a precedent similar to the incident situation report data of the incident occurrence situation report 126B, and, based on the selected precedent, automatically generates the incident fault rate report 127B illustrated in FIG. 20.
  • FIG. 17 is a flowchart for explaining the overall processing of the wearable information processing system 100 according to the second embodiment, and corresponds to FIG. 5. In the processing of FIG. 17, a fault rate report generation process is performed in step S30.
  • FIG. 18 is a flowchart showing a specific example of the fault rate report generation process. The processing of FIG. 18 is executed by the control unit 12 (the fault rate report generation unit 127 and so on) reading the necessary program from the program storage unit 15.
  • In step S310, the fault rate report generation unit 127 acquires from the data storage unit 16 the situation report data 126a (accident situation report data or incident situation report data) generated in the situation report generation process (step S20 in FIG. 17).
  • In step S320, the fault rate report generation unit 127 determines whether or not accident situation report data has been acquired.
  • If accident situation report data has been acquired, the form of the accident fault rate report 127A of FIG. 19 is acquired from the data storage unit 16.
  • In step S324, the fault rate report generation unit 127 collates against the accident precedent database 17 and selects precedents similar to the accident situation report data. Specifically, the schematic accident situation diagram included in the accident situation report data is collated with the schematic accident situation diagrams contained in the precedents of the accident precedent database 17, and precedents with similar schematic accident situation diagrams are selected. A trained model built by machine learning or AI (artificial intelligence) can be used for collating the schematic accident situation diagrams.
  • Step S324 has been described using the collation of schematic accident situation diagrams as an example, but collation is not limited to this; data in the accident situation report data other than the schematic accident situation diagram (detailed information on the accident, the explanation of the accident situation, and so on) may also be collated.
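The text leaves the matching model open; one common way to realize such diagram collation is to embed each schematic diagram with an image model and rank precedents by cosine similarity, as sketched below. The database layout and the existence of precomputed embeddings are assumptions for illustration.

```python
# Sketch: pick the precedent whose schematic situation diagram is most similar
# to the generated one, using precomputed image embeddings (assumed available).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_similar_precedent(query_embedding, precedent_db):
    """precedent_db: iterable of (precedent_id, diagram_embedding, fault_ratio)."""
    best = None
    for precedent_id, embedding, fault_ratio in precedent_db:
        score = cosine_similarity(query_embedding, embedding)
        if best is None or score > best[1]:
            best = (precedent_id, score, fault_ratio)
    return best  # (precedent_id, similarity, fault_ratio) of the closest precedent
```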
  • In step S340, the fault rate report generation unit 127 extracts the fault ratio from the judicial precedent selected in step S324. For example, it extracts the basic fault ratio and the fault ratio evaluations based on the correction factor data, as shown in the "fault ratio" column of the accident's judicial precedent information in FIG. 19.
  • the selected judicial precedent may be displayed as a reference judicial precedent in the remarks column of FIG. 19, for example.
  • The correction factor data include, for example, whether the accident site is a "main road" or a "residential area", as shown in FIG. 19.
  • The fault ratio evaluation is a numerical value displayed for each item of correction factor data.
  • FIG. 19 shows a specific example of the fault ratio evaluation of the correction factor data for A. The correction factor data and fault ratio evaluations are not limited to those illustrated; correction factor data and fault ratio evaluations for B may also be displayed, depending on the selected precedent, for example.
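As a loose illustration of how a basic fault ratio and correction factor evaluations could be combined in such a report, consider the sketch below; the factor names and point values are placeholders and do not come from any actual precedent.

```python
# Sketch: adjust a basic fault percentage with correction factor evaluations.
def adjusted_fault_ratio(basic_ratio_a: int, corrections: dict) -> int:
    """basic_ratio_a: basic fault percentage for A;
    corrections: correction factor -> +/- percentage points."""
    ratio = basic_ratio_a + sum(corrections.values())
    return max(0, min(100, ratio))  # keep the percentage within 0-100

# e.g. basic ratio 20 for A, reduced by 5 because the site is a residential area:
# print(adjusted_fault_ratio(20, {"residential area": -5}))  # -> 15
```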
  • Next, the fault rate report generation unit 127 automatically generates a fault rate report (for example, the accident fault rate report 127A in FIG. 19). Specifically, the detailed accident information, the schematic accident situation diagram, the explanation of the accident situation and so on are extracted from the accident situation report data and entered into the form of the accident fault rate report 127A. In addition, the schematic accident situation diagram is extracted from the judicial precedent selected in step S324 and entered into the form of the accident fault rate report 127A together with the fault ratio (basic fault ratio, correction factor data, fault ratio evaluations, etc.) extracted in step S340.
  • The fault rate report generation unit 127 then stores the generated fault rate report (for example, the accident fault rate report 127A in FIG. 19) in the data storage unit 16 as the generated data 163, and ends the series of fault rate report generation processing.
  • If the fault rate report generation unit 127 determines in step S320 that accident situation report data has not been acquired, it determines in step S330 whether or not incident situation report data has been acquired. If it determines in step S330 that incident situation report data has not been acquired either, the process returns to step S310.
  • If the fault rate report generation unit 127 determines in step S330 that incident situation report data has been acquired, it acquires the form of the incident fault rate report 127B of FIG. 20 from the data storage unit 16 in step S332.
  • In step S334, the fault rate report generation unit 127 collates against the incident precedent database 18 and selects precedents similar to the incident situation report data. Specifically, the schematic incident situation diagram included in the incident situation report data is collated with the schematic incident situation diagrams contained in the precedents of the incident precedent database 18, and precedents with similar schematic incident situation diagrams are selected. A trained model built by machine learning or AI (artificial intelligence) can be used for collating the schematic incident situation diagrams.
  • Step S334 has been described using the collation of schematic incident situation diagrams as an example, but collation is not limited to this; data in the incident situation report data other than the schematic incident situation diagram (detailed information on the incident, the explanation of the incident situation, and so on) may also be collated.
  • In step S340, the fault rate report generation unit 127 extracts the fault ratio from the judicial precedent selected in step S334.
  • For example, the basic fault ratio and the judicial precedent type, as shown in the "fault ratio" column of the incident's judicial precedent information in FIG. 20, are extracted.
  • The judicial precedent types include, for example, "theft", "injury", "robbery" and "robbery resulting in injury", as shown in FIG. 20.
  • If the selected precedent includes one of these terms, "o" is displayed; if not, "x" is displayed. Note that the notation of the judicial precedent type is not limited to these.
  • In the case of an incident, the fault ratio is often victim 0 : perpetrator 100, and there are precedents in which no fault ratio is indicated; such cases can be expressed accordingly in the report.
  • the judicial precedents selected in step S334 may be displayed as reference judicial precedents in the remarks column.
  • Next, the fault rate report generation unit 127 automatically generates a fault rate report (for example, the incident fault rate report 127B in FIG. 20). Specifically, the detailed incident information, the schematic incident situation diagram, the explanation of the incident situation and so on are extracted from the incident situation report data and entered into the form of the incident fault rate report 127B. In addition, the schematic incident situation diagram is extracted from the judicial precedent selected in step S334 and entered into the form of the incident fault rate report 127B together with the fault ratio (basic fault ratio, correction factor data, fault ratio evaluations, etc.) extracted in step S340.
  • In step S360, the fault rate report generation unit 127 stores the generated fault rate report (for example, the incident fault rate report 127B in FIG. 20) in the data storage unit 16 as the fault rate report data 127a of the generated data 163, and ends the series of fault rate report generation processing.
  • The fault rate report data 127a (accident fault rate report data or incident fault rate report data) generated by the wearable information processing device 10 is transmitted to the mobile terminal device 20 via the communication unit 11.
  • In step S14, the situation report (accident occurrence situation report or incident occurrence situation report) and the fault rate report (accident fault rate report or incident fault rate report) are displayed on the display unit 28, and in step S15 it is determined whether or not to make a report. Since the subsequent processing is the same as in FIG. 5, detailed description is omitted.
  • According to the second embodiment, as in the first embodiment, it is possible to automatically generate a situation report of an accident or incident from the camera video of the wearable device 30, and furthermore to automatically generate a fault rate report.
  • Since the judicial precedent database (the accident precedent database 17 or the incident precedent database 18) is collated using the schematic situation diagram, similar precedents can be found more easily than when searching for precedents by text search.
  • Because the schematic situation diagram is generated automatically with the images of people and vehicles replaced by predetermined symbols, it can be compared directly with the schematic situation diagrams of judicial precedents, and matching accuracy is greatly improved compared with pasting in the raw images of people and vehicles. Since a fault rate report based on precedent data can be generated automatically, the fault ratio in similar cases becomes clear, which makes it easier to decide whether to file a lawsuit or settle, and easier for insurance companies to decide whether insurance applies.
  • The judicial precedent database of the second embodiment is divided into the accident precedent database 17 and the incident precedent database 18, and the fault rate report generation unit 127 determines whether the event is an accident or an incident based on the situation data from the mobile terminal device 20. If it determines that it is an accident, it selects a precedent with a similar schematic situation diagram from the accident precedent database 17; if it determines that it is an incident, it selects a precedent with a similar schematic situation diagram from the incident precedent database 18.
  • In this way, an appropriate precedent can be selected for an accident, and an appropriate precedent and crime type (statutory provisions and the like) can be selected for an incident.
  • In the case of an incident, an incident constituent elements report 127C as shown in FIG. 21 may also be created and displayed on the mobile terminal device 20 in step S14.
  • The incident constituent elements report 127C in FIG. 21 displays the presumed crime of the incident instead of the fault ratio shown in the fault ratio column of FIG. 20.
  • The presumed crime is estimated based on the judicial precedent selected in step S334 of FIG. 18.
  • The presumed crimes include "theft", "injury", "robbery", "robbery resulting in injury" and so on, as shown in FIG. 21. For example, if there is no injury, "theft" is a possibility, but if the victim's resistance was suppressed, "robbery" is also a possibility.
  • In the case of an incident, the fault ratio is often victim 0 : perpetrator 100 and some precedents indicate no fault ratio, so the incident constituent elements report 127C may be created automatically in such cases. Note that this presumed crime may also be included in the incident fault rate report.
  • the emergency button is displayed on the display unit 28 while the mobile terminal device 20 is executing a dedicated application program for performing wearable information processing.
  • the emergency button is a button for displaying the accident case selection screen even if the mobile terminal device 20 does not detect an impact.
  • When the emergency button is pressed, the accident case selection screen shown in FIG. 22 is displayed on the display unit 28.
  • In this case, step S11 assumes that either an impact has been detected or the emergency button has been pressed, and the data at the time of impact in FIGS. 10 and 18 is read as the data at the time the emergency button was pressed.
  • That is, the situation data (time data, GPS data, sensor data) at the time of impact is read as the situation data at the time the emergency button was pressed, the video data at the time of impact is read as the video data at the time the emergency button was pressed, and the map data at the time of impact is read as the map data at the time the emergency button was pressed.
  • In addition, a report instruction screen as shown in FIG. 23 is displayed, which makes it possible to choose whether or not to report to the police when an accident or incident occurs. The situation report may also be sent to the police together with the report. As a result, prompt resolution of accidents and incidents can be expected.
  • Since a screen for instructing a report to the police is displayed when the emergency button is pressed, it is also possible to respond to emergencies.
  • In the embodiment described above, a case was illustrated in which a still image from a predetermined time (for example, several seconds) before the time of impact is selected from the video data 422a at the time of impact.
  • The selected image is one generated from part of the omnidirectional video that constitutes the video data 422a, but the omnidirectional video shown in FIG. 25 may also be used as it is. In that case, the person or vehicle appearing in the omnidirectional video is recognized and identified as the object that caused the accident or incident involving user A.
  • In FIG. 12, an example was shown in which only a single bicycle, the one driven by B, appears in user A's omnidirectional video.
  • However, there are also cases in which a plurality of vehicles and people are captured in the omnidirectional video. In such cases, it is necessary to identify the object (vehicle or person) that caused the accident or incident involving user A from among the vehicles and people appearing in the omnidirectional video.
  • FIG. 24 is a flowchart illustrating a specific example of the target identification process. FIGS. 25 to 28, like FIGS. 12 to 15, illustrate an accident in which, when user A (Party A) starts crossing a crosswalk on a green light, a bicycle driven by B (Party B) enters the crosswalk and collides with user A.
  • In FIGS. 25 to 28, in order to make the explanation easier to understand, only B's bicycle is shown, and other vehicles and people are omitted.
  • Figures 25(a) and 26(a) are omnidirectional images based on the image data 422a.
  • Figures 25(b) and 26(b) are images generated from those omnidirectional images.
  • Figures 25(a) and (b) are images from several seconds before the impact or the pressing of the emergency button, and Figures 26(a) and (b) are images from several seconds before those of Figures 25(a) and (b).
  • FIG. 27 is a diagram showing a specific example of the target candidate diagram 422d, which shows the positions and movements of A and B based on the images in FIGS. 25 and 26.
  • FIG. 28 is a diagram showing a specific example of the target candidate verification diagram 422e, in which the position mX of the accident or incident (the position at the time of impact or of pressing the emergency button) is superimposed on FIG. 27.
  • m1 of A and t1 of B are respectively the latitude and longitude of the GPS data of A and B based on the omnidirectional video of FIG. 25(a).
  • m2 of A and t2 of B are respectively the latitude and longitude of the GPS data of A and B based on the omnidirectional video of FIG. 26(a).
  • The target identification process in FIG. 24 is executed by the control unit 12 reading out a predetermined program before the schematic situation diagram generation process in FIG. 11 is executed.
  • In the target identification process, the control unit 12 first selects an image for schematic diagram generation in step S461. Specifically, the control unit 12 selects, from the video data 422a at the time of impact, an omnidirectional image (for example, FIG. 25(a)) from a predetermined time (for example, several seconds) before the time of impact.
  • In step S462, the control unit 12 detects vehicles and persons in the selected omnidirectional video and determines whether or not there are multiple persons or vehicles that could be candidates for having caused the accident or incident. Specifically, the control unit 12 detects images of persons or vehicles in the omnidirectional video and determines whether or not multiple such images have been detected. If the control unit 12 determines in step S462 that only one target candidate (an image of a person or vehicle) has been detected, it ends the target identification process and moves on to the schematic situation diagram generation process of FIG. 11; if it determines that a plurality of target candidates have been detected, the process moves to step S463.
  • In step S463, the control unit 12 selects from the video data 422a an omnidirectional image (for example, FIG. 26(a)) from a predetermined time (for example, several seconds) earlier than the omnidirectional image (FIG. 25(a)) selected in step S461. Then, in step S464, the control unit 12 calculates the positions (latitude and longitude of the GPS data) of the persons or vehicles that are target candidates from the two selected omnidirectional images (FIG. 25(a) and FIG. 26(a)), generates the target candidate diagram 422d of FIG. 27, and detects the positions and movements of the target candidates.
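For step S464, one simple way to picture the calculation is shown below: the candidate's position in each frame is obtained by offsetting A's GPS fix with the bearing and distance estimated from that frame, and the difference between the two positions gives the movement vector used for the target candidate diagram. The helper function, its arguments and the flat-earth offset are assumptions made only for illustration.

```python
# Sketch: estimate a candidate's positions in two frames and its movement.
import math

def _offset(lat, lon, bearing_deg, distance_m, r=6_371_000.0):
    """Flat-earth offset of (lat, lon) by a bearing (deg from north) and distance."""
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    return (lat + math.degrees(d_north / r),
            lon + math.degrees(d_east / (r * math.cos(math.radians(lat)))))

def candidate_track(pos_a_t2, pos_a_t1, bearing_dist_t2, bearing_dist_t1):
    """pos_a_t2 / pos_a_t1: A's (lat, lon) in the earlier and later frames;
    bearing_dist_*: (bearing_deg, distance_m) of the candidate in each frame.
    Returns the candidate's earlier position t2, later position t1, and the
    movement vector t2 -> t1 in degrees of latitude/longitude."""
    t2 = _offset(*pos_a_t2, *bearing_dist_t2)
    t1 = _offset(*pos_a_t1, *bearing_dist_t1)
    movement = (t1[0] - t2[0], t1[1] - t2[1])
    return t2, t1, movement
```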
  • Since the omnidirectional video in FIG. 25(a) is the video around user A, the center m1 of the omnidirectional video can be taken as the position of user A.
  • the camera image may be adjusted so that the center of the omnidirectional image matches the position of the user.
  • The position of B relative to A is indicated by the vector (m1 → t1).
  • the direction of this vector is the direction of B with respect to A, and the magnitude (length) of the vector corresponds to the distance between A and B.
  • Using a learning model trained in advance by associating vectors (direction and distance) to multiple objects in the omnidirectional video with the vectors (direction and distance) from the actual user A to those objects, the actual direction and distance of B are calculated.
  • the latitude and longitude of Party B's GPS data can be obtained based on the latitude and longitude of Party A's GPS data.
  • the method of calculating the latitude and longitude of the GPS data of B is not limited to the above, and may be calculated by another known method from the omnidirectional video.
  • As the latitude and longitude of A's GPS data, the values received from the mobile terminal device 20 are used, as described above.
  • In step S465, the control unit 12 determines whether or not a target candidate is approaching user A. Specifically, as shown in FIG. 27, the control unit determines whether or not the target candidate is approaching user A based on the position and movement of the target candidate obtained from the two omnidirectional images. For example, whether or not it is approaching is determined by whether or not its direction of movement is toward the course of user A.
  • In FIG. 27, the direction of movement of B's bicycle is toward the course of user A (m2 → m1), so it can be determined that B's bicycle is approaching user A.
  • By contrast, a person or vehicle whose position does not change between the two images is not moving and can therefore be excluded as a target candidate.
  • Likewise, if the direction of movement is away from the course of user A, it can be determined that the person or vehicle is not a target candidate.
  • In this way, even if a plurality of persons or vehicles are captured, the target identification process can automatically determine whether or not each of them is a candidate for the object that caused the accident or incident involving user A.
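A toy version of the approach test in step S465 is sketched below. For simplicity it checks whether the candidate's movement vector points roughly toward A's position rather than toward A's whole course, which is a simplification of the described determination; vector units are small latitude/longitude differences.

```python
# Sketch: is the candidate moving toward user A? (simplified approach test)
import math

def is_approaching(candidate_move, a_position, candidate_position):
    """candidate_move: (dlat, dlon) between the two frames;
    a_position / candidate_position: (lat, lon) in the later frame."""
    to_a = (a_position[0] - candidate_position[0],
            a_position[1] - candidate_position[1])
    dot = candidate_move[0] * to_a[0] + candidate_move[1] * to_a[1]
    moving = math.hypot(*candidate_move) > 1e-9   # stationary candidates are excluded
    return moving and dot > 0.0                   # heading roughly toward A
```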
  • In step S466, the control unit 12 verifies whether or not a target candidate is the target of the accident or incident. Specifically, the control unit 12 superimposes the position mX (the latitude and longitude of the GPS data) at the time of impact or of pressing the emergency button on the target candidate diagram 422d of FIG. 27 to generate the target candidate verification diagram 422e of FIG. 28. Then, in step S467, the control unit 12 determines whether or not the position mX of the accident or incident (the position at the time of impact or of pressing the emergency button) lies on the extension line (dotted arrow) of the target candidate's movement (t2 → t1). In this case, the calculation accuracy of the GPS data for B's position may affect the verification accuracy, so a certain tolerance may be provided according to that accuracy, for example, so that the position is still judged to be on the extension line even if there is some deviation.
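The geometric check in step S467 could be pictured as below: mX is treated as being on the extension of the movement t2 → t1 if it lies past t1 along that direction and within a perpendicular tolerance chosen to reflect the GPS estimation error. The tolerance value is an arbitrary placeholder.

```python
# Sketch: does the accident/incident position mX lie on the extension of t2 -> t1?
import math

def on_extension_line(t2, t1, mx, tolerance_deg=0.00005):
    """t2, t1, mx: (lat, lon) tuples. tolerance_deg of ~0.00005 is a few meters."""
    dx, dy = t1[0] - t2[0], t1[1] - t2[1]
    px, py = mx[0] - t2[0], mx[1] - t2[1]
    norm = math.hypot(dx, dy)
    if norm < 1e-12:
        return False                                  # the candidate did not move
    perpendicular = abs(dx * py - dy * px) / norm     # distance from the line
    along = (dx * px + dy * py) / norm                # progress along the direction
    return perpendicular <= tolerance_deg and along >= norm  # beyond t1, on the line
```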
  • If it is, then in step S468 the target candidate is identified as the target that caused the accident or incident.
  • the above steps S465 to S468 are performed for each target candidate.
  • That is, the control unit 12 arbitrarily selects a target candidate in step S465 and performs steps S465 to S468 on that candidate. If the candidate is not approaching in step S465, or if it is determined in step S467 that the position of the accident or incident is not on the extension line of the candidate's movement, the process returns to step S464, another target candidate is selected, and steps S465 to S468 are performed again.
  • After step S468, the control unit 12 ends the target identification process and proceeds to the schematic situation diagram generation process of FIG. 11.
  • Note that the target identification process may be performed by the schematic situation diagram generation unit 125. Since step S461 of the target identification process and step S261 of the schematic situation diagram generation process overlap, when the two processes are performed in succession, the image (for example, FIG. 25) selected in step S461 of FIG. 24 may be used as it is in step S261 of FIG. 11. The target identification process is then performed before the schematic site situation diagram is generated in step S262 of FIG. 11.
  • With the target identification process, even when a plurality of vehicles or persons appear in the omnidirectional video, the object that caused the accident or incident involving user A can be identified automatically.
  • By comparing the omnidirectional video (video data) from slightly before the occurrence of the accident or incident (the time of impact or of pressing the emergency button) with the omnidirectional video from even earlier, the positions and movements of the candidate people and vehicles can be detected.
  • Since the target is identified based on the positions and movements of the target candidates, it can be identified more accurately than when it is identified simply from differences in the size of the target in the image.
  • In the above description, the omnidirectional video selected in the schematic situation diagram generation process and the target identification process is obtained from the camera video of the wearable device 30 as an example, but it is not necessarily limited to this and may also be acquired from surveillance camera video.
  • For example, the wearable device 30 may be damaged depending on the circumstances of the accident or incident. In such cases, the video from before and after the damage may be supplemented with video from surrounding surveillance cameras.
  • In the above embodiments, a situation report for Japan was used as an example, but the present invention is not limited to this.
  • It can be applied to a foreign situation report by incorporating that country's situation report form (template).
  • The Japanese-language portions can be automatically translated into the language of the relevant country to match that country's form.
  • The symbols for people and vehicles in the schematic diagrams can also be easily replaced by incorporating symbols designated by that country, if any.
  • As for map information, by incorporating map data for other countries, it is likewise possible to replace the images of people and vehicles with predetermined symbols and to generate a schematic site situation diagram, shown by positions on the map, that corresponds to each country's form.

Landscapes

  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)

Abstract

The present invention automatically generates a situation report regarding an accident or incident that a person wearing a wearable device has been involved in. The present invention comprises the following units connected via a network to a video processing device 40 that stores camera video from a wearable device 30 as video data, a map processing device 50 that stores map data, and a mobile terminal device 20 that stores situation data including GPS data and time-of-day data at the time of occurrence of an accident or incident: a situation data acquisition unit 122 for acquiring situation data; a video data acquisition unit 121 for acquiring video data on the basis of the time-of-day data; a map data acquisition unit 123 for acquiring map data on the basis of the GPS data; a situation outline generation unit 125 for generating a situation outline on the basis of at least the video data and the map data; and a situation report generation unit 126 for generating a situation report including the situation outline.

Description

ウエアラブル情報処理装置及びプログラムWearable information processing device and program
 本発明は、ウエアラブル装置のカメラ映像等の情報を利用して事故や事件の状況報告を生成するウエアラブル情報処理装置及びプログラムに関する。 The present invention relates to a wearable information processing device and a program that generate a situation report of an accident or incident using information such as camera images of the wearable device.
 近年、スマートフォンなどの携帯端末装置を運転中に使用して衝突事故を起こすケースが多発している。他方、ドライブレコーダを自動車に装着して運転時の映像を記憶しておくことで、その映像を見れば自動車の衝突事故の状況が分かるようになってきた。例えば特許文献1では、ドライブレコーダのカメラ映像を記憶しておくことで事後的に事故原因などの解析に利用できる。 In recent years, there have been many cases of collisions caused by using mobile devices such as smartphones while driving. On the other hand, by attaching a drive recorder to an automobile and storing images of driving, it has become possible to understand the circumstances of automobile collisions by watching the images. For example, in Patent Literature 1, by storing a camera image of a drive recorder, it can be used for post-event analysis of the cause of an accident and the like.
特開2006-306153公報Japanese Patent Application Laid-Open No. 2006-306153
 ところで、携帯端末の使用による衝突事故は自動車だけでなく、自転車やバイクと人、人と人との間でも多発している。しかしながら、特許文献1のようなドライブレコーダは自動車に搭載されるので、自動車事故の映像は記憶できても、自転車同士の事故や自転車と歩行者との事故などの自動車が関わらない事故の映像までは記憶できない。しかも、事故現場では被害者は気が動転しているため、状況を正確に伝えることは難しい。 By the way, collisions caused by the use of mobile devices occur not only in automobiles, but also between bicycles, motorcycles and people, and between people. However, since the drive recorder as disclosed in Patent Document 1 is installed in a vehicle, even if it can store images of vehicle accidents, it can also store images of accidents that do not involve vehicles, such as accidents between bicycles and accidents between bicycles and pedestrians. can't remember. Moreover, it is difficult to accurately convey the situation because the victim is upset at the scene of the accident.
 このような事情を考慮して、本発明は、人に装着するウエアラブル装置のカメラ映像等のウエアラブル情報を利用することで、その人が巻き込まれた事故や事件の状況報告を自動的に生成できるウエアラブル情報処理装置及びプログラムを提供することを目的とする。 In consideration of such circumstances, the present invention uses wearable information such as a camera image of a wearable device worn by a person to automatically generate a status report of an accident or incident in which the person is involved. An object is to provide a wearable information processing device and a program.
 上記課題を解決するために、本発明の装置は、人に装着されるウエアラブル装置からのカメラ映像を含む情報を処理するウエアラブル情報処理装置であって、ウエアラブル装置のカメラ映像を映像データとして記憶する映像処理装置と、地図データを記憶する地図処理装置と、事故又は事件の発生時の時刻データ及びGPSデータを含む状況データを収集可能な携帯端末装置と、にネットワークを介して接続され、携帯端末装置から状況データを取得する状況データ取得部と、映像処理装置から時刻データに基づく映像データを取得する映像データ取得部と、地図処理装置からGPSデータに基づく地図データを取得する地図データ取得部と、少なくとも映像データと地図データに基づいて状況略図を生成する状況略図生成部と、状況略図を含む事故又は事件の状況報告を生成する状況報告生成部とを備える。 In order to solve the above problems, a device according to the present invention is a wearable information processing device that processes information including a camera video from a wearable device worn by a person, and stores the camera video of the wearable device as video data. A mobile terminal connected via a network to a video processing device, a map processing device that stores map data, and a mobile terminal that can collect situation data including time data and GPS data at the time of occurrence of an accident or incident. A situation data acquisition unit that acquires situation data from the device, a video data acquisition unit that acquires video data based on time data from the video processing device, and a map data acquisition unit that acquires map data based on GPS data from the map processing device. , a schematic situation map generator for generating a schematic situation map based on at least the video data and the map data; and a situation report generator for generating a situation report of the accident or incident including the schematic situation map.
 本発明のウエアラブル情報処理装置によれば、人に装着されるウエアラブル装置からのカメラ映像に基づいて事故又は事件の状況報告を自動的に生成できる。このように、人に装着されるウエアラブル装置からのカメラ映像を用いることで、その人が巻き込まれた事故の状況報告だけでなく、ひったくりなどの事件の状況報告まで自動的に生成できる。しかも、事故又は事件の発生時の時刻データから映像データを取得し、GPSデータから地図データを取得して状況略図まで自動的に生成できるので、事故や事件に巻き込まれて気が動転していても、正確に現場の状況を伝える状況報告をその場で生成できる。 According to the wearable information processing device of the present invention, it is possible to automatically generate a situation report of an accident or incident based on a camera image from a wearable device worn by a person. In this way, by using a camera image from a wearable device worn by a person, it is possible to automatically generate not only a situation report of an accident in which the person is involved, but also a situation report of an incident such as a snatch. In addition, video data can be obtained from the time data when an accident or incident occurs, map data can be obtained from GPS data, and even a schematic diagram of the situation can be automatically generated. can also generate on-the-fly situation reports that accurately convey the situation on the ground.
 本発明の好適な態様において、カメラ映像は、ウエアラブル装置を装着した人の周囲の映像である。本態様によれば、ウエアラブル装置を装着した人の周囲の映像をカメラ映像として用いるので、例えば自転車に衝突された事故であればその自転車がどちらの方向から衝突したのかが分かる。また、ひったくり犯人がどちらの方向からやってきたのかも分かる。 In a preferred aspect of the present invention, the camera image is an image of the surroundings of the person wearing the wearable device. According to this aspect, since the image of the surroundings of the person wearing the wearable device is used as the camera image, for example, in the case of an accident in which a bicycle collides, it is possible to know from which direction the bicycle collided. You can also see which direction the snatcher is coming from.
 本発明の好適な態様において、状況略図生成部は、映像データ取得部で取得された映像データから、事故又は事件を引き起こした対象の画像を検出し、対象の画像を所定の記号に置き換えて地図上の位置で示す現場状況略図を生成し、地図データ取得部で取得された地図データからGPSデータの位置を含む現場付近地図を生成し、現場状況略図と現場付近地図から状況略図を生成する。本態様によれば、事故又は事件を引き起こした対象の画像(例えば自転車・バイク・自動車等の乗り物、歩行者・ランナー等の人物)を検出し、対象の画像を所定の記号に置き換えて地図上の位置で示す現場状況略図と現場付近地図から状況略図を自動で生成できるから、単に映像データの画像を状況報告に貼りつける場合と比較して、その画像から状況略図への加工の手間を省くことができる。このように、加工しなくても保健会社や警察にそのまま送信できる状況略図を含む状況報告書が自動で生成されるので、事故や事件に巻き込まれて気が動転していても、正確に状況を伝えることができる。また、事故又は事件を引き起こした人物や乗り物の画像を所定の記号に置き換えた状況略図を自動で生成できるので、判例などの状況略図との直接照合も可能となり、人物や乗り物の画像をそのまま貼りつける場合に比較して照合精度を大幅に向上できる。 In a preferred aspect of the present invention, the schematic situation diagram generation unit detects an image of the target that caused the accident or the incident from the video data acquired by the video data acquisition unit, replaces the target image with a predetermined symbol, and displays the image on the map. A schematic map of the site situation shown in the upper position is generated, a map of the vicinity of the site including the position of the GPS data is generated from the map data acquired by the map data acquisition unit, and a schematic situation map is generated from the schematic map of the site situation and the map of the vicinity of the site. According to this aspect, an image of a target that caused an accident or incident (for example, a vehicle such as a bicycle, a motorcycle, a car, or a person such as a pedestrian or a runner) is detected, and the target image is replaced with a predetermined symbol and displayed on the map. Since a schematic situation map can be automatically generated from the schematic map of the site situation indicated by the position of and the map of the vicinity of the site, compared to simply pasting the image of the video data to the situation report, the process of processing the image to the schematic situation map is saved. be able to. In this way, a situation report is automatically generated that includes a schematic diagram of the situation that can be sent to the health company or the police without editing, so even if you are upset about being involved in an accident or incident, you can accurately report the situation. can tell In addition, since it is possible to automatically generate a schematic situation map by replacing the images of the person or vehicle that caused the accident or incident with a predetermined symbol, it is possible to directly match the schematic situation map of a court decision, etc., and paste the image of the person or vehicle as it is. Collation accuracy can be greatly improved compared to the case of attaching.
 本発明の好適な態様において、ウエアラブル装置は、携帯端末装置に接続され、映像処理装置は、携帯端末装置を介してウエアラブル装置からのカメラ映像を受信して所定の映像時間ごとの複数の映像データとして記憶し、映像データ取得部は、複数の映像データのうち少なくとも時刻データの時刻の映像を含む映像データを取得する。本態様によれば、ウエアラブル装置からのカメラ映像を所定の映像時間ごとの複数の映像データとして分けて記憶するから、複数の映像データに分けないで記憶する場合に比較して、事故や事件が発生した時刻の前後の映像を特定しやすい。 In a preferred aspect of the present invention, the wearable device is connected to the mobile terminal device, and the video processing device receives camera video from the wearable device via the mobile terminal device and generates a plurality of video data for each predetermined video time. , and the image data acquisition unit acquires image data including at least the image at the time of the time data from among the plurality of image data. According to this aspect, the camera video from the wearable device is divided and stored as a plurality of video data for each predetermined video time. It is easy to identify images before and after the time of occurrence.
 本発明の好適な態様において、状況略図生成部は、現場状況略図を生成する前に、映像データ取得部で取得された映像データから対象の候補となる人物又は乗り物の画像が複数検出されたか否かを判断し、対象の候補となる人物又は乗り物の画像が1つしか検出されないと判断した場合には、その1つの画像を事故又は事件を引き起こした対象として特定して、現場状況略図及び現場付近地図及び状況略図を生成し、対象の候補となる人物又は乗り物の画像が複数検出されたと判断した場合には、上記映像データとそれよりも前の映像データとを比較することで対象の候補となる人物又は乗り物のそれぞれの位置と動きを検出して対象候補図を生成し、対象候補図の位置と動きに基づいて、複数の画像の中から事故又は事件を引き起こした対象を特定して、現場状況略図及び現場付近地図及び状況略図を生成する。本態様によれば、映像データに複数の乗り物や人物が写っている場合でも、事故又は事件を引き起こした対象を自動的に特定できる。また、事故又は事件の発生時に基づく映像データとそれよりも前の映像データとを比較することで、対象候補である人物や乗り物の位置と動きを検出できる。これにより、その対象候補の位置と動きに基づいて対象を特定することができるので、単に映像に写っている対象の大きさの違いなどから対象を特定する場合に比較してより正確に対象を特定できる。 In a preferred aspect of the present invention, before generating the schematic site situation diagram, the schematic situation diagram generation unit determines whether or not a plurality of images of a person or a vehicle that is a target candidate has been detected from the video data acquired by the video data acquisition unit. If it is determined that only one image of a person or vehicle that is a candidate for the target is detected, that one image is specified as the target that caused the accident or incident, and a schematic map of the site situation and the site When it is determined that a plurality of images of a person or a vehicle, which are candidates for a target, have been detected by generating a map of the vicinity and a schematic diagram of the situation, candidates for the target are identified by comparing the video data with previous video data. Detect the position and movement of each person or vehicle that becomes the target, generate an object candidate map, and identify the object that caused the accident or incident from among multiple images based on the position and movement of the object candidate map , to generate a map of the site situation and a map of the vicinity of the site and a map of the situation. According to this aspect, even when a plurality of vehicles or persons are shown in the video data, it is possible to automatically identify the object that caused the accident or incident. In addition, by comparing video data based on when an accident or incident occurred with video data before that, the position and movement of a person or vehicle, which is a target candidate, can be detected. As a result, the target can be specified based on the position and movement of the target candidate, so the target can be specified more accurately than when the target is specified simply from the difference in size of the target in the image. can be identified.
 本発明の好適な態様において、携帯端末装置は、衝撃を検知する衝撃センサを備え、衝撃センサにより衝撃が検知されると、事故の発生又は事件の発生又はどちらでもないのいずれかを選択するための事故事件選択画面を表示部に表示し、状況データ取得部は、事故の発生又は事件の発生が選択された情報を含む状況データを携帯端末装置から取得する。衝撃を検知しても携帯端末装置を落としただけなど、事故の発生又は事件の発生してない場合もある。本態様によれば、状況データ取得部が携帯端末装置から取得する状況データには事故の発生又は事件の発生が選択された情報を含まれるから、事故の発生又は事件の発生がなければ状況報告書は自動で生成されないようにすることができるので、状況報告生成の無駄を抑制できる。 In a preferred aspect of the present invention, the portable terminal device is provided with an impact sensor for detecting an impact, and when the impact sensor detects an impact, it is possible to select whether an accident has occurred, an incident has occurred, or neither. The accident case selection screen is displayed on the display unit, and the situation data acquisition unit acquires situation data including the information that the occurrence of the accident or the occurrence of the incident is selected from the portable terminal device. Even if an impact is detected, there are cases where an accident or incident does not occur, such as when the portable terminal device is simply dropped. According to this aspect, the situation data acquired from the portable terminal device by the situation data acquisition unit includes information that the occurrence of an accident or an incident has been selected. Since the report can be prevented from being automatically generated, it is possible to suppress wasteful generation of the status report.
 本発明の好適な態様において、状況報告生成部は、携帯端末装置からの状況データにより事故か事件かを判断し、事故であると判断した場合は、所定の事故発生状況報告書のフォーム(ひな形)から状況報告を生成し、事件であると判断した場合は、所定の事件発生状況報告書のフォームから状況報告を生成する。本態様によれば、事故又は事件のそれぞれのフォームに応じた状況報告を自動で生成できるので、後から所定のフォームに合わせて状況報告を作り直す手間を省ける。 In a preferred embodiment of the present invention, the situation report generation unit determines whether it is an accident or an incident based on the situation data from the portable terminal device. form), and if it is determined to be an incident, a situation report is generated from a predetermined incident occurrence status report form. According to this aspect, it is possible to automatically generate a situation report according to the form of each accident or incident, thereby saving the trouble of recreating the situation report later according to a predetermined form.
 本発明の好適な態様において、状況略図を含む判例データを記憶する判例データベースと、状況略図生成部で生成された状況略図と判例データベースの判例に含まれる状況略図と照合して状況略図が類似する判例を選出し、選出した判例から過失割合を抽出し、選出した判例の状況略図と過失割合を含む過失割合報告を生成する過失割合報告生成部とを備える。本態様によれば、判例データベースを状況略図で照合するので、テキスト検索で判例を探す場合に比較して類似の判例を見つけやすい。判例データに基づく過失割合報告を自動で生成できるので、同様のケースの過失割合が分かり、訴訟の提起や示談の判断をしやすくなり、保険会社も保険の適用を判断しやすくなる。 In a preferred embodiment of the present invention, a judicial case database that stores judicial precedent data including schematic situation diagrams is compared with the schematic situation diagram generated by the schematic situation diagram generation unit and the schematic situation diagrams included in the judicial cases in the schematic situation diagrams to make the schematic situation diagrams similar. a percentage-of-fault report generator for selecting a case, extracting percentage-of-failure from the selected case, and generating a percentage-of-fault report including a schematic of the situation of the selected case and the percentage-of-fault. According to this aspect, since the judicial precedent database is compared with the schematic situation diagram, it is easier to find similar judicial precedents than when searching for judicial precedents by text search. Since it is possible to automatically generate a percentage-of-fault report based on case law data, it is possible to know the percentage of fault in similar cases, making it easier to file lawsuits and decide whether to settle, and for insurance companies to decide whether to apply insurance.
 本発明の好適な態様において、判例データベースは、事故判例データベースと事件判例データベースに分けられ、過失割合報告生成部は、携帯端末装置からの状況データにより事故か事件かを判断し、事故であると判断した場合は、事故判例データベースから状況略図が類似する判例を選出し、事件であると判断した場合は、事件判例データベースから状況略図が類似する判例を選出する。本態様によれば、事故の場合は事故判例データベースから状況略図が類似する判例が選出され、事件の場合は事件判例データベースから状況略図が類似する判例が選出されるので、事故に応じた適切な判例を選出でき、また事件に応じた適切な判例や犯罪類型(条文など)を選出できる。 In a preferred embodiment of the present invention, the judicial precedent database is divided into an accident judicial precedent database and an incident judicial precedent database. If so, it selects a precedent with a similar outline of the situation from the accident precedent database, and if it determines that it is a case, selects a precedent with a similar outline of the situation from the case precedent database. According to this aspect, in the case of an accident, a judicial precedent with a similar schematic diagram of the situation is selected from the accident judicial precedent database, and in the case of a case, a judicial precedent with a similar schematic diagram of the situation is selected from the case judicial precedent database. Precedents can be selected, and appropriate precedents and crime types (articles, etc.) can be selected according to the case.
 上記課題を解決するために、本発明のプログラムは、ウエアラブル情報処理装置が行う事故又は事件の状況報告生成処理をコンピュータに実行させるプログラムであって、ウエアラブル情報処理装置は、人に装着されるウエアラブル装置のカメラ映像を映像データとして記憶する映像処理装置と、地図データを記憶する地図処理装置と、事故又は事件の発生時の時刻データ及びGPSデータを含む状況データを収集可能な携帯端末装置と、にネットワークを介して接続され、状況報告生成処理は、携帯端末装置からの状況データを取得するステップと、映像処理装置から時刻データに基づく映像データを取得するステップと、地図処理装置からGPSデータに基づく地図データを取得するステップと、少なくとも映像データと地図データに基づいて状況略図を生成するステップと、状況略図を含む状況報告を生成するステップとを含む。これにより、コンピュータを、事故又は事件の状況報告生成処理を行うウエアラブル情報処理装置として機能させることができる。 In order to solve the above-described problems, a program of the present invention is a program for causing a computer to execute a process of generating a situation report of an accident or incident performed by a wearable information processing device, wherein the wearable information processing device is a wearable device worn on a person. A video processing device that stores camera video of the device as video data, a map processing device that stores map data, a mobile terminal device that can collect situation data including time data and GPS data when an accident or incident occurs, is connected via a network, and the situation report generating process includes a step of acquiring situation data from a mobile terminal device, a step of acquiring video data based on time data from a video processing device, and a step of acquiring GPS data from a map processing device. generating a situation diagram based at least on the video data and the map data; and generating a situation report including the situation diagram. This allows the computer to function as a wearable information processing device for generating a situation report of an accident or incident.
 本発明によれば、人に装着するウエアラブル装置のカメラ映像等のウエアラブル情報を利用することで、人と自転車やバイク、人と人との事故や事件の映像も記憶できる。この映像に基づいて事故や事件の状況報告を自動的に生成できる。 According to the present invention, by using wearable information such as camera images of a wearable device worn by a person, it is possible to store images of people, bicycles, motorcycles, and accidents and incidents between people. A status report of an accident or incident can be automatically generated based on this video.
FIG. 1 is a diagram showing the configuration of the wearable information processing system of the first embodiment.
FIG. 2 is a block diagram of the wearable information processing system of FIG. 1.
FIG. 3 is a data table showing a specific example of video data.
FIG. 4 is a data table showing a specific example of user data.
FIG. 5 is a flowchart for explaining the processing of the entire system of the first embodiment.
FIG. 6 is a diagram showing a specific example of the accident/incident selection screen.
FIG. 7 is a diagram showing a specific example of the report instruction screen.
FIG. 8 is a diagram showing a specific example of the form of the accident occurrence situation report.
FIG. 9 is a diagram showing a specific example of the form of the incident occurrence situation report.
FIG. 10 is a flowchart showing a specific example of the situation report generation process of FIG. 5.
FIG. 11 is a flowchart showing a specific example of the schematic situation diagram generation process of FIG. 10.
FIG. 12 is a diagram showing a specific example of the video data selected in the schematic situation diagram generation process.
FIG. 13 is a diagram showing a specific example of the schematic site situation diagram generated in the schematic situation diagram generation process.
FIG. 14 is a diagram showing a specific example of the near-site map generated in the schematic situation diagram generation process.
FIG. 15 is a diagram showing a specific example of the schematic situation diagram generated in the schematic situation diagram generation process.
FIG. 16 is a block diagram of the wearable information processing system of the second embodiment.
FIG. 17 is a flowchart for explaining the processing of the entire system of the second embodiment.
FIG. 18 is a flowchart showing a specific example of the fault rate report generation process.
FIG. 19 is a diagram showing a specific example of the form of the accident fault rate report.
FIG. 20 is a diagram showing a specific example of the form of the incident fault rate report.
FIG. 21 is a diagram showing a specific example of the form of the incident constituent elements report.
FIG. 22 is a diagram showing another specific example of the accident/incident selection screen.
FIG. 23 is a diagram showing another specific example of the report instruction screen.
FIG. 24 is a flowchart showing a specific example of the target identification process according to a modification.
FIG. 25 shows example images used in the target identification process: (a) an omnidirectional image and (b) an image generated from the omnidirectional image.
FIG. 26 shows example images from a predetermined time before those of FIG. 25: (a) an omnidirectional image and (b) an image generated from the omnidirectional image.
FIG. 27 is a diagram showing a specific example of the target candidate diagram.
FIG. 28 is a diagram showing a specific example of the target candidate verification diagram.
<第1実施形態>
 以下、本発明の第1実施形態について図面を参照しながら説明する。第1実施形態では本発明のウエアラブル情報処理装置10を備えるウエアラブル情報処理システム100を例示する。図1は、第1実施形態に係るウエアラブル情報処理システム100の構成を示す図である。図1のウエアラブル情報処理システム100は、ウエアラブル情報処理装置10と携帯端末装置20と映像処理装置40と地図処理装置50とを備える。
<First Embodiment>
A first embodiment of the present invention will be described below with reference to the drawings. In the first embodiment, a wearable information processing system 100 including the wearable information processing device 10 of the present invention is illustrated. FIG. 1 is a diagram showing the configuration of a wearable information processing system 100 according to the first embodiment. A wearable information processing system 100 of FIG. 1 includes a wearable information processing device 10 , a mobile terminal device 20 , a video processing device 40 and a map processing device 50 .
The wearable information processing device 10 is configured to be able to communicate with the mobile terminal device 20, the video processing device 40, and the map processing device 50 via a network N such as the Internet. The wearable information processing device 10, the video processing device 40, and the map processing device 50 may each be configured as a personal computer or as a cloud server. Each of the wearable information processing device 10, the video processing device 40, and the map processing device 50 may also be configured so that a plurality of machines performs distributed processing, or may be configured by a plurality of virtual machines provided in a single server device.
 携帯端末装置20は、ユーザによって利用される持ち運び可能な情報処理装置である。携帯端末装置20は、例えばスマートフォン、タブレット、PDA(Personal Digital Assistant)などである。ネットワークNには1つの携帯端末装置20が接続される場合を例示しているが、2つ以上の携帯端末装置20が接続されていてもよい。 The mobile terminal device 20 is a portable information processing device used by the user. The mobile terminal device 20 is, for example, a smart phone, a tablet, a PDA (Personal Digital Assistant), or the like. Although the case where one mobile terminal device 20 is connected to the network N is illustrated, two or more mobile terminal devices 20 may be connected.
 携帯端末装置20には、ウエアラブル装置30がブルートゥース(登録商標)などの通信規格に基づく無線通信ネットワークで接続可能である。ウエアラブル装置30はカメラ34を備え、人の身体に装着されてその人の周囲のカメラ映像を撮ることができる。図1のイラストに例示するウエアラブル装置30は、耳に装着する耳掛けタイプであり、全方位のカメラ映像を撮れるカメラ34が設けられている。 A wearable device 30 can be connected to the mobile terminal device 20 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark). The wearable device 30 is equipped with a camera 34 and can be worn on a person's body to take camera images of the person's surroundings. A wearable device 30 exemplified in the illustration of FIG. 1 is an ear hook type that is worn on the ear, and is provided with a camera 34 capable of capturing omnidirectional camera images.
 ウエアラブル情報処理装置10は、ウエアラブル情報(身体周囲のカメラ映像など)に基づいて事故又は事件の状況報告を自動で生成する。第1実施形態のウエアラブル情報処理装置10は、携帯端末装置20をクライアントとするサーバコンピュータで構成する場合を例示する。映像処理装置40は、ウエアラブル装置30からのカメラ映像を、携帯端末装置20を介して受信して映像データとして記憶する。地図処理装置50は地図データを記憶し、受信したGPSデータの緯度と経度を含む現場付近の地図データをウエアラブル情報処理装置10へ送信する。 The wearable information processing device 10 automatically generates a status report of an accident or incident based on wearable information (camera image around the body, etc.). The wearable information processing device 10 of the first embodiment is configured by a server computer having the mobile terminal device 20 as a client. The video processing device 40 receives the camera video from the wearable device 30 via the mobile terminal device 20 and stores it as video data. The map processing device 50 stores map data, and transmits map data of the vicinity of the site including the latitude and longitude of the received GPS data to the wearable information processing device 10 .
FIG. 2 is a block diagram of the wearable information processing system 100 of FIG. 1. The mobile terminal device 20 shown in FIG. 2 includes a communication unit 21, a control unit 22, a storage unit 23, a camera 24, a microphone 25, a sensor unit 26, an input unit 27, a display unit 28, and the like. The control unit 22, the communication unit 21, the storage unit 23, the camera 24, the microphone 25, the sensor unit 26, the input unit 27, and the display unit 28 are each connected to a bus line 20L and can exchange data (information) with one another.
 通信部21は、ネットワークNと有線又は無線で接続され、ウエアラブル情報処理装置10や映像処理装置40との間でデータ(情報)の送受信を行う。通信部21は、インターネットやイントラネットの通信インターフェースとして機能し、例えばTCP/IP、Wi-Fi(登録商標)、ブルートゥース(登録商標)を用いた通信などが可能である。通信部21は、ウエアラブル装置30との間でもデータ(情報)の送受信を行う。通信部21は、ブルートゥース(登録商標)などの通信規格に基づく無線通信ネットワークでウエアラブル装置30と接続可能である。 The communication unit 21 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 and the video processing device 40 . The communication unit 21 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark). The communication unit 21 also transmits and receives data (information) to and from the wearable device 30 . The communication unit 21 can be connected to the wearable device 30 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark).
 制御部22は、携帯端末装置20全体を統括的に制御する。制御部22は、MPU(Micro Processing Unit)などの集積回路で構成される。制御部22は、CPU(Central Processing Unit)、RAM(Random Access Memory)、ROM(Read Only Memory)を備える。制御部22は、必要なプログラムをROMにロードし、RAMを作業領域としてそのプログラムを実行することで、各種の処理を行う。 The control unit 22 comprehensively controls the mobile terminal device 20 as a whole. The control unit 22 is composed of an integrated circuit such as an MPU (Micro Processing Unit). The control unit 22 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), and a ROM (Read Only Memory). The control unit 22 loads necessary programs into the ROM and executes the programs using the RAM as a work area, thereby performing various processes.
 記憶部23は、制御部22で実行される各種プログラムやこれらのプログラムによって使用されるデータを記憶する記憶媒体(コンピュータ読み取り可能な有形の記憶媒体:a tangible storage medium)の例示である。記憶部23は、ハードディスク、光ディスク、磁気ディスクなどの記憶装置で構成される。記憶部23の構成はこれらに限られず、記憶部23をRAMやフラッシュメモリなどの半導体メモリなどで構成してもよい。例えば記憶部23をSSD(Solid State Drive)で構成することもできる。 The storage unit 23 is an example of a storage medium (computer-readable tangible storage medium: a tangible storage medium) that stores various programs executed by the control unit 22 and data used by these programs. The storage unit 23 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk. The configuration of the storage unit 23 is not limited to these, and the storage unit 23 may be configured by a semiconductor memory such as RAM or flash memory. For example, the storage unit 23 can be configured with an SSD (Solid State Drive).
The storage unit 23 includes a program storage unit 231, a data storage unit 232, and the like. The program storage unit 231 stores a plurality of application programs executed by the control unit 22, including a dedicated application program for performing wearable information processing in cooperation with the wearable device 30, the wearable information processing device 10, and the video processing device 40. This dedicated application program can be downloaded to the mobile terminal device 20 via the network N. The data storage unit 232 stores various data used by the application programs.
 制御部22は、センサデータ取得部221、衝撃検知部222、衝撃データ収集部223、状況データ収集部224などを備える。センサデータ取得部221は、センサ部26からのセンサデータ(加速度データなど)を取得する。センサ部26は、携帯端末装置20の衝撃を検知する衝撃センサ(例えば加速度センサ)を含む。衝撃検知部222は、センサデータ取得部221で取得したセンサデータ(加速度データなど)によって携帯端末装置20の衝撃を検知する。例えばセンサデータに基づいて加速度データを監視し、加速度が所定値を超えたときに衝撃を検知する。この加速度の所定値はユーザの入力により変更可能にしてもよい。衝撃データ収集部223は、衝撃検知部222が衝撃を検知すると、その衝撃時(衝撃を検知した時)の時刻データ、衝撃時のGPSデータなどを衝撃データとして収集する。 The control unit 22 includes a sensor data acquisition unit 221, an impact detection unit 222, an impact data collection unit 223, a situation data collection unit 224, and the like. The sensor data acquisition unit 221 acquires sensor data (such as acceleration data) from the sensor unit 26 . The sensor unit 26 includes an impact sensor (for example, an acceleration sensor) that detects impact of the mobile terminal device 20 . The impact detection unit 222 detects impact of the mobile terminal device 20 based on sensor data (acceleration data, etc.) acquired by the sensor data acquisition unit 221 . For example, acceleration data is monitored based on sensor data, and impact is detected when the acceleration exceeds a predetermined value. This predetermined value of acceleration may be changeable by user input. When the impact detection unit 222 detects an impact, the impact data collection unit 223 collects time data at the time of the impact (when the impact is detected), GPS data at the time of impact, etc. as impact data.
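As a minimal sketch of the threshold-based impact detection described above (hypothetical function and field names; the threshold value is an assumption, since the specification only states that the acceleration exceeds a predetermined value), the impact detection unit 222 and the impact data collection unit 223 might work roughly as follows:

```python
import math
import time
from dataclasses import dataclass

IMPACT_THRESHOLD_G = 3.0  # assumed user-adjustable threshold (in g)

@dataclass
class ImpactData:
    timestamp: float       # time of impact
    latitude: float        # GPS latitude at impact
    longitude: float       # GPS longitude at impact
    acceleration_g: float  # acceleration magnitude that triggered detection

def detect_impact(ax: float, ay: float, az: float,
                  threshold_g: float = IMPACT_THRESHOLD_G) -> bool:
    """Return True when the acceleration magnitude exceeds the threshold."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude > threshold_g

def collect_impact_data(ax: float, ay: float, az: float,
                        latitude: float, longitude: float) -> ImpactData:
    """Bundle the time, GPS position and acceleration at the moment of impact."""
    return ImpactData(time.time(), latitude, longitude,
                      math.sqrt(ax * ax + ay * ay + az * az))
```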
 制御部22は、衝撃データ収集部223で収集した衝撃データを通信部21によりウエアラブル情報処理装置10に送信する。ウエアラブル情報処理装置10は衝撃データを受信すると、携帯端末装置20にプッシュ通知を送信する。状況データ収集部224は、プッシュ通知により、事故事件選択画面や状況入力画面などを表示部28に表示して状況データを収集可能である。 The control unit 22 transmits the impact data collected by the impact data collection unit 223 to the wearable information processing device 10 through the communication unit 21 . When the wearable information processing device 10 receives the impact data, the wearable information processing device 10 transmits a push notification to the mobile terminal device 20 . The situation data collection unit 224 can collect situation data by displaying an accident case selection screen, a situation input screen, and the like on the display unit 28 by push notification.
The accident case selection screen is a screen on which the user can select, for example, "an incident has occurred", "an accident has occurred", or "nothing has occurred". The situation input screen is a screen on which explanations of the situation and detailed information can be entered, such as the appearance of the other party, the state of the traffic lights, and what kind of accident or incident has occurred. This situation input screen may be configured to accept input in a questionnaire format. The data (information) entered from the input unit 27 via the accident case selection screen, the situation input screen, and the like are collected as situation data. The situation data also includes impact data such as the acceleration data, time data, and GPS data at the time of impact.
 入力部27は、例えばタッチパネルやボタンである。入力部27は、ブルートゥース(登録商標)で携帯端末装置20に接続される外部入力デバイス(キーボード、マウスなど)であってもよい。表示部28は、液晶ディスプレイや有機ELディスプレイなどであり、制御部22からの指示に従って各種情報を表示する。ここでの表示部28は、タッチパネル付きの液晶ディスプレイを例示する。入力部27のボタンは、表示部28に表示されたボタンであってもよい。 The input unit 27 is, for example, a touch panel or buttons. The input unit 27 may be an external input device (keyboard, mouse, etc.) connected to the mobile terminal device 20 via Bluetooth (registered trademark). The display unit 28 is a liquid crystal display, an organic EL display, or the like, and displays various information according to instructions from the control unit 22 . The display unit 28 here exemplifies a liquid crystal display with a touch panel. The buttons of the input section 27 may be buttons displayed on the display section 28 .
 図2のウエアラブル装置30は、通信部31、カメラ34などを備える。通信部31は、携帯端末装置20との間でデータ(情報)の送受信を行う。通信部31は、ブルートゥース(登録商標)などの通信規格に基づく無線通信ネットワークで携帯端末装置20と接続可能である。 A wearable device 30 in FIG. 2 includes a communication unit 31, a camera 34, and the like. The communication unit 31 transmits and receives data (information) to and from the mobile terminal device 20 . The communication unit 31 can be connected to the mobile terminal device 20 via a wireless communication network based on a communication standard such as Bluetooth (registered trademark).
 カメラ34は、人の身体に装着されて周囲(水平360度)のカメラ映像を撮ることができる。カメラ34は、全方位のカメラ映像を撮れるCCD(Charge Coupled Device)カメラなどで構成される。なお、カメラ34は、必ずしも全方位カメラでなくてもよい。人の身体の前後左右を含む周囲のカメラ映像を撮影できるカメラ34を備えるものであれば、どのようなウエアラブル装置30であってもよい。例えば複数のカメラ34で身体周囲のカメラ映像を撮影できるものでもよい。また、ウエアラブル装置30の種類は図示する耳掛けタイプに限られない。人の身体の周囲の映像を撮影可能なカメラ34を装着していれば、メガネタイプ、ヘッドホンタイプ、首掛けタイプなどどのような種類であってもよい。カメラ34は、CCDカメラに限られず、Webカメラ、IoTカメラなどであってもよい。 The camera 34 is attached to the human body and can take camera images of the surroundings (360 degrees horizontally). The camera 34 is composed of a CCD (Charge Coupled Device) camera or the like capable of taking an omnidirectional camera image. Note that the camera 34 may not necessarily be an omnidirectional camera. Any wearable device 30 may be used as long as it has a camera 34 capable of capturing camera images of the surroundings including the front, rear, left, and right of the human body. For example, a plurality of cameras 34 may be used to capture camera images around the body. Also, the type of wearable device 30 is not limited to the ear hook type shown in the figure. As long as the camera 34 capable of capturing images of the surroundings of the human body is attached, it may be of any type, such as a glasses type, a headphone type, or a neck type. The camera 34 is not limited to a CCD camera, and may be a web camera, an IoT camera, or the like.
 ウエアラブル装置30は、携帯端末装置20に接続されると、カメラ34を起動してそのカメラ映像を携帯端末装置20に送信する。携帯端末装置20は、ウエアラブル装置30からのカメラ映像を映像処理装置40に送信する。 When the wearable device 30 is connected to the mobile terminal device 20 , it activates the camera 34 and transmits the camera image to the mobile terminal device 20 . The mobile terminal device 20 transmits camera video from the wearable device 30 to the video processing device 40 .
 映像処理装置40は、通信部41、制御部42、記憶部43などを備える。制御部42と、通信部41、記憶部43とはそれぞれバスライン40Lに接続され、相互にデータ(情報)のやり取りが可能である。通信部41は、ネットワークNと有線又は無線で接続され、ウエアラブル情報処理装置10や携帯端末装置20との間でデータ(情報)の送受信を行う。通信部41は、インターネットやイントラネットの通信インターフェースとして機能し、例えばTCP/IP、Wi-Fi(登録商標)、ブルートゥース(登録商標)を用いた通信などが可能である。 The video processing device 40 includes a communication section 41, a control section 42, a storage section 43, and the like. The control unit 42, the communication unit 41, and the storage unit 43 are each connected to the bus line 40L, and can exchange data (information) with each other. The communication unit 41 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 and the mobile terminal device 20 . The communication unit 41 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark).
 制御部42は、映像処理装置40全体を統括的に制御する。制御部42は、MPUなどの集積回路で構成される。制御部42は、CPU、RAM、ROMを備える。制御部42は、必要なプログラムをROMにロードし、RAMを作業領域としてそのプログラムを実行することで、各種の処理を行う。 The control unit 42 comprehensively controls the entire video processing device 40 . The control unit 42 is composed of an integrated circuit such as an MPU. The control unit 42 includes a CPU, RAM, and ROM. The control unit 42 loads a necessary program into the ROM and executes the program using the RAM as a work area, thereby performing various processes.
 記憶部43は、制御部42で実行される各種プログラムやこれらのプログラムによって使用されるデータを記憶する記憶媒体の例示である。記憶部43は、ハードディスク、光ディスク、磁気ディスクなどの記憶装置で構成される。記憶部43の構成はこれらに限られず、記憶部43をRAMやフラッシュメモリなどの半導体メモリなどで構成してもよい。例えば記憶部43をSSD(Solid State Drive)で構成することもできる。 The storage unit 43 is an example of a storage medium that stores various programs executed by the control unit 42 and data used by these programs. The storage unit 43 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk. The configuration of the storage unit 43 is not limited to these, and the storage unit 43 may be configured by a semiconductor memory such as RAM or flash memory. For example, the storage unit 43 can be configured with an SSD (Solid State Drive).
The storage unit 43 stores the camera video from the wearable device 30, received via the mobile terminal device 20, as a plurality of video data 432 divided into segments of a predetermined video time. Specifically, the storage unit 43 stores a plurality of video data for each predetermined video time, for example as in the data table of FIG. 3. In the data table of FIG. 3, a user ID, video data, video date and time, video time, location, and so on are stored in association with one another.
 図3の映像日時は映像データ432の撮影日と映像開始時刻である。図3では、映像時間を5分にした場合を例示しているので、5分ごとの映像データが記憶される。映像時間は設定可能であり、5分に限られず、1分ごと、数秒ごとなどどのように設定してもよい。映像時間は、携帯端末装置20からのユーザ入力により自由に変えられる。 The video date and time in FIG. 3 are the shooting date and video start time of the video data 432 . Since FIG. 3 illustrates a case where the video time is set to 5 minutes, video data for every 5 minutes is stored. The video time can be set, and is not limited to 5 minutes, and may be set any time such as every minute or every few seconds. The video time can be freely changed by user input from the mobile terminal device 20 .
 また、映像時間は、ユーザの速度に応じて変えるようにしてもよい。ユーザの速度は、携帯端末装置20の加速度センサによって検知できる。例えばユーザが歩行する場合と、自転車を運転する場合とで映像時間を変えることもでき、ユーザの速度が速いほど映像時間を短くできる。これによれば、衝撃時の映像データの映像時間が長すぎたり、逆に短すぎたりせず、ユーザの速度に応じた最適な映像時間の映像データを得られる。場所は映像データの撮影場所であり、例えば映像開始時のGPSデータから特定した場所を記憶する。なお、映像データ432は図3に示すデータテーブルに限られず、データテーブルの項目も図3に示すものに限られない。 Also, the video time may be changed according to the user's speed. The user's speed can be detected by the acceleration sensor of the mobile terminal device 20 . For example, the video time can be changed depending on whether the user is walking or riding a bicycle, and the faster the user is, the shorter the video time can be. According to this, the video data at the time of impact does not become too long or conversely too short, and the video data of the optimum video time corresponding to the speed of the user can be obtained. The location is the shooting location of the video data, and stores, for example, the location specified from the GPS data at the start of the video. Note that the video data 432 is not limited to the data table shown in FIG. 3, and the items of the data table are not limited to those shown in FIG.
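A possible way to realize the speed-dependent video time described above is a simple mapping from the estimated speed to a segment duration; the speed thresholds and durations below are illustrative assumptions, not values taken from the specification:

```python
def segment_duration_seconds(speed_m_per_s: float) -> int:
    """Pick a video segment length: the faster the user moves, the shorter the segment."""
    if speed_m_per_s < 2.0:      # roughly walking pace
        return 300               # 5-minute segments
    elif speed_m_per_s < 7.0:    # roughly cycling pace
        return 60                # 1-minute segments
    else:                        # faster movement
        return 10                # 10-second segments
```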
 なお、記憶部43には、図示しないプログラム記憶部、データ記憶部などを備える。プログラム記憶部には、制御部42で実行されるプログラムが記憶される。データ記憶部には、プログラムによって使用される各種データが記憶される。記憶部43は、ハードディスクで構成されていてもよく、フラッシュメモリなどの半導体メモリで構成されていてもよい。例えば記憶部43をSSDで構成することもできる。 Note that the storage unit 43 includes a program storage unit, a data storage unit, and the like (not shown). Programs executed by the control unit 42 are stored in the program storage unit. Various data used by the program are stored in the data storage unit. The storage unit 43 may be composed of a hard disk, or may be composed of a semiconductor memory such as a flash memory. For example, the storage unit 43 can be configured with an SSD.
 制御部42は、時刻データ取得部421と映像データ選択部422を備える。時刻データ取得部421は、ウエアラブル情報処理装置10からの衝撃時の時刻データを取得する。映像データ選択部422は、図3に示すような複数の映像データから衝撃時の時刻データに基づいて少なくともその時刻の映像を含む映像データを取得する。取得された映像データは、通信部41を介してウエアラブル情報処理装置10に送信される。 The control unit 42 includes a time data acquisition unit 421 and a video data selection unit 422 . The time data acquisition unit 421 acquires time data at the time of impact from the wearable information processing device 10 . The image data selection unit 422 acquires image data including at least the image at that time based on the time data at the time of impact from a plurality of image data as shown in FIG. The acquired video data is transmitted to the wearable information processing device 10 via the communication unit 41 .
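The selection performed by the video data selection unit 422 can be pictured as follows; this is a rough sketch with hypothetical names, assuming each stored segment is described by its start time and duration as in the data table of FIG. 3:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class VideoRecord:
    user_id: str
    path: str
    start: datetime   # video date and time (segment start)
    duration_s: int   # video time (segment length in seconds)

def select_segment(records: List[VideoRecord], user_id: str,
                   impact_time: datetime) -> Optional[VideoRecord]:
    """Return the stored segment whose time range contains the impact time."""
    for rec in records:
        if rec.user_id != user_id:
            continue
        if rec.start <= impact_time < rec.start + timedelta(seconds=rec.duration_s):
            return rec
    return None
```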
 地図処理装置50は、通信部51、制御部52、記憶部53などを備える。制御部52と、通信部51、記憶部53とはそれぞれバスライン50Lに接続され、相互にデータ(情報)のやり取りが可能である。通信部51は、ネットワークNと有線又は無線で接続され、ウエアラブル情報処理装置10との間でデータ(情報)の送受信を行う。通信部51は、インターネットやイントラネットの通信インターフェースとして機能し、例えばTCP/IP、Wi-Fi(登録商標)、ブルートゥース(登録商標)を用いた通信などが可能である。 The map processing device 50 includes a communication section 51, a control section 52, a storage section 53, and the like. The control unit 52, the communication unit 51, and the storage unit 53 are each connected to the bus line 50L, and can exchange data (information) with each other. The communication unit 51 is connected to the network N by wire or wirelessly, and transmits and receives data (information) to and from the wearable information processing device 10 . The communication unit 51 functions as a communication interface for the Internet or an intranet, and is capable of communication using, for example, TCP/IP, Wi-Fi (registered trademark), and Bluetooth (registered trademark).
 制御部52は、地図処理装置50全体を統括的に制御する。制御部52は、MPUなどの集積回路で構成される。制御部52は、CPU、RAM、ROMを備える。制御部52は、必要なプログラムをROMにロードし、RAMを作業領域としてそのプログラムを実行することで、各種の処理を行う。 The control unit 52 comprehensively controls the map processing device 50 as a whole. The control unit 52 is composed of an integrated circuit such as an MPU. The control unit 52 includes a CPU, RAM, and ROM. The control unit 52 loads a necessary program into the ROM and executes the program using the RAM as a work area, thereby performing various processes.
 記憶部53は、制御部52で実行される各種プログラムやこれらのプログラムによって使用されるデータを記憶する記憶媒体の例示である。記憶部53は、ハードディスク、光ディスク、磁気ディスクなどの記憶装置で構成される。記憶部53の構成はこれらに限られず、記憶部53をRAMやフラッシュメモリなどの半導体メモリなどで構成してもよい。例えば記憶部53をSSD(Solid State Drive)で構成することもできる。 The storage unit 53 is an example of a storage medium that stores various programs executed by the control unit 52 and data used by these programs. The storage unit 53 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk. The configuration of the storage unit 53 is not limited to these, and the storage unit 53 may be configured by semiconductor memory such as RAM and flash memory. For example, the storage unit 53 can be configured with an SSD (Solid State Drive).
 記憶部53には、地図データ532などが記憶される。なお、記憶部53には、図示しないプログラム記憶部、データ記憶部などを備える。プログラム記憶部には、制御部52で実行されるプログラムが記憶される。データ記憶部には、プログラムによって使用される各種データが記憶される。記憶部53は、ハードディスクで構成されていてもよく、フラッシュメモリなどの半導体メモリで構成されていてもよい。例えば記憶部53をSSDで構成することもできる。 The storage unit 53 stores map data 532 and the like. Note that the storage unit 53 includes a program storage unit, a data storage unit, and the like (not shown). Programs executed by the control unit 52 are stored in the program storage unit. Various data used by the program are stored in the data storage unit. The storage unit 53 may be composed of a hard disk, or may be composed of a semiconductor memory such as a flash memory. For example, the storage unit 53 can be configured with an SSD.
 制御部52は、GPSデータ取得部521と地図データ選択部522を備える。GPSデータ取得部521は、ウエアラブル情報処理装置10からの衝撃時のGPSデータを取得する。地図データ選択部522は、取得したGPSデータに基づいて少なくともその緯度と経度の地図を含む地図データを取得する。取得された地図データは、通信部51を介してウエアラブル情報処理装置10に送信される。 The control unit 52 includes a GPS data acquisition unit 521 and a map data selection unit 522. The GPS data acquisition unit 521 acquires GPS data at the time of impact from the wearable information processing device 10 . The map data selection unit 522 acquires map data including a map of at least the latitude and longitude based on the acquired GPS data. The acquired map data is transmitted to the wearable information processing device 10 via the communication unit 51 .
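The map extraction by the map data selection unit 522 could, for example, be based on a bounding box of a predetermined radius around the received latitude and longitude; the sketch below uses a flat-earth approximation and hypothetical names:

```python
import math

def bounding_box(lat: float, lon: float, radius_m: float = 200.0):
    """Return (min_lat, min_lon, max_lat, max_lon) of a square roughly radius_m
    around the given point, using a flat-earth approximation that is adequate
    for a neighborhood-scale map extract."""
    dlat = radius_m / 111_320.0                                   # metres per degree of latitude
    dlon = radius_m / (111_320.0 * math.cos(math.radians(lat)))   # metres per degree of longitude
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)
```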
The wearable information processing device 10 in FIG. 2 includes a communication unit 11, a control unit 12, and a storage unit 14. The communication unit 11, the control unit 12, and the storage unit 14 are each connected to a bus line 10L and can exchange data (information) with one another.
 通信部11は、ネットワークNと有線又は無線で接続され、携帯端末装置20と映像処理装置40と地図処理装置50との間でデータ(情報)の送受信を行う。通信部11は、インターネットやイントラネットの通信インターフェースとして機能し、例えばTCP/IPやWi-Fi(登録商標)、ブルートゥース(登録商標)などを用いた通信が可能である。 The communication unit 11 is connected to the network N by wire or wirelessly, and transmits and receives data (information) among the mobile terminal device 20, the video processing device 40, and the map processing device 50. The communication unit 11 functions as a communication interface for the Internet or an intranet, and is capable of communication using TCP/IP, Wi-Fi (registered trademark), Bluetooth (registered trademark), or the like.
 制御部12は、ウエアラブル情報処理装置10全体を統括的に制御する。制御部12は、MPUなどの集積回路で構成される。制御部12は、CPU、RAM、ROMを備える。制御部12は、必要なプログラムをROMにロードし、RAMを作業領域としてそのプログラムを実行することで、各種の処理を行う。 The control unit 12 comprehensively controls the wearable information processing apparatus 10 as a whole. The control unit 12 is composed of an integrated circuit such as an MPU. The control unit 12 includes a CPU, RAM, and ROM. The control unit 12 loads a necessary program into the ROM and executes the program using the RAM as a work area, thereby performing various processes.
 記憶部14は、制御部12で実行される各種プログラムやこれらのプログラムによって使用されるデータなどを記憶するコンピュータ読み取り可能な記憶媒体である。記憶部14は、ハードディスク、光ディスク、磁気ディスクなどの記憶装置で構成される。記憶部14の構成はこれらに限られず、記憶部14をRAMやフラッシュメモリなどの半導体メモリなどで構成してもよい。例えば記憶部14をSSD(Solid State Drive)で構成することもできる。 The storage unit 14 is a computer-readable storage medium that stores various programs executed by the control unit 12 and data used by these programs. The storage unit 14 is configured by a storage device such as a hard disk, an optical disk, or a magnetic disk. The configuration of the storage unit 14 is not limited to these, and the storage unit 14 may be configured by a semiconductor memory such as RAM or flash memory. For example, the storage unit 14 can be configured with an SSD (Solid State Drive).
The storage unit 14 includes a program storage unit 15 and a data storage unit 16. The program storage unit 15 stores the programs executed by the control unit 12, and the data storage unit 16 stores various data used by those programs. The control unit 12 reads the necessary programs from the program storage unit 15 and executes various kinds of processing.
 データ記憶部16には、ユーザデータ161、取得データ162、生成データ163などが記憶される。データ記憶部16には、上記の他、図8に例示する事故発生状況報告書126Aのフォームや図9に例示する事件発生状況報告書126Bのフォームなどが記憶される。ユーザデータ161には、例えば図4のデータテーブルのように、ユーザID、氏名、電話番号、住所、ウエアラブル装置の種類、通報先などが記憶される。これらのユーザデータ161は、ユーザから事前に入力されたものが予め記憶される。「ウエアラブル装置の種類」は、携帯端末装置20に接続するウエアラブル装置30の種類である。 The data storage unit 16 stores user data 161, acquired data 162, generated data 163, and the like. In addition to the above, the data storage unit 16 stores a form of an accident occurrence status report 126A illustrated in FIG. 8, a form of an incident occurrence status report 126B illustrated in FIG. 9, and the like. The user data 161 stores user ID, name, telephone number, address, type of wearable device, report destination, etc., as in the data table of FIG. 4, for example. These user data 161 are pre-stored as previously input by the user. “Type of wearable device” is the type of wearable device 30 connected to mobile terminal device 20 .
The "wearable device type" field in FIG. 4 stores, for example, ear hook type, glasses type, headphone type, or neck type. The "report destination" is the insurance company, the police, or the like to which the occurrence of an accident or incident is reported. When a report destination is registered, the situation report and data automatically generated by the wearable information processing device 10 can be sent automatically from the mobile terminal device 20 in response to a user instruction (for example, tapping a button in the application). Note that the user data 161 is not limited to the data table shown in FIG. 4, and the items of the data table are not limited to those shown in FIG. 4.
 取得データ162は、ウエアラブル情報処理装置10が通信部11を介して取得する各種データである。取得データ162には、映像処理装置40から取得する映像データ432、携帯端末装置20から取得する状況データ、地図処理装置50から取得する地図データ532などが含まれる。生成データ163は、ウエアラブル情報処理装置10が生成したデータである。生成データ163としては、ウエアラブル情報処理装置10で生成された状況略図データや状況報告データなどを含む。 The acquired data 162 is various data acquired by the wearable information processing device 10 via the communication unit 11 . The acquired data 162 includes video data 432 acquired from the video processing device 40, situation data acquired from the portable terminal device 20, map data 532 acquired from the map processing device 50, and the like. The generated data 163 is data generated by the wearable information processing device 10 . The generated data 163 includes situation schematic data and situation report data generated by the wearable information processing device 10 .
 制御部12は、映像データ取得部121、状況データ取得部122、地図データ取得部123、状況略図生成部125、状況報告生成部126を備える。映像データ取得部121は、通信部11を介して映像処理装置40からの映像データ(例えば衝撃時の映像データ)を取得する。状況データ取得部122は、通信部11を介して携帯端末装置20からの状況データ(例えば衝撃時の時刻データ及びGPSデータを含む状況データ)を取得する。地図データ取得部123は、通信部11を介して地図処理装置50からの地図データ(例えば衝撃時の地図データ)を取得する。 The control unit 12 includes a video data acquisition unit 121, a situation data acquisition unit 122, a map data acquisition unit 123, a schematic situation diagram generation unit 125, and a situation report generation unit 126. The video data acquisition unit 121 acquires video data (for example, video data at the time of impact) from the video processing device 40 via the communication unit 11 . The situation data acquisition unit 122 acquires situation data (for example, situation data including time data at the time of impact and GPS data) from the mobile terminal device 20 via the communication unit 11 . The map data acquisition unit 123 acquires map data (for example, map data at the time of impact) from the map processing device 50 via the communication unit 11 .
The schematic situation diagram generation unit 125 generates a schematic situation diagram based on at least the video data acquired by the video data acquisition unit 121 and the map data acquired by the map data acquisition unit 123. The schematic situation diagram includes, for example, a schematic accident situation diagram in the case of an accident and a schematic incident situation diagram in the case of an incident. Specifically, the schematic situation diagram generation unit 125 automatically generates a schematic accident situation diagram in the case of an accident and a schematic incident situation diagram in the case of an incident. The schematic situation diagram generated by the schematic situation diagram generation unit 125 is stored as generated data 163 in the data storage unit 16.
 状況報告生成部126は、状況略図を含む状況報告を生成する。状況報告には、例えば事故の場合の事故発生状況報告書、事件の場合の事件発生状況報告書が含まれる。具体的には状況報告生成部126は、事故の場合には事故状況略図を図8の事故発生状況報告書126Aのフォームのうち事故状況略図」の欄に自動的に入力する。状況報告生成部126は、携帯端末装置20からの状況データに基づいて図8の事故発生状況報告書126Aのフォームのうち該当欄(事故詳細情報や事故状況説明の欄)に該当データを自動的に入力する。こうして、状況報告生成部126は図8に示すような事故発生状況報告書126Aを自動的に生成する。 The situation report generation unit 126 generates a situation report including a schematic diagram of the situation. The situation report includes, for example, an accident occurrence situation report in the case of an accident and an incident occurrence situation report in the case of an incident. Specifically, in the event of an accident, the situation report generation unit 126 automatically inputs a schematic diagram of the accident situation into the column "Schematic diagram of the accident situation" in the form of the accident occurrence situation report 126A in FIG. Based on the situation data from the mobile terminal device 20, the situation report generation unit 126 automatically fills in the relevant data in the relevant columns (columns for detailed accident information and accident situation explanation) in the form of the accident occurrence situation report 126A in FIG. to enter. Thus, the situation report generator 126 automatically generates an accident occurrence situation report 126A as shown in FIG.
 状況報告生成部126は、事件の場合には事件状況略図を図9の事件発生状況報告書126Bのフォームのうち事件状況略図の欄に自動的に入力する。状況報告生成部126は、携帯端末装置20からの状況データに基づいて図9の事件発生状況報告書126Bのフォームのうち該当欄(事件詳細情報や事件状況説明の欄)に該当データを自動的に入力する。こうして、状況報告生成部126は図9に示すような事件発生状況報告書126Bを自動的に生成する。 In the case of an incident, the situation report generation unit 126 automatically enters the incident situation diagram into the incident situation diagram column of the form of the incident occurrence situation report 126B in FIG. Based on the situation data from the mobile terminal device 20, the situation report generation unit 126 automatically fills in the relevant data in the corresponding fields (fields for the detailed information on the incident and the description of the incident situation) in the form of the incident occurrence status report 126B shown in FIG. to enter. In this way, the situation report generator 126 automatically generates an incident occurrence situation report 126B as shown in FIG.
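The automatic filling of these report forms by the situation report generation unit 126 can be illustrated roughly as follows; the field names are assumptions introduced only for this sketch and are not defined in the specification:

```python
def fill_report(form: dict, situation: dict, sketch_image_path: str) -> dict:
    """Copy collected situation data into the matching fields of a report form.

    `form` is a dict of empty report fields, `situation` the data collected by
    the mobile terminal (time, GPS, weather, questionnaire answers, ...), and
    `sketch_image_path` the generated schematic situation diagram.
    """
    filled = dict(form)
    filled["situation_sketch"] = sketch_image_path
    for field in ("occurred_at", "location", "weather",
                  "traffic_conditions", "signal_state",
                  "other_party", "situation_description"):
        if field in situation:
            filled[field] = situation[field]
    return filled
```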
FIG. 5 is a flowchart explaining the overall processing of the wearable information processing system 100 according to the first embodiment. As shown in FIG. 5, the wearable information processing system 100 executes the processing while the wearable information processing device 10, the mobile terminal device 20, the video processing device 40, and the map processing device 50 exchange data with one another.
 図5に示すように、先ずステップS10にて携帯端末装置20は専用のアプリケーションプログラムが起動されたか否かを判断する。この専用のアプリケーションプログラムは、携帯端末装置20がウエアラブル装置30とウエアラブル情報処理装置10と映像処理装置40と連携してウエアラブル情報処理を行うための専用のアプリケーションプログラムである。 As shown in FIG. 5, first, in step S10, the mobile terminal device 20 determines whether or not a dedicated application program has been started. This dedicated application program is a dedicated application program for portable terminal device 20 to perform wearable information processing in cooperation with wearable device 30 , wearable information processing device 10 and video processing device 40 .
 ステップS10にて携帯端末装置20は専用のアプリケーションプログラムが起動されたと判断すると、ウエアラブル装置30との接続をユーザに確認する。ユーザは人体(例えば耳掛けタイプなら耳)に装着したウエアラブル装置30を起動すると、携帯端末装置20にブルートゥース(登録商標)などにより接続される。ウエアラブル装置30は、携帯端末装置20に接続されると、カメラ34を起動してカメラ映像34aを映像処理装置40に送信する。すると、映像処理装置40はカメラ映像34aを所定の映像時間ごと(例えば図3では5分ごと)の複数の映像データ432として記憶する。 When the portable terminal device 20 determines in step S10 that the dedicated application program has been activated, it asks the user to confirm connection with the wearable device 30 . When the user activates the wearable device 30 worn on the human body (for example, the ear in the case of the ear hook type), the wearable device 30 is connected to the mobile terminal device 20 via Bluetooth (registered trademark) or the like. When connected to the mobile terminal device 20 , the wearable device 30 activates the camera 34 and transmits the camera video 34 a to the video processing device 40 . Then, the video processing device 40 stores the camera video 34a as a plurality of video data 432 for each predetermined video time (for example, every five minutes in FIG. 3).
 次にステップS11にて携帯端末装置20の制御部22は衝撃を検知したか否かを判断する。携帯端末装置20の制御部22が衝撃を検知してないと判断した場合には、衝撃検知の判断を続ける。具体的には制御部22はセンサデータ取得部221で取得したセンサデータ(加速度データ)を監視し、衝撃検知部222が例えば加速度が所定値を超えたと判断したときに衝撃を検知する。 Next, in step S11, the control unit 22 of the mobile terminal device 20 determines whether or not an impact has been detected. When the control unit 22 of the mobile terminal device 20 determines that the impact has not been detected, the impact detection determination is continued. Specifically, the control unit 22 monitors the sensor data (acceleration data) acquired by the sensor data acquisition unit 221, and detects an impact when the impact detection unit 222 determines that the acceleration exceeds a predetermined value, for example.
When the control unit 22 of the mobile terminal device 20 determines in step S11 that an impact has been detected, it collects impact data 223a and transmits the impact data 223a to the wearable information processing device 10. The impact data 223a is collected by the impact data collection unit 223 of the control unit 22 and includes the time data at the time of impact, the GPS data at the time of impact, the acceleration data at the time of impact, and the like.
When the wearable information processing device 10 receives the impact data 223a, it transmits a push notification P to the mobile terminal device 20. When the mobile terminal device 20 receives the push notification P, it determines in step S12 whether an accident or an incident has occurred. Specifically, the mobile terminal device 20 displays an accident case selection screen such as that shown in FIG. 6 on the display unit 28. The accident case selection screen shown in FIG. 6 notifies the user that an impact has occurred and asks the user to select "occurrence of an accident", "occurrence of an incident", or "neither".
If the user selects "neither" on the display screen of FIG. 6 in step S12, the mobile terminal device 20 regards this impact as not being the occurrence of an accident or incident and continues the impact detection determination. On the other hand, if the user selects "occurrence of an accident" or "occurrence of an incident" on the display screen of FIG. 6, the mobile terminal device 20 executes situation input processing in step S13. In the situation input processing, for example, an accident questionnaire is displayed on the display unit 28 if an accident has occurred, and an incident questionnaire is displayed on the display unit 28 if an incident has occurred.
The accident questionnaire is a questionnaire for acquiring the data to be entered in the form of the accident occurrence situation report 126A illustrated in FIG. 8. Among the data to be entered in this form, unknown data that cannot be determined from the data obtainable by the mobile terminal device 20 (sensor data, weather data, time data, etc.) can be supplemented with the data entered by the user in the accident questionnaire. The unknown data here include, for example, among the detailed accident information in FIG. 8, the traffic conditions and the state of the traffic lights at the time of the accident, the other party's data (address, name, telephone number, etc.), and the description of the accident situation. Presenting these unknown data as questions in the accident questionnaire makes it easier for the user to enter them. By describing in the accident situation description of the questionnaire, for example, a staged-accident scam, the circumstances before and after the occurrence become clear, so that damage from unjust claims and the like can be prevented.
The incident questionnaire is a questionnaire for acquiring the data to be entered in the form of the incident occurrence situation report 126B illustrated in FIG. 9. Among the data to be entered in this form, unknown data that cannot be determined from the data obtainable by the mobile terminal device 20 (sensor data, weather data, time data, etc.) can be supplemented with the data entered by the user in the incident questionnaire. The unknown data here include, for example, among the detailed incident information in FIG. 9, the conditions at the scene and the crowding at the time of the incident, the other party's data (age, sex, appearance, etc.), and the description of the incident situation. Presenting these unknown data as questions in the incident questionnaire makes it easier for the user to enter them. By describing in the incident situation description of the questionnaire, for example, a case of false accusation, the circumstances before and after the occurrence become clear, so that damage such as false accusations can be prevented.
 次に携帯端末装置20は、衝撃時の状況データ122aをウエアラブル情報処理装置10に送信する。この衝撃時の状況データ122aには、衝撃時の時刻データ224a、衝撃時のGPSデータ、事故事件選択データ(図6の事故事件選択画面から選択されたデータ)、アンケートの入力データ、センサデータ(衝撃センサの加速度データなど)、衝撃時の天気データなど状況報告生成に必要なデータが含まれる。 Next, the mobile terminal device 20 transmits the situation data 122a at the time of impact to the wearable information processing device 10. The situation data 122a at the time of impact includes time data 224a at the time of impact, GPS data at the time of impact, accident case selection data (data selected from the accident case selection screen in FIG. 6), questionnaire input data, sensor data ( Acceleration data from impact sensors, etc.), weather data at the time of impact, and other data necessary for generating a situation report.
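As an illustration of the impact-time situation data 122a sent to the wearable information processing device 10, the payload might be serialized as in the following sketch; the key names are assumptions and not defined in the specification:

```python
import json

def build_situation_payload(impact_time_iso: str, lat: float, lon: float,
                            selection: str, questionnaire: dict,
                            acceleration_g: float, weather: str) -> str:
    """Serialize the impact-time situation data for transmission to the server."""
    payload = {
        "impact_time": impact_time_iso,    # time data 224a
        "gps": {"lat": lat, "lon": lon},   # GPS data 224b
        "selection": selection,            # "accident", "incident" or "neither"
        "questionnaire": questionnaire,    # answers from the situation input screen
        "acceleration_g": acceleration_g,  # impact sensor reading
        "weather": weather,                # weather at the time of impact
    }
    return json.dumps(payload, ensure_ascii=False)
```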
 ウエアラブル情報処理装置10は、通信部11を介して衝撃時の状況データ122aを受信すると、その状況データ122aのうち衝撃時の時刻データ224aを映像処理装置40に送信する。 When the wearable information processing device 10 receives the situation data 122 a at the time of impact via the communication unit 11 , it transmits the time data 224 a at the time of impact from the situation data 122 a to the video processing device 40 .
When the video processing device 40 receives the impact-time time data 224a via the communication unit 41, it retrieves, from the video data 432 stored in the storage unit 43, the impact-time video data 422a that includes the video at the time indicated by the time data 224a, and transmits it to the wearable information processing device 10. The wearable information processing device 10 receives the impact-time video data 422a via the communication unit 11.
In this way, the video data acquisition unit 121 of the wearable information processing device 10 acquires, from the plurality of video data 432 (camera video of the wearable device 30) stored in the storage unit 43 of the video processing device 40, the impact-time video data 422a including the video at the time indicated by the time data 224a. Note that the video data acquisition unit 121 may acquire, as the impact-time video data 422a, the video data including the video at the time indicated by the impact-time time data 224a together with the video data immediately before and after it.
 またウエアラブル情報処理装置10は、通信部11を介して衝撃時の状況データ122aを受信すると、その状況データ122aのうち衝撃時のGPSデータ224bを地図処理装置50に送信する。 Also, when the wearable information processing device 10 receives the situation data 122 a at the time of impact via the communication unit 11 , it transmits the GPS data 224 b at the time of impact from the situation data 122 a to the map processing device 50 .
When the map processing device 50 receives the impact-time GPS data 224b via the communication unit 51, it extracts, from the map data 532 stored in the storage unit 53, impact-time map data 522a covering a predetermined range that includes the latitude and longitude of the impact-time GPS data 224b, and transmits it to the wearable information processing device 10. The wearable information processing device 10 receives the impact-time map data 522a via the communication unit 11.
 こうして、ウエアラブル情報処理装置10の地図データ取得部123は、地図処理装置50から衝撃時のGPSデータ224bの緯度と経度の地図を含む地図データを取得する。地図データ取得部123は、衝撃時のGPSデータ224bの緯度と経度を中心とした所定範囲の地図を衝撃時の地図データ522aとして取得する。所定範囲は予め設定されており、携帯端末装置20からのユーザからの入力で所定設定を変更できるようにしてもよい。 In this way, the map data acquisition unit 123 of the wearable information processing device 10 acquires map data including a map of latitude and longitude of the GPS data 224b at the time of impact from the map processing device 50. The map data acquisition unit 123 acquires a map of a predetermined range around the latitude and longitude of the GPS data 224b at the time of impact as map data 522a at the time of impact. The predetermined range is set in advance, and the predetermined setting may be changed by the user's input from the mobile terminal device 20 .
Next, when the wearable information processing device 10 has acquired the impact-time video data 422a from the video processing device 40 and the impact-time map data 522a from the map processing device 50, it executes situation report generation processing in step S20. The situation report generation processing automatically generates a situation report (an accident occurrence situation report or an incident occurrence situation report) based on the impact-time situation data 122a, the impact-time video data 422a, and the impact-time map data 522a. The details of this situation report generation processing will be described later.
The situation report data 126a (the data of the accident occurrence situation report or the incident occurrence situation report) generated by the wearable information processing device 10 is transmitted to the mobile terminal device 20 via the communication unit 11. When the mobile terminal device 20 receives the situation report data 126a, it displays the situation report (the accident occurrence situation report or the incident occurrence situation report) on the display unit 28 in step S14 and determines in step S15 whether or not to make a report. Specifically, the mobile terminal device 20 displays the report instruction screen shown in FIG. 7 on the display unit 28. The report instruction screen shown in FIG. 7 displays a pre-registered insurance company or the like and asks the user to select "report" or "do not report" to that insurance company or the like.
 ステップS15にて図7の表示画面でユーザが「通報する」を選択した場合は、ステップS16にて携帯端末装置20は予め登録された保険会社などに通報する。このとき、通報と共に状況報告データ126a(事故発生状況報告書又は事件発生状況報告書のデータ)を送信するようにしてもよい。なお、通報先は、予め記憶された保険会社などに限られない。警察や消防署を含めるようにしてもよい。警察への通報と状況報告データ126aの送信を同時にすることで、警察も事故事件を早期に把握できる。ステップS16の通報により、一連の処理は終了し、カメラ映像の送信が停止される。 If the user selects "report" on the display screen of FIG. 7 in step S15, the mobile terminal device 20 notifies the pre-registered insurance company or the like in step S16. At this time, the situation report data 126a (accident occurrence situation report or incident occurrence situation report data) may be transmitted together with the report. The report destination is not limited to the pre-stored insurance company or the like. Police and fire stations may also be included. By reporting to the police and transmitting the status report data 126a at the same time, the police can also grasp the accident at an early stage. A series of processing is terminated by the notification in step S16, and the transmission of the camera image is stopped.
 他方、ステップS15にて図7の表示画面でユーザが「通報しない」を選択した場合は、携帯端末装置20は通報せずに衝撃検知の判断を続ける。このようなウエアラブル情報処理システム100による一連の処理は、携帯端末装置20の専用のアプリケーションプログラムが終了するまで実行される。 On the other hand, if the user selects "do not report" on the display screen of FIG. 7 in step S15, the mobile terminal device 20 continues to determine impact detection without reporting. A series of such processes by the wearable information processing system 100 are executed until the dedicated application program of the mobile terminal device 20 is terminated.
The transmission of the camera video by the camera 34 of the wearable device 30 may be stopped at a predetermined timing, for example when the camera 34 of the wearable device 30 is turned off, when the power of the wearable device 30 is turned off, or when the connection with the wearable device 30 is disconnected. The timing of stopping the transmission of the camera video is not limited to these examples.
 また、映像処理装置40は、記憶部43に記憶された映像データ432を所定のタイミングで削除するようにしてもよい。例えば携帯端末装置20の専用のアプリケーションプログラムが終了されたタイミングやウエアラブル装置30の電源がオフにされたタイミングで映像データ432を削除してもよい。映像データ432を削除するタイミングはこれに限られない。また、映像データ432は一定期間保存してから削除するようにしてもよい。映像データ432の保管期間は携帯端末装置20からのユーザ入力により自由に変えられるようにしてもよい。 Also, the video processing device 40 may delete the video data 432 stored in the storage unit 43 at a predetermined timing. For example, the video data 432 may be deleted at the timing when the dedicated application program of the mobile terminal device 20 is terminated or the power of the wearable device 30 is turned off. The timing of deleting the video data 432 is not limited to this. Also, the video data 432 may be deleted after being saved for a certain period of time. The retention period of the video data 432 may be freely changed by user input from the mobile terminal device 20 .
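The retention-based deletion of video data mentioned above could be realized, for example, as in the following sketch (hypothetical names; the seven-day retention period is an arbitrary assumption, since the specification leaves the retention period user-configurable):

```python
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

def purge_old_segments(segments: List[Tuple[str, datetime]],
                       retention_days: int = 7,
                       now: Optional[datetime] = None) -> List[Tuple[str, datetime]]:
    """Return only the (path, start_time) entries newer than the retention period."""
    cutoff = (now or datetime.now()) - timedelta(days=retention_days)
    return [(path, start) for (path, start) in segments if start >= cutoff]
```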
Next, the situation report generation processing performed by the wearable information processing device 10 in step S20 of FIG. 5 described above will be explained with reference to the drawings. FIG. 10 is a flowchart showing a specific example of the situation report generation processing. In the situation report generation processing of FIG. 10, the control unit 12 (the video data acquisition unit 121, the situation data acquisition unit 122, the map data acquisition unit 123, the schematic situation diagram generation unit 125, the situation report generation unit 126, and so on) reads the necessary programs from the program storage unit 15 and executes them.
First, in step S210 of FIG. 10, the control unit 12 determines whether the situation data acquisition unit 122 has acquired the impact-time situation data 122a from the mobile terminal device 20. If the control unit 12 determines that the impact-time situation data 122a has not been acquired, it returns to step S210; if it determines that the data has been acquired, it determines in step S220 whether the video data acquisition unit 121 has acquired the impact-time video data 422a from the video processing device 40.
If the control unit 12 determines in step S220 that the impact-time video data 422a has not been acquired, it returns to step S210; if it determines that the data has been acquired, it determines in step S230 whether the map data acquisition unit 123 has acquired the impact-time map data 522a from the map processing device 50.
If the control unit 12 determines in step S230 that the impact-time map data 522a has not been acquired, it returns to step S210; if it determines that the data has been acquired, it determines in step S240 whether the event is an accident. Specifically, the control unit 12 determines whether the event is an accident or an incident from the accident case selection data (the data selected on the accident case selection screen of FIG. 6) included in the situation data from the mobile terminal device 20.
 制御部12は、ステップS240にて事故であると判断した場合は、ステップS242にて図8の事故発生状況報告書126Aのフォームをデータ記憶部16から取得する。制御部12は、ステップS240にて事故でないと判断した場合は、ステップS250にて事件か否かを判断する。 When the control unit 12 determines that there is an accident in step S240, it acquires the form of the accident occurrence status report 126A of FIG. 8 from the data storage unit 16 in step S242. If the controller 12 determines in step S240 that there is no accident, it determines in step S250 whether there is an incident.
If the control unit 12 determines in step S250 that the event is not an incident, it returns to step S220; if it determines in step S250 that the event is an incident, it acquires the form of the incident occurrence situation report 126B of FIG. 9 from the data storage unit 16 in step S252 and performs schematic situation diagram generation processing in step S260.
Here, a specific example of the schematic situation diagram generation processing will be described with reference to FIGS. 11 to 15. FIG. 11 is a flowchart showing a specific example of the schematic situation diagram generation processing. FIG. 12 is a diagram showing a specific example of the video data selected in the schematic situation diagram generation processing. FIG. 13 is a diagram showing a specific example of the schematic site situation diagram generated by the schematic situation diagram generation processing. FIG. 14 is a diagram showing a specific example of the near-site map generated by the schematic situation diagram generation processing. FIG. 15 is a diagram showing a specific example of the schematic situation diagram generated by the schematic situation diagram generation processing.
FIGS. 12 to 15 illustrate a case in which, with the user being party A (pA), a bicycle ridden by party B (pB) enters a pedestrian crossing and collides with A just as A starts to cross the crossing on a green light, and a schematic situation diagram is generated for this accident. In this example, A is the victim and B, riding the bicycle, is the party at fault.
The schematic situation diagram generation processing of FIG. 11 automatically generates a schematic situation diagram (a schematic accident situation diagram or a schematic incident situation diagram) based on the impact-time video data 422a, the impact-time map data 522a, and the impact-time situation data 122a; the schematic situation diagram generation unit 125 reads out and executes a predetermined program for this purpose.
 図11に示すように状況略図生成部125は、ステップS261にて略図生成用の映像を選出する。具体的には衝撃時の映像データ422aから図12に示すような略図生成用の映像(静止画像)を選出する。図12は、衝撃時の時刻の所定時間前(例えば数秒前)の映像(静止画像)である。図12によれば、乙の自転車が横断歩道に侵入しようとしているときの略図生成用映像422bである。このように略図生成用映像422bは、衝撃時の時刻の映像ではなく、衝撃時の時刻の数秒前の映像を選出することが好ましい。 As shown in FIG. 11, the schematic situation diagram generation unit 125 selects a video for schematic diagram generation in step S261. Specifically, an image (still image) for schematic drawing generation as shown in FIG. 12 is selected from the image data 422a at the time of impact. FIG. 12 shows a video (still image) of a predetermined time (for example, several seconds) before the time of impact. According to FIG. 12, it is a schematic diagram generation image 422b when B's bicycle is about to enter the pedestrian crossing. In this way, it is preferable to select the image several seconds before the time of the impact, instead of the image at the time of the impact, as the schematic diagram generation image 422b.
 例えば状況略図生成部125は、衝撃時の映像データ422aから衝撃時の時刻の映像に写っている乙と自転車を検出し、その乙が写っている数秒前の映像を選出する。映像からの乙と乙の運転する自転車など人物や乗り物の検出は、例えば機械学習やAI(人工知能)などにより学習させた学習済みモデルを利用する。 For example, the schematic situation diagram generation unit 125 detects B and the bicycle appearing in the image at the time of the impact from the image data 422a at the time of impact, and selects the image several seconds before the B in the image. Persons and vehicles, such as B and the bicycle driven by B, are detected from the video using a trained model trained by machine learning or AI (artificial intelligence), for example.
Next, in step S262, the schematic situation diagram generation unit 125 generates a schematic site situation diagram from the selected diagram-generation image 422b. As shown in FIG. 13, for example, the schematic site situation diagram 422c is a diagram of a predetermined range centered on user A's head and including, as they were several seconds earlier, Party B and the bicycle B was riding, with Party A and B's bicycle replaced by predetermined symbols. Party A is displayed as a symbol representing a person, while Party B and the bicycle are displayed as a predetermined symbol representing a bicycle. When a person is riding a vehicle in this way, the person and the vehicle are not replaced by separate symbols; instead, the person and the vehicle together are replaced by the symbol representing that vehicle. As for the predetermined symbols, if, for example, the accident occurrence situation report specifies symbols for people, bicycles, motorcycles and so on, those symbols are stored in advance in the data storage unit 16, and users A and B are replaced with them. Because the schematic situation diagram can be generated with such predetermined symbols, the situation report automatically generated in this embodiment can be submitted to the relevant parties as a formal document as it is.
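A minimal sketch of this symbol substitution might look as follows; the class labels, the symbol identifiers and the `riding_id` link between a rider and a vehicle are hypothetical names introduced only for illustration.

```python
# Class labels, symbol identifiers and the rider/vehicle link are illustrative.
SYMBOL_TABLE = {
    "pedestrian": "PERSON_SYMBOL",
    "bicycle":    "BICYCLE_SYMBOL",
    "motorcycle": "MOTORCYCLE_SYMBOL",
    "car":        "CAR_SYMBOL",
}

def to_symbols(detections: list[dict]) -> list[dict]:
    """Map detections to report symbols, collapsing a rider and the vehicle
    they are on into a single vehicle symbol."""
    riders = {d["id"] for d in detections
              if d["class"] == "pedestrian" and d.get("riding_id") is not None}
    symbols = []
    for d in detections:
        if d["id"] in riders:
            continue  # the rider is represented by the vehicle's symbol
        symbols.append({
            "id": d["id"],
            "symbol": SYMBOL_TABLE.get(d["class"], "PERSON_SYMBOL"),
            "position": d["position"],  # (latitude, longitude) on the map
        })
    return symbols
```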
 図13の現場状況略図422cにおいて、ユーザ甲と乙の位置は、衝撃時のGPSデータ224bと映像データの分析によって決定できる。本実施形態の映像データは360度のカメラ映像なので数秒前の映像からおおよその乙の方向と距離が分かる。ユーザ甲の位置は、GPSデータで緯度と経度が分かるので、その乙の方向と距離によりおおよその乙のGPSデータを計算できる。こうして、ユーザ甲と乙の地図上の位置(GPSデータ)に置き換えることができる。なお、ユーザ甲と乙との距離は、数秒前の映像を画像認識して乙の画像の大きさから算出してもよい。 In the schematic diagram 422c of the site situation in FIG. 13, the positions of users A and B can be determined by analyzing the GPS data 224b and video data at the time of impact. Since the image data of this embodiment is a 360-degree camera image, the approximate direction and distance of B can be known from the image several seconds before. Since the latitude and longitude of User A's position can be obtained from GPS data, the approximate GPS data of User B can be calculated from User B's direction and distance. In this way, the positions of users A and B on the map (GPS data) can be replaced. Note that the distance between User A and User B may be calculated from the size of User B's image by recognizing an image several seconds before.
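One way to turn the estimated direction and distance of B into approximate GPS coordinates is a simple flat-earth offset from A's fix, as in the sketch below. The bearing convention and the flat-earth approximation are assumptions; the patent does not fix a particular formula.

```python
import math

def offset_position(lat_deg: float, lon_deg: float,
                    bearing_deg: float, distance_m: float) -> tuple[float, float]:
    """Estimate the other party's latitude/longitude from the user's GPS fix,
    a bearing (degrees clockwise from north) and a distance in metres.

    A flat-earth approximation is adequate for the few tens of metres
    involved in a collision scene.
    """
    earth_radius = 6_378_137.0  # metres (WGS84 equatorial radius)
    bearing = math.radians(bearing_deg)
    d_north = distance_m * math.cos(bearing)
    d_east = distance_m * math.sin(bearing)
    d_lat = math.degrees(d_north / earth_radius)
    d_lon = math.degrees(d_east / (earth_radius * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon
```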
 次いで状況略図生成部125は、ステップS263にて略図生成用の地図を選出し、ステップS264にてその略図生成用の地図から現場付近地図を生成する。具体的には衝撃時の地図データ522aから図14に示すような略図生成用の地図の範囲を選出し、現場付近地図522bを生成する。図14は、図13のユーザ甲と乙の地図上の位置を含む現場付近地図522bである。図14では、衝撃時の数秒前の甲の位置と乙の位置とを×で示す現場状況略図422cの範囲を点線で重ねている。 Next, the schematic situation map generator 125 selects a map for schematic map generation in step S263, and generates a near-site map from the map for schematic map generation in step S264. Specifically, a range of a map for schematic drawing generation as shown in FIG. 14 is selected from the map data 522a at the time of the impact, and a near-site map 522b is generated. FIG. 14 is a near-site map 522b including the map positions of users A and B in FIG. In FIG. 14, the dotted line overlaps the range of the schematic diagram 422c of the site situation, in which the position of A and the position of B several seconds before the impact are indicated by x.
 続いて状況略図生成部125は、ステップS265にて現場状況略図422cと現場付近地図522bから状況略図125aを生成する。具体的には状況略図生成部125は、図13に示すような現場状況略図422cを図14に示すような現場付近地図522bに重ねて、図15に示すような状況略図125aを生成する。 Subsequently, in step S265, the schematic situation diagram generation unit 125 generates the schematic situation diagram 125a from the schematic site situation diagram 422c and the site vicinity map 522b. Specifically, the schematic situation diagram generation unit 125 superimposes the schematic site situation diagram 422c as shown in FIG. 13 on the site vicinity map 522b as shown in FIG. 14 to generate the schematic situation diagram 125a as shown in FIG.
 そして、状況略図生成部125は、ステップS266にて状況略図125aをデータ記憶部16に生成データ163として記憶して一連の状況略図生成処理を終了し、図10のステップS270の処理に移る。 Then, in step S266, the schematic situation diagram generation unit 125 stores the schematic situation diagram 125a in the data storage unit 16 as the generated data 163, ends the series of schematic situation diagram generation processing, and proceeds to the processing of step S270 in FIG.
In step S270 of FIG. 10, the control unit 12 generates a situation report. Specifically, when it is determined in step S240 that an accident has occurred, the situation report generation unit 126 enters the schematic accident situation diagram and the relevant data into the form of the accident occurrence situation report 126A shown in FIG. 8, and the accident occurrence situation report 126A is generated automatically. On the other hand, when it is determined in step S240 that an incident has occurred, the situation report generation unit 126 enters the schematic incident situation diagram and the relevant data into the form of the incident occurrence situation report 126B shown in FIG. 9, and the incident occurrence situation report 126B is generated automatically.
Next, in step S280, the control unit 12 stores the generated situation report (the accident occurrence situation report 126A or the incident occurrence situation report 126B) in the data storage unit 16 as generated data 163, and the series of situation report generation processes ends.
 このような第1実施形態によれば、ユーザに装着されるウエアラブル装置30からのカメラ映像に基づいて事故又は事件の状況報告を自動的に生成できる。このように、ユーザに装着されるウエアラブル装置30からのカメラ映像を用いることで、ユーザが自動車を運転しているときだけでなく、歩いているときや自転車やバイクに乗っているときにもユーザの周囲の映像を記憶しておくことができる。したがって、そのユーザが巻き込まれた事故の状況報告だけでなく、ひったくりなどの事件の状況報告まで自動的に生成できる。これにより、自動車に搭載したカメラ映像では報告書を生成できない事故や事件、例えば人と自転車やバイク、人と人との事故や事件の状況報告をも自動的に生成できる。 According to the first embodiment, it is possible to automatically generate a situation report of an accident or incident based on the camera image from the wearable device 30 worn by the user. In this way, by using the camera image from the wearable device 30 worn by the user, the user can see not only when the user is driving a car but also when walking, riding a bicycle or a motorcycle. The image of the surroundings can be stored. Therefore, it is possible to automatically generate not only a status report of an accident in which the user is involved, but also a status report of an incident such as a snatch. As a result, it is possible to automatically generate a situation report of an accident or an incident that cannot be generated from a camera image mounted on an automobile, for example, an accident or incident between a person and a bicycle or a motorcycle, or an accident or incident between a person and a person.
 しかも、事故又は事件の発生時(第1実施形態では衝撃時)の時刻データから映像データを取得し、GPSデータから地図データを取得して状況略図まで自動的に生成できるので、事故や事件に巻き込まれて気が動転していても、正確に現場の状況を伝える状況報告をその場で生成できる。これにより、事故や事件の状況を迅速かつ正確に保険会社や警察に伝えることができ、事故や事件の早期解決も期待できる。 Moreover, video data can be obtained from the time data when an accident or incident occurs (at the time of impact in the first embodiment), map data can be obtained from GPS data, and a schematic diagram of the situation can be automatically generated. Even if you are caught up in and upset, you can generate a situation report on the spot that accurately conveys the situation on the ground. As a result, the circumstances of accidents and incidents can be quickly and accurately reported to insurance companies and the police, and early resolution of accidents and incidents can be expected.
 また、ウエアラブル装置30を装着したユーザの周囲の映像をカメラ映像として用いるので、例えば自転車に衝突された事故であればその自転車がどちらの方向から衝突したのかが分かる。また、ひったくり犯人がどちらの方向からやってきたのかも分かる。 In addition, since the image around the user wearing the wearable device 30 is used as the camera image, for example, in the case of an accident in which a bicycle collides, it is possible to know from which direction the bicycle collided. You can also see which direction the snatcher is coming from.
Furthermore, according to this embodiment, at least the image of the object that caused the accident or incident (for example, a vehicle such as a bicycle, motorcycle or car, or a person such as a pedestrian or runner) is detected from the video data of the user's surroundings, and a schematic situation diagram can be generated automatically from the site vicinity map and a schematic site situation diagram in which that object's image is replaced with a predetermined symbol and shown at its position on the map. Compared with simply pasting an image from the video data into the situation report, this saves the work of turning that image into a schematic diagram. Since a situation report containing a schematic situation diagram that can be sent to an insurance company or the police as it is, without further editing, is generated automatically, the user can convey the situation accurately even when upset after being caught up in an accident or incident.
According to this embodiment, the camera video from the wearable device 30 is divided into and stored as a plurality of video data segments of a predetermined video length, so images before and after the time at which an accident or incident occurred are easier to identify than when the video is stored without being divided. There are also cases in which an impact is detected but no accident or incident has actually occurred, for example when the mobile terminal device 20 is merely dropped. According to this embodiment, the situation data that the situation data acquisition unit 122 acquires from the mobile terminal device 20 includes information indicating whether the occurrence of an accident or of an incident was selected, so if neither an accident nor an incident has occurred, the situation report can be kept from being generated automatically, and wasteful generation of situation reports can be suppressed.
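To illustrate why chunked storage makes the relevant footage easy to find, the following sketch returns the stored chunks overlapping a window around the reported event time. The data layout and field names are assumed for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class VideoChunk:
    start: datetime    # timestamp of the first frame in the chunk
    duration_s: float  # fixed chunk length, e.g. 60 seconds
    path: str          # storage location of the chunk

def chunks_around(chunks: List[VideoChunk], event_time: datetime,
                  margin_s: float = 10.0) -> List[VideoChunk]:
    """Return every stored chunk that overlaps the window
    [event_time - margin_s, event_time + margin_s]."""
    lo = event_time - timedelta(seconds=margin_s)
    hi = event_time + timedelta(seconds=margin_s)
    return [c for c in chunks
            if c.start <= hi and c.start + timedelta(seconds=c.duration_s) >= lo]
```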
The situation report generation unit 126 of this embodiment also determines from the situation data sent by the mobile terminal device 20 whether the event is an accident or an incident; if it determines that it is an accident, it generates the situation report from the predetermined accident occurrence situation report form, and if it determines that it is an incident, it generates the situation report from the predetermined incident occurrence situation report form. Because a situation report matching the respective accident or incident form is generated automatically, there is no need to redo the report later to fit the prescribed form.
<第2実施形態>
 本発明の第2実施形態について説明する。以下に例示する各形態において実質的に同一の機能構成を有する構成要素については、同一の符号を付することにより重複説明を省略する。第1実施形態では、事故又は事件の状況報告書を自動で生成する場合を例示したが、第2実施形態では状況報告書だけでなく、さらに事故又は事件の過失割合報告書まで自動で生成する場合を例示する。なお、第2実施形態のウエアラブル情報処理装置10の構成は、第1実施形態と同様のためその詳細な説明を省略する。
<Second embodiment>
A second embodiment of the present invention will be described. In each of the forms illustrated below, components having substantially the same functional configuration are given the same reference numerals and redundant description is omitted. The first embodiment illustrated the automatic generation of a situation report of an accident or incident; the second embodiment illustrates the automatic generation not only of the situation report but also of a fault rate report for the accident or incident. The configuration of the wearable information processing device 10 of the second embodiment is the same as in the first embodiment, so its detailed description is omitted.
FIG. 16 is a diagram showing a schematic configuration of the wearable information processing system 100 of the second embodiment, and corresponds to FIG. 2. The storage unit 14 of FIG. 16 includes an accident precedent database 17 and an incident precedent database 18. The data storage unit 16 also stores the form of the accident fault rate report 127A illustrated in FIG. 19 and the form of the incident fault rate report 127B illustrated in FIG. 20. The control unit 12 of FIG. 16 includes a fault rate report generation unit 127.
The fault rate report generation unit 127 selects precedents similar to the situation report generated by the situation report generation unit 126 and automatically generates a fault rate report based on the selected precedent. Specifically, in the case of an accident, the fault rate report generation unit 127 selects from the accident precedent database 17 a precedent similar to the accident situation report data of the accident occurrence situation report 126A, and based on the selected precedent automatically generates the accident fault rate report 127A illustrated in FIG. 19. In the case of an incident, it selects from the incident precedent database 18 a precedent similar to the incident situation report data of the incident occurrence situation report 126B, and based on the selected precedent automatically generates the incident fault rate report 127B illustrated in FIG. 20.
FIG. 17 is a flowchart explaining the overall processing of the wearable information processing system 100 according to the second embodiment, and corresponds to FIG. 5. In the processing of FIG. 17, a fault rate report generation process is performed in step S30.
The fault rate report generation process in step S30 will now be described with reference to the drawings. FIG. 18 is a flowchart showing a specific example of the fault rate report generation process. The process of FIG. 18 is executed by the control unit 12 (the fault rate report generation unit 127 and the like) reading out the necessary program from the program storage unit 15.
 先ずステップS310にて過失割合報告生成部127は、状況報告生成処理(図17のステップS20)で生成した状況報告データ126a(事故状況報告データ又は事件状況報告データ)をデータ記憶部16から取得する。 First, in step S310, the fault rate report generation unit 127 acquires the situation report data 126a (accident situation report data or incident situation report data) generated in the situation report generation process (step S20 in FIG. 17) from the data storage unit 16. .
Next, in step S320, the fault rate report generation unit 127 determines whether accident situation report data has been acquired; if it determines that accident situation report data has been acquired, it acquires the form of the accident fault rate report 127A of FIG. 19 from the data storage unit 16 in step S322.
Then, in step S324, the fault rate report generation unit 127 checks the accident precedent database 17 and selects a precedent similar to the accident situation report data. Specifically, the schematic accident situation diagram contained in the accident situation report data is compared with the schematic accident situation diagrams contained in the precedents of the accident precedent database 17, and a precedent whose schematic accident situation diagram is similar is selected. A model trained by machine learning or AI (artificial intelligence) can be used for this comparison of schematic accident situation diagrams.
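One plausible realization of this diagram comparison, assuming the diagrams have already been encoded as feature vectors by some trained model, is a cosine-similarity ranking such as the following sketch; the encoding step itself and all names are assumptions.

```python
import numpy as np

def most_similar_precedents(query_vec: np.ndarray,
                            precedent_vecs: np.ndarray,
                            top_k: int = 3) -> list[int]:
    """Rank precedent diagrams by cosine similarity to the query diagram.

    `query_vec` is a feature vector for the generated accident diagram and
    `precedent_vecs` is an (N, D) array of feature vectors for the diagrams
    stored in the precedent database; how the vectors are produced (e.g. by
    a CNN trained on schematic diagrams) is left open here.
    """
    q = query_vec / np.linalg.norm(query_vec)
    p = precedent_vecs / np.linalg.norm(precedent_vecs, axis=1, keepdims=True)
    scores = p @ q  # cosine similarity of each precedent to the query
    return [int(i) for i in np.argsort(scores)[::-1][:top_k]]
```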
 なお、ステップS324における判例の選出は、事故状況略図を照合する場合を例に挙げたが、これに限られない。例えば事故状況報告データの事故状況略図以外のデータ(事故詳細情報や事件状況説明など)を照合するようにしてもよい。 It should be noted that the selection of judicial precedents in step S324 was exemplified by collating accident situation diagrams, but is not limited to this. For example, data (detailed information on the accident, explanation of the incident situation, etc.) other than the schematic diagram of the accident situation in the accident situation report data may be collated.
Having selected a precedent in step S324, the fault rate report generation unit 127 extracts the fault ratio from the selected precedent in step S340. For example, it extracts the basic fault ratio and the fault ratio evaluation based on correction element data, as shown in the "fault ratio" column of the accident precedent information in FIG. 19. The selected precedent may be displayed as a reference precedent in the remarks column of FIG. 19, for example. Correction element data includes, for example, whether the accident site is a "main road" or a "residential area" and whether B was at fault, such as "B's significant negligence" or "B's gross negligence", as shown in FIG. 19. The fault ratio evaluation is the number displayed for each of these correction elements. FIG. 19 is a specific example of the fault ratio evaluation of the correction element data with respect to A. The correction element data and the fault ratio evaluation are not limited to those illustrated; depending on the selected precedent, correction element data and a fault ratio evaluation for B may also be displayed.
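As a purely illustrative data model for the extracted values, the basic ratio and the per-element evaluations could be held together as below. The adjustment rule in `adjusted_ratio_for_a` is one plausible reading introduced only for illustration; the patent itself only describes extracting and displaying these values.

```python
from dataclasses import dataclass, field

@dataclass
class FaultAssessment:
    """Fault-ratio information extracted from a selected precedent.

    `basic_ratio` is the precedent's basic fault ratio for (A, B), and
    `modifiers` maps each correction element (e.g. "main road",
    "B's gross negligence") to the adjustment value shown in the report.
    The field names are illustrative, not the patent's data model.
    """
    basic_ratio: tuple[int, int]
    modifiers: dict[str, int] = field(default_factory=dict)

    def adjusted_ratio_for_a(self) -> int:
        # One plausible reading: apply each modifier to A's share and clamp.
        return max(0, min(100, self.basic_ratio[0] + sum(self.modifiers.values())))
```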
 次にステップS350にて過失割合報告生成部127は、過失割合報告(例えば図19の事故過失割合報告書127A)を自動で生成する。具体的には事故状況報告データから事故詳細情報、事故状況図、事故状況説明などを抽出して、事故過失割合報告書127Aのフォームに入力する。またステップS324で選出した判例から事故状況略図を抽出し、ステップS340にて抽出した過失割合(基本過失割合、修正要素データ、過失割合評価など)と共に事故過失割合報告書127Aのフォームに入力する。 Next, in step S350, the fault rate report generation unit 127 automatically generates a fault rate report (for example, the accident fault rate report 127A in FIG. 19). Specifically, detailed accident information, an accident situation diagram, an explanation of the accident situation, etc. are extracted from the accident situation report data, and entered into the form of the accident/fault ratio report 127A. In addition, a schematic diagram of the accident situation is extracted from the judicial precedent selected in step S324, and is entered in the form of the accident-fault ratio report 127A together with the fault ratio (basic fault ratio, correction element data, fault ratio evaluation, etc.) extracted in step S340.
Next, in step S360, the fault rate report generation unit 127 stores the generated fault rate report (for example, the accident fault rate report 127A of FIG. 19) in the data storage unit 16 as generated data 163, and the series of fault rate report generation processes ends.
 他方、ステップS320にて過失割合報告生成部127は、事故状況報告データを取得していないと判断した場合は、ステップS330にて事件状況報告データを取得したか否かを判断する。過失割合報告生成部127は、ステップS330にて事件状況報告データを取得していないと判断した場合は、ステップS310の処理に戻る。 On the other hand, if the fault rate report generator 127 determines in step S320 that the accident situation report data has not been acquired, it determines in step S330 whether or not the incident situation report data has been acquired. If the fault rate report generator 127 determines in step S330 that the incident situation report data has not been acquired, the process returns to step S310.
 ステップS330にて過失割合報告生成部127は、事件状況報告データを取得したと判断した場合は、ステップS332にて図20の事件過失割合報告書127Bのフォームをデータ記憶部16から取得する。 When the fault rate report generation unit 127 determines in step S330 that the incident situation report data has been acquired, it acquires the form of the fault rate report 127B of FIG. 20 from the data storage unit 16 in step S332.
 そして、ステップS334にて過失割合報告生成部127は、事件判例データベース18を照合し、事件状況報告データと類似の判例を選出する。具体的には事件状況報告データに含まれる事件状況略図と事件判例データベース18の判例に含まれる事件状況略図とを照合し、事件状況略図が類似する判例を選出する。事件状況略図の照合は、機械学習やAI(人工知能)などによる学習済みモデルを用いることができる。 Then, in step S334, the fault rate report generator 127 collates the case precedent database 18 and selects precedents similar to the case status report data. Specifically, the schematic diagram of the case situation included in the case situation report data and the schematic diagram of the case situation contained in the cases in the case precedent database 18 are collated to select cases with similar schematic diagrams of the case situation. A trained model by machine learning or AI (artificial intelligence) can be used for collation of the schematic diagram of the incident situation.
 なお、ステップS334における判例の選出は、事件状況略図を照合する場合を例に挙げたが、これに限られない。例えば事件状況報告データの事件状況略図以外のデータ(事件詳細情報や事件状況説明など)を照合するようにしてもよい。 It should be noted that the selection of judicial precedents in step S334 was exemplified by collating the schematic diagram of the case situation, but it is not limited to this. For example, data (incident detailed information, incident situation explanation, etc.) other than the incident situation schematic diagram of the incident situation report data may be collated.
Having selected a precedent in step S334, the fault rate report generation unit 127 extracts the fault ratio from the selected precedent in step S340. For example, it extracts the basic fault ratio and the precedent type shown in the "fault ratio" column of the incident precedent information in FIG. 20. Precedent types include, for example, "theft", "injury", "robbery" and "robbery resulting in injury", as shown in FIG. 20. If the selected precedent contains one of these terms, a circle is displayed for it; if not, a cross is displayed. The notation of precedent types is not limited to these. In incident precedents, the fault ratio is usually victim 0 : perpetrator 100, and sometimes no fault ratio is stated at all; in that case the fault ratio may be presumed to be victim 0 : perpetrator 100 and indicated as such. The precedent selected in step S334 may be displayed as a reference precedent in the remarks column.
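The circle/cross marking of precedent types can be sketched as a simple keyword check over the selected precedent text, together with the fallback presumption described above; both functions and their names are illustrative assumptions.

```python
# Precedent-type keywords as used in the report: theft, injury, robbery,
# robbery resulting in injury.
CASE_TYPES = ["窃盗", "傷害", "強盗", "強盗致傷"]

def classify_case_types(precedent_text: str) -> dict[str, str]:
    """Mark each precedent type with a circle if the selected precedent
    mentions it, otherwise a cross, mirroring the report table."""
    return {t: ("○" if t in precedent_text else "×") for t in CASE_TYPES}

def presumed_fault_ratio() -> tuple[int, int]:
    """Fallback when the precedent states no fault ratio:
    presume victim 0 : perpetrator 100, as described above."""
    return (0, 100)
```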
 次にステップS350にて過失割合報告生成部127は、過失割合報告(例えば図20の事件過失割合報告書127B)を自動で生成する。具体的には事件状況報告データから事件詳細情報、事件状況図、事件状況説明などを抽出して、事件過失割合報告書127Bのフォームに入力する。またステップS334で選出した判例から事件状況略図を抽出し、ステップS340にて抽出した過失割合(基本過失割合、修正要素データ、過失割合評価など)と共に事件過失割合報告書127Bのフォームに入力する。 Next, in step S350, the fault rate report generation unit 127 automatically generates a fault rate report (for example, the incident fault rate report 127B in FIG. 20). Specifically, the detailed information of the incident, the illustration of the incident situation, the description of the incident situation, etc. are extracted from the incident situation report data, and entered into the form of the incident negligence ratio report 127B. In addition, a schematic diagram of the case situation is extracted from the judicial precedent selected in step S334, and entered into the form of the case fault ratio report 127B together with the fault ratio (basic fault ratio, correction element data, fault ratio evaluation, etc.) extracted in step S340.
 次にステップS360にて過失割合報告生成部127は、生成した過失割合報告(例えば図20の事件過失割合報告書127B)を過失割合報告データ127aの生成データ163としてデータ記憶部16に記憶し、一連の過失割合報告生成を終了する。図17に示すように、ウエアラブル情報処理装置10で生成された過失割合報告データ127a(事故過失割合報告書又は事件過失割合報告書のデータ)は、通信部11を介して携帯端末装置20に送信される。携帯端末装置20は、過失割合報告データ127aを受信すると、ステップS14にて状況報告(事故発生状況報告書又は事件発生状況報告書)と共に、過失割合報告(事故過失割合報告書又は事件過失割合報告書)を表示部28に表示し、ステップS15にて通報するか否かを判断する。以降の処理は図5と同様であるため詳細な説明を省略する。 Next, in step S360, the negligence ratio report generator 127 stores the generated negligence ratio report (for example, the incident negligence ratio report 127B in FIG. 20) in the data storage unit 16 as the generated data 163 of the negligence ratio report data 127a, End the series of percent fault report generation. As shown in FIG. 17, fault rate report data 127a (data of accident fault rate report or incident fault rate report) generated by the wearable information processing device 10 is transmitted to the mobile terminal device 20 via the communication unit 11. be done. When the mobile terminal device 20 receives the fault rate report data 127a, in step S14, the situation report (accident occurrence situation report or incident occurrence situation report) and the fault rate report (accident fault rate report or incident fault rate report) ) is displayed on the display unit 28, and it is determined whether or not to notify in step S15. Since subsequent processing is the same as in FIG. 5, detailed description is omitted.
 以上のように第2実施形態によれば、第1実施形態と同様に、ウエアラブル装置30のカメラ映像から事故や事件の状況報告を自動的に生成でき、さらに過失割合報告も自動的に生成できる。しかも、判例データベース(事故判例データベース17又は事件判例データベース18)を状況略図で照合するので、テキスト検索で判例を探す場合に比較して類似の判例を見つけやすい。また、人物や乗り物の画像を所定の記号に置き換えた状況略図を自動で生成できるので、判例などの状況略図との直接照合も可能となり、人物や乗り物の画像をそのまま貼りつける場合に比較して照合精度を大幅に向上できる。判例データに基づく過失割合報告を自動で生成できるので、同様のケースの過失割合が分かり、訴訟の提起や示談の判断をしやすくなり、保険会社も保険の適用を判断しやすくなる。 As described above, according to the second embodiment, similar to the first embodiment, it is possible to automatically generate a situation report of an accident or incident from the camera image of the wearable device 30, and further to automatically generate a fault rate report. . Moreover, since the judicial precedent database (accident judicial precedent database 17 or incident judicial precedent database 18) is collated with a rough situation diagram, similar judicial precedents can be found more easily than when searching for judicial precedents by text search. In addition, since it is possible to automatically generate a schematic situation map by replacing the images of people and vehicles with predetermined symbols, it is possible to directly compare with the schematic situation charts of court cases, etc., compared to pasting the images of people and vehicles as they are. Matching accuracy can be greatly improved. Since it is possible to automatically generate a percentage-of-fault report based on case law data, it is possible to know the percentage of fault in similar cases, making it easier to file lawsuits and decide whether to settle, and for insurance companies to decide whether to apply insurance.
The precedent database of the second embodiment is divided into the accident precedent database 17 and the incident precedent database 18, and the fault rate report generation unit 127 determines from the situation data sent by the mobile terminal device 20 whether the event is an accident or an incident; if it determines that it is an accident, it selects from the accident precedent database a precedent whose schematic situation diagram is similar, and if it determines that it is an incident, it selects from the incident precedent database a precedent whose schematic situation diagram is similar. In the case of an accident a precedent with a similar schematic situation diagram is thus selected from the accident precedent database 17, and in the case of an incident a precedent with a similar schematic situation diagram is selected from the incident precedent database 18, so an appropriate precedent can be selected for the accident, and an appropriate precedent and crime category (statutory provisions and the like) can be selected for the incident.
<変形例>
 本発明は、上述した各実施形態に限定されず、例えば以降に説明する各種の応用・変形が可能である。また、これらの変形の態様および上述した各実施形態は、任意に選択された一または複数を適宜組み合わせることも可能である。また当業者であれば、特許請求の範囲に記載された範疇内において、各種の変更例または修正例に想到し得ることは明らかであり、それらについても当然に本発明の技術的範囲に属するものと了解される。
<Modification>
The present invention is not limited to the above-described embodiments, and various applications and modifications described below are possible. Moreover, it is also possible to appropriately combine one or a plurality of arbitrarily selected aspects of these modifications and each of the above-described embodiments. In addition, it is clear that a person skilled in the art can conceive of various modifications or modifications within the scope described in the claims, and these naturally belong to the technical scope of the present invention. It is understood.
(1) In the second embodiment described above, in the case of an incident, an incident constituent elements report 127C as shown in FIG. 21 may be created and displayed on the mobile terminal device 20 in step S14 of FIG. 17 instead of the incident fault rate report. The incident constituent elements report 127C of FIG. 21 displays, in the fault ratio column of FIG. 20, a presumption of the charges in place of the fault ratio. For example, the presumed charges are estimated from the precedent selected in step S334 of FIG. 18. As with the precedent types of FIG. 20, the presumed charges include "theft", "injury", "robbery", "robbery resulting in injury" and the like. If there is no injury, "theft" is possible, but if the victim's resistance was suppressed, "robbery" is also possible. If there is an injury, "injury", "robbery" or "robbery resulting in injury" is possible, and combined charges of "theft" and "injury" are also possible. If a robbery was committed and the victim was injured in the course of it, the charge may be "robbery resulting in injury". In the case of "theft", if no assault took place, theft alone is likely, but if an assault took place, combined charges of "theft" and "assault" are possible. Possible charges in the incident can thus be presumed from precedents. As explained in the second embodiment, in the case of an incident the fault ratio is usually victim 0 : perpetrator 100 and sometimes no fault ratio is stated, so in such cases the incident constituent elements report 127C may be created automatically. This presumption of the charges may also be included in the incident fault rate report 127B.
(2)上記第1実施形態及び第2実施形態において、携帯端末装置20がウエアラブル情報処理を行うための専用のアプリケーションプログラムを実行しているときに、表示部28に緊急ボタンを表示するようにしてもよい。緊急ボタンは、携帯端末装置20が衝撃を検知しなくても、事故事件選択画面を表示するためのボタンである。緊急ボタンが押されると、例えば図22に示す事故事件選択画面が表示部28に表示される。 (2) In the first and second embodiments, the emergency button is displayed on the display unit 28 while the mobile terminal device 20 is executing a dedicated application program for performing wearable information processing. may The emergency button is a button for displaying the accident case selection screen even if the mobile terminal device 20 does not detect an impact. When the emergency button is pressed, for example, an accident case selection screen shown in FIG. 22 is displayed on the display unit 28.
In this case, step S11 of FIGS. 10 and 18 is treated as determining whether an impact was detected or the emergency button was pressed, and when the emergency button was pressed, the processing from step S11 onward is applied by reading "at the time of impact" as "at the time the emergency button was pressed". Specifically, for example, the impact-time data of FIGS. 10 and 18 is read as the data at the time the emergency button was pressed; the situation data at the time of impact (time data, GPS data, sensor data) is read as the situation data at the time the emergency button was pressed (time data, GPS data, sensor data); the video data at the time of impact is read as the video data at the time the emergency button was pressed; and the map data at the time of impact is read as the map data at the time the emergency button was pressed.
 このように、緊急ボタンで事故事件選択画面を携帯端末装置20に表示できるようにすることで、例えばストーカーのように携帯端末装置20に衝撃がほとんどないような事故や事件の場合も状況報告や過失割合報告を自動生成できる。 In this way, by enabling the accident case selection screen to be displayed on the mobile terminal device 20 by pressing the emergency button, even in the case of an accident or incident in which the mobile terminal device 20 has almost no impact, such as a stalker, the situation can be reported and Percentage fault reports can be automatically generated.
(3) When reporting to the police in the report of step S16 of FIGS. 10 and 18, a report instruction screen such as that shown in FIG. 23 may be displayed on the display unit 28. This allows the user to choose whether or not to notify the police when an accident or incident has occurred. The situation report may also be transmitted to the police together with the notification, so that prompt resolution of the accident or incident can be expected. In particular, displaying the police report instruction screen when the emergency button is pressed also makes an emergency response possible.
(4) In the schematic situation diagram generation process of FIG. 11, a case was illustrated in which a video frame (still image) from a predetermined time (for example, several seconds) before the time of impact is selected from the impact-time video data 422a. As this pre-impact image, an image generated from part of the omnidirectional video constituting the video data 422a, as in FIG. 12 above, can be used, but this is not limiting; an omnidirectional image such as that of FIG. 25, described later, may be used as it is. The persons and vehicles appearing in the omnidirectional image are recognized, and the object that caused the accident or incident involving user A is identified among them.
FIG. 12 illustrated the case where a single bicycle ridden by B appears in user A's omnidirectional video. When only one candidate object appears in the omnidirectional video like this, identifying the object is easy. In an actual scene, however, several vehicles and people may appear in the omnidirectional video. In that case the object (vehicle or person) that caused the accident or incident involving user A must be identified from among the vehicles and people appearing in the omnidirectional video.
 ここで、本発明の変形例にかかる対象特定処理について図24乃至図28を参照しながら説明する。図24は、対象特定処理の具体例を示すフローチャートである。図25乃至図28は、図12乃至図15と同様に、ユーザ甲pA(Party pA)が青信号で横断歩道を渡りはじめたときに、乙pB(Party pB)の運転する自転車が横断歩道に侵入し、甲に衝突したという事故の場合を例示する。なお、図25乃至図28では、説明を分かり易くするため、乙の自転車だけを表示し、他の乗り物や人物などは省略している。 Here, the target specifying processing according to the modified example of the present invention will be described with reference to FIGS. 24 to 28. FIG. FIG. 24 is a flowchart illustrating a specific example of target identification processing. 25 to 28, similar to FIGS. 12 to 15, when user A pA (Party pA) starts crossing the crosswalk with a green light, a bicycle driven by Party pB (Party pB) enters the crosswalk. Then, the case of an accident in which the car collides with the instep is illustrated. In addition, in FIGS. 25 to 28, in order to make the explanation easier to understand, only B's bicycle is displayed, and other vehicles and people are omitted.
FIGS. 25(a) and 26(a) are omnidirectional images based on the video data 422a, and FIGS. 25(b) and 26(b) are images generated from those omnidirectional images. FIGS. 25(a) and (b) show the scene several seconds before the impact or the pressing of the emergency button, and FIGS. 26(a) and (b) show the scene several seconds earlier still. FIG. 27 shows a specific example of the target candidate diagram 422d, which shows the positions and movements of A and B based on the images of FIGS. 25 and 26. FIG. 28 shows a specific example of the target candidate verification diagram 422e, in which the position mX of the accident or incident (the position at the time of impact or at the time the emergency button was pressed) is superimposed on FIG. 27. In FIGS. 27 and 28, m1 for A and t1 for B are the latitude and longitude of the GPS data of A and B based on the omnidirectional image of FIG. 25(a), and m2 for A and t2 for B are the latitude and longitude of the GPS data of A and B based on the omnidirectional image of FIG. 26(a).
 図24の対象特定処理は、図11の状況略図生成処理を実行する前に、制御部12により所定のプログラムが読み出されて実行される。図24に示すように、先ず制御部12は、ステップS461にて略図生成用の映像を選出する。具体的には制御部12は、衝撃時の時刻の所定時間前(例えば数秒前)の全方位映像(例えば図25(a))を、衝撃時の映像データ422aから選出する。 The target specifying process in FIG. 24 is executed by reading out a predetermined program by the control unit 12 before executing the schematic situation diagram generation process in FIG. As shown in FIG. 24, the control unit 12 first selects an image for schematic diagram generation in step S461. Specifically, the control unit 12 selects an omnidirectional image (for example, FIG. 25A) of a predetermined time (eg, several seconds before) the time of impact from the image data 422a at the time of impact.
 次いでステップS462にて制御部12は、選出した全方位映像から乗り物や人物を検出し、事故又は事件を引き起こした対象候補となり得る人物や乗り物が複数あるか否かを判断する。具体的には制御部12は、全方位映像から人物や乗り物の画像を検出し、人物や乗り物の画像が複数検出されたかどうか判断する。制御部12は、ステップS462にて対象候補(人物や乗り物の画像)が1つしか検出されないと判断した場合は、対象特定処理を終了して図11の状況略図生成処理に移り、ステップS462にて対象候補(人物や乗り物の画像)が複数検出されたと判断した場合はステップS463に移る。 Next, in step S462, the control unit 12 detects vehicles and persons from the selected omnidirectional video, and determines whether or not there are multiple persons or vehicles that can be candidates for causing an accident or incident. Specifically, the control unit 12 detects an image of a person or vehicle from the omnidirectional video, and determines whether or not multiple images of a person or vehicle have been detected. If the control unit 12 determines in step S462 that only one target candidate (image of a person or vehicle) is detected, the control unit 12 terminates the target specifying process, shifts to the schematic situation diagram generation process of FIG. 11, and proceeds to step S462. If it is determined that a plurality of target candidates (images of persons or vehicles) have been detected, the process moves to step S463.
Next, in step S463, the control unit 12 selects from the video data 422a an omnidirectional image (for example, FIG. 26(a)) from a further predetermined time (for example, several seconds) before the omnidirectional image of FIG. 25 selected in step S461. Then, in step S464, the control unit 12 calculates the positions (latitude and longitude of the GPS data) of the candidate persons and vehicles from the two selected omnidirectional images (FIGS. 25(a) and 26(a)), generates the target candidate diagram 422d of FIG. 27, and detects the position and movement of each candidate.
A specific example of how the positions (latitude and longitude of the GPS data) of candidate persons and vehicles are calculated will now be described. Since the omnidirectional image of FIG. 25(a) is, for example, an image of user A's surroundings, the center m1 of the omnidirectional image can be taken as A's position. In this case the camera image may be adjusted so that the center of the omnidirectional image coincides with the user's position. The position of B relative to A is then represented by the vector (m1→t1): the direction of this vector is the direction of B as seen from A, and its magnitude (length) corresponds to the distance between A and B. The actual direction and distance of B are calculated using a learning model trained in advance by associating vectors (directions and distances) to multiple objects within the omnidirectional image with the actual vectors (directions and distances) from A to those objects. Once B's direction and distance relative to A are known, the latitude and longitude of B's GPS data can be obtained from the latitude and longitude of A's GPS data. The method of calculating the latitude and longitude of B's GPS data is not limited to the above; they may be calculated from the omnidirectional image by another known method. As described above, the latitude and longitude of A's GPS data received from the mobile terminal device 20 are used.
 次にステップS465にて制御部12は、対象候補がユーザ甲に近づいているか否かを判断する。具体的には制御部は、図27に示すように2つの全方位映像から得られた対象候補の位置と動きに基づいて対象候補がユーザ甲に近づいているか否かを判断する。例えば近づいているか否かは、動きの方向がユーザ甲の進路に向かっているか否かで判断する。 Next, in step S465, the control unit 12 determines whether or not the target candidate is approaching user A. Specifically, as shown in FIG. 27, the control unit determines whether or not the target candidate is approaching user A based on the position and movement of the target candidate obtained from the two omnidirectional images. For example, whether or not it is approaching is determined by whether or not the direction of movement is toward the course of user A.
For example, in the target candidate diagram 422d of FIG. 27, the direction of movement of B's bicycle (t2→t1) points toward user A's course (m2→m1), so B can be judged to be approaching. By contrast, a vehicle or person that appears in the image but is stationary does not change position and is therefore not a target candidate, and one whose direction of movement leads away from A's course is likewise not a target candidate. In this way, even when several persons and vehicles appear in the image, the target identification process can automatically judge whether each is a candidate for the object that caused the accident or incident involving user A.
Next, in step S466, the control unit 12 verifies whether the target candidate is actually the object of the accident or incident. Specifically, the control unit 12 superimposes the position mX at the time of impact or of the emergency button press (latitude and longitude of the GPS data) on the target candidate diagram 422d of FIG. 27 to generate the target candidate verification diagram 422e shown in FIG. 28. Then, in step S467, the control unit 12 determines whether the accident or incident position mX (the position at the time of impact or of the emergency button press) lies on the extension line (dotted arrow) of the candidate's movement (t2→t1). The calculation accuracy of the GPS data for B's position may affect the accuracy of this verification, so a certain tolerance may be provided, for example according to that calculation accuracy, so that the position can still be judged to be on the extension line even with some deviation.
 ステップS467にて制御部12は、事故又は事件の位置がその対象候補の動きの延長線上であると判断した場合は、ステップS468にてその対象候補を事件又は事故を引き起こした対象と特定する。以上のステップS465乃至ステップS468までは、対象候補ごとに行われる。制御部12は、ステップS465にて対象候補を任意で選んでその対象候補に対してステップS465乃至ステップS468を行う。ステップS465でその対象候補が近づいて来ない場合やステップS467にて事故又は事件の位置がその対象候補の動きの延長線上でない判断した場合は、ステップS464の処理に戻り、別の任意の対象候補を選んでステップS465乃至ステップS468を行う。 When the control unit 12 determines in step S467 that the position of the accident or incident is on the extension line of the movement of the target candidate, in step S468 the target candidate is identified as the target that caused the incident or accident. The above steps S465 to S468 are performed for each target candidate. The control unit 12 arbitrarily selects a target candidate in step S465 and performs steps S465 to S468 on the target candidate. If the target candidate does not approach in step S465 or if it is determined in step S467 that the position of the accident or incident is not on the extension line of the target candidate's movement, the process returns to step S464, and another arbitrary target candidate is determined. is selected, and steps S465 to S468 are performed.
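The two geometric tests described in steps S465 to S467 can be sketched as follows, assuming the positions have been converted to local metric coordinates (for example, metres east/north of some origin). The tolerance value and the vector formulation are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def is_approaching(user_now: np.ndarray,
                   cand_prev: np.ndarray, cand_now: np.ndarray) -> bool:
    """A candidate is treated as approaching when its movement vector points
    toward the user's current position (step S465)."""
    movement = cand_now - cand_prev
    to_user = user_now - cand_now
    if np.linalg.norm(movement) < 1e-9:  # stationary object: not a candidate
        return False
    return float(np.dot(movement, to_user)) > 0

def on_extension_line(cand_prev: np.ndarray, cand_now: np.ndarray,
                      impact_pos: np.ndarray, tolerance_m: float = 2.0) -> bool:
    """Check whether the impact (or button-press) position lies on the forward
    extension of the candidate's movement, within a tolerance that absorbs
    GPS estimation error (step S467)."""
    direction = cand_now - cand_prev
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return False
    direction = direction / norm
    rel = impact_pos - cand_now
    along = float(np.dot(rel, direction))      # must be ahead of the candidate
    perp = np.linalg.norm(rel - along * direction)
    return along >= 0 and perp <= tolerance_m
```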
 ステップS468にて制御部12は、対象を特定した場合には、対象特定処理を終了し、図11の状況略図生成処理に移る。なお、対象特定処理は、状況略図生成部125が行うようにしてもよい。対象特定処理のステップS461と状況略図生成処理のステップS261の処理は重複するので、これら対象特定処理と状況略図生成処理を連続して行う場合、状況略図生成処理では図11のステップS261の処理を省略して、図24のステップS461で選定した例えば図25の映像をそのまま利用するようにしてもよい。そうすると、図11のステップS262の現場状況略図を生成する前に、対象特定処理が行われることになる。 When the target is specified in step S468, the control unit 12 ends the target specifying process and proceeds to the schematic situation diagram generation process of FIG. Note that the target identification process may be performed by the schematic situation diagram generation unit 125 . Since step S461 of the target specifying process and step S261 of the schematic situation drawing process overlap, when these target specifying process and the schematic situation drawing process are performed continuously, the process of step S261 in FIG. 11 is performed in the schematic situation drawing process. For example, the image shown in FIG. 25 selected in step S461 of FIG. 24 may be used as it is. Then, the object specifying process is performed before generating the site situation schematic diagram in step S262 of FIG.
 このような変形例にかかる対象特定処理によれば、全方位映像に複数の乗り物や人物が写っている場合でも、ユーザ甲を巻き込む事故又は事件を引き起こした対象として自動的に特定することができる。また、事故又は事件の発生時(衝撃時又は緊急ボタン押下時)より少し前の全方位映像(映像データ)とそれよりもさらに前の全方位映像(映像データ)とを比較することで、対象候補である人物や乗り物の位置と動きを検出できる。これにより、その対象候補の位置と動きに基づいて対象を特定することができるので、単に映像に写っている対象の大きさの違いなどから対象を特定する場合に比較してより正確に対象を特定できる。 According to the object identification processing according to such a modified example, even when a plurality of vehicles or persons are shown in the omnidirectional video, it is possible to automatically identify the object as the object that caused the accident or incident involving user A. . In addition, by comparing the omnidirectional video (video data) slightly before the occurrence of the accident or incident (at the time of impact or pressing the emergency button) with the omnidirectional video (video data) even before that, the target It can detect the position and motion of candidate people and vehicles. As a result, the target can be specified based on the position and movement of the target candidate, so the target can be specified more accurately than when the target is specified simply from the difference in size of the target in the image. can be identified.
(5)上記第1実施形態及び第2実施形態並びに変形例において、上記状況略図生成処理や上記対象特定処理で選定される全方位映像は、ウエアラブル装置30からのカメラ映像から取得する場合を例示したが、必ずしもこれに限られず、監視カメラの映像から取得するようにしてもよい。例えば事故や事件の状況によってはウエアラブル装置30が破損する場合も考えられるので、そのような場合は、破損前後の映像を周囲の監視カメラの映像から補完するようにしてもよい。監視カメラの映像はユーザ甲の全方位映像になるように合成することで、上記実施形態や変形例の状況略図生成処理や対象特定処理をそのまま適用できる。 (5) In the above-described first and second embodiments and modifications, the omnidirectional video selected in the schematic situation diagram generation process and the target identification process is obtained from the camera video from the wearable device 30 as an example. However, the information is not necessarily limited to this, and may be acquired from the image of the surveillance camera. For example, it is conceivable that the wearable device 30 may be damaged depending on the circumstances of an accident or incident. In such a case, the images before and after the damage may be supplemented from the images of surrounding surveillance cameras. By synthesizing the video of the surveillance camera so as to become the omnidirectional video of the user A, the schematic situation diagram generation processing and target identification processing of the above-described embodiment and modifications can be applied as they are.
(6)上記第1実施形態及び第2実施形態並びに変形例において、日本国の状況報告書を例に挙げて説明したが、これに限られるものではない。例えば外国の状況報告書のフォーム(ひな形)を取り込むことで外国の状況報告書にも適用可能である。その場合、日本語がベースの部分はその国の言語に自動翻訳してその国のフォームに合わせることもできる。また、状況略図における人物や乗り物の記号についてもその国の指定の記号があればそれを取り込むことで、容易に置き換え可能である。しかも、地図情報についても当然に世界の地図情報を取り込むことで、人物や乗り物の画像を所定の記号に置き換えて地図上の位置で示す各国のフォームに対応した現場状況略図を生成することができる。 (6) In the first embodiment, the second embodiment, and the modified example, the situation report in Japan was used as an example, but the present invention is not limited to this. For example, it can be applied to a foreign situation report by incorporating a foreign situation report form (template). In that case, the Japanese-based part can be automatically translated into the language of the country to match the form of the country. In addition, symbols of people and vehicles in schematic diagrams can be easily replaced by incorporating symbols designated by the country, if any. Moreover, as for the map information, by taking in the map information of the world as a matter of course, it is possible to replace the images of people and vehicles with predetermined symbols and generate a rough map of the site situation corresponding to the form of each country indicated by the position on the map. .
 100…ウエアラブル情報処理システム、10…ウエアラブル情報処理装置、10L…バスライン、11…通信部、12…制御部、14…記憶部、15…プログラム記憶部、16…データ記憶部、17…事故判例データベース、18…事件判例データベース、20…携帯端末装置、20L…バスライン、21…通信部、22…制御部、23…記憶部、24…カメラ、25…マイク、26…センサ部、27…入力部、28…表示部、30…ウエアラブル装置、31…通信部、34…カメラ、34a…カメラ映像、40…映像処理装置、40L…バスライン、41…通信部、42…制御部、43…記憶部、50…地図処理装置、50L…バスライン、51…通信部、52…制御部、53…記憶部、121…映像データ取得部、122…状況データ取得部、122a…状況データ、123…地図データ取得部、125…状況略図生成部、125a…状況略図、126…状況報告生成部、126a…状況報告データ、126A…事故発生状況報告書、126B…事故発生状況報告書、127…過失割合報告生成部、127A…事故過失割合報告書、127B…事件過失割合報告書、127C…事件構成要件報告書、161…ユーザデータ、162…取得データ、163…生成データ、221…センサデータ取得部、222…衝撃検知部、223…衝撃データ収集部、223a…衝撃データ、224…状況データ収集部、224a…時刻データ、224b…地図データ、231…プログラム記憶部、232…データ記憶部、421…時刻データ取得部、422…映像データ選択部、422a…衝撃時の映像データ、422b…略図生成用映像、422c…現場状況略図、422d…対象候補図、422e…対象候補検証図、432…映像データ、521…GPSデータ取得部、522…地図データ選択部、522a…衝撃時の地図データ、522b…現場付近地図、532…地図データ、N…ネットワーク、P…プッシュ通知。
 
DESCRIPTION OF SYMBOLS 100... Wearable information processing system, 10... Wearable information processing apparatus, 10L... Bus line, 11... Communication part, 12... Control part, 14... Storage part, 15... Program storage part, 16... Data storage part, 17... Accident judgment Database 18 Case precedent database 20 Portable terminal device 20L Bus line 21 Communication unit 22 Control unit 23 Storage unit 24 Camera 25 Microphone 26 Sensor unit 27 Input Unit 28 Display unit 30 Wearable device 31 Communication unit 34 Camera 34a Camera image 40 Video processing device 40L Bus line 41 Communication unit 42 Control unit 43 Storage Unit 50 Map processing device 50L Bus line 51 Communication unit 52 Control unit 53 Storage unit 121 Video data acquisition unit 122 Situation data acquisition unit 122a Situation data 123 Map Data acquisition unit 125 Situation diagram generation unit 125a Situation diagram 126 Situation report generation unit 126a Situation report data 126A Accident occurrence situation report 126B Accident occurrence situation report 127 Negligence ratio report Generating unit 127A Accident negligence ratio report 127B Incident negligence ratio report 127C Incident constituent elements report 161 User data 162 Acquired data 163 Generated data 221 Sensor data acquisition unit 222 Impact detection unit 223 Impact data collection unit 223a Impact data 224 Situation data collection unit 224a Time data 224b Map data 231 Program storage unit 232 Data storage unit 421 Time data Acquisition unit 422 Video data selection unit 422a Video data at the time of impact 422b Schematic diagram generation video 422c Field situation schematic 422d Target candidate diagram 422e Target candidate verification diagram 432 Video data 521 ... GPS data acquisition unit 522 ... map data selection unit 522a ... map data at the time of impact 522b ... near site map 532 ... map data N ... network P ... push notification.

Claims (10)

1. A wearable information processing device that processes information, including a camera video, from a wearable device worn by a person,
    the wearable information processing device being connected via a network to a video processing device that stores the camera video of the wearable device as video data, a map processing device that stores map data, and a mobile terminal device capable of collecting situation data including time data and GPS data at the time of occurrence of an accident or incident, the wearable information processing device comprising:
    a situation data acquisition unit that acquires the situation data from the mobile terminal device;
    a video data acquisition unit that acquires, from the video processing device, the video data based on the time data;
    a map data acquisition unit that acquires, from the map processing device, the map data based on the GPS data;
    a situation diagram generation unit that generates a situation diagram based on at least the video data and the map data; and
    a situation report generation unit that generates a situation report of the accident or incident, the situation report including the situation diagram.
2. The wearable information processing device according to claim 1, wherein the camera video is a video of the surroundings of the person wearing the wearable device.
3. The wearable information processing device according to claim 1 or claim 2, wherein the situation diagram generation unit:
    detects, from the video data acquired by the video data acquisition unit, an image of a target that caused the accident or incident, and generates a site situation diagram in which the image of the target is replaced with a predetermined symbol and shown at its position on a map;
    generates, from the map data acquired by the map data acquisition unit, a site-vicinity map including the position indicated by the GPS data; and
    generates the situation diagram from the site situation diagram and the site-vicinity map.
4. The wearable information processing device according to any one of claims 1 to 3, wherein:
    the wearable device is connected to the mobile terminal device;
    the video processing device receives the camera video from the wearable device via the mobile terminal device and stores it as a plurality of pieces of video data, each covering a predetermined video duration; and
    the video data acquisition unit acquires, from among the plurality of pieces of video data, video data that includes at least the video at the time indicated by the time data.
5. The wearable information processing device according to claim 3 or claim 4, wherein the situation diagram generation unit:
    determines, before generating the site situation diagram, whether a plurality of images of persons or vehicles that are candidates for the target have been detected in the video data acquired by the video data acquisition unit;
    when it determines that only one image of a candidate person or vehicle has been detected, identifies that one image as the target that caused the accident or incident and generates the site situation diagram, the site-vicinity map, and the situation diagram; and
    when it determines that a plurality of images of candidate persons or vehicles have been detected, detects the position and movement of each candidate person or vehicle by comparing the video data with earlier video data to generate a target candidate diagram, identifies, from among the plurality of images, the target that caused the accident or incident based on the positions and movements in the target candidate diagram, and generates the site situation diagram, the site-vicinity map, and the situation diagram.
6. The wearable information processing device according to any one of claims 1 to 5, wherein:
    the mobile terminal device includes an impact sensor that detects an impact and, when an impact is detected by the impact sensor, displays on a display unit an accident/incident selection screen for selecting occurrence of an accident, occurrence of an incident, or neither; and
    the situation data acquisition unit acquires, from the mobile terminal device, the situation data including information on whether occurrence of an accident or occurrence of an incident has been selected.
7. The wearable information processing device according to claim 6, wherein the situation report generation unit:
    determines, from the situation data received from the mobile terminal device, whether an accident or an incident has occurred;
    when it determines that an accident has occurred, generates the situation report from a predetermined accident occurrence situation report form; and
    when it determines that an incident has occurred, generates the situation report from a predetermined incident occurrence situation report form.
8. The wearable information processing device according to claim 6 or claim 7, further comprising:
    a precedent database that stores precedent data including situation diagrams; and
    a negligence ratio report generation unit that compares the situation diagram generated by the situation diagram generation unit with the situation diagrams included in the precedents of the precedent database, selects a precedent whose situation diagram is similar, extracts a negligence ratio from the selected precedent, and generates a negligence ratio report including the situation diagram of the selected precedent and the negligence ratio.
9. The wearable information processing device according to claim 8, wherein:
    the precedent database is divided into an accident precedent database and an incident precedent database; and
    the negligence ratio report generation unit determines, from the situation data received from the mobile terminal device, whether an accident or an incident has occurred, selects a precedent whose situation diagram is similar from the accident precedent database when it determines that an accident has occurred, and selects a precedent whose situation diagram is similar from the incident precedent database when it determines that an incident has occurred.
10. A program that causes a computer to execute a situation report generation process for an accident or incident performed by a wearable information processing device, wherein:
    the wearable information processing device is connected via a network to a video processing device that stores, as video data, a camera video of a wearable device worn by a person, a map processing device that stores map data, and a mobile terminal device capable of collecting situation data including time data and GPS data at the time of occurrence of an accident or incident; and
    the situation report generation process includes:
    a step of acquiring the situation data from the mobile terminal device;
    a step of acquiring, from the video processing device, the video data based on the time data;
    a step of acquiring, from the map processing device, the map data based on the GPS data;
    a step of generating a situation diagram based on at least the video data and the map data; and
    a step of generating a situation report including the situation diagram.
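Read together, claims 1 and 10 describe a pipeline of acquisition and generation steps. The following Python sketch shows one possible ordering of those steps; the client objects for the mobile terminal, the video processing device, the map processing device, and the report form are hypothetical, and the claims are not limited to this structure.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SituationData:
    """Situation data collected by the mobile terminal device."""
    time: datetime                # time data at the moment of the accident/incident
    gps: tuple[float, float]      # (latitude, longitude) from the GPS data
    event_type: str               # "accident", "incident", or "none"

def generate_situation_report(terminal, video_server, map_server, report_form) -> dict:
    """One possible arrangement of the claimed steps; all four arguments are
    hypothetical interface objects supplied by the caller."""
    situation = terminal.get_situation_data()                      # acquire situation data
    video = video_server.get_video(at_time=situation.time)         # video data based on time data
    area_map = map_server.get_map(around=situation.gps)            # map data based on GPS data
    diagram = {"video": video, "map": area_map}                    # stand-in for the situation diagram
    return report_form.fill(diagram=diagram, situation=situation)  # report including the diagram
```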
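Claim 3 replaces the detected target with a symbol placed at a map position and combines the result with a site-vicinity map. A minimal sketch of that idea follows; the symbol table, the detector output format, and the crude pixel-to-coordinate conversion are assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                  # e.g. "person", "car"
    pixel_xy: tuple[int, int]   # position of the detected object in the video frame

SYMBOLS = {"person": "P", "car": "C", "bicycle": "B"}  # placeholder symbol table

def pixel_to_map(pixel_xy, gps, frame_size):
    """Crude placeholder: spread detections around the GPS fix (assumption)."""
    dx = (pixel_xy[0] / frame_size[0] - 0.5) * 1e-4
    dy = (0.5 - pixel_xy[1] / frame_size[1]) * 1e-4
    return (gps[0] + dy, gps[1] + dx)

def make_site_diagram(detections, gps, frame_size=(1920, 1080)):
    """Replace each detected object with a symbol positioned on the site-vicinity map."""
    return [{"symbol": SYMBOLS.get(d.label, "?"),
             "position": pixel_to_map(d.pixel_xy, gps, frame_size)}
            for d in detections]

if __name__ == "__main__":
    dets = [Detection("car", (960, 700)), Detection("person", (400, 650))]
    print(make_site_diagram(dets, gps=(35.6812, 139.7671)))
```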
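Claim 4 has the video processing device store the camera video as segments of a predetermined duration and has the video data acquisition unit pick the segment containing the time indicated by the time data. A minimal sketch, assuming segments are indexed by start time and share a fixed, assumed length.

```python
from datetime import datetime, timedelta

SEGMENT_LENGTH = timedelta(minutes=5)   # assumed duration; the claim does not fix a value

def select_segment(segment_starts: list[datetime], event_time: datetime):
    """Return the start time of the stored segment that contains event_time, or None."""
    for start in segment_starts:
        if start <= event_time < start + SEGMENT_LENGTH:
            return start
    return None

if __name__ == "__main__":
    starts = [datetime(2022, 7, 28, 10, 0) + i * SEGMENT_LENGTH for i in range(6)]
    print(select_segment(starts, datetime(2022, 7, 28, 10, 17)))  # -> 10:15 segment
```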
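Claim 5 distinguishes a single candidate from multiple candidates and, in the latter case, uses earlier video data to estimate each candidate's position and movement. The sketch below illustrates that branching with simple centroid tracking; the nearest-centroid matching and the "largest displacement" selection rule are illustrative heuristics, not the claimed method.

```python
from math import dist

def track_candidates(current, previous):
    """Pair each current detection with its nearest same-label detection in the
    previous frame and return (label, position, movement-vector) triples."""
    tracked = []
    for cur_label, cur_xy in current:
        prev_xy = min((xy for lbl, xy in previous if lbl == cur_label),
                      key=lambda xy: dist(xy, cur_xy), default=cur_xy)
        tracked.append((cur_label, cur_xy,
                        (cur_xy[0] - prev_xy[0], cur_xy[1] - prev_xy[1])))
    return tracked

def identify_target(current, previous):
    """Single candidate: take it. Multiple candidates: pick the one that moved the
    most between frames (illustrative rule only)."""
    if len(current) == 1:
        return current[0]
    tracked = track_candidates(current, previous)
    label, xy, _move = max(tracked, key=lambda t: dist((0, 0), t[2]))
    return (label, xy)

if __name__ == "__main__":
    prev = [("car", (100, 500)), ("bicycle", (800, 520))]
    curr = [("car", (300, 520)), ("bicycle", (820, 515))]
    print(identify_target(curr, prev))   # the car moved further, so it is chosen
```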
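Claims 6 and 7 tie an impact detected by the terminal's sensor to a user prompt (accident, incident, or neither) and to the choice of report form. A minimal sketch follows, assuming a simple acceleration-magnitude threshold and console input in place of the terminal's selection screen; the threshold value and form identifiers are placeholders.

```python
from math import sqrt

IMPACT_THRESHOLD_G = 2.5   # assumed threshold; the claims do not specify a value

def impact_detected(accel_xyz) -> bool:
    """Treat an acceleration magnitude above the threshold as an impact."""
    return sqrt(sum(a * a for a in accel_xyz)) >= IMPACT_THRESHOLD_G

def ask_event_type() -> str:
    """Console stand-in for the accident/incident selection screen of claim 6."""
    choice = input("Impact detected. [a]ccident / [i]ncident / [n]either: ").strip().lower()
    return {"a": "accident", "i": "incident"}.get(choice, "none")

def select_report_form(event_type: str) -> str:
    """Claim 7: choose the report template according to the selected event type."""
    forms = {"accident": "accident_occurrence_report_form",
             "incident": "incident_occurrence_report_form"}
    return forms.get(event_type, "no_report")

if __name__ == "__main__":
    if impact_detected((0.2, 3.1, 0.4)):
        print(select_report_form(ask_event_type()))
```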
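Claims 8 and 9 compare the generated situation diagram with diagrams stored in accident and incident precedent databases, select a similar precedent, and report its negligence ratio. The sketch below uses a toy feature-overlap similarity; the feature representation, the Jaccard measure, and the sample database entries are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Precedent:
    case_id: str
    features: set           # e.g. {"intersection", "car_vs_bicycle"}
    negligence_ratio: str   # e.g. "80:20"

ACCIDENT_DB = [Precedent("acc-001", {"intersection", "car_vs_bicycle"}, "80:20"),
               Precedent("acc-002", {"crosswalk", "car_vs_person"}, "100:0")]
INCIDENT_DB = [Precedent("inc-001", {"sidewalk", "person_vs_person"}, "50:50")]

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two diagram feature sets (illustrative measure)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def negligence_report(diagram_features: set, event_type: str) -> dict:
    """Claim 9: pick the database by event type; claim 8: select the most similar
    precedent and report its negligence ratio together with the diagram features."""
    db = ACCIDENT_DB if event_type == "accident" else INCIDENT_DB
    best = max(db, key=lambda p: similarity(diagram_features, p.features))
    return {"precedent": best.case_id,
            "negligence_ratio": best.negligence_ratio,
            "diagram_features": sorted(diagram_features)}

if __name__ == "__main__":
    print(negligence_report({"intersection", "car_vs_bicycle", "signal_red"}, "accident"))
```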
PCT/JP2022/029209 2021-07-28 2022-07-28 Wearable information processing device and program WO2023008540A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022576445A JP7239126B1 (en) 2021-07-28 2022-07-28 Wearable information processing device, program and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021123757 2021-07-28
JP2021-123757 2021-07-28

Publications (1)

Publication Number Publication Date
WO2023008540A1 true WO2023008540A1 (en) 2023-02-02

Family

ID=85086880

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/029209 WO2023008540A1 (en) 2021-07-28 2022-07-28 Wearable information processing device and program

Country Status (2)

Country Link
JP (1) JP7239126B1 (en)
WO (1) WO2023008540A1 (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002290626A (en) * 2001-03-26 2002-10-04 Toshiba Medical System Co Ltd First aid support system
JP2011013911A (en) * 2009-07-01 2011-01-20 System Origin Co Ltd Roll call management system
US20130254133A1 (en) * 2012-03-21 2013-09-26 RiskJockey, Inc. Proactive evidence dissemination

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010081480A (en) * 2008-09-29 2010-04-08 Fujifilm Corp Portable suspicious individual detecting apparatus, suspicious individual detecting method, and program
JP2018061215A (en) * 2016-10-07 2018-04-12 パナソニックIpマネジメント株式会社 Monitoring system and monitoring method
JP2020004307A (en) * 2018-07-02 2020-01-09 Mogコンサルタント株式会社 Wild animal spotting information collection system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079474A (en) * 2023-09-04 2023-11-17 南通途腾信息科技有限公司 Movable road vehicle information acquisition device
JP7576878B1 (en) 2023-12-28 2024-11-01 株式会社カノア Logistics accident prediction system
JP7576877B1 (en) 2023-12-28 2024-11-01 株式会社カノア Logistics accident cause determination system
JP7576879B1 (en) 2023-12-28 2024-11-01 株式会社カノア Unmanned monitoring system for logistics centers

Also Published As

Publication number Publication date
JPWO2023008540A1 (en) 2023-02-02
JP7239126B1 (en) 2023-03-14

Similar Documents

Publication Publication Date Title
JP6980888B2 (en) Image transmitter
CN110770084B (en) Vehicle monitoring system and vehicle monitoring method
US11393215B2 (en) Rescue system and rescue method, and server used for rescue system and rescue method
US10553113B2 (en) Method and system for vehicle location
JP6669240B1 (en) Recording control device, recording control system, recording control method, and recording control program
JP2018061216A (en) Information display system and information display method
JP2018061215A (en) Monitoring system and monitoring method
US11679763B2 (en) Vehicle accident surrounding information link apparatus
JP7052305B2 (en) Relief systems and methods, as well as the servers and programs used for them.
JP7239126B1 (en) Wearable information processing device, program and storage medium
CN111784923A (en) Shared bicycle parking management method, system and server
EP3660458A1 (en) Information providing system, server, onboard device, and information providing method
JP6448880B1 (en) Danger information collection device
US10719547B2 (en) Image retrieval assist device and image retrieval assist method
CN113313075A (en) Target object position relation analysis method and device, storage medium and electronic equipment
KR101613501B1 (en) Integrated car number recognizer with movable or fixed type and detecting system using the same
KR20110076693A (en) System, terminal and method for providing vehicle security function
JPWO2023008540A5 (en) Wearable information processing device, program and storage medium
JP7409638B2 (en) Investigation support system and investigation support method
JP2021196626A (en) Image data provision device, image data provision system, image data provision method, and computer program
CN111862576A (en) Method for tracking suspected target, corresponding vehicle, server, system and medium
CN113808397A (en) Data processing method and device for non-motor vehicle accidents and cloud server
WO2020044646A1 (en) Image processing device, image processing method, and program
JP2019079203A (en) Image generation device and image generation method
KR102502170B1 (en) Traffic safety system based on IoT

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022576445

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22849601

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22849601

Country of ref document: EP

Kind code of ref document: A1