WO2016151994A1 - Wearable camera and wearable camera system - Google Patents


Info

Publication number
WO2016151994A1
WO2016151994A1 · PCT/JP2016/000544 · JP2016000544W
Authority
WO
WIPO (PCT)
Prior art keywords
attribute information
wearable camera
video data
unit
recording
Prior art date
Application number
PCT/JP2016/000544
Other languages
English (en)
Japanese (ja)
Inventor
治男 田川
康志 横光
Original Assignee
パナソニックIpマネジメント株式会社 (Panasonic Intellectual Property Management Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社 (Panasonic Intellectual Property Management Co., Ltd.)
Publication of WO2016151994A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • H04N5/91: Television signal processing therefor
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present disclosure relates to a wearable camera that can be attached to, for example, a human body or worn clothing, and to a wearable camera system including the wearable camera.
  • This wearable surveillance camera system stores a video signal and an audio signal from CCD camera means and microphone means worn on the body, together with a date and time information signal from clock means built into a pouch means worn on the body. The date and time information, encoded and converted into character information, is superimposed on the captured video and recorded.
  • The present disclosure provides a wearable camera comprising: an imaging unit; a storage unit that stores video data captured by the imaging unit; a sound collection unit that collects the user's voice; and a control unit that, when the user utters speech related to attribute information indicating the imaging situation of the video data, adds the audio data collected by the sound collection unit to the video data and stores it in the storage unit.
  • The present disclosure also provides a wearable camera system in which a wearable camera and a server are connected. The wearable camera includes: an imaging unit; a first storage unit that stores video data captured by the imaging unit; a sound collection unit that collects the user's voice; a control unit that adds the audio data related to attribute information collected by the sound collection unit to the video data and stores it in the first storage unit; and a transmission unit that transmits the video data, to which the audio data related to the attribute information has been added, to the server. The server includes: a receiving unit that receives the video data with the added audio data; a voice recognition unit that recognizes the audio data related to the attribute information attached to the video data; and a second storage unit that stores the voice recognition result in association with the video data received by the receiving unit.
  • With this configuration, attribute information regarding on-site video data can be easily assigned, and convenience in handling captured video data can be improved.
  • FIG. 1 is an explanatory diagram regarding the outline of the wearable camera system of the present embodiment and the use of video data captured by the wearable camera.
  • FIG. 2 is a block diagram illustrating an example of an internal configuration of the wearable camera according to the present embodiment.
  • FIG. 3 is a block diagram showing an example of the internal configuration of the back-end server of this embodiment.
  • FIG. 4 is a diagram illustrating a state in which the user wears the wearable camera of the present embodiment.
  • FIG. 5 is a front view showing an example of the appearance of the wearable camera of the present embodiment.
  • FIG. 6 is a left side view illustrating an example of the appearance of the wearable camera of the present embodiment.
  • FIG. 7 is a right side view showing an example of the appearance of the wearable camera of the present embodiment.
  • FIG. 8 is a diagram illustrating an example of a data structure of recorded data captured by the wearable camera.
  • FIG. 9 is a diagram illustrating an example of a correspondence relationship between an attribute selection switch and attribute information.
  • FIG. 10 is a diagram illustrating an example of a correspondence relationship between an attribute selection switch, an attribute information addition mode, and attribute information.
  • FIG. 11 is a diagram illustrating an example of a data structure of a recorded video list.
  • FIG. 12 is a time chart showing an example of an operation for assigning attribute information during continuous recording.
  • FIG. 13 is a time chart showing an example of an operation for assigning attribute information when there is neither pre-recording nor post-recording.
  • FIG. 14 is a time chart showing an example of an operation for assigning attribute information when there is both pre-recording and post-recording.
  • FIG. 15 is a time chart showing an example of an operation for assigning attribute information when there is pre-recording.
  • FIG. 16 is a flowchart illustrating an example of a procedure related to the attribute information providing operation in the wearable camera of the present embodiment.
  • An object of the present disclosure is to provide a wearable camera and a wearable camera system that easily assign attribute information regarding on-site video data and improve convenience in handling captured video data.
  • Hereinafter, an embodiment that specifically discloses the wearable camera and the wearable camera system according to the present disclosure will be described in detail with reference to the drawings.
  • The wearable camera 10 includes a microphone MC that picks up the user's voice. When the user speaks words related to attribute information indicating the imaging situation of the video data captured by the imaging unit 11, the audio data picked up by the microphone MC is added to the video data and stored in the storage unit 15, and the video data with the attached audio data related to the attribute information is transmitted to the back-end server SV.
  • The back-end server SV receives the video data to which the audio data related to the attribute information is added, recognizes that audio data by speech recognition, and saves the recognition result in the storage 308 in association with the video data.
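The camera-to-server flow summarized above can be sketched in a few lines of code. This is an illustrative model only, not the patent's implementation: the class names (`WearableCamera`, `BackEndServer`) and the stub `recognize` function, which simply decodes text standing in for real audio, are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    """Video data plus any attribute-related audio clips attached to it."""
    video: bytes
    attribute_audio: list = field(default_factory=list)

class WearableCamera:
    """Camera side: store video, attach attribute-related speech, transmit."""
    def __init__(self):
        self.storage = []  # stands in for storage unit 15

    def record(self, video: bytes) -> Recording:
        rec = Recording(video=video)
        self.storage.append(rec)
        return rec

    def on_attribute_utterance(self, rec: Recording, audio: bytes) -> None:
        # The microphone MC picked up speech about the imaging situation:
        # attach it to the video data.
        rec.attribute_audio.append(audio)

    def transmit(self, rec: Recording, server: "BackEndServer") -> None:
        server.receive(rec)

def recognize(audio: bytes) -> str:
    # Stand-in for a real speech recognizer: here the "audio" is just
    # spoken text encoded as bytes.
    return audio.decode()

class BackEndServer:
    """Server side: recognize the attached audio, store the result with the video."""
    def __init__(self):
        self.storage = {}  # stands in for storage 308: video -> recognized text

    def receive(self, rec: Recording) -> None:
        texts = [recognize(a) for a in rec.attribute_audio]
        self.storage[rec.video] = texts
```

In use, a recording with an attached utterance transmitted to the server ends up stored alongside its recognized attribute text, mirroring the association described above.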
  • FIG. 1 is an explanatory diagram relating to the outline of the wearable camera system 100 of the present embodiment and the use of video data captured by the wearable camera.
  • The wearable camera 10 of this embodiment is an imaging device that a user (for example, a police officer 7) can wear on the body, clothes, a hat, or the like.
  • The wearable camera 10 has a communication function for communicating (for example, by wireless communication) with an in-vehicle system 60 mounted on a vehicle (for example, a police car in which the police officer rides) or with the back-end server SV in the police station 5 to which the police officer 7 belongs.
  • The wearable camera 10 and the in-vehicle system 60 constitute a front-end system 100A, while the management software 70 on the network, the back-end server SV, and the in-station PC 71 (a PC in the police station) constitute a back-end system 100B.
  • The management software 70 is executed by, for example, the in-station PC 71 or the back-end server SV.
  • The wearable camera system 100 is used, for example, in the police station 5.
  • The police officer 7 wears the wearable camera 10 while on duty and images the scenes of regular patrols, the situation at sites to which the officer is dispatched in an emergency, and specific subjects (for example, victims of an incident, suspects, or the area around the scene).
  • The wearable camera 10 transfers video data obtained by imaging to, for example, the back-end system 100B in the police station 5 in accordance with the operation of the police officer 7.
  • The user of the wearable camera 10 is not limited to the police officer 7; it may also be used in various other workplaces (for example, a security company). In this embodiment, the police officer 7 rushing to the scene is mainly illustrated as the user.
  • The front-end system 100A includes the wearable camera 10 that can be worn by a police officer 7 dispatched to the front-line site, a portable terminal (for example, a smartphone) carried by the police officer or placed in the police car, and the in-vehicle system 60 installed in the police car.
  • The in-vehicle system 60 includes an in-vehicle camera (not shown), an in-vehicle recorder (not shown), an in-vehicle PC (not shown), a communication unit, and the like, and constitutes an in-vehicle camera system, a video management system, and the like.
  • The in-vehicle camera is installed at a predetermined position in the police car and captures images around the police car (for example, in front of the police car or of the back seat inside it) constantly or at predetermined timings. That is, the in-vehicle camera includes, for example, a front camera (not shown) for imaging the area in front of the police car and a back-seat camera (not shown) for imaging the back seat (for example, the seat on which a suspect sits). Video data captured by the in-vehicle camera is accumulated in the in-vehicle recorder, for example, when a recording operation is executed. A plurality of in-vehicle cameras may be provided.
  • The front camera or the back-seat camera may have a microphone (not shown) that collects sound inside and outside the police car. In this case, the voices of the police officer 7 or a suspect in the police car can also be collected (recorded).
  • The in-vehicle recorder stores video data captured by the in-vehicle camera.
  • The in-vehicle recorder can also acquire and store video data captured by the wearable camera 10.
  • The in-vehicle recorder may manage meta-information such as attribute information given to the video data.
  • The in-vehicle PC may be a PC fixedly installed in the police car, or a wireless communication device such as a portable PC, smartphone, mobile phone, tablet terminal, or PDA (Personal Digital Assistant) used outside the police car.
  • the in-vehicle PC executes management software (not shown) to enable cooperation between the in-vehicle system 60 and the wearable camera 10 (specifically, communication between the in-vehicle system 60 and the wearable camera 10).
  • The in-vehicle PC includes a UI (User Interface) (for example, an operation device, a display device, and an audio output device), and is also used as a UI for operating the in-vehicle recorder.
  • When the police officer 7 is dispatched from the police station 5 for a predetermined duty (for example, patrol), the officer wears the wearable camera 10, gets into a police car equipped with the in-vehicle system 60, and heads for the site.
  • In the front-end system 100A, for example, an image of the scene where the police car arrives is first captured by the in-vehicle camera of the in-vehicle system 60; the police officer 7 then gets out of the police car and captures a closer, more detailed image of the scene with the wearable camera 10.
  • Video data of a moving image or a still image captured by the wearable camera 10 is stored in the storage unit 15 such as a memory of the wearable camera 10, for example.
  • the wearable camera 10 transfers (uploads) various data including video data captured by the wearable camera 10 from the storage device of the wearable camera 10 to the back-end system 100B.
  • Various data including video data captured by the wearable camera 10 may be transferred directly from the wearable camera 10 to the back-end system 100B, or may be transferred to the back-end system 100B via the in-vehicle system 60.
  • Video data of a moving image or a still image captured by the in-vehicle camera is stored in a storage such as a hard disk (HDD (Hard Disk Drive)), SSD (Solid State Drive) or the like included in the in-vehicle recorder of the in-vehicle system 60, for example.
  • the in-vehicle system 60 (for example, in-vehicle recorder) transfers (uploads) various data including video data captured by the in-vehicle camera from the storage of the in-vehicle system 60 to the back-end system 100B.
  • Data transfer to the back-end system 100B is performed, for example, by connecting via wireless communication from the field, or, when the patrol is completed and the officer returns to the police station 5, via wireless communication, wired communication, or manually (for example, by carrying a storage medium).
  • video data captured by the wearable camera 10 is transferred (uploaded) to the back-end system 100B via the in-vehicle system 60, for example.
  • video data of a moving image or a still image captured by the wearable camera 10 is transferred from the wearable camera 10 to the in-vehicle recorder of the in-vehicle system 60 and stored.
  • Video data of a moving image or a still image captured by the wearable camera 10 or the in-vehicle camera and stored in the in-vehicle recorder is transferred (uploaded) from the in-vehicle recorder to the back-end system 100B.
  • the video data captured by the wearable camera 10 may be stored in a storage or the like of the in-vehicle PC and transferred (uploaded) from the in-vehicle PC to the back-end system 100B.
  • The back-end system 100B includes a back-end server SV installed in the police station 5 or at another location, management software 70 for communicating with the front-end system 100A, and the in-station PC 71.
  • the back-end server SV includes a storage 308 configured using an HDD, SSD, or the like inside (see FIG. 3) or outside.
  • the back-end server SV accumulates the video data and other data transferred from the front-end system 100A in the storage 308, and constructs a database used in each department in the police station.
  • the back-end server SV receives video data transferred from, for example, the wearable camera 10 or the in-vehicle system 60 (for example, in-vehicle recorder) and stores it in the storage 308.
  • The video data stored in the back-end system 100B is used, for example, by the person in charge of the relevant department in the police station 5 for investigation and verification of an incident and, as required, is copied to a predetermined storage medium (for example, a DVD: Digital Versatile Disk) and submitted as evidence in a predetermined setting (for example, a trial).
  • The identification information of the police officer 7 (for example, an Officer ID that also functions as a user ID), the identification information of the wearable camera 10 (for example, a Camera ID), and the identification information of the police car used by the police officer 7 (for example, a Car ID) are set and registered using the in-station PC 71 or the like. As a result, it is possible to clearly distinguish when, by which police officer, and with which camera each item of video data stored in the back-end server SV was captured.
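The registration step described above can be pictured as a small lookup structure that ties each captured video to the registered IDs; the field and function names here are hypothetical, not from the patent.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CaptureMetadata:
    """Who captured a video, with which camera and car, and when."""
    officer_id: str   # also serves as the user ID
    camera_id: str
    car_id: str
    captured_at: datetime

# Hypothetical registry: video file name -> its capture metadata.
registry = {}

def register(video_name: str, officer_id: str, camera_id: str,
             car_id: str, captured_at: datetime) -> None:
    """Record the IDs set via the in-station PC for one video."""
    registry[video_name] = CaptureMetadata(officer_id, camera_id,
                                           car_id, captured_at)
```

With such a table, the back-end server can answer the question posed in the text: which officer used which camera, and when, for any stored video.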
  • This setting and registration is performed by the person in charge in the police station 5, or by the police officer 7 to be dispatched, operating the operation device (not shown) of the in-station PC 71 while the in-station PC 71 executes the management software 70.
  • Information other than the Officer ID, Camera ID, and Car ID may also be input via the operation device of the in-station PC 71.
  • the management software 70 includes, for example, an application for managing the personnel of the police officer 7, an application for managing dispatch of a police car, and an application for managing taking out of the wearable camera 10.
  • the management software 70 includes an application for searching and extracting specific video data based on attribute information from a plurality of video data stored in the back-end server SV, for example.
  • FIG. 2 is a block diagram showing an example of the internal configuration of the wearable camera 10 of the present embodiment.
  • FIG. 4 is a diagram illustrating a state in which the user wears the wearable camera 10 of the present embodiment.
  • FIG. 5 is a front view showing an example of the appearance of the wearable camera 10 of the present embodiment.
  • FIG. 6 is a left side view illustrating an example of the appearance of the wearable camera 10 of the present embodiment.
  • FIG. 7 is a right side view showing an example of the appearance of the wearable camera 10 of the present embodiment.
  • The wearable camera 10 shown in FIG. 2 includes a microphone MC, an imaging unit 11, a GPIO (General Purpose Input/Output) 12, a RAM 13, a ROM 14, a storage unit 15, an EEPROM (Electrically Erasable Programmable Read-Only Memory) 16, an RTC (Real Time Clock) 17, a GPS (Global Positioning System) 18, an MCU (Micro Controller Unit) 19, a communication unit 21, a USB (Universal Serial Bus) 22, a contact terminal 23, a power supply unit 24, and a battery 25.
  • the wearable camera 10 includes a recording switch SW1, a snapshot switch SW2, an attribute information addition switch SW3, and an attribute selection switch SW4 for inputting operations of the police officer 7.
  • the wearable camera 10 includes three LEDs (Light Emitting Diodes) 26a, 26b, and 26c and a vibrator 27 in order to notify the police officer 7 of the operating state of the wearable camera 10.
  • the microphone MC as an example of the sound collection unit collects the voice of the police officer 7 wearing the wearable camera 10 and outputs the voice data obtained by the sound collection to the MCU 19.
  • The imaging unit 11 includes an imaging lens and a solid-state imaging device such as a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor, and outputs the video data of the subject obtained by imaging to the MCU 19.
  • The GPIO 12 is an interface for serial/parallel conversion, and inputs and outputs signals between the MCU 19 and the recording switch SW1, the snapshot switch SW2, the attribute information addition switch SW3, the attribute selection switch SW4, the LEDs 26a to 26c, and the vibrator 27.
  • The RAM 13 is a working memory used in the operation of the MCU 19.
  • The ROM 14 is a memory that stores, in advance, the program and data for controlling the MCU 19.
  • The storage unit 15 is configured using a storage medium such as an SD memory card, and stores video data obtained by imaging with the imaging unit 11.
  • When an SD memory card is used as the storage unit 15, it can be attached to and detached from the housing of the wearable camera 10.
  • the EEPROM 16 stores identification information (for example, camera ID) for identifying the wearable camera 10 and other setting information.
  • the RTC 17 counts the current time information and outputs it to the MCU 19.
  • the GPS 18 receives the current position information of the wearable camera 10 from a GPS transmitter (not shown) and outputs it to the MCU 19.
  • The MCU 19 functions as the operation control unit of the wearable camera 10: it performs control processing for supervising the operation of each unit of the wearable camera 10, data input/output processing with the other units, data calculation processing, and data storage processing, operating in accordance with the program and data stored in the ROM 14.
  • the MCU 19 uses the RAM 13, obtains current time information from the RTC 17, and obtains current position information from the GPS 18.
  • The communication unit 21 defines the connection between the communication unit 21 and the MCU 19 at the physical layer, the first layer of the OSI (Open Systems Interconnection) reference model, and performs wireless LAN (W-LAN) communication (for example, Wi-Fi (registered trademark)) in accordance with that definition.
  • the communication unit 21 may use wireless communication such as Bluetooth (registered trademark).
  • USB 22 is a serial bus, and enables connection between wearable camera 10 and in-vehicle system 60 or in-station PC 71 in police station 5 or the like.
  • the contact terminal 23 is a terminal for electrically connecting to a cradle (not shown), an external adapter (not shown) or the like, and is connected to the MCU 19 via the USB 22 and to the power supply unit 24. Via the contact terminal 23, the wearable camera 10 can be charged and data including video data can be communicated.
  • The contact terminal 23 is provided with, for example, a charging terminal V+, a CON.DET terminal, data terminals D− and D+, and a ground terminal (none shown). The CON.DET terminal is a terminal for detecting a voltage change.
  • The data terminals D− and D+ are terminals for transferring video data captured by the wearable camera 10 to an external PC or the like via, for example, a USB connector terminal.
  • the power supply unit 24 supplies the battery 25 with power supplied from a cradle or an external adapter via the contact terminal 23 to charge the battery 25.
  • the battery 25 is composed of a rechargeable secondary battery, and supplies power to each part of the wearable camera 10.
  • the recording switch SW1 is a push button switch for inputting an operation instruction for starting / stopping recording (moving image capturing) by pressing operation of the police officer 7.
  • The snapshot switch SW2 is a push-button switch for inputting an operation instruction for capturing a still image by a pressing operation of the police officer 7.
  • the attribute information addition switch SW3 is a push button switch for inputting an operation instruction for adding attribute information to the video data by a pressing operation of the police officer 7.
  • the attribute selection switch SW4 is a slide switch for inputting an operation instruction for selecting an attribute to be added to the video data.
  • The wearable camera 10 may further be provided with a communication mode switch SW5 and an indicator switch SW6.
  • the communication mode switch SW5 is a slide switch that inputs an operation instruction for setting a communication mode between the wearable camera 10 and an external device.
  • the indicator switch SW6 is a slide switch for inputting an operation instruction for setting an operation state display mode by the LEDs 26a to 26c and the vibrator 27.
  • the recording switch SW1, the snapshot switch SW2, the attribute information addition switch SW3, and the attribute selection switch SW4 are configured to be easily operable even in an emergency.
  • the LED 26a reports the power-on state (on / off state) of the wearable camera 10 and the state of the battery 25 depending on whether or not the light is on.
  • the LED 26b notifies the state (recording state) of the imaging operation of the wearable camera 10 according to the presence or absence of lighting.
  • the LED 26c notifies the state of the communication mode of the wearable camera 10 according to the presence or absence of lighting.
  • The MCU 19 detects an input on each of the recording switch SW1, the snapshot switch SW2, the attribute information addition switch SW3, the attribute selection switch SW4, the communication mode switch SW5, and the indicator switch SW6, and processes whichever switch input was operated.
  • the MCU 19 controls the start or stop of the imaging operation in the imaging unit 11, and stores the imaging data obtained from the imaging unit 11 in the storage unit 15 as video data of a moving image.
  • the MCU 19 saves the image data captured by the imaging unit 11 when the snapshot switch SW2 is operated in the storage unit 15 as video data of a still image.
  • When the MCU 19 detects an operation input on the attribute information addition switch SW3, it assigns preset attribute information to the video data and stores it in the storage unit 15 in association with the video data.
  • The MCU 19 can also assign, as attribute information corresponding to the video data, audio data in which the police officer 7 speaks, for example, the situation at the scene or the type of case.
  • Correspondence information indicating the correspondence between the state of the attribute selection switch SW4 and predetermined attribute information is held in the EEPROM 16; the MCU 19 detects the state of the attribute selection switch SW4 and assigns the attribute information corresponding to its setting.
  • the MCU 19 detects the state of the communication mode switch SW5 and operates the communication unit 21 in a communication mode according to the setting of the communication mode switch SW5. Further, when the recording operation is started, the MCU 19 detects the state of the indicator switch SW6, and notifies the outside of the state of the recording operation by LED display and / or vibrator vibration according to the setting of the indicator switch SW6.
  • The wearable camera 10 is worn on the clothes or body of the police officer 7, for example on the chest, so as to capture the field of view from a position close to the user's viewpoint, or it may be attached to a hat via a fastener such as a clip. With the wearable camera 10 attached, the police officer 7 operates the recording switch SW1 to image a surrounding subject.
  • the wearable camera 10 is provided with an imaging lens 11a of the imaging unit 11, a recording switch SW1, and a snapshot switch SW2 on the front surface of a substantially rectangular parallelepiped casing.
  • The microphone MC is disposed, for example, in the vicinity of the recording switch SW1, preferably at a position close to the mouth of the police officer 7 so that speech can easily be collected even in an emergency while the camera is worn (see, for example, FIG. 5). For example, when the recording switch SW1 is pressed an odd number of times, recording (moving image capture) starts, and when it is pressed an even number of times, recording ends. Each time the snapshot switch SW2 is pressed, a still image is captured at that moment.
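The odd/even press behaviour of the recording switch SW1 described above can be modelled as a simple toggle; the class name and press counter are illustrative assumptions, not the patent's firmware.

```python
class RecordSwitch:
    """Sketch of the described behaviour: an odd cumulative press count
    means recording (moving image capture) is active, an even count
    means recording is stopped."""

    def __init__(self):
        self.presses = 0

    def press(self) -> bool:
        """Register one press and return the resulting recording state."""
        self.presses += 1
        return self.recording

    @property
    def recording(self) -> bool:
        # Odd number of presses so far -> currently recording.
        return self.presses % 2 == 1
```

The odd/even rule is equivalent to a plain toggle, but counting presses mirrors the way the text states the behaviour.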
  • an attribute information addition switch SW3, an attribute selection switch SW4, and a USB connector 22a are provided on the left side when viewed from the front of the housing of the wearable camera 10.
  • the attribute information corresponding to the setting state of the attribute selection switch SW4 is assigned to the video data currently recorded or the video data recorded immediately before.
  • the attribute selection switch SW4 is a slide switch having three-stage contact positions C1, C2, and C3, and the user selects and designates the attribute information assigned and set to each of C1 to C3.
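The correspondence between the contact positions C1 to C3 and preset attribute information, held in the EEPROM 16 as described above, can be sketched as a lookup table. The attribute labels chosen here are illustrative only ("drinking check" is the example used later in this text; the others are invented).

```python
# Hypothetical correspondence table, standing in for the one held in the
# EEPROM 16: each contact position of the attribute selection switch SW4
# maps to preset attribute information.
ATTRIBUTE_TABLE = {
    "C1": "traffic accident",
    "C2": "drinking check",
    "C3": "patrol",
}

def attribute_for(switch_state: str) -> str:
    """Attribute information the MCU would assign when the attribute
    information addition switch SW3 is pressed, given SW4's position."""
    return ATTRIBUTE_TABLE[switch_state]
```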
  • the USB connector 22a is connected to a cable for connecting to an external device via USB, and the wearable camera 10 can be connected to the in-vehicle system 60 or the in-station PC 71 in the police station 5 to transmit and receive data.
  • a communication mode switch SW5 and an indicator switch SW6 are provided on the right side when viewed from the front of the housing of the wearable camera 10.
  • the communication mode switch SW5 is a slide switch having contact points in four stages of AP, STA1, STA2, and OFF, and the user selects and designates the communication mode of the wearable camera 10.
  • AP is an access point mode, in which the wearable camera 10 operates as a wireless LAN access point, wirelessly connects to a portable terminal (not shown), and communicates between the wearable camera 10 and the portable terminal.
  • In the access point mode, the mobile terminal connects to the wearable camera 10 and can display the current live video from the wearable camera 10, play back recorded video data, add attribute information, display captured still images, and so on.
  • STA1 and STA2 are station modes, and are modes in which communication is performed using an external device as an access point when connecting to an external device (for example, a mobile terminal that is a smartphone arranged in the above-described police car) via a wireless LAN.
  • STA1 is, for example, a mode for connecting to an access point in the police station. In the station mode, settings of the wearable camera 10, transfer (upload) of video data recorded on the wearable camera 10, and the like can be performed.
  • OFF is a mode in which the wireless LAN communication operation is turned off and the wireless LAN is not used.
  • the indicator switch SW6 is a slide switch having LED, Vibration, LED & Vibration, and OFF contact positions, and the user selects and designates the notification mode of the wearable camera 10.
  • the LED is a mode in which an operation state such as recording of the wearable camera 10 is displayed by the LEDs 26a to 26c.
  • Vibration is a mode in which the operating state of the wearable camera 10 is notified by vibration of the vibrator 27.
  • LED & Vibration is a mode in which the operating state of the wearable camera 10 is notified by the display of the LEDs 26a to 26c and the vibration of the vibrator 27.
  • OFF is a mode in which the operation state notification operation is turned off.
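The four positions of the indicator switch SW6 listed above can be modelled as a mode-to-output dispatch; the table and function are an illustrative sketch, not the camera's actual firmware logic.

```python
# Hypothetical dispatch for the indicator switch SW6 positions described
# above: which outputs announce the operating state in each mode.
INDICATOR_MODES = {
    "LED":           {"led": True,  "vibrator": False},
    "Vibration":     {"led": False, "vibrator": True},
    "LED&Vibration": {"led": True,  "vibrator": True},
    "OFF":           {"led": False, "vibrator": False},
}

def notify(mode: str) -> list:
    """Return the notification channels active for the given SW6 mode."""
    outputs = INDICATOR_MODES[mode]
    channels = []
    if outputs["led"]:
        channels.append("blink LEDs 26a-26c")
    if outputs["vibrator"]:
        channels.append("vibrate vibrator 27")
    return channels
```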
  • The LEDs 26a to 26c are arranged on the upper surface of the casing of the wearable camera 10 as viewed from the front. This allows the user to see the LEDs easily while wearing the wearable camera 10, while making them difficult for anyone other than the user to see.
  • a contact terminal 23 is provided on the lower surface of the wearable camera 10 as viewed from the front.
  • FIG. 3 is a block diagram showing an example of the internal configuration of the back-end server of this embodiment.
  • The back-end server SV shown in FIG. 3 includes a CPU 301, a memory 302, an I/O control unit 303, a communication unit 304, an input unit 305, an output unit 306, a storage control unit 307, and a storage 308.
  • the CPU 301 performs, for example, control processing that governs the overall operation of each unit of the back-end server SV, data input/output processing with the other units, data calculation processing, and data storage processing.
  • when audio data related to attribute information is attached to video data captured by the wearable camera 10, the CPU 301, as an example of a voice recognition unit, performs voice recognition on that audio data in response to an operation by a user (for example, a person in charge in the police station 5 having jurisdiction over the back-end system 100B). Since the speech recognition process itself is a known technique, a detailed description is omitted here. As an example of audio data related to attribute information, suppose the voice of the police officer 7 saying "arrived at the drinking inspection site on March 4, 2015, 19:00" is picked up by the microphone MC of the wearable camera 10.
  • from the whole or a part of the speech recognition result, the CPU 301 extracts the term "drinking check", which indicates the incident or on-site situation and is registered in advance in the dictionary database (not shown) in the storage 308, adds it to the corresponding video data as attribute information, and stores the result in the storage 308.
  • thus, even when attribute information indicating the situation at the site (for example, "drinking check") has not been assigned to the attribute selection switch SW4 of the wearable camera 10 in advance, the back-end server SV can flexibly assign attribute information that suits the video data of an arbitrary site, using the voice recognition result of the voice recorded when the wearable camera 10 arrived at the scene.
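The server-side tagging step described above can be sketched as matching the speech-recognition transcript against a dictionary of incident terms. The term set and the simple substring matching below are assumptions for illustration, not the disclosed implementation.

```python
# Hypothetical dictionary database of terms indicating incidents or on-site situations.
INCIDENT_TERMS = {"drinking check", "traffic accident", "speeding"}

def extract_attributes(transcript: str) -> list[str]:
    """Return the dictionary terms found in a speech-recognition result."""
    lowered = transcript.lower()
    return sorted(term for term in INCIDENT_TERMS if term in lowered)

# A transcript like the officer's voice memo in the example above:
attrs = extract_attributes("Arrived at the drinking check site at 19:00")
```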
  • the memory 302 is configured using, for example, RAM, ROM, or nonvolatile or volatile semiconductor memory, functions as a work memory when the CPU 301 operates, and stores a predetermined program and data for operating the CPU 301.
  • the memory 302 temporarily stores the recorded data RCD1 while the CPU 301 performs voice recognition on the audio data included in the metadata MTD1 of the recorded data RCD1 read from the storage 308 (see FIG. 8), that is, audio data of voice related to attribute information collected by the wearable camera 10.
  • the I/O control unit 303 controls data input/output between the CPU 301 and each unit of the back-end server SV (for example, the communication unit 304, the input unit 305, and the output unit 306), and relays data.
  • the I / O control unit 303 may be configured integrally with the CPU 301.
  • the communication unit 304 performs wired or wireless communication with the in-vehicle system 60 of the front end system 100A or the wearable camera 10 via the management software 70.
  • the communication unit 304 as an example of a reception unit receives the recording data RCD1 transmitted from the in-vehicle system 60. Further, the communication unit 304 may receive the recording data RCD1 directly transmitted from the wearable camera 10.
  • the input unit 305 is a UI that receives an input operation from a person in charge in the police station 5 having jurisdiction over the back-end system 100B and notifies the CPU 301 via the I/O control unit 303; it is, for example, a pointing device.
  • the input unit 305 may be configured using, for example, a touch panel or a touch pad that is arranged corresponding to the screen output from the output unit 306 and can be operated by the person's finger or stylus pen.
  • the output unit 306 includes, for example, a display device configured using an LCD or organic EL and/or a speaker that outputs sound, and displays various data on a screen or outputs sound. For example, when the recorded data RCD1 imaged (recorded) by the wearable camera 10 is specified in response to an input operation by a person in charge in the police station 5 having jurisdiction over the back-end system 100B, the output unit 306 displays the video included in the video data of the recorded data RCD1 on the screen under instruction from the CPU 301.
  • the storage control unit 307 reads various data stored in the storage 308 or writes various data to the storage 308 in accordance with an instruction from the CPU 301 or the I / O control unit 303.
  • the storage 308 is configured using, for example, an SSD or an HDD, and stores and accumulates the recorded data RCD1 captured and recorded by the wearable camera 10, including recorded data RCD1 transferred directly from the wearable camera 10.
  • the storage 308 may also store various data other than the recorded data RCD1, for example a dictionary database (not shown) of terms indicating incidents or on-site situations, which is used in the voice recognition processing performed by the CPU 301.
  • a plurality of storages 308 may be provided.
  • in the wearable camera system 100, video data recorded by the wearable camera 10 is transferred to the back-end server SV and stored there for later use. Target video data is extracted from the stored video data based on attribute information related to the video data, such as the type of video content, the police officer 7 who captured it, the date and time, or the location, and is then reproduced. If no attribute information is added to the video data, it is difficult to determine later what was captured, the target video data cannot be extracted, and the usefulness of the video data deteriorates. Therefore, attribute information must be added when the video data is stored.
  • the wearable camera 10 adds attribute information to the video data either as classification information indicating the type of video content, or as the audio data itself of the police officer 7 speaking the situation of the scene or the type of the incident.
  • the attribute information is not limited to the classification information, and may include any information related to the recorded video data. Further, the classification information as the attribute information may have a hierarchical structure or may be classified into categories by a plurality of different systems.
  • the wearable camera 10 of the present embodiment can easily add attribute information immediately after recording or during recording. However, the number of items of attribute information that can be assigned to the attribute selection switch SW4 is limited (for example, three; see FIG. 9), so the contact positions are assigned to the major incident types. When the type of incident at hand is not assigned to the attribute selection switch SW4, the attribute selection switch SW4 is set to the contact position corresponding to the voice recording mode (described later), and audio data of the voice uttered by the police officer 7 (for example, a voice stating the incident type) is then added to the video data as attribute information.
  • FIG. 8 is a diagram illustrating an example of a data structure of the recording data RCD1 imaged by the wearable camera 10.
  • when the wearable camera 10 according to the present embodiment captures and records video, it generates meta information MTD1 including attribute information related to the video data VDO1 together with the captured video data VDO1, as shown in FIG. 8, and stores the two in the storage unit 15 in association with each other as recorded data RCD1. In other words, the recorded data RCD1 captured and recorded by the wearable camera 10 includes the video data VDO1 and the meta information MTD1.
  • in terms of the communication environment (for example, line speed), the recorded data RCD1 is preferably transferred from the wearable camera 10 to the back-end server SV via the in-vehicle system 60. The recorded data RCD1 captured and recorded by the wearable camera 10 may also be transferred directly from the wearable camera 10, but to shorten the transfer time it is preferable that the in-vehicle system 60 transmit the video data to the back-end server SV. That is, it is preferable to transfer the recorded data RCD1 from the wearable camera 10 to the in-vehicle system 60 during periods when the police officer 7 is not on patrol duty (for example, at the shift change of the patrol rotation of the police officer 7 or during travel back to the police station 5), and then transfer it from the in-vehicle system 60 to the back-end server SV.
  • FIG. 9 is a diagram illustrating an example of a correspondence relationship between the attribute selection switch SW4 and attribute information.
  • FIG. 10 is a diagram illustrating an example of a correspondence relationship among the attribute selection switch SW4, the attribute information addition mode, and the attribute information.
  • the attribute information addition mode of the wearable camera 10 is set to "fixed mode" corresponding to C1 to C3, which indicate the state (that is, the contact position) of the attribute selection switch SW4 of the wearable camera 10, and attribute information is allocated to each contact position.
  • the attribute information is assigned by selecting items that the user uses frequently from among a plurality of (for example, ten) predefined items of attribute information.
  • the setting contents of the attribute information are held in the EEPROM 16 of the wearable camera 10.
  • for example, when the police officer 7 images the scene of an incident, "traffic accident" is assigned to C1, "drunk driving" to C2, and "speeding" to C3.
  • the attribute information addition mode of the wearable camera 10 is “fixed mode”.
  • the fixed mode indicates an operation state in which fixed attribute information (text data) assigned in advance is added as the attribute information of the video data.
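In the fixed mode, the correspondence of FIG. 9 reduces to a small lookup table held in nonvolatile memory. The dict below stands in for the EEPROM 16 and is illustrative only; the function name is an assumption.

```python
# Contact position of SW4 -> fixed attribute text, as set at initial setup (cf. FIG. 9).
SW4_ATTRIBUTES = {"C1": "traffic accident", "C2": "drunk driving", "C3": "speeding"}

def attribute_for_position(position: str) -> str:
    """Look up the fixed attribute text for the current SW4 contact position."""
    return SW4_ATTRIBUTES[position]
```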
  • the attribute information addition mode of the wearable camera 10 may instead be set to "voice recording mode", corresponding to, for example, C3 indicating the state (that is, the contact position) of the attribute selection switch SW4 of the wearable camera 10; in this case, "voice" is set as the attribute information.
  • the voice recording mode indicates an operation state in which the voice data of the voice itself spoken by the police officer 7 can be given as the attribute information of the video data.
  • FIG. 11 is a diagram showing an example of the data structure of the recorded video list LST.
  • the meta information MTD1 related to the video data VDO1 is stored, for example, as a recorded video list LST shown in FIG.
  • the recorded video list LST includes data such as event ID, time information, camera ID, user ID, attribute information, and GPS information.
  • the event ID is identification information for identifying a recording event.
  • one recording operation from the start of recording to the end of recording is defined as one event, and the wearable camera 10 assigns an event ID to each recording operation event (hereinafter also referred to as a recording event).
  • as the identification information of a recording event, a file name of the video data or the like may be used.
  • the time information is time information of each recording event, and for example, a recording start time is given. As the time information, not only the recording start time but also the recording start time and the recording end time, the recording start time and the recording duration time, and the like may be used.
  • the camera ID is identification information for identifying each wearable camera 10.
  • the user ID is identification information of the police officer 7 who uses the wearable camera 10.
  • the camera ID and the user ID are set so that it is possible to determine which camera and which individual recorded each item of video data.
  • the attribute information is classification information for identifying the type of video data, and is assigned according to operations of the attribute information addition switch SW3 and the attribute selection switch SW4 by the police officer 7, based on the attribute information settings shown in FIG. 9 or FIG. 10.
  • the GPS information is position information indicating the place where the video data was recorded; for example, the current position information at the start of recording is acquired from the GPS 18 and added. Each of the above items of meta information is added by the processing of the MCU 19 at the start of recording or immediately after the end of recording, and is stored in the storage unit 15 in association with the video data.
  • in the voice recording mode, the attribute information is audio data of a voice uttered by the police officer 7. When the attribute information is audio data, the file name of the audio data collected by the microphone MC and information on the recording start time and recording end time of the audio are added as supplementary information.
  • thus, when the back-end server SV performs voice recognition on the audio data added to the video data as attribute information, it can correctly identify the audio data to be processed and does not recognize the wrong audio data.
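An entry of the recorded video list LST, including the supplementary fields used when the attribute is audio data, might look as follows. The query helper shows how attribute information supports extracting target video data; its name and the exact field spellings are illustrative assumptions.

```python
# Sketch of recorded video list entries (cf. FIG. 11); values are made up.
lst = [
    {"event_id": "E001", "time": "2015-03-04T19:00", "camera_id": "C10",
     "user_id": "officer7", "attribute": "traffic accident", "gps": (35.0, 139.0)},
    {"event_id": "E002", "time": "2015-03-04T20:15", "camera_id": "C10",
     "user_id": "officer7", "attribute": "voice",
     # supplementary information when the attribute is audio data:
     "audio_file": "E002.wav", "audio_start": "20:16:00", "audio_end": "20:16:05"},
]

def find_by_attribute(records, attribute):
    """Extract target entries by their attribute information."""
    return [r for r in records if r["attribute"] == attribute]
```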
  • FIG. 12 is a time chart showing an example of an operation for assigning attribute information during continuous recording.
  • FIG. 13 is a time chart showing an example of an operation for assigning attribute information when there is no pre-recording and pre-recording.
  • FIG. 14 is a time chart showing an example of an operation for assigning attribute information when there is pre-recording and pre-recording.
  • FIG. 15 is a time chart showing an example of an operation for assigning attribute information when there is pre-recording.
  • in each of these figures, the horizontal axis represents time, and a time chart relating to sound collection by the microphone MC and a time chart relating to video imaging (recording) by the imaging unit 11 are shown in parallel.
  • in the following, the wearable camera 10 is set to the "voice recording mode"; for example, the contact position C3 of the attribute selection switch SW4 is set to "voice".
  • FIG. 12 shows a state in which on-site video is continuously captured (recorded) by the wearable camera 10. That is, after the wearable camera 10 is powered on, the voice of the police officer 7 and the surrounding sounds are continuously collected, and once the police officer 7 presses the recording switch SW1, the video captured by the imaging unit 11 is continuously recorded.
  • if the attribute information addition switch SW3 is operated with the attribute selection switch SW4 set to C3 after a certain time has elapsed since the wearable camera 10 was powered on, the wearable camera 10 assigns to the video data, as attribute information, the voice collected for a fixed period registered in advance in the wearable camera 10, or for the period during which the police officer 7 held down the attribute information addition switch SW3.
  • FIG. 13 shows a state in which the police officer 7 is recording a voice or a surrounding sound during a period in which an image is captured (recorded).
  • recording of the on-site video imaged by the imaging unit 11 starts upon the recording start triggers OP1a and OP2a (for example, the police officer 7 pressing the recording switch SW1 once), and the microphone MC begins collecting the voice of the police officer 7 or the surrounding sounds. Recording of the on-site video imaged by the imaging unit 11 ends upon the recording end triggers OP1b and OP2b (for example, the police officer 7 pressing the recording switch SW1 for a certain period).
  • if the attribute information addition switch SW3 is operated with the attribute selection switch SW4 set to C3, the wearable camera 10 assigns to the video data, as attribute information, the police officer's voice collected for a fixed period registered in advance in the wearable camera 10, or for the period during which the police officer 7 held down the attribute information addition switch SW3.
  • the recording start triggers OP1a and OP2a are not limited to the police officer 7 pressing the recording switch SW1 once; for example, recording may start automatically when a predetermined condition is satisfied. The same applies below.
  • likewise, the recording end triggers OP1b and OP2b are not limited to the police officer 7 pressing the recording switch SW1 for a certain period; recording may end automatically when a predetermined condition is satisfied. The same applies below.
  • FIG. 14 shows a case in which pre-capture of video (pre-video) for a certain time (for example, 10 or 30 seconds) and pre-collection (pre-audio) of the voice of the police officer 7 are performed, and the voice of the police officer 7 or the surrounding sounds are recorded during the period in which video is being captured (recorded).
  • recording of the on-site video imaged by the imaging unit 11 starts upon the recording start trigger OP1a (for example, the police officer 7 pressing the recording switch SW1 once), and the microphone MC begins collecting the voice of the police officer 7 or the surrounding sounds.
  • recording of the on-site video imaged by the imaging unit 11 ends upon the recording end trigger OP1b (for example, the police officer 7 pressing the recording switch SW1 for a certain period). If the attribute information addition switch SW3 is operated with the attribute selection switch SW4 set to C3 after a certain period has elapsed since the start of video recording, the wearable camera 10 assigns to the video data, as attribute information, the voice collected for a fixed period registered in advance in the wearable camera 10, or for the period during which the police officer 7 held down the attribute information addition switch SW3.
  • FIG. 15 shows a case in which, for example for an operation test of the wearable camera 10, pre-capture of video (pre-video) for a certain time (for example, 10 or 30 seconds) is performed, after which the voice of the police officer 7 or the surrounding sounds are recorded during the period in which video is being captured (recorded).
  • recording of the on-site video imaged by the imaging unit 11 starts upon the recording start triggers OP1a and OP2a (for example, the police officer 7 pressing the recording switch SW1 once), and the microphone MC begins collecting the voice of the police officer 7 or the surrounding sounds. Recording ends upon the recording end triggers OP1b and OP2b (for example, the police officer 7 pressing the recording switch SW1 for a certain period). If the attribute information addition switch SW3 is operated with the attribute selection switch SW4 set to C3 after a certain period has elapsed since the start of video recording, the wearable camera 10 assigns to the video data, as attribute information, the voice collected for a fixed period registered in advance in the wearable camera 10, or for the period during which the police officer 7 held down the attribute information addition switch SW3.
  • FIG. 16 is a flowchart illustrating an example of a procedure related to the attribute information providing operation in the wearable camera 10 of the present embodiment.
  • the MCU 19 of the wearable camera 10 performs initial setting prior to the recording operation (S1). For example, when the police officer 7 is dispatched, the wearable camera 10 is initially set by connecting it to the in-station PC 71 in the police station 5 and operating the in-station PC 71 to transfer the setting information. The initial settings include assignment of a camera ID and a user ID (see FIG. 11), enabling of the attribute information addition switch SW3 and the attribute selection switch SW4, allocation of multiple items of attribute information to the attribute selection switch SW4 (see FIG. 9), and setting of the "voice recording mode" (see FIG. 10), in which the attribute information "voice" can be assigned to a part of the attribute selection switch SW4 (for example, contact position C3).
  • when the MCU 19 detects the first input of the recording switch SW1 by the police officer 7, it starts the recording operation, executes imaging by the imaging unit 11, and stores the video data of the moving image in the storage unit 15 (that is, records the video) (S2).
  • the MCU 19 ends the recording operation of one recording event when detecting the second input of the recording switch SW1 by the police officer 7 (S3). Subsequently, the MCU 19 inputs the selection state of the attribute selection switch SW4 (S4), and determines whether or not the attribute information addition switch SW3 is input (S5). Note that the selection state of the attribute selection switch SW4 may be maintained or changed until the attribute information addition switch SW3 is input (S5, NO).
  • the MCU 19 determines whether or not the attribute information addition mode of the wearable camera 10 is the “voice recording mode” (S6).
  • the condition for determining the "voice recording mode" may be that the attribute information "voice" was assigned to a part of the attribute selection switch SW4 (for example, contact position C3) in the initial setting in step S1, or that the attribute information addition switch SW3 in step S5 was pressed for a predetermined time (for example, 5 seconds). That is, by pressing the attribute information addition switch SW3 for the predetermined time (for example, 5 seconds), the attribute information addition mode of the MCU 19 can easily be switched to the "voice recording mode".
  • when the mode is the "voice recording mode", the MCU 19 collects the voice of the police officer 7 picked up by the microphone MC for the designated time, and stores (that is, records) the resulting audio data in the storage unit 15 (S7).
  • the designated time may be a predetermined fixed time (for example, 5 or 10 seconds), or may be the period during which the attribute information addition switch SW3 is held down (for example, about 5 seconds). In the latter case, the wearable camera 10 can flexibly add, as attribute information, the voice recorded for a period matching how long the police officer 7 holds down the switch.
  • the MCU 19 then reads the attribute information corresponding to the state of the attribute selection switch SW4 from the EEPROM 16, and assigns the audio data recorded in step S7 (that is, audio data of the voice spoken by the police officer 7 toward the wearable camera 10) as attribute information for the video data recorded in steps S2 to S3 (S8).
  • when the mode is not the "voice recording mode", the MCU 19 reads the attribute information corresponding to the state of the attribute selection switch SW4 from the EEPROM 16 and adds it to the video data recorded in steps S2 to S3 (S9).
  • the MCU 19 outputs the meta information MTD1 including the assigned attribute information to the storage unit 15, and stores it in association with the video data VDO1 stored immediately after the end of the recording operation (S10).
  • the meta information includes the event ID, time information, attribute information, camera ID, user ID, and GPS information (see FIG. 11). The MCU 19 then completes the attribute information addition process.
  • the above describes a procedure that assigns attribute information after one recording event ends; alternatively, a procedure may be used in which the input of the attribute information addition switch SW3 is detected while the recording operation continues, and attribute information is assigned to the video data during recording.
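The attribute-assignment branch of the S1 to S10 procedure above can be summarized as a control-flow sketch; the MCU's actual state handling is simplified and all names are illustrative assumptions.

```python
def assign_attribute(voice_mode: bool, sw4_position: str,
                     fixed_table: dict[str, str],
                     record_voice) -> str:
    """After a recording event (S2-S3) and SW3 input (S5), decide the attribute (S6-S9).

    record_voice: callable returning the recorded audio data reference (S7),
    used only in voice recording mode.
    """
    if voice_mode:                       # S6 YES
        audio = record_voice()           # S7: collect the officer's voice
        return f"audio:{audio}"          # S8: attach the audio data as the attribute
    return fixed_table[sw4_position]     # S9: attach the fixed text for the SW4 position
```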
  • as described above, when the police officer 7, as a user, images the situation in the field with the wearable camera 10, attribute information can easily be added to the recorded video data by a simple operation on the wearable camera 10 alone.
  • therefore, the police officer 7 can reliably add attribute information to the video data even at an emergency imaging site.
  • since the wearable camera 10 can pick up the voice of the police officer 7 with the microphone MC, the attribute information addition mode can easily be changed to the "voice recording mode", for example through the initial setting or by a long press of the attribute information addition switch SW3.
  • since the wearable camera 10 can add the audio data of the voice uttered by the police officer 7 to the video data as attribute information indicating the type of the video on the spot, attribute information with a wide range of content, not limited to predefined video or incident types, can easily be acquired.
  • the back-end server SV receives the video data to which the voice data of the police officer 7 collected by the wearable camera 10 is added as attribute information, and then performs voice recognition processing on the voice data.
  • the back-end server SV extracts, from the voice recognition result of the audio data or a part of it, a term indicating an incident or on-site situation registered in a dictionary database (not shown) in the storage 308, and can automatically assign it as attribute information for the video data. Therefore, when video data stored in the back-end server SV is used at the police station 5, the type of each item of video data can easily be determined and extracted by referring to the attribute information. Further, with the meta information MTD1 including attribute information, it is easy to identify when, where, by which camera, by whom, and with what content the video was captured, which improves its reliability as evidence video.
  • since the attributes of video data can easily be set on the wearable camera 10, the time and effort of adding attribute information is reduced, and video data can easily be identified immediately after recording. The wearable camera system 100 thereby improves convenience in handling captured video data. Furthermore, because the portability of the wearable camera 10 worn by the police officer 7 requires a small housing, there are constraints on the arrangement of input units such as operation buttons. According to the wearable camera 10 of this embodiment, however, the voice uttered by the police officer 7 can be added to the video data as attribute information, so a large number of contact positions on the attribute selection switch SW4 is unnecessary and the design constraints are eased.
  • moreover, since the voice recognition processing is performed in the back-end server SV, the wearable camera 10 can easily acquire attribute information related to the video data.
  • in the wearable camera 10 of the present embodiment, the MCU 19 detects input from the attribute information addition switch SW3 either while waiting for the next recording operation after recording of video data by the imaging unit 11 ends, or during recording of video data by the imaging unit 11, and assigns attribute information to the recorded data that has just been recorded or that is being recorded. As a result, attribute information can easily be added to the video data immediately after or during recording by operating the wearable camera 10 alone.
  • in the wearable camera 10 of the present embodiment, when the attribute information addition mode is the "fixed mode", different attribute information can be assigned to each of the plural setting states of the attribute selection switch SW4. The wearable camera 10 can thereby selectively set desired attribute information from among plural items by operating the attribute selection switch SW4, and assign an appropriate attribute to the video data.
  • as described above, the present disclosure is useful as a wearable camera and a wearable camera system that can easily add attribute information related to on-site video data and improve convenience in handling captured video data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The objective of the invention is to assign attribute information related to on-site video data in an easy manner, and to improve the ease of handling captured video data. The wearable camera is equipped with: an imaging unit; a storage unit that stores video data captured by the imaging unit; a microphone MC that captures the user's voice; and a micro control unit (MCU) which, when words associated with attribute information indicating the imaging situation of the video data are spoken by a user, stores voice data relating to the attribute information captured by the microphone MC in the storage unit, in association with the video data.
PCT/JP2016/000544 2015-03-23 2016-02-03 Caméra portative et système de caméra portative WO2016151994A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-060133 2015-03-23
JP2015060133A JP2016181767A (ja) 2015-03-23 2015-03-23 ウェアラブルカメラ及びウェアラブルカメラシステム

Publications (1)

Publication Number Publication Date
WO2016151994A1 true WO2016151994A1 (fr) 2016-09-29

Family

ID=56978838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/000544 WO2016151994A1 (fr) 2015-03-23 2016-02-03 Caméra portative et système de caméra portative

Country Status (2)

Country Link
JP (1) JP2016181767A (fr)
WO (1) WO2016151994A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7252724B2 (ja) * 2018-08-31 2023-04-05 株式会社デンソーテン データ収集装置、データ収集システムおよびデータ収集方法
JP7164465B2 (ja) 2019-02-21 2022-11-01 i-PRO株式会社 ウェアラブルカメラ
JP2020150360A (ja) 2019-03-12 2020-09-17 パナソニックi−PROセンシングソリューションズ株式会社 ウェアラブルカメラおよび映像データ生成方法
JP2020184678A (ja) 2019-05-08 2020-11-12 パナソニックi−PROセンシングソリューションズ株式会社 ウェアラブルカメラおよび信号付与方法
JP7270154B2 (ja) * 2019-11-20 2023-05-10 ダイキン工業株式会社 遠隔作業支援システム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08272989A (ja) * 1995-03-28 1996-10-18 Toshiba Corp 映像使用による資料作成支援システム
WO2005039175A1 (fr) * 2003-10-16 2005-04-28 Matsushita Electric Industrial Co., Ltd. Lecteur/enregistreur audio/video, procede d'enregistrement et de reproduction audio/video
JP2006262214A (ja) * 2005-03-17 2006-09-28 Ricoh Co Ltd 画像処理システム、画像処理装置及びプログラム

Also Published As

Publication number Publication date
JP2016181767A (ja) 2016-10-13

Similar Documents

Publication Publication Date Title
US10554935B2 (en) Wearable camera system, and video recording control method for wearable camera system
US10939066B2 (en) Wearable camera system and recording control method
WO2016103610A1 (fr) Caméra portable
WO2016151994A1 (fr) Caméra portative et système de caméra portative
JP2017005436A (ja) ウェアラブルカメラシステム及び録画制御方法
JP6799779B2 (ja) 監視映像解析システム及び監視映像解析方法
JP5861073B1 (ja) ウェアラブルカメラ
JP6115874B2 (ja) ウェアラブルカメラシステム及び録画制御方法
JP5856700B1 (ja) ウェアラブルカメラシステム及び録画制御方法
JP5810332B1 (ja) ウェアラブルカメラ
JP5849195B1 (ja) ウェアラブルカメラシステム及び撮像方法
JP6145780B2 (ja) ウェアラブルカメラシステム及び映像データ転送方法
JP6115873B2 (ja) ウェアラブルカメラシステムおよび映像データ同期再生方法
JP5861075B1 (ja) ウェアラブルカメラ
JP5856702B1 (ja) ウェアラブルカメラシステム及び属性情報付与方法
JP5861074B1 (ja) ウェアラブルカメラ
JP2016143895A (ja) ウェアラブルカメラシステム及び録画制御方法
WO2016121314A1 (fr) Système de caméra vestimentaire et procédé de commande d'enregistrement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16767908

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16767908

Country of ref document: EP

Kind code of ref document: A1