WO2019077750A1 - Data processing device, programmable display, and data processing method - Google Patents

Data processing device, programmable display, and data processing method

Info

Publication number
WO2019077750A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
event information
unit
data processing
video data
Prior art date
Application number
PCT/JP2017/038058
Other languages
French (fr)
Japanese (ja)
Inventor
孝一 折戸
茂 角
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to JP2018532182A (JP6400260B1)
Priority to PCT/JP2017/038058
Priority to CN201780077812.8A (CN110140152B)
Publication of WO2019077750A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a data processing device that processes video data, a programmable display, and a data processing method.
  • Patent Document 1 discloses an image storage device that determines whether to capture an input image based on a monitoring target and, by storing only the images determined to be captured, controls the imaging interval so that only the monitoring target is captured.
  • The present invention has been made in view of the above, and its object is to obtain a data processing device capable of shortening the time required to check a monitoring target.
  • To that end, the invention includes a data processing unit that extracts, from the images constituting video data, a plurality of feature images in which a monitoring target is displayed, and a display generation unit that generates a composite image by combining the plurality of feature images extracted from the images constituting the video data.
  • The data processing device thus has the effect of shortening the time required to check the monitoring target.
  • FIG. 1 is a diagram showing the configuration of a data processing device according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example of a composite image in the embodiment.
  • FIG. 3 is a diagram showing the configuration of a data processing unit in the embodiment.
  • FIG. 4 is a diagram for explaining a specific example of extracting a feature image from an image constituting video data in the embodiment.
  • FIG. 5 is a flowchart showing a procedure for acquiring event information in the embodiment.
  • FIG. 6 is a flowchart showing a procedure for extracting feature images from the images constituting video data in the embodiment.
  • FIG. 7 is a diagram for explaining a procedure for selecting event information in the embodiment.
  • FIG. 8 is a diagram for explaining a procedure for adding an event information image to a feature image in the embodiment.
  • FIG. 9 is a flowchart showing the procedure from the selection of video data to the display of a composite image in the embodiment.
  • FIG. 10 is a diagram showing the configuration of a display generation unit of a data processing device in the embodiment.
  • FIG. 11 is a diagram showing an example of a screen for selecting video data in the embodiment.
  • FIG. 12 is a diagram showing an example of the configuration of video data in the embodiment.
  • FIG. 13 is a diagram showing an example of feature images extracted from images by an extraction unit in the embodiment.
  • FIG. 14 is a diagram showing an example of adding event information images to feature images by an adding unit in the embodiment.
  • FIG. 15 is a diagram showing an example of a composite image in the embodiment.
  • FIGS. 16 and 17 are diagrams each showing an example of the configuration of video data in the embodiment.
  • FIGS. 18 and 19 are diagrams each showing an example of feature images extracted from images by the extraction unit in the embodiment.
  • FIGS. 20 and 21 are diagrams each showing an example of adding event information images to feature images by the adding unit in the embodiment.
  • FIG. 22 is a diagram showing an example of a composite image in the embodiment.
  • FIG. 23 is a diagram showing an example of the hardware configuration of the data processing device according to the embodiment.
  • FIG. 1 is a diagram showing the configuration of a data processing apparatus 1 according to an embodiment of the present invention.
  • the data processing device 1 is realized by a programmable display.
  • The programmable display is an electronic operation display that includes a display unit for displaying images, an operation unit for receiving user operations, connection units for connecting external devices, and a storage unit for storing data; it displays the operating status of an external device and inputs values to the external device.
  • The data processing device 1 extracts feature images, in which a monitoring target is displayed, from the plurality of images constituting video data recorded over a fixed period by an imaging device, and combines the extracted feature images to generate a composite image.
  • The monitoring target is an object, a piece of equipment, or the like that moves at the monitored location.
  • In this embodiment, the feature image is described as having the same size as the images constituting the video data, but it may instead be the same size as the monitoring target displayed in the image. That is, the feature image may be one of a plurality of regions into which an image constituting the video data, in which the monitoring target is displayed, has been divided. The specific operation and configuration of the data processing device 1 are described below.
  • the data processing device 1 includes an operation unit 11 that receives a user's operation, an external device connection unit 12 to which the external device 2 is connected, and an imaging device connection unit 13 to which the imaging device 3 is connected.
  • The data processing device 1 includes a data processing unit 14 that extracts feature images, i.e., the images of the portions where the monitoring target is displayed, from the images constituting video data, and a storage unit 15 that stores the video data and the feature images.
  • The data processing device 1 also includes a display generation unit 16 that generates a composite image by combining the plurality of feature images extracted by the data processing unit 14 from the plurality of images constituting the video data, and a display unit 17 that displays the composite image.
  • The operation unit 11 includes a keyboard or a touch panel and receives the user's operations. Specifically, the operation unit 11 receives operations for inputting additional information and for selecting the video data to be played back. The operation unit 11 generates an operation signal corresponding to the received operation and outputs it to the data processing unit 14.
  • the external device connection unit 12 is an interface to which the external device 2 is connected.
  • the external device connection unit 12 is configured by, for example, a USB (Universal Serial Bus) connector or an RS-232C connector.
  • the external device 2 is, for example, a programmable logic controller (PLC) that controls an industrial machine or a sensing device that inspects a product.
  • the PLC holds a device ID (Identification) stored in the industrial machine to be controlled.
  • the device ID is information for identifying an industrial machine.
  • the PLC controls the operation of the connected industrial machine.
  • the PLC also monitors the state of operation of the industrial machine.
  • the PLC generates alarm information when detecting an abnormality or a failure of the industrial machine, and outputs the generated alarm information to the data processing device 1.
  • the alarm information includes the content of the abnormality or the failure and the time information when the abnormality or the failure occurs.
  • the sensing device performs, for example, an appearance inspection that inspects the appearance of the product, or a position inspection that checks whether parts of the product are placed at desired positions.
  • the sensing device inspects a product, generates sensor information which is a result of the inspection, and outputs the generated sensor information to the data processing device 1.
  • the sensor information includes the result of the inspection and the time information of the inspection.
  • the external device connection unit 12 outputs the information input from the external device 2 to the data processing unit 14.
  • the information input from the external device 2 is a device ID and alarm information when the external device 2 is a PLC. Further, the information input from the external device 2 is sensor information when the external device 2 is a sensing device.
  • Hereinafter, the device ID, alarm information, and sensor information are collectively referred to as event information.
  • The event information may also be, for example, information indicating the movement status of products transported by a belt conveyor in the production process, or information on setup-change work.
  • The information indicating the movement status of products transported by the belt conveyor is, for example, information indicating the times at which a product leaves or enters storage, or the time at which a product passed a predetermined point.
  • The information on setup-change work is information indicating changes to the parameters set in the machinery.
  • Event information is given as information about the images constituting the video data. That is, event information is obtained in correspondence with the time at which an image constituting the video data was captured and with the cause of the event.
  • Each image is stored in an image file format comprising a header portion, a payload portion, an index portion, and so on. The event information is stored, for example, in the index portion.
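  • The exact file layout is not specified beyond this; the following is a minimal sketch, assuming a hypothetical per-frame record in which the event information travels in the index portion alongside the encoded image.

```python
from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    """Hypothetical per-frame container with header, payload, and index portions."""
    header: dict                               # e.g. {"timestamp": ..., "width": ...}
    payload: bytes                             # encoded image data, e.g. JPEG bytes
    index: dict = field(default_factory=dict)  # event information is stored here

def attach_event_info(frame: FrameRecord, event: dict) -> None:
    """Record an event (device ID, alarm, or sensor result) in the index portion."""
    frame.index.setdefault("events", []).append(event)

# Usage: tag a frame with an alarm event reported by the PLC.
frame = FrameRecord(header={"timestamp": "10:15:00"}, payload=b"...")
attach_event_info(frame, {"device_id": "M1", "alarm": "overheat", "time": "10:15:00"})
```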
  • the imaging device connection unit 13 is an interface to which the imaging device 3 is connected.
  • the imaging device connection unit 13 is, for example, a USB connector or a communication connector for Ethernet (registered trademark).
  • The imaging device 3 captures images at fixed time intervals, or in response to a trigger signal transmitted from the imaging device connection unit 13, and then outputs the captured video data to the data processing device 1.
  • the imaging device connection unit 13 outputs the video data input from the imaging device 3 to the data processing unit 14.
  • The data processing unit 14 receives the video data from the imaging device connection unit 13 and extracts feature images, i.e., images in which the monitoring target is displayed, from the images constituting the video data. The procedure for extracting feature images from the images constituting video data is detailed later.
  • Video data is input from the imaging device 3 in digital format, but it may also be input in analog format.
  • When video data is input in analog format, the imaging device connection unit 13 converts it into digital video data by A/D (Analog to Digital) conversion.
  • The display generation unit 16 generates a composite image by combining the feature images extracted by the data processing unit 14, i.e., the images of the portions where the monitoring target is displayed, with an image in which the background of the monitoring target is displayed.
  • For example, as shown in FIG. 2, the display generation unit 16 generates a composite image D by combining the feature image a2, in which the monitoring target B1 is displayed, with the background image a1, in which the background of the monitoring target is displayed.
  • This embodiment can be applied to the field of monitoring a production line where products are manufactured in a factory. In that case, for example, the monitoring target is a product being produced, and the background image shows a production line such as a belt conveyor.
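  • The compositing operation itself is not specified in this document; a minimal sketch with NumPy, assuming each feature image comes with a binary mask marking the pixels where the monitoring target is displayed, might look like this:

```python
import numpy as np

def composite(background: np.ndarray, feature: np.ndarray,
              mask: np.ndarray) -> np.ndarray:
    """Paste the masked (monitoring-target) pixels of a feature image onto
    a copy of the background; images are HxWx3 uint8, mask is HxW bool."""
    out = background.copy()
    out[mask] = feature[mask]
    return out

# Usage: overlay one feature image a2 onto the background image a1.
h, w = 480, 640
a1 = np.zeros((h, w, 3), np.uint8)       # background image (e.g. belt conveyor)
a2 = np.full((h, w, 3), 200, np.uint8)   # frame in which the target appears
mask = np.zeros((h, w), bool)
mask[100:150, 200:260] = True            # pixels where monitoring target B1 is shown
D = composite(a1, a2, mask)              # composite image D, as in FIG. 2
```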
  • The display unit 17 is configured by a liquid crystal display device or an organic EL (Electro Luminescence) display device.
  • the operation unit 11 and the display unit 17 may be configured by an operation display unit in which an operation function for receiving an operation by a user and a display function for displaying an operation screen are integrated.
  • When video data is selected, the data processing device 1 combines the feature images extracted from each of the plurality of images constituting the selected video data to generate a composite image.
  • The feature images can thus be consolidated into a single composite image.
  • By checking the composite image in which the plurality of feature images are combined, the user can grasp the movement of the monitoring target without playing back all of the video data, so the task of checking the monitoring target can be completed in a short time.
  • FIG. 3 is a diagram showing the configuration of the data processing unit 14.
  • FIG. 4 is a diagram provided for describing a specific example of extracting a feature image from an image forming video data.
  • The data processing unit 14 includes an acquisition unit 21 that acquires event information indicating the state of the monitoring target when the video data is captured or after it has been captured, and an extraction unit 22 that, based on the event information acquired by the acquisition unit 21, selects an image constituting the video data and extracts a feature image from the selected image.
  • Specifically, the extraction unit 22 selects an image captured at the same time as the time at which the event information was acquired. That is, the extraction unit 22 searches for images based on the time at which the event information was acquired and selects an image from among the retrieved images.
  • When imaging was performed in response to a trigger signal from the imaging device connection unit 13, the extraction unit 22 may instead search for images based on the trigger signal and select an image from among the retrieved images.
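  • As a sketch of this selection step (the timestamps below are invented for illustration), the frame captured closest to the event time can be found with a binary search:

```python
import bisect

def select_frame(frame_times: list, event_time: float) -> int:
    """Return the index of the frame captured closest to event_time.
    frame_times must be sorted in ascending order."""
    i = bisect.bisect_left(frame_times, event_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
    return min(candidates, key=lambda j: abs(frame_times[j] - event_time))

# Usage: frames captured once per second, event information acquired at t = 12.4 s.
times = [10.0, 11.0, 12.0, 13.0, 14.0]
assert select_frame(times, 12.4) == 2    # the frame captured at t = 12.0 s
```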
  • As shown in FIG. 4, the extraction unit 22 extracts from the selected image A1 the feature image a1, which is the image of the portion where the monitoring target B1 is displayed.
  • the extraction unit 22 outputs the video data, the feature image, and the event information to the storage unit 15.
  • the storage unit 15 associates and stores video data, a feature image, and event information.
  • In step S1, the operation unit 11 receives setting information.
  • the setting information is information indicating the timing of acquiring event information.
  • the timing at which the acquisition unit 21 acquires event information can be arbitrarily set according to the content of the setting information.
  • For example, the acquisition timing may be the time at which the imaging device 3 starts capturing the video data, a time while the video data is being captured, the time at which capturing ends, a predetermined cycle, or a multi-bit condition.
  • A multi-bit condition is a condition that determines the timing of acquiring event information from the result of a logical operation on a plurality of bits set in advance in the external device 2 such as a PLC, monitored for rising and falling changes in their states.
  • For example, the external device 2 such as a PLC may perform the logical operation described above, and the acquisition unit 21 of the data processing device 1 may determine the timing of acquiring event information based on the result of the logical operation obtained from the external device 2. Alternatively, the acquisition unit 21 may receive the plurality of bits from the external device 2, perform the logical operation on the received bits itself, and determine the timing of acquiring event information from the result. The operation unit 11 outputs the setting information to the acquisition unit 21.
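  • As a sketch of such a multi-bit condition (the specific bits and the AND condition are assumptions for illustration), the acquisition timing could be derived by watching for edges on preselected bits and evaluating a logical operation over them:

```python
def rising_edges(prev_bits, cur_bits):
    """Mark which monitored bits rose (0 -> 1) between two scans."""
    return tuple(p == 0 and c == 1 for p, c in zip(prev_bits, cur_bits))

def should_acquire(prev_bits, cur_bits):
    """Hypothetical multi-bit condition: acquire event information when
    bit 0 rises while bit 1 is already set (a logical AND of the two)."""
    edges = rising_edges(prev_bits, cur_bits)
    return edges[0] and cur_bits[1] == 1

# Usage: two successive scans of the PLC bits (bit 0, bit 1).
assert should_acquire((0, 1), (1, 1)) is True    # bit 0 rose while bit 1 was set
assert should_acquire((0, 0), (1, 0)) is False   # bit 1 was not set
```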
  • In step S2, the acquisition unit 21 outputs the setting information to the storage unit 15.
  • In step S3, the acquisition unit 21 determines, based on the setting information, whether it is time to acquire event information. If so (Yes in step S3), the process proceeds to step S4; if not (No in step S3), the process of step S3 is repeated.
  • In step S4, the acquisition unit 21 requests the external device 2 to transmit event information and acquires the event information from the external device 2.
  • In step S11, the imaging device 3 outputs video data to the extraction unit 22 via the imaging device connection unit 13.
  • Video data is composed of a plurality of images.
  • In step S12, the extraction unit 22 selects an image constituting the video data based on the event information acquired by the acquisition unit 21.
  • In step S13, the extraction unit 22 identifies an image region with motion in the selected image.
  • An image is composed of a plurality of pixels.
  • Specifically, a block a of arbitrary size is defined in an image A constituting the video data, and a block b of the same size as block a is defined at an arbitrary location in the image B immediately preceding image A.
  • The difference between the luminance values of the pixels forming block b and the luminance values of the pixels forming block a is then calculated.
  • The location of block b is changed sequentially within image B, and at each location the difference between the luminance values of the pixels forming block b and those forming block a is calculated.
  • The block b with the smallest difference among the calculated differences is identified.
  • The identified block b can be estimated to show the same image content as block a of image A.
  • A motion vector is calculated from the difference between the position vector of the identified block b and the position vector of block a.
  • The extraction unit 22 identifies a moving image region based on the motion vector. For example, when the magnitude of the calculated motion vector is larger than a certain value, the extraction unit 22 may determine that the image region is moving.
  • The identified image region includes the monitoring target.
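  • A minimal NumPy sketch of this block-matching step follows; the block size, the search range, and the use of a sum of absolute luminance differences as the cost are assumptions, since the document only specifies comparing luminance values:

```python
import numpy as np

def motion_vector(img_a: np.ndarray, img_b: np.ndarray,
                  top: int, left: int, size: int = 16, search: int = 8):
    """Locate the size x size block of img_a at (top, left) in the preceding
    frame img_b by exhaustive search, minimising the sum of absolute luminance
    differences; both images are 2-D luminance arrays. Returns (dy, dx), the
    motion of the image content from frame B to frame A."""
    block_a = img_a[top:top + size, left:left + size].astype(np.int32)
    best_cost, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > img_b.shape[0] or x + size > img_b.shape[1]:
                continue                    # block b must lie inside image B
            block_b = img_b[y:y + size, x:x + size].astype(np.int32)
            cost = np.abs(block_a - block_b).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_dy, best_dx = cost, dy, dx
    # Vector from the matched block b back to block a: the motion of the content.
    return -best_dy, -best_dx

def is_moving(vec, threshold: float = 2.0) -> bool:
    """Flag a region as moving when the motion vector magnitude exceeds a value."""
    return float(np.hypot(*vec)) > threshold
```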
  • In step S14, the extraction unit 22 extracts a feature image from the image based on the identified image region.
  • The data processing device 1 may also operate as follows: event information is selected, an image is selected based on the selected event information, a feature image is extracted from the selected image, and the extracted feature images are combined to generate a composite image.
  • the list of event information acquired by the acquisition unit 21 is stored in the storage unit 15.
  • the display unit 17 displays a list of event information.
  • the user operates the operation unit 11 based on the list of event information displayed on the display unit 17 to select arbitrary event information.
  • the extraction unit 22 extracts a corresponding image based on the selected event information.
  • the operation unit 11 receives the selection of the event information acquired by the acquisition unit 21.
  • FIG. 7 shows the display unit 17 displaying a plurality of pieces of event information, that is, a list of event information stored in the storage unit 15.
  • The imaging device 3 may transmit to the data processing device 1, together with the video data, information on the time at which the video data was captured, information on the location where the imaging device 3 is installed, and a unique ID assigned to the imaging device 3. In addition to the event information, this time information, location information, and unique ID may also be displayed on the display unit 17.
  • the user operates the operation unit 11 based on the screen displayed on the display unit 17 to select one or more pieces of event information.
  • the extraction unit 22 selects an image forming the video data based on the event information received by the operation unit 11, and extracts a feature image from the selected image.
  • The extraction unit 22 reads from the storage unit 15 the video data associated with the selected event information, selects an image from among the images constituting the read video data based on the event information, and extracts a feature image from the selected image. The extraction unit 22 outputs the extracted feature image to the display generation unit 16.
  • the extraction unit 22 may be configured to read the feature image corresponding to the selected event information directly from the storage unit 15 and to output the read feature image to the display generation unit 16.
  • the display generation unit 16 combines a plurality of feature images to generate a combined image.
  • Since the data processing device 1 generates a composite image in which a plurality of feature images are combined based on event information arbitrarily selected by the user, the user can grasp the movement of a specific monitoring target and complete the task of checking the monitoring target in a short time.
  • The acquisition unit 21 also acquires processing information for processing the feature images. Specifically, the user operates the operation unit 11 to select video data and sets processing information for the selected video data.
  • The acquisition unit 21 acquires the set processing information from the operation unit 11.
  • the processing information may be input from a terminal device which is the external device 2 connected to the external device connection unit 12.
  • the acquisition unit 21 outputs the acquired processing information to the storage unit 15.
  • the storage unit 15 stores the processing information in association with the corresponding video data.
  • Processing information is information indicating how the monitoring target in the feature image is to be processed by image processing.
  • The processing information includes one or more of a gradation correction value for correcting the density of the monitoring target, a color tone correction value for correcting the color tone of the monitoring target, and a value indicating the magnitude of a motion vector.
  • the data processing unit 14 includes a processing unit 23 that processes the feature image based on the processing information.
  • When the operation unit 11 receives the selection of video data to be played back, the processing unit 23 reads from the storage unit 15 the feature images and the processing information associated with that video data.
  • If the processing information includes a gradation correction value, the processing unit 23 corrects the density of the monitoring target based on the gradation correction value.
  • If the processing information includes a color tone correction value, the processing unit 23 corrects the color tone of the monitoring target based on the color tone correction value.
  • If the processing information includes a value indicating the magnitude of a motion vector, the processing unit 23 selects, from among the feature images, those whose motion vectors have a magnitude exceeding that value.
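  • A sketch of how the processing unit 23 might apply these values follows; the correction model (a simple gain and per-channel offset) and the key names are assumptions, since the document does not define the correction math:

```python
import numpy as np

def apply_processing(features, info):
    """features: list of (image HxWx3 uint8, motion_vector (dy, dx)) pairs.
    info: processing information with optional keys 'density_gain',
    'tone_shift', and 'min_motion'."""
    out = []
    for img, vec in features:
        # Keep only feature images whose motion exceeds the set magnitude.
        if "min_motion" in info and np.hypot(*vec) <= info["min_motion"]:
            continue
        img = img.astype(np.float32)
        if "density_gain" in info:            # gradation (density) correction
            img = img * info["density_gain"]
        if "tone_shift" in info:              # color tone correction per channel
            img = img + np.asarray(info["tone_shift"], np.float32)
        out.append((np.clip(img, 0, 255).astype(np.uint8), vec))
    return out

# Usage: keep only fast-moving targets and brighten them slightly.
frame = np.zeros((480, 640, 3), np.uint8)
processed = apply_processing([(frame, (3, 1)), (frame, (0, 0))],
                             {"min_motion": 2.0, "density_gain": 1.1})
assert len(processed) == 1                    # the static feature image is dropped
```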
  • The data processing unit 14 further includes an image generation unit 24 that reads event information from the storage unit 15 and converts it into an image to generate an event information image, and an adding unit 25 that adds the event information image to the monitoring target in the feature image.
  • As described above, event information is stored in the index portion of an image file.
  • The image generation unit 24 reads the event information from the index portion of an image, converts the read event information into an image, and generates an event information image.
  • The adding unit 25 adds the event information image to the feature image extracted from the image from which the event information was read.
  • When the operation unit 11 receives the selection of video data to be played back, the image generation unit 24 reads from the storage unit 15 the event information associated with that video data.
  • The image generation unit 24 converts the event information, which is text data, into an image to generate an event information image.
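  • A sketch of this text-to-image conversion using Pillow; the event fields shown are assumptions (FIG. 8 shows an event ID and time information):

```python
from PIL import Image, ImageDraw

def make_event_info_image(event: dict) -> Image.Image:
    """Render event information (text data) into a small label image."""
    text = "\n".join(f"{k}: {v}" for k, v in event.items())
    img = Image.new("RGB", (160, 40), "white")
    ImageDraw.Draw(img).multiline_text((4, 4), text, fill="black")
    return img

def add_above_target(feature: Image.Image, label: Image.Image,
                     left: int, top: int) -> None:
    """Paste the event information image just above the monitoring target,
    whose bounding box starts at (left, top) within the feature image."""
    feature.paste(label, (left, max(0, top - label.height)))

# Usage: an event information image C with an event ID and a time.
C = make_event_info_image({"event": "E-12", "time": "10:15:00"})
feature = Image.new("RGB", (640, 480), "gray")
add_above_target(feature, C, left=200, top=150)
```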
  • the acquisition unit 21 acquires instruction information for instructing a method for adding an event information image to a monitoring target of a feature image.
  • the adding unit 25 adds the event information image to the monitoring target of the feature image based on the instruction information.
  • For example, as shown in FIG. 8, the adding unit 25 adds the event information image C above the monitoring target B.
  • The position at which the event information image is added is not limited to above the monitoring target; it may be to the right of, to the left of, or below the monitoring target, or the image may be superimposed on the monitoring target itself.
  • FIG. 8 shows an event information image including an event ID and time information, but this is an example, and other information may be included.
  • the user may operate the operation unit 11 to arbitrarily select information included in the event information image.
  • In step S21, the operation unit 11 receives the selection of video data. Specifically, the user operates the operation unit 11 to select one or more pieces of video data from among the plurality of video data displayed on the display unit 17.
  • In step S22, the processing unit 23 reads from the storage unit 15 the feature images, the event information, and the processing information associated with the selected video data.
  • In step S23, the processing unit 23 processes the feature images based on the processing information.
  • If the processing information includes a gradation correction value, the processing unit 23 corrects the density of the monitoring target based on it. If the processing information includes a color tone correction value, the processing unit 23 corrects the color tone of the monitoring target based on it.
  • If the processing information includes a value indicating the magnitude of a motion vector, the processing unit 23 selects the feature images corresponding to motion vectors whose magnitude exceeds that value.
  • When a plurality of processing values are set, the feature images corresponding to motion vectors whose magnitude exceeds the set value are selected first; the density of the monitoring target in the selected feature images is then corrected based on the gradation correction value, and the color tone is corrected based on the color tone correction value.
  • In step S24, the image generation unit 24 reads from the storage unit 15 the event information associated with the video data selected in step S21, converts the read event information into an image, and generates an event information image.
  • In step S25, the adding unit 25 adds the event information image to the monitoring target in the feature image.
  • If instruction information has been acquired, the adding unit 25 adds the event information image to the monitoring target in the feature image based on the instruction information.
  • Otherwise, the adding unit 25 adds the event information image at a predetermined position.
  • The predetermined position is, for example, to the right of the monitoring target.
  • When the operation unit 11 receives the selection of a plurality of video data in step S21, the processes of steps S22 to S25 are repeated for each of the selected video data.
  • In step S26, the display generation unit 16 generates a composite image by combining the feature images, to which the event information images were added in step S25, with an image in which the background of the monitoring target is displayed.
  • Since the data processing device 1 generates a composite image by combining the feature images, to which event information images have been added, extracted from the images constituting the selected video data, the feature images with their event information can be consolidated into a single composite image.
  • The user can thus grasp the movement of the monitoring target without playing back all of the video data, so the task of checking the monitoring target can be completed in a short time.
  • Note that the display generation unit 16 may first combine the plurality of feature images to generate a composite image and then add the event information images to the feature images included in the composite image.
  • In that case, the data processing device 1 includes a display generation unit 31 that generates the composite image.
  • The display generation unit 31 includes a combining unit 32 that combines the plurality of feature images to generate a composite image, and an adding unit 25 that adds the event information images to the feature images included in the composite image generated by the combining unit 32. That is, in the configuration example shown in FIG. 10, the data processing unit 14 does not include the adding unit 25.
  • FIG. 11 shows the display unit 17 displaying sample images of a plurality of video data.
  • the sample image of the video data is an image generated by reducing the size of one image constituting the video data.
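  • As a sketch of generating such a sample image with Pillow (the output size is an assumption):

```python
from PIL import Image

def make_sample_image(frame: Image.Image, max_size=(120, 90)) -> Image.Image:
    """Reduce one image constituting the video data to a small sample image."""
    sample = frame.copy()
    sample.thumbnail(max_size)   # shrinks in place, preserving aspect ratio
    return sample

# Usage: build the selection-screen thumbnail from the first frame.
first_frame = Image.new("RGB", (640, 480), "gray")
thumb = make_sample_image(first_frame)
```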
  • the user operates the operation unit 11 based on the sample image displayed on the display unit 17 to select one or more video data.
  • the following description is given on the assumption that the video data E1 is selected.
  • As shown in FIG. 12, the video data E1 is composed of a plurality of images A11, A12, and A13, and captures the monitoring targets B1, B2, and B3 being conveyed on the belt conveyor X.
  • the monitoring targets B1, B2, and B3 are the same product conveyed on the belt conveyor X.
  • FIG. 13 shows a feature image group E1' composed of a plurality of feature images.
  • As shown in FIG. 13, the extraction unit 22 extracts, based on the event information, the feature image a11, showing the monitoring target B1, from the image A11; the feature image a12, showing the monitoring target B2, from the image A12; and the feature image a13, showing the monitoring target B3, from the image A13.
  • FIG. 14 shows the feature image group E1' composed of a plurality of feature images to which event information images have been added.
  • As shown in FIG. 14, the adding unit 25 adds the event information image C1 to the monitoring target B1 of the feature image a11, the event information image C2 to the monitoring target B2 of the feature image a12, and the event information image C3 to the monitoring target B3 of the feature image a13.
  • the display generation unit 16 generates a composite image D by combining the feature image a11, the feature image a12, the feature image a13, and the image on which the background of the monitoring target is displayed.
  • As shown in FIG. 15, the composite image D includes the monitoring target B1 to which the event information image C1 has been added, the monitoring target B2 to which the event information image C2 has been added, and the monitoring target B3 to which the event information image C3 has been added.
  • Since the data processing device 1 generates a composite image by combining the feature images, in which event information images have been added to the monitoring targets, extracted from the images constituting the selected video data, the feature images can be consolidated into a single composite image.
  • the user can grasp the movement status of the monitoring target without reproducing all the video data, so that the confirmation operation of the monitoring target can be completed in a short time.
  • As shown in FIG. 16, the video data E2 is composed of a plurality of images A11, A12, and A13, and captures the monitoring targets B11, B12, and B13 being transported from the belt conveyor X1 to the belt conveyor X2.
  • the monitoring targets B11, B12, and B13 are the same product conveyed on the belt conveyors X1 and X2.
  • As shown in FIG. 17, the video data E3 is composed of a plurality of images A21, A22, and A23, and captures the monitoring targets B21, B22, and B23 being transported from the belt conveyor X3 to the belt conveyor X2.
  • the monitoring targets B21, B22, and B23 are the same product conveyed on the belt conveyors X2 and X3.
  • FIG. 18 shows a feature image group E2' composed of a plurality of feature images.
  • As shown in FIG. 18, the extraction unit 22 extracts, based on the event information, the feature image a11, showing the monitoring target B11, from the image A11; the feature image a12, showing the monitoring target B12, from the image A12; and the feature image a13, showing the monitoring target B13, from the image A13.
  • FIG. 19 shows a feature image group E3 'composed of a plurality of feature images.
  • As shown in FIG. 19, the extraction unit 22 extracts, based on the event information, the feature image a21, showing the monitoring target B21, from the image A21; the feature image a22, showing the monitoring target B22, from the image A22; and the feature image a23, showing the monitoring target B23, from the image A23.
  • FIG. 20 shows the feature image group E2' composed of a plurality of feature images to which event information images have been added.
  • As shown in FIG. 20, the adding unit 25 adds the event information image C11 to the monitoring target B11 of the feature image a11, the event information image C12 to the monitoring target B12 of the feature image a12, and the event information image C13 to the monitoring target B13 of the feature image a13.
  • FIG. 21 shows the feature image group E3' composed of a plurality of feature images to which event information images have been added.
  • As shown in FIG. 21, the adding unit 25 adds the event information image C21 to the monitoring target B21 of the feature image a21, the event information image C22 to the monitoring target B22 of the feature image a22, and the event information image C23 to the monitoring target B23 of the feature image a23.
  • The display generation unit 16 generates a composite image D by combining the feature images a11, a12, a13, a21, a22, and a23 with an image in which the background of the monitoring targets is displayed.
  • As shown in FIG. 22, the composite image D includes the monitoring targets B11, B12, and B13, to which the event information images C11, C12, and C13 have been added, and the monitoring targets B21, B22, and B23, to which the event information images C21, C22, and C23 have been added.
  • Note that one composite image may be generated from three or more pieces of video data.
  • Since the data processing device 1 selects a plurality of video data and generates a composite image by combining the feature images, to which event information images have been added, extracted from the images constituting the selected video data, the feature images of the plurality of video data can be consolidated into a single composite image.
  • the user can grasp the movement status of the monitoring target without reproducing each of a plurality of video data, so that the task of confirming the monitoring target can be completed in a short time.
  • Since the data processing device 1 merges the feature images from the plurality of video data into one composite image, there is no need to display the plurality of video data side by side, and the display unit 17 can be made smaller.
  • Since the data processing device 1 adds the event information to the feature images extracted from the video data when generating the composite image, the differences and conditions of each piece of video data can be grasped easily.
  • In the above description, all of the event information images are added to the feature images, but the present invention is not limited to this.
  • For example, the event information image may be displayed in the form of a pop-up.
  • FIG. 23 is a diagram showing an example of the hardware configuration of the data processing apparatus 1.
  • the data processing apparatus 1 is a computer, and includes a communication circuit 101, a processor 102, a memory 103, a display unit 104, and an input unit 105.
  • the external device connection unit 12 and the imaging device connection unit 13 illustrated in FIG. 1 are realized by the communication circuit 101.
  • the data processing unit 14 and the display generation unit 16 illustrated in FIG. 1 are realized by the processor 102 executing a program stored in the memory 103.
  • the storage unit 15 illustrated in FIG. 1 is realized by the memory 103.
  • The processor 102 is a processing circuit such as a CPU or a microprocessor.
  • the memory 103 is also used as a storage area when the processor 102 executes a program.
  • the operation unit 11 illustrated in FIG. 1 is realized by the input unit 105.
  • the display unit 17 illustrated in FIG. 1 is realized by the display unit 104.
  • the input unit 105 is a keyboard, a mouse or the like.
  • the display unit 104 is a display, a monitor, or the like.
  • the display unit 104 and the input unit 105 may be realized by a touch panel in which these are integrated.
  • The configuration described in the above embodiment is one example of the present invention; it can be combined with other known techniques, and part of the configuration can be omitted or modified without departing from the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Alarm Systems (AREA)

Abstract

This data processing device is provided with: a data processing unit (14) which extracts, from the images constituting video data, a plurality of feature images in which a subject to be monitored is displayed; and a display generation unit (16) which generates a synthesized image by synthesizing the plurality of feature images extracted by the data processing unit (14) from the images constituting the video data. The subject to be monitored is an object which moves in a monitored place, for example a product being conveyed on a belt conveyor in the process of manufacturing the product. With this configuration, the subject to be monitored can be checked quickly.

Description

Data processing device, programmable display, and data processing method
The present invention relates to a data processing device that processes video data, a programmable display, and a data processing method.
Conventionally, there is a technique of hierarchically presenting a plurality of images captured at different times to monitor the motion of a subject that changes dynamically over a long period. To perform such monitoring efficiently, a shorter playback time is desired.
For example, Patent Document 1 discloses an image storage device that determines whether to capture an input image based on a monitoring target and, by storing the images determined to be captured, controls the imaging interval so that only the monitoring target is captured.
JP 2000-224542 A
However, although the image storage device of Patent Document 1 shortens the playback time, the state of the monitoring target cannot be confirmed without playing back all of the captured video; particularly when the imaging time is long, checking the monitoring target takes time.
The present invention has been made in view of the above, and its object is to obtain a data processing device capable of shortening the time required to check a monitoring target.
To solve the above problem and achieve the object, the present invention includes a data processing unit that extracts, from the images constituting video data, a plurality of feature images in which a monitoring target is displayed, and a display generation unit that generates a composite image by combining the plurality of feature images extracted by the data processing unit from the images constituting the video data.
The data processing device according to the present invention has the effect of shortening the time required to check a monitoring target.
FIG. 1 is a diagram showing the configuration of a data processing device according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of a composite image in the embodiment.
FIG. 3 is a diagram showing the configuration of a data processing unit in the embodiment.
FIG. 4 is a diagram for explaining a specific example of extracting a feature image from an image constituting video data in the embodiment.
FIG. 5 is a flowchart showing a procedure for acquiring event information in the embodiment.
FIG. 6 is a flowchart showing a procedure for extracting feature images from the images constituting video data in the embodiment.
FIG. 7 is a diagram for explaining a procedure for selecting event information in the embodiment.
FIG. 8 is a diagram for explaining a procedure for adding an event information image to a feature image in the embodiment.
FIG. 9 is a flowchart showing the procedure from the selection of video data to the display of a composite image in the embodiment.
FIG. 10 is a diagram showing the configuration of a display generation unit of a data processing device in the embodiment.
FIG. 11 is a diagram showing an example of a screen for selecting video data in the embodiment.
FIG. 12 is a diagram showing an example of the configuration of video data in the embodiment.
FIG. 13 is a diagram showing an example of feature images extracted from images by an extraction unit in the embodiment.
FIG. 14 is a diagram showing an example of adding event information images to feature images by an adding unit in the embodiment.
FIG. 15 is a diagram showing an example of a composite image in the embodiment.
FIGS. 16 and 17 are diagrams each showing an example of the configuration of video data in the embodiment.
FIGS. 18 and 19 are diagrams each showing an example of feature images extracted from images by the extraction unit in the embodiment.
FIGS. 20 and 21 are diagrams each showing an example of adding event information images to feature images by the adding unit in the embodiment.
FIG. 22 is a diagram showing an example of a composite image in the embodiment.
FIG. 23 is a diagram showing an example of the hardware configuration of the data processing device according to the embodiment.
Hereinafter, a data processing device, a programmable display, and a data processing method according to an embodiment of the present invention will be described in detail with reference to the drawings. The present invention is not limited by this embodiment.
Embodiment.
FIG. 1 is a diagram showing the configuration of a data processing device 1 according to an embodiment of the present invention. The data processing device 1 is realized by a programmable display. The programmable display is an electronic operation display that includes a display unit for displaying images, an operation unit for receiving user operations, connection units for connecting external devices, and a storage unit for storing data; it displays the operating status of an external device and inputs values to the external device.
The data processing device 1 extracts feature images, in which a monitoring target is displayed, from the plurality of images constituting video data recorded over a fixed period by an imaging device, and combines the extracted feature images to generate a composite image. In the embodiment of the present invention, the monitoring target is an object, a piece of equipment, or the like that moves at the monitored location. In the embodiment, the feature image is described as having the same size as the images constituting the video data, but it may instead be the same size as the monitoring target displayed in the image. That is, the feature image may be one of a plurality of regions into which an image constituting the video data, in which the monitoring target is displayed, has been divided. The specific operation and configuration of the data processing device 1 are described below.
The data processing device 1 includes an operation unit 11 that receives the user's operations, an external device connection unit 12 to which the external device 2 is connected, and an imaging device connection unit 13 to which the imaging device 3 is connected.
The data processing device 1 includes a data processing unit 14 that extracts feature images, i.e., the images of the portions where the monitoring target is displayed, from the images constituting video data, and a storage unit 15 that stores the video data and the feature images.
The data processing device 1 also includes a display generation unit 16 that generates a composite image by combining the plurality of feature images extracted by the data processing unit 14 from the plurality of images constituting the video data, and a display unit 17 that displays the composite image generated by the display generation unit 16.
The operation unit 11 includes a keyboard or a touch panel and receives the user's operations. Specifically, the operation unit 11 receives operations for inputting additional information and for selecting the video data to be played back. The operation unit 11 generates an operation signal corresponding to the received operation and outputs it to the data processing unit 14.
The external device connection unit 12 is an interface to which the external device 2 is connected. The external device connection unit 12 is configured by, for example, a USB (Universal Serial Bus) connector or an RS-232C connector.
The external device 2 is, for example, a PLC (Programmable Logic Controller) that controls an industrial machine, or a sensing device that inspects products. The PLC holds a device ID (Identification) stored in the industrial machine to be controlled. The device ID is information for identifying the industrial machine.
The PLC controls the operation of the connected industrial machine. The PLC also monitors the operating state of the industrial machine. When the PLC detects an abnormality or failure of the industrial machine, it generates alarm information and outputs the generated alarm information to the data processing device 1. The alarm information includes the content of the abnormality or failure and the time at which it occurred.
The sensing device performs, for example, an appearance inspection that inspects the appearance of a product, or a position inspection that checks whether the parts of a product are placed at the desired positions. The sensing device inspects a product, generates sensor information as the result of the inspection, and outputs the generated sensor information to the data processing device 1. The sensor information includes the result of the inspection and the time of the inspection.
The external device connection unit 12 outputs the information input from the external device 2 to the data processing unit 14. When the external device 2 is a PLC, the information input from the external device 2 is the device ID and alarm information; when the external device 2 is a sensing device, it is sensor information. Hereinafter, the device ID, alarm information, and sensor information are collectively referred to as event information. The event information may also be, for example, information indicating the movement status of products transported by a belt conveyor in the production process, or information on setup-change work. The information indicating the movement status of products transported by the belt conveyor is, for example, information indicating the times at which a product leaves or enters storage, or the time at which a product passed a predetermined point. The information on setup-change work is information indicating changes to the parameters set in the machinery.
Event information is given as information about the images constituting the video data. That is, event information is obtained in correspondence with the time at which an image constituting the video data was captured and with the cause of the event. Each image is stored in an image file format comprising a header portion, a payload portion, an index portion, and so on. The event information is stored, for example, in the index portion.
 撮像装置接続部13は、撮像装置3が接続されるインターフェイスである。撮像装置接続部13は、例えば、USBコネクタまたはイーサネット(登録商標)用の通信コネクタである。 The imaging device connection unit 13 is an interface to which the imaging device 3 is connected. The imaging device connection unit 13 is, for example, a USB connector or a communication connector for Ethernet (registered trademark).
 撮像装置3は、一定の時間間隔で撮像した後に、または、撮像装置接続部13から送信されるトリガ信号により撮像した後に、撮像した映像データをデータ処理装置1に出力する。撮像装置接続部13は、撮像装置3から入力された映像データをデータ処理部14に出力する。 The imaging device 3 outputs the imaged video data to the data processing device 1 after imaging at a constant time interval or after imaging by a trigger signal transmitted from the imaging device connection unit 13. The imaging device connection unit 13 outputs the video data input from the imaging device 3 to the data processing unit 14.
 データ処理部14は、撮像装置接続部13から映像データが入力され、当該映像データを構成する画像から監視対象が表示されている画像である特徴画像を抽出する。映像データを構成する画像から特徴画像を抽出する手順の詳細については、後述する。 The data processing unit 14 receives video data from the imaging device connection unit 13 and extracts a characteristic image, which is an image on which a monitoring target is displayed, from an image forming the video data. The details of the procedure for extracting the feature image from the images constituting the video data will be described later.
 映像データは、撮像装置3からデジタル形式で入力されるが、アナログ形式で入力されてもよい。アナログ形式で映像データが入力された場合には、撮像装置接続部13は、A/D(Analog to Digital)変換により、アナログ形式の映像データをデジタル形式の映像データに変換する。 Image data is input from the imaging device 3 in digital form, but may be input in analog form. When video data is input in an analog format, the imaging device connecting unit 13 converts the video data in analog format into video data in digital format by A / D (Analog to Digital) conversion.
 表示生成部16は、データ処理部14で抽出された監視対象が表示されている部分の画像である特徴画像と、監視対象の背景が表示されている画像とを合成することにより合成画像を生成する。 The display generation unit 16 generates a composite image by combining the characteristic image that is the image of the portion where the monitoring target is displayed extracted by the data processing unit 14 and the image where the background of the monitoring target is displayed. Do.
 例えば、表示生成部16は、図2に示すように、監視対象B1が表示されている特徴画像a2と、監視対象の背景が表示されている背景画像a1とを合成して合成画像Dを生成する。 For example, as illustrated in FIG. 2, the display generation unit 16 generates a composite image D by combining the characteristic image a2 in which the monitoring target B1 is displayed and the background image a1 in which the background of the monitoring target is displayed. Do.
The present embodiment can be applied to the field of monitoring a production line on which products are manufactured in a factory. Thus, for example, as shown in FIG. 2, the monitoring target is a product being produced, and the background image shows a production line such as a belt conveyor.
The display unit 17 is constituted by a liquid crystal display device or an organic EL (Electro Luminescence) display device. The operation unit 11 and the display unit 17 may be realized as a single operation display unit that integrates an operation function for receiving user operations with a display function for displaying an operation screen.
In the present embodiment, when video data is selected, the data processing device 1 combines the feature images extracted from each of the plurality of images constituting the selected video data to generate a composite image, so that a plurality of feature images can be shown consolidated in a single composite image.
By checking the composite image in which the plurality of feature images are combined, the user can grasp how the monitoring target has moved without playing back all of the video data, so the work of checking the monitoring target can be completed in a short time.
Here, a specific configuration of the data processing unit 14 is described. FIG. 3 shows the configuration of the data processing unit 14. FIG. 4 illustrates a specific example of extracting a feature image from an image constituting video data. The data processing unit 14 includes an acquisition unit 21 that acquires event information indicating the state of the monitoring target when the video data is captured or after the video data is captured, and an extraction unit 22 that, based on the event information acquired by the acquisition unit 21, selects an image constituting the video data and extracts a feature image from the selected image.
Specifically, the extraction unit 22 selects an image captured at the same time as the time at which the event information was acquired. That is, the extraction unit 22 searches for images based on the time at which the event information was acquired and selects an image from among the retrieved images. When imaging is performed in response to a trigger signal transmitted from the imaging device connection unit 13, the extraction unit 22 may instead search for images based on the trigger signal and select an image from among the retrieved images. As shown in FIG. 4, the extraction unit 22 extracts from the selected image A1 a feature image a1, which is the image of the portion in which the monitoring target B1 is displayed. The extraction unit 22 also outputs the video data, the feature image, and the event information to the storage unit 15, and the storage unit 15 stores the video data, the feature image, and the event information in association with one another.
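A sketch of the time-based selection might look as follows; the per-frame timestamp field and the one-frame-period tolerance are assumptions for illustration.

```python
from typing import Optional, Sequence

Frame = dict  # illustrative frame record: {"timestamp": float, "pixels": ...}

def select_frame(frames: Sequence[Frame], event_time: float,
                 tolerance: float = 1 / 30) -> Optional[Frame]:
    """Pick the frame whose capture time is closest to the event time.

    Returns None when no frame lies within `tolerance` seconds of the
    event, i.e. when the video does not cover the event at all.
    """
    if not frames:
        return None
    best = min(frames, key=lambda f: abs(f["timestamp"] - event_time))
    return best if abs(best["timestamp"] - event_time) <= tolerance else None
```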
Here, the procedure by which the acquisition unit 21 acquires event information is described with reference to the flowchart shown in FIG. 5.
In step S1, the operation unit 11 receives setting information. The setting information is information indicating the timing at which event information is to be acquired. The timing at which the acquisition unit 21 acquires event information can be set arbitrarily according to the content of the setting information; conceivable timings include, for example, the time at which the imaging device 3 starts capturing the video data, a time while the video data is being captured, the time at which capture of the video data ends, a predetermined cycle, or a multi-bit condition. A multi-bit condition is a condition that determines the timing of acquiring event information from the result of a logical operation on a plurality of bits set in advance in the external device 2, such as a PLC, whose changes, that is, rising and falling states, are monitored. For example, the external device 2 such as a PLC may perform the logical operation described above, and the acquisition unit 21 of the data processing device 1 may determine the timing of acquiring event information based on the result of the logical operation obtained from the external device 2. Alternatively, the acquisition unit 21 of the data processing device 1 may receive the plurality of bits from the external device 2 such as a PLC, perform the logical operation using the received bits, and determine the timing of acquiring event information based on the result. The operation unit 11 outputs the setting information to the acquisition unit 21.
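As an illustration of how such a condition could be evaluated on the device side, the following sketch fires when every monitored bit shows a rising edge in the same sample; the AND-of-rising-edges rule is only one assumed example of the configurable logical operation.

```python
def rising_edges(prev_bits: int, curr_bits: int) -> int:
    """Bit mask of positions that changed 0 -> 1 between two samples."""
    return ~prev_bits & curr_bits

def multi_bit_condition_met(prev_bits: int, curr_bits: int, mask: int) -> bool:
    """Fire when all bits selected by `mask` rise in the same sample."""
    return (rising_edges(prev_bits, curr_bits) & mask) == mask

# Example: watch bits 0 and 2; the condition fires only when both rise together.
assert multi_bit_condition_met(0b000, 0b101, mask=0b101) is True
assert multi_bit_condition_met(0b001, 0b101, mask=0b101) is False
```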
In step S2, the acquisition unit 21 outputs the setting information to the storage unit 15.
In step S3, the acquisition unit 21 determines, based on the setting information, whether the timing for acquiring event information has arrived. When the acquisition unit 21 determines that the timing for acquiring event information has arrived (Yes in step S3), it proceeds to step S4; when it determines that the timing has not arrived (No in step S3), it repeats step S3.
In step S4, the acquisition unit 21 requests the external device 2 to transmit event information and acquires the event information from the external device 2.
Next, the procedure by which the extraction unit 22 extracts a feature image from the images constituting video data is described with reference to the flowchart shown in FIG. 6.
In step S11, the imaging device 3 outputs the video data to the extraction unit 22 via the imaging device connection unit 13. The video data is composed of a plurality of images.
In step S12, the extraction unit 22 selects an image constituting the video data based on the event information acquired by the acquisition unit 21.
In step S13, the extraction unit 22 identifies, based on the selected image, an image region with motion within the image. An image is composed of a plurality of pixels.
Here, an image region with motion within an image can be identified, for example, by a method that uses motion vectors, as described in Japanese Patent Application Laid-Open No. 54-124927.
In the method described in Japanese Patent Application Laid-Open No. 54-124927, a block a of arbitrary size is defined in an image A constituting the video data, a block b of the same size as block a is defined at an arbitrary location in the image B immediately preceding image A, and the difference between the luminance values of the pixels constituting block b and the luminance values of the pixels constituting block a is calculated. The location of block b is then changed sequentially within image B, and the difference is calculated at each changed location. The block b with the smallest of the calculated differences is identified; the identified block b can be presumed to show the same image portion as block a of image A. A motion vector is then calculated based on the difference between the position vector of the identified block b and the position vector of block a.
The extraction unit 22 identifies a moving image region based on the motion vectors. For example, when the magnitude of a calculated motion vector is larger than a certain value, the extraction unit 22 may identify the corresponding region as a moving image region. The identified image region contains the monitoring target.
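The block-matching scheme just described, with the magnitude threshold applied afterwards, might be sketched as follows; the sum-of-absolute-differences cost and the search radius are assumed details, since the cited method only prescribes comparing luminance differences over shifted blocks.

```python
import numpy as np

def motion_vector(curr: np.ndarray, prev: np.ndarray,
                  top: int, left: int, size: int, radius: int) -> tuple:
    """Estimate the motion vector of one block by exhaustive search.

    `curr` and `prev` are grayscale frames (2-D float arrays); block a
    is the size x size block of `curr` at (top, left), and the best
    matching block b is sought in `prev` within +/- radius pixels.
    """
    block_a = curr[top:top + size, left:left + size]
    best_sad, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > prev.shape[0] or x + size > prev.shape[1]:
                continue
            sad = np.abs(prev[y:y + size, x:x + size] - block_a).sum()
            if sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx
    return (-best_dy, -best_dx)  # displacement from prev to curr

def is_moving(vector: tuple, threshold: float) -> bool:
    """Flag a block as moving when its motion vector magnitude exceeds a threshold."""
    return float(np.hypot(*vector)) > threshold
```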
In step S14, the extraction unit 22 extracts a feature image from the image based on the identified image region.
The above description covers the operation of extracting a feature image from the image corresponding to the timing at which the event information was acquired, but the operation is not limited to this. For example, as a modification of the present embodiment, the data processing device 1 may be configured to select event information, select an image based on the selected event information, extract a feature image from the selected image, and combine the extracted feature images to generate a composite image. Specifically, a list of the event information acquired by the acquisition unit 21 is stored in the storage unit 15. The display unit 17 displays the list of event information. Based on the list of event information displayed on the display unit 17, the user operates the operation unit 11 to select arbitrary event information, and the extraction unit 22 extracts the corresponding image based on the selected event information.
The operation unit 11 receives the selection of the event information acquired by the acquisition unit 21. FIG. 7 shows the display unit 17 displaying a plurality of pieces of event information, that is, the list of event information stored in the storage unit 15. Together with the video data, the imaging device 3 may transmit to the data processing device 1 information on the time at which the video data was captured, information on the location where the imaging device 3 is installed, and the unique ID assigned to the imaging device 3. In addition to the event information, the display unit 17 may also display the information on the time at which the video data was captured, the information on the location where the imaging device 3 is installed, and the unique ID assigned to the imaging device 3.
Specifically, the user operates the operation unit 11 based on the screen displayed on the display unit 17 to select one or more pieces of event information.
The extraction unit 22 selects an image constituting the video data based on the event information received by the operation unit 11 and extracts a feature image from the selected image.
Specifically, the extraction unit 22 reads the video data associated with the selected event information from the storage unit 15, selects an image from among the images constituting the read video data based on the event information, and extracts a feature image from the selected image. The extraction unit 22 outputs the extracted feature image to the display generation unit 16.
Alternatively, the extraction unit 22 may be configured to read the feature image corresponding to the selected event information directly from the storage unit 15 and output the read feature image to the display generation unit 16.
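The association kept in the storage unit 15 could be modeled as below; keying the records by an event ID is an assumption made for this sketch, since the text only requires that video data, feature images, and event information be stored in association with one another.

```python
from typing import Dict, List, Optional

class FeatureStore:
    """Illustrative store associating event information with video data
    and extracted feature images."""

    def __init__(self) -> None:
        self._records: Dict[str, dict] = {}

    def save(self, event_id: str, video_ref: str, feature_image, event_info: dict) -> None:
        self._records[event_id] = {"video": video_ref,
                                   "feature": feature_image,
                                   "event": event_info}

    def feature_for(self, event_id: str) -> Optional[object]:
        """Direct lookup of the feature image for a selected event."""
        rec = self._records.get(event_id)
        return rec["feature"] if rec else None

    def features_for_video(self, video_ref: str) -> List[object]:
        """All feature images associated with one selected video."""
        return [r["feature"] for r in self._records.values() if r["video"] == video_ref]
```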
The display generation unit 16 combines the plurality of feature images to generate a composite image.
Thus, because the data processing device 1 generates a composite image in which a plurality of feature images are combined based on event information arbitrarily selected by the user, the movement of a specific monitoring target can be grasped, and the work of checking that specific monitoring target can be completed in a short time.
The acquisition unit 21 also acquires processing information for processing the feature images. Specifically, the user operates the operation unit 11 to select video data and sets processing information for the selected video data. The acquisition unit 21 acquires the processing information set via the operation unit 11. The processing information may also be input from a terminal device, which is the external device 2 connected to the external device connection unit 12. The acquisition unit 21 outputs the acquired processing information to the storage unit 15, and the storage unit 15 stores the processing information in association with the corresponding video data.
The processing information is information indicating how the monitoring target in a feature image is to be processed by image processing. The processing information includes one or more of a gradation correction value for correcting the shading of the monitoring target, a color tone correction value for correcting the color tone of the monitoring target, and a value indicating a motion vector magnitude.
The data processing unit 14 includes a processing unit 23 that processes the feature images based on the processing information.
Here, the specific operation of the processing unit 23 is described. When the operation unit 11 receives the selection of video data to be played back, the processing unit 23 reads the feature images and the processing information associated with that video data from the storage unit 15.
When the processing information includes a gradation correction value, the processing unit 23 corrects the shading of the monitoring target based on that gradation correction value.
When the processing information includes a color tone correction value, the processing unit 23 corrects the color tone of the monitoring target based on that color tone correction value.
When the processing information includes a value indicating a motion vector magnitude, the processing unit 23 selects, from among the feature images, the feature images whose motion vectors exceed that value in magnitude.
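A sketch of these three kinds of processing is given below; the representation of a feature image as a dict with an `image` array and a motion `vector`, and the multiplicative correction formulas, are assumptions made for illustration.

```python
import numpy as np

def apply_processing(features, info: dict):
    """Filter and correct feature images according to processing information.

    `info` may carry "min_vector" (motion vector magnitude threshold),
    "gain" (gradation/shading multiplier), and "tint" (per-channel color
    tone multipliers); each key is applied only when present.
    """
    out = []
    for feat in features:
        if "min_vector" in info and np.hypot(*feat["vector"]) <= info["min_vector"]:
            continue  # drop feature images of slow-moving targets
        img = feat["image"].astype(np.float32)
        if "gain" in info:   # gradation (shading) correction
            img = img * info["gain"]
        if "tint" in info:   # color tone correction, one gain per channel
            img = img * np.asarray(info["tint"], np.float32)
        out.append(dict(feat, image=np.clip(img, 0, 255).astype(np.uint8)))
    return out
```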
The data processing unit 14 further includes an image generation unit 24 that reads the event information from the storage unit 15 and converts the read event information into an image to generate an event information image, and an addition unit 25 that adds the event information image to the monitoring target in the feature image.
For example, the event information is stored, as described above, in the index section constituting an image. The image generation unit 24 reads the event information from the index section constituting the image, converts the read event information into an image, and generates an event information image. The addition unit 25 adds the event information image to the feature image extracted from the image from which the event information was read.
Specifically, when the operation unit 11 receives the selection of video data to be played back, the image generation unit 24 reads the event information associated with that video data from the storage unit 15. The image generation unit 24 converts the event information, which is text data, into an image and generates an event information image.
The acquisition unit 21 also acquires instruction information that specifies the method for adding the event information image to the monitoring target in the feature image. Based on the instruction information, the addition unit 25 adds the event information image to the monitoring target in the feature image.
For example, when the instruction information is "add the event information image above the monitoring target", the addition unit 25 adds the event information image C above the monitoring target B, as shown in FIG. 8. The position at which the event information image is added is not limited to above the monitoring target; it may be to the right of, to the left of, or below the monitoring target, or the event information image may be superimposed on the front of the monitoring target. FIG. 8 shows an event information image containing an event ID and time information, but this is only an example, and other information may be included. The user may also operate the operation unit 11 to arbitrarily select the information to be included in the event information image.
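One way to realize the instructed placement is sketched below; the box conventions and the names of the positions are assumptions chosen to mirror the options listed above.

```python
def label_position(target_box: tuple, label_size: tuple, where: str = "above") -> tuple:
    """Compute where to draw the event information image relative to the
    monitoring target's bounding box.

    `target_box` is (top, left, height, width) of the monitoring target,
    `label_size` is (height, width) of the event information image, and
    `where` is one of "above", "below", "right", "left", or "front".
    Returns the (top, left) at which to draw the event information image.
    """
    top, left, h, w = target_box
    lh, lw = label_size
    if where == "above":
        return (top - lh, left)
    if where == "below":
        return (top + h, left)
    if where == "right":
        return (top, left + w)
    if where == "left":
        return (top, left - lw)
    return (top, left)  # "front": superimpose on the target itself
```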
Here, the procedure from selection of video data via the operation unit 11 to display of the composite image on the display unit 17 is described with reference to the flowchart shown in FIG. 9.
In step S21, the operation unit 11 receives the selection of video data. Specifically, the user operates the operation unit 11 to select one or more items of video data from among the plurality of items of video data displayed on the display unit 17.
In step S22, the processing unit 23 reads the feature images, the event information, and the processing information associated with the received video data from the storage unit 15.
In step S23, the processing unit 23 processes the feature images based on the processing information. When the processing information includes a gradation correction value, the processing unit 23 corrects the shading of the monitoring target based on that gradation correction value. When the processing information includes a color tone correction value, the processing unit 23 corrects the color tone of the monitoring target based on that color tone correction value. When the processing information includes a value indicating a motion vector magnitude, the processing unit 23 selects the feature images corresponding to motion vectors exceeding that value in magnitude.
When the processing information includes all of the gradation correction value, the color tone correction value, and the value indicating a motion vector magnitude, the addition unit 25 selects the feature images corresponding to motion vectors exceeding that value in magnitude. The addition unit 25 then corrects the shading of the monitoring target in the selected feature images based on the gradation correction value, and corrects the color tone of the monitoring target in the selected feature images based on the color tone correction value.
In step S24, the image generation unit 24 reads from the storage unit 15 the event information associated with the video data whose selection was received in step S21, converts the read event information into an image, and generates an event information image.
In step S25, the addition unit 25 adds the event information image to the monitoring target in the feature image. When the processing information includes instruction information, the addition unit 25 adds the event information image to the monitoring target in the feature image based on the instruction information. When the processing information does not include instruction information, the addition unit 25 adds the event information image at a predetermined position, for example to the right of the monitoring target.
When the selection of a plurality of items of video data is received via the operation unit 11 in step S21, steps S22 through S25 are repeated for each item of video data received.
In step S26, the display generation unit 16 generates a composite image by combining the feature images to which the event information images were added in step S25 with an image in which the background of the monitoring target is displayed.
Thus, when video data is selected, the data processing device 1 combines the feature images, to which the event information images have been added, extracted from the images constituting the selected video data to generate a composite image, so the feature images with the added event information images can be shown consolidated in a single composite image.
By checking the composite image, the user can grasp how the monitoring target has moved without playing back all of the video data, so the work of checking the monitoring target can be completed in a short time.
In the present embodiment, the operation of generating the composite image after adding the event information images to the feature images has been described, but the operation is not limited to this. The display generation unit 16 may combine a plurality of feature images to generate a composite image and then add the event information images to the feature images contained in the composite image. Specifically, in this case, as shown in FIG. 10, the data processing device 1 includes a display generation unit 31 that generates the composite image. The display generation unit 31 includes a combining unit 32 that combines a plurality of feature images to generate a composite image, and the addition unit 25, which adds the event information images to the feature images contained in the composite image generated by the combining unit 32. That is, in the configuration example shown in FIG. 10, the data processing unit 14 does not include the addition unit 25.
Next, the procedure by which video data is selected, event information images are added to the feature images, and a composite image is generated is described.
FIG. 11 shows the display unit 17 displaying sample images of a plurality of items of video data. A sample image of video data is an image generated by reducing the size of one of the images constituting the video data.
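A minimal sketch of generating such a sample image, assuming nearest-neighbour reduction by an integer factor (any resizing scheme would serve the same purpose):

```python
import numpy as np

def sample_image(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Generate a sample (thumbnail) image by taking every `factor`-th
    pixel of one frame of the video data."""
    return frame[::factor, ::factor].copy()
```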
The user operates the operation unit 11 based on the sample images displayed on the display unit 17 to select one or more items of video data. The following description assumes that the video data E1 has been selected.
As shown in FIG. 12, the video data E1 is composed of a plurality of images A11, A12, and A13 and captures monitoring targets B1, B2, and B3 being conveyed on a belt conveyor X. The monitoring targets B1, B2, and B3 are the same product conveyed on the belt conveyor X.
FIG. 13 shows a feature image group E1′ composed of a plurality of feature images. As shown in FIG. 13, the extraction unit 22 extracts, based on the event information, a feature image a11, the image of the portion in which the monitoring target B1 is displayed, from the image A11; a feature image a12, the image of the portion in which the monitoring target B2 is displayed, from the image A12; and a feature image a13, the image of the portion in which the monitoring target B3 is displayed, from the image A13.
FIG. 14 shows the feature image group E1′ composed of the plurality of feature images to which the event information images have been added. As shown in FIG. 14, the addition unit 25 adds an event information image C1 to the monitoring target B1 of the feature image a11, an event information image C2 to the monitoring target B2 of the feature image a12, and an event information image C3 to the monitoring target B3 of the feature image a13.
The display generation unit 16 generates a composite image D by combining the feature image a11, the feature image a12, the feature image a13, and an image in which the background of the monitoring target is displayed. As shown in FIG. 15, the composite image D contains the monitoring target B1 with the event information image C1 added, the monitoring target B2 with the event information image C2 added, and the monitoring target B3 with the event information image C3 added.
Thus, when video data is selected, the data processing device 1 combines the feature images, in which the event information images have been added to the monitoring targets, extracted from the images constituting the selected video data to generate a composite image, so the feature images can be shown consolidated in a single composite image.
By checking the composite image, the user can grasp how the monitoring target has moved without playing back all of the video data, so the work of checking the monitoring target can be completed in a short time.
Next, the procedure by which a plurality of items of video data are selected and a composite image is generated is described. The following description assumes that video data E2 and video data E3 have been selected, which capture monitoring targets conveyed on different belt conveyors, the imaging angle of the imaging device 3 being the same when the monitoring targets are captured. As shown in FIG. 16, the video data E2 is composed of a plurality of images A11, A12, and A13 and captures monitoring targets B11, B12, and B13 being conveyed from a belt conveyor X1 to a belt conveyor X2. The monitoring targets B11, B12, and B13 are the same product conveyed on the belt conveyors X1 and X2.
As shown in FIG. 17, the video data E3 is composed of a plurality of images A21, A22, and A23 and captures monitoring targets B21, B22, and B23 being conveyed from a belt conveyor X3 to the belt conveyor X2. The monitoring targets B21, B22, and B23 are the same product conveyed on the belt conveyors X2 and X3.
FIG. 18 shows a feature image group E2′ composed of a plurality of feature images. As shown in FIG. 18, the extraction unit 22 extracts, based on the event information, a feature image a11, the image of the portion in which the monitoring target B11 is displayed, from the image A11; a feature image a12, the image of the portion in which the monitoring target B12 is displayed, from the image A12; and a feature image a13, the image of the portion in which the monitoring target B13 is displayed, from the image A13.
FIG. 19 shows a feature image group E3′ composed of a plurality of feature images. As shown in FIG. 19, the extraction unit 22 extracts, based on the event information, a feature image a21, the image of the portion in which the monitoring target B21 is displayed, from the image A21; a feature image a22, the image of the portion in which the monitoring target B22 is displayed, from the image A22; and a feature image a23, the image of the portion in which the monitoring target B23 is displayed, from the image A23.
FIG. 20 shows the feature image group E2′ composed of the plurality of feature images to which the event information images have been added. As shown in FIG. 20, the addition unit 25 adds an event information image C11 to the monitoring target B11 of the feature image a11, an event information image C12 to the monitoring target B12 of the feature image a12, and an event information image C13 to the monitoring target B13 of the feature image a13.
FIG. 21 shows the feature image group E3′ composed of the plurality of feature images to which the event information images have been added. As shown in FIG. 21, the addition unit 25 adds an event information image C21 to the monitoring target B21 of the feature image a21, an event information image C22 to the monitoring target B22 of the feature image a22, and an event information image C23 to the monitoring target B23 of the feature image a23.
The display generation unit 16 generates a composite image D by combining the feature images a11, a12, a13, a21, a22, and a23 with an image in which the background of the monitoring target is displayed. As shown in FIG. 22, the composite image D contains the monitoring target B11 with the event information image C11 added, the monitoring target B12 with the event information image C12 added, the monitoring target B13 with the event information image C13 added, the monitoring target B21 with the event information image C21 added, the monitoring target B22 with the event information image C22 added, and the monitoring target B23 with the event information image C23 added.
The above describes the procedure for generating one composite image D from the two items of video data E2 and E3, but one composite image may also be generated from three or more items of video data.
Thus, when a plurality of items of video data are selected, the data processing device 1 combines the feature images, to which the event information images have been added, extracted from the images constituting the selected items of video data to generate a composite image, so the feature images of a plurality of items of video data can be shown consolidated in a single composite image.
By checking the composite image, the user can grasp how the monitoring targets have moved without playing back each of the plurality of items of video data, so the work of checking the monitoring targets can be completed in a short time.
Furthermore, because the data processing device 1 combines the composite images generated from the plurality of items of video data, there is no need to display the plurality of items of video data side by side; the display unit 17 can therefore be made smaller, and the device as a whole can be reduced in scale.
In addition, because the data processing device 1 adds the event information to the feature images extracted from the video data to generate the composite image, the differences and circumstances of each item of video data can be grasped easily.
Moreover, because the event information is displayed on the feature images, the data processing device 1 makes the situation easy to grasp. In the example shown in FIG. 22, all of the event information images are added to the feature images, but this is not a limitation. For example, some or all of the event information images may be hidden on the display screen, and an event information image may be displayed as a pop-up when the cursor, moved across the display screen according to the operation of a pointing device such as a mouse, is superimposed on a feature image.
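The pop-up behaviour could be realized with a simple hit test such as the following; the box representation and the label strings are assumptions for illustration.

```python
def popup_event_info(cursor: tuple, feature_boxes: dict) -> str:
    """Return the event information to pop up for the hovered feature image.

    `cursor` is the (x, y) cursor position and `feature_boxes` maps an
    event label to the (left, top, width, height) box of its feature
    image. Returns "" when the cursor is over no feature image.
    """
    cx, cy = cursor
    for label, (left, top, w, h) in feature_boxes.items():
        if left <= cx < left + w and top <= cy < top + h:
            return label
    return ""

# Example: popup_event_info((120, 80), {"ID:0001 10:23:45": (100, 60, 64, 48)})
# -> "ID:0001 10:23:45"
```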
FIG. 23 shows an example hardware configuration of the data processing device 1. The data processing device 1 is a computer and includes a communication circuit 101, a processor 102, a memory 103, a display unit 104, and an input unit 105. The external device connection unit 12 and the imaging device connection unit 13 shown in FIG. 1 are realized by the communication circuit 101.
The data processing unit 14 and the display generation unit 16 shown in FIG. 1 are realized by the processor 102 executing a program stored in the memory 103. The storage unit 15 shown in FIG. 1 is realized by the memory 103.
The processor 102 is a processing circuit such as a CPU or a microprocessor. The memory 103 is also used as a storage area when the processor 102 executes a program.
The operation unit 11 shown in FIG. 1 is realized by the input unit 105. The display unit 17 shown in FIG. 1 is realized by the display unit 104. The input unit 105 is a keyboard, a mouse, or the like, and the display unit 104 is a display, a monitor, or the like. The display unit 104 and the input unit 105 may be realized by a touch panel in which the two are integrated.
The configuration shown in the above embodiment is one example of the content of the present invention; it can be combined with other known techniques, and part of the configuration can be omitted or changed without departing from the gist of the present invention.
Reference Signs List: 1 data processing device; 2 external device; 3 imaging device; 11 operation unit; 12 external device connection unit; 13 imaging device connection unit; 14 data processing unit; 15 storage unit; 16, 31 display generation unit; 17 display unit; 21 acquisition unit; 22 extraction unit; 23 processing unit; 24 image generation unit; 25 addition unit.

Claims (10)

  1.  A data processing device comprising: a data processing unit that extracts, from images constituting video data, a plurality of feature images in which a monitoring target is displayed; and a display generation unit that combines the plurality of feature images extracted by the data processing unit from the images constituting the video data to generate a composite image.
  2.  The data processing device according to claim 1, wherein the data processing unit comprises: an acquisition unit that acquires event information indicating a state of the monitoring target at the time when an image constituting the video data is captured; and an extraction unit that, based on the event information acquired by the acquisition unit, selects an image constituting the video data and extracts the feature image from the selected image.
  3.  The data processing device according to claim 2, further comprising an operation unit that receives a selection of the event information acquired by the acquisition unit, wherein the extraction unit selects an image constituting the video data based on the event information received by the operation unit and extracts the feature image from the selected image.
  4.  The data processing device according to claim 2 or 3, wherein the acquisition unit acquires processing information for processing the feature image, and the data processing unit comprises a processing unit that processes the feature image based on the processing information.
  5.  The data processing device according to claim 2, 3, or 4, wherein the data processing unit comprises: an image generation unit that converts the event information acquired by the acquisition unit into an image to generate an event information image; and an addition unit that adds the event information image to the feature image.
  6.  The data processing device according to claim 1, wherein the data processing unit comprises: an acquisition unit that acquires event information indicating a state of the monitoring target at the time when an image constituting the video data is captured; an image generation unit that converts the event information acquired by the acquisition unit into an image to generate an event information image; and an addition unit that adds the event information image to the feature image.
  7.  The data processing device according to claim 1, wherein the data processing unit comprises: an acquisition unit that acquires event information indicating a state of the monitoring target at the time when an image constituting the video data is captured; and an image generation unit that converts the event information acquired by the acquisition unit into an image to generate an event information image, and wherein the display generation unit comprises an addition unit that adds the event information image to the feature image included in the composite image.
  8.  The data processing device according to claim 5, 6, or 7, wherein the acquisition unit acquires instruction information that specifies a method for adding the event information image to the feature image, and the addition unit adds the event information image to the feature image based on the instruction information.
  9.  A programmable display comprising the data processing device according to any one of claims 1 to 8.
  10.  A data processing method comprising: a data processing step of extracting, from images constituting video data, a plurality of feature images in which a monitoring target is displayed; and a display generation step of combining the plurality of feature images extracted in the data processing step from the images constituting the video data to generate a composite image.
PCT/JP2017/038058 2017-10-20 2017-10-20 Data processing device, programmable display, and data processing method WO2019077750A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018532182A JP6400260B1 (en) 2017-10-20 2017-10-20 Data processing apparatus, programmable display and data processing method
PCT/JP2017/038058 WO2019077750A1 (en) 2017-10-20 2017-10-20 Data processing device, programmable display, and data processing method
CN201780077812.8A CN110140152B (en) 2017-10-20 2017-10-20 Data processing device, programmable display and data processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/038058 WO2019077750A1 (en) 2017-10-20 2017-10-20 Data processing device, programmable display, and data processing method

Publications (1)

Publication Number Publication Date
WO2019077750A1 (en)

Family

ID=63708678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/038058 WO2019077750A1 (en) 2017-10-20 2017-10-20 Data processing device, programmable display, and data processing method

Country Status (3)

Country Link
JP (1) JP6400260B1 (en)
CN (1) CN110140152B (en)
WO (1) WO2019077750A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7103156B2 (en) * 2018-10-23 2022-07-20 オムロン株式会社 Image data processing equipment, image data processing system, image data processing method, and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10150657A (en) * 1996-09-20 1998-06-02 Hitachi Ltd Mobile object displaying method, display system using it and program recording medium for it
JP2004058737A (en) * 2002-07-25 2004-02-26 National Institute Of Advanced Industrial & Technology Safety monitoring device in station platform
JP2007194928A (en) * 2006-01-19 2007-08-02 Matsushita Electric Ind Co Ltd Remote monitoring device and method
JP2007267294A (en) * 2006-03-30 2007-10-11 Hitachi Ltd Moving object monitoring apparatus using a plurality of cameras
JP2010239992A (en) * 2009-03-31 2010-10-28 Sogo Keibi Hosho Co Ltd Person identification device, person identification method, and person identification program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3826598B2 (en) * 1999-01-29 2006-09-27 株式会社日立製作所 Image monitoring apparatus and recording medium
JP4582793B2 (en) * 2005-12-06 2010-11-17 ソニー株式会社 Image processing apparatus and image processing method
JP4973622B2 (en) * 2007-08-29 2012-07-11 カシオ計算機株式会社 Image composition apparatus and image composition processing program
JP5246286B2 (en) * 2011-03-15 2013-07-24 カシオ計算機株式会社 Image recording apparatus, image recording method, and program
WO2016208070A1 (en) * 2015-06-26 2016-12-29 日立マクセル株式会社 Imaging device and image processing method

Also Published As

Publication number Publication date
JP6400260B1 (en) 2018-10-03
CN110140152B (en) 2020-10-30
JPWO2019077750A1 (en) 2019-11-14
CN110140152A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
US20150249808A1 (en) Monitoring system with monitoring camera
JP2011191171A (en) Image processing device and image processing method
US10991340B2 (en) Image processing apparatus and image processing method
JP2014033248A (en) Image pickup device
JP4741283B2 (en) Integrated circuit device, microcomputer and surveillance camera system
US20150288539A1 (en) Reconfigurable data distribution system
JP2014212462A (en) Imaging device
JP5877329B2 (en) Imaging apparatus and image processing apparatus
JP6400260B1 (en) Data processing apparatus, programmable display and data processing method
US11361408B2 (en) Image processing apparatus, system, image processing method, and non-transitory computer-readable storage medium
EP1080447B1 (en) Image processing inspection apparatus
US20140068514A1 (en) Display controlling apparatus and display controlling method
US20130314444A1 (en) Image data transmitting device, image data receiving device, image data transmitting system, image data transmitting method, image data receiving method, transmission image data, and computer product
US9934593B2 (en) Status determination system
JP2017156928A (en) Video monitoring device, video monitoring method, and video monitoring system
CN103674274A (en) Thermal image recording control device and thermal image recording control method
CN114338874A (en) Image display method of electronic device, image processing circuit and electronic device
JP3716466B2 (en) Image processing device
JPH08125844A (en) Image processing method and image processing system using the method
CN112307882A (en) Image determination device and image determination system
US20040223059A1 (en) Image pickup apparatus, image pickup system, and image pickup method
KR100873445B1 (en) System for recognizing difference of image and method for recognizing difference of image using image system
JP2006234718A (en) X-ray inspecting apparatus
US20230421891A1 (en) Controller
WO2014203289A1 (en) Camera link cable

Legal Events

Code | Title | Description
ENP | Entry into the national phase | Ref document number: 2018532182; Country of ref document: JP; Kind code of ref document: A
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17929282; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 17929282; Country of ref document: EP; Kind code of ref document: A1