WO2020022362A1 - Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium - Google Patents

Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium Download PDF

Info

Publication number
WO2020022362A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
vector
motion detection
motion
image
Application number
PCT/JP2019/028948
Other languages
French (fr)
Japanese (ja)
Inventor
航 鈴木
紀孝 一戸
博臣 竹市
Original Assignee
国立研究開発法人国立精神・神経医療研究センター
国立研究開発法人理化学研究所
Application filed by 国立研究開発法人国立精神・神経医療研究センター, 国立研究開発法人理化学研究所 filed Critical 国立研究開発法人国立精神・神経医療研究センター
Priority to JP2020532430A priority Critical patent/JPWO2020022362A1/en
Publication of WO2020022362A1 publication Critical patent/WO2020022362A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/269: Analysis of motion using gradient-based methods
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • The present invention relates to a motion detection device, a characteristic detection device, a fluid detection device, a motion detection system, a motion detection method, a program, and a recording medium.
  • Optical flow refers to motion in an image, such as that of objects, surfaces, and edges, caused by the relative movement of the observer (camera) and the outside world. It is known to be useful to primates, including humans, for perceiving the observer's own motion, for recognizing the shape, distance, and motion of objects, and for controlling self-motion. It is also useful in engineering fields related to image processing and navigation, where it is used for motion detection, object segmentation, collision-time estimation, brightness, motion-compensated coding, parallax measurement, and the like.
  • In recent years, the detection of moving objects using optical flow has been studied. For example, Patent Document 1 discloses a moving object detection device with improved accuracy in detecting feature points in dark areas.
  • However, the above-described moving object detection device has the problem that its calculation cost is high.
  • An object of the present invention is to provide a motion detection device capable of suitably performing motion detection while suppressing the calculation cost.
  • A motion detection device according to one aspect of the present invention is a motion detection device that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; and a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit.
  • A motion detection system according to one aspect of the present invention is a motion detection system that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit; an image generation unit that generates an image relating to the motion detected by the motion detection unit; and a display unit that displays the image generated by the image generation unit.
  • A motion detection method according to one aspect of the present invention is a motion detection method that performs motion detection on a target image, and includes: an image acquisition step of acquiring the target image; a vector derivation step of deriving a motion-related vector from the target image acquired in the image acquisition step; and a motion detection step of performing motion detection by tracking the vector derived in the vector derivation step.
  • A characteristic detection device according to one aspect of the present invention includes the above motion detection device and a specifying unit that, using the detection result of the motion detection unit, specifies at least one of a characteristic of an object included in the target image and a characteristic of the motion of that object.
  • A fluid detection device according to one aspect of the present invention includes the above motion detection device and a detection unit that detects a fluid contained in the target image using the detection result of the motion detection unit.
  • According to one aspect of the present invention, motion detection can be performed without increasing the calculation cost.
  • FIG. 1 is a block diagram illustrating the main configuration of a motion detection system according to a first embodiment of the present invention.
  • FIG. 2 is a flowchart showing the flow of processing of the motion detection system.
  • FIG. 3 shows images illustrating a method of deriving vectors in a target image: (a) shows a target image for motion detection, (b) shows a curved surface in the three-dimensional space spanned by the pixel coordinates and pixel values of the target image, (c) shows vectors derived in the target image, and (d) shows the inter-frame difference of the vectors of the target image.
  • FIG. 4 is an example of a target image in a moving image that is the target of motion detection.
  • FIG. 5 is an image showing the coordinates of the first vectors derived from the target image of FIG. 4.
  • FIG. 6 is an image showing line segments connecting target pixels derived from the frames of FIG. 4.
  • FIG. 7 is a block diagram illustrating the main configuration of a motion detection system according to a second embodiment of the present invention.
  • FIG. 8 is a block diagram illustrating the configuration of a computer that can be used as a server or a terminal.
  • FIG. 9 is a block diagram illustrating the main configuration of a motion detection system according to a fourth embodiment of the present invention.
  • FIG. 10 is an example of a target image in a moving image that is the target of motion detection.
  • FIG. 11 is an image showing the coordinates of the first vectors derived from the target image of FIG. 10.
  • FIG. 12 is an image obtained by superimposing the image of FIG. 11 on the target image of FIG. 10.
  • FIG. 13 is an example of a target image in a moving image that is the target of motion detection.
  • FIG. 14 is an image showing the coordinates of the first vectors derived from the target image of FIG. 13.
  • FIG. 15 is an image obtained by superimposing the image of FIG. 14 on the target image of FIG. 13.
  • FIG. 16 is an image illustrating the motion detection result of a conventional optical flow detection method for the target image of FIG. 10.
  • FIG. 1 is a diagram illustrating a schematic configuration of a motion detection system 1 according to the present embodiment. As shown in FIG. 1, the motion detection system 1 includes a motion detection device 10 and a display device 20.
  • The motion detection device 10 includes an image acquisition unit 11, a vector derivation unit 12, and a motion detection unit 13.
  • The image acquisition unit 11 acquires, from a moving image captured by an imaging unit (not shown), target images that are the individual frames of the moving image.
  • The image acquisition unit 11 outputs the acquired target images to the vector derivation unit 12.
  • The vector derivation unit 12 sets one of the frames output by the image acquisition unit 11 as the target image and derives a motion-related vector (first vector) from that target image.
  • The vector derivation unit 12 outputs the derived vector of the target image to the motion detection unit 13.
  • The vector derivation unit 12 likewise derives a motion-related vector (first vector) in at least one other frame included in the moving image and outputs that vector to the motion detection unit 13.
  • The at least one other frame is preferably temporally adjacent to the target image (in other words, the frame immediately before or immediately after the target image).
  • However, this does not limit the present embodiment.
  • For example, the at least one other frame may be a frame temporally separated from the target image by a predetermined number of frames. Where no confusion arises, the at least one other frame may also be referred to as a target image. The method by which the vector derivation unit 12 derives motion-related vectors will be described later.
  • The motion detection unit 13 detects the motion of the moving image by tracking the motion-related vectors derived by the vector derivation unit 12. More specifically, the motion detection unit 13 derives a vector (second vector) related to the motion between the target image and a frame temporally adjacent to it (also called an adjacent frame) from the difference between the vector (first vector) derived in the target image and the vector (first vector) derived in the adjacent frame. The motion detection unit 13 detects the motion of the moving image with reference to the derived motion-related vector. The motion detection unit 13 can also track the motion of the moving image by tracking the second vector over a plurality of frames.
  • The motion detection unit 13 may be configured to set a search area according to the characteristics of the vector (second vector) it has derived and to track the motion vector within the set search area.
  • The search area is an area of the search space used for tracking and may take the direction of motion into account. The method for setting the search area will be described later.
  • The motion detection unit 13 may also be configured to select, from the vectors derived by the vector derivation unit 12, vectors related to a specific object and to perform motion detection by tracking the selected vectors.
  • The specific object, which does not limit the present embodiment, is an object whose motion is easy to track, such as the lights in a captured image of an automobile.
  • The method of identifying such specific objects does not limit the present embodiment either.
  • For example, objects can be identified using techniques such as pattern matching or edge detection.
  • The display device 20 includes an image generation unit 21 and a display unit 22.
  • The image generation unit 21 acquires the vectors derived by the vector derivation unit 12 and generates an image based on those vectors. The image generation unit 21 also acquires the motion-related vectors derived by the motion detection unit 13 and generates an image based on those motion-related vectors (an image relating to the motion). The image generated by the image generation unit 21 is output to the display unit 22.
  • The display unit 22 displays the image output from the image generation unit 21.
  • FIG. 2 is a flowchart illustrating an example of a processing flow of the motion detection system 1.
  • In step S1, the image acquisition unit 11 acquires target images, which are the individual frames of the moving image, from the moving image to be subjected to motion detection.
  • In step S2, the vector derivation unit 12 derives a first vector, which is a motion-related vector, from the target image acquired by the image acquisition unit 11.
  • In step S3, the motion detection unit 13 detects the motion of the moving image, that is, the second vector, with reference to the first vector derived by the vector derivation unit 12. As described above, the motion detection unit 13 can also track the motion of the moving image by tracking the second vector over a plurality of frames.
  • In step S4, the image generation unit 21 generates an image based on the motion-related vector with reference to the motion-related vector derived by the motion detection unit 13.
  • In step S5, the display unit 22 displays the image generated by the image generation unit 21.
  • In the above processing flow, the image generation unit 21 may, instead of generating an image based on the motion-related vector derived by the motion detection unit 13, generate an image based on the motion-related vector derived by the vector derivation unit 12.
  • In step S1 described above, the image acquisition unit 11 acquires the moving image to be subjected to motion detection and a moving image in which the contrast of that moving image is inverted.
  • From these acquired moving images, the image acquisition unit 11 acquires target images that are their individual frames.
  • In step S2 described above, the vector derivation unit 12 determines the number of dots at which the motion-related first vector is to be derived in the target image.
  • The vector derivation unit 12 places half of the determined number of dots at random positions in the target image (first frame) that the image acquisition unit 11 acquired from the original moving image, and places the other half at random positions in the target image (first frame) that the image acquisition unit 11 acquired from the contrast-inverted moving image.
  • The vector derivation unit 12 derives a motion-related first vector at each pixel of the target image (first frame) where a dot has been placed.
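  • The dot-initialization step above can be expressed compactly in code. The following is a minimal sketch under stated assumptions: 8-bit grayscale frames, and function and parameter names (init_dots, n_dots) that are illustrative rather than taken from the original text.

```python
import numpy as np

def init_dots(first_frame, n_dots, rng=None):
    """Minimal sketch of the dot setup described above: half of the dots are
    placed at random pixels of the first frame of the original video, the
    other half at random pixels of the first frame of the contrast-inverted
    video. The names and the 8-bit inversion are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    inverted = 255 - first_frame                 # contrast-inverted copy (assumes 8-bit input)
    h, w = first_frame.shape[:2]
    half = n_dots // 2
    # Random (row, col) coordinates for each half of the dot budget.
    dots_orig = np.column_stack([rng.integers(0, h, half),
                                 rng.integers(0, w, half)])
    dots_inv = np.column_stack([rng.integers(0, h, n_dots - half),
                                rng.integers(0, w, n_dots - half)])
    return (first_frame, dots_orig), (inverted, dots_inv)
```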
  • In step S3 described above, the motion detection unit 13 derives the second vector by taking the time differential (inter-frame difference) of the first vector derived by the vector derivation unit 12.
  • The motion detection unit 13 repeats the same operation for the frame adjacent to the target image (the frame immediately after the first frame), and in this way derives second vectors in all frames.
  • After deriving the second vectors for all frames, the motion detection unit 13 performs dot matching between frames. Specifically, the motion detection unit 13 selects an arbitrary dot in the target image (first frame) and searches the adjacent frame for a dot whose vector has the same direction and magnitude as that of the arbitrary dot. Here, the motion detection unit 13 sets, as the search area in the adjacent frame, an area on the straight line that starts at the arbitrary dot in the target image and runs along the dot's vector.
  • The motion detection unit 13 matches the arbitrary dot with the dot detected in the adjacent frame. When a plurality of dots whose vectors have the same direction and magnitude as that of the arbitrary dot are detected in the adjacent frame, the motion detection unit 13 matches the arbitrary dot with the nearest of those dots.
  • The motion detection unit 13 tracks the motion of the moving image by performing this dot matching across all frames. Note that the motion detection unit 13 deletes all dots that could not be matched.
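  • A sketch of this dot-matching step is shown below. It assumes that a second vector has already been derived for each dot; the search length, the vector tolerance, and the one-pixel distance to the search line are illustrative assumptions, not values given in the original text.

```python
import numpy as np

def match_dots(dots_a, vecs_a, dots_b, vecs_b, max_dist=20.0, tol=1e-3):
    """For each dot in frame A, search along the straight line from the dot
    in the direction of its vector for a dot in frame B whose vector has the
    same direction and magnitude; keep the nearest such dot, drop the rest."""
    matches = []
    for i, (p, v) in enumerate(zip(dots_a, vecs_a)):
        norm = np.linalg.norm(v)
        if norm == 0:
            continue
        d = v / norm                              # search direction along the dot's vector
        best, best_t = None, np.inf
        for j, (q, w) in enumerate(zip(dots_b, vecs_b)):
            r = q - p
            t = r @ d                             # signed distance along the search line
            on_line = np.linalg.norm(r - t * d) <= 1.0        # within 1 px of the line
            same_vec = np.linalg.norm(w - v) <= tol * max(norm, 1.0)
            if 0 <= t <= max_dist and on_line and same_vec and t < best_t:
                best, best_t = j, t               # nearest matching dot wins
        if best is not None:
            matches.append((i, best))
    return matches                                # unmatched dots are simply dropped
```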
  • In step S4 described above, the image generation unit 21 generates an image based on the motion-related vectors by visualizing the dots matched by the motion detection unit 13.
  • In step S5 described above, the display unit 22 displays the image generated by the image generation unit 21.
  • Since the motion detection system 1 does not derive motion-related vectors for all pixels in the target image but derives them only at the randomly placed dots, motion detection can be performed without increasing the calculation cost.
  • FIG. 3 shows images illustrating the method of deriving vectors in a target image: (a) of FIG. 3 shows a target image for motion detection, (b) of FIG. 3 shows a curved surface in the three-dimensional space spanned by the pixel coordinates and pixel values of the target image, (c) of FIG. 3 shows the vectors derived in the target image, and (d) of FIG. 3 shows the inter-frame difference of the vectors of the target image.
  • The vector derivation unit 12 derives a first vector indicating the gradient of the pixel value for at least one of the pixels included in the target image.
  • Specifically, the vector derivation unit 12 considers a three-dimensional space consisting of a two-dimensional plane (XY plane) representing the coordinates of the pixels of the target image and a Z axis representing the pixel value of each pixel.
  • In this three-dimensional space, the horizontal position, the vertical position, and the pixel value of each pixel of the target image correspond to the X coordinate, the Y coordinate, and the Z coordinate, respectively.
  • As the pixel value assigned to the Z axis, for example, any one of a luminance value, a color difference value, and a color value such as RGB, or a combination thereof, can be used.
  • The vector derivation unit 12 derives, for each pixel, the normal vector of the curved surface in this three-dimensional space.
  • The vector derivation unit 12 normalizes the derived normal vector and derives the first vector by projecting the normalized normal vector onto the XY plane, as shown in (c) of FIG. 3.
  • The X component nx and the Y component ny of the first vector derived by the vector derivation unit 12 are expressed in terms of the partial derivatives of the pixel value I(x, y, t), where I(x, y, t) is the luminance of the pixel located at coordinates (x, y) at time t.
  • The time t can be read as the frame number (also referred to as the Picture Order Count) assigned to each frame.
  • The partial derivatives with respect to x and y can be read as differences with respect to x and y, respectively.
  • The motion detection unit 13 derives a vector indicating the change between two frames (also referred to as a second vector) from the difference between the first vector of the target image obtained by the vector derivation unit 12 and the first vector of the frame temporally adjacent to the target image.
  • That is, the second vector is derived by taking the time differential (time difference) of the X component nx and the Y component ny of the first vector.
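  • The published text does not reproduce the equations for nx and ny, so the sketch below assumes the construction described above: the normal of the intensity surface z = I(x, y) is taken as (-∂I/∂x, -∂I/∂y, 1), normalized, and projected onto the XY plane. The sign convention and function names are assumptions.

```python
import numpy as np

def first_vector(frame):
    """First vector: the unit normal of the intensity surface z = I(x, y),
    projected onto the XY plane, assuming the normal (-Ix, -Iy, 1), so that
    nx = -Ix / sqrt(Ix**2 + Iy**2 + 1) and ny = -Iy / sqrt(Ix**2 + Iy**2 + 1)."""
    I = frame.astype(np.float64)
    Iy, Ix = np.gradient(I)                    # partial derivatives read as pixel differences
    denom = np.sqrt(Ix**2 + Iy**2 + 1.0)       # length of the un-normalized normal
    return -Ix / denom, -Iy / denom            # (nx, ny) per pixel

def second_vector(frame_t, frame_t1):
    """Second vector: the inter-frame (time) difference of the first vector."""
    nx0, ny0 = first_vector(frame_t)
    nx1, ny1 = first_vector(frame_t1)
    return nx1 - nx0, ny1 - ny0
```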
  • FIG. 4 is an image showing target images in a moving image on which motion detection was performed.
  • (a) to (c) of FIG. 4 are temporally consecutive frames of the moving image.
  • FIG. 5 is an image showing some of the first vectors derived from the target images of FIG. 4.
  • (a) to (c) of FIG. 5 are images showing the coordinates of the first vectors derived from the target images of (a) to (c) of FIG. 4, respectively.
  • In this example, first vectors are derived for some, but not all, of the pixels included in each target image.
  • FIG. 6 is an image depicting line segments connecting the target pixel P1 derived from the first frame and the target pixel P2 derived from the second frame in FIG. 4.
  • (a) of FIG. 6 shows a line segment derived from (a) and (b) of FIG. 5.
  • (b) of FIG. 6 shows a line segment derived from (b) and (c) of FIG. 5.
  • In this way, the motion detection system 1 can suitably perform motion detection by tracking the motion-related vectors derived by the motion detection unit 13.
  • Specifically, the motion detection unit 13 sets a search area in the direction along the second vector V1 derived for the target pixel P1 in the target image N1, specifies a pixel P2 corresponding to the target pixel within the search area in a frame N2 adjacent to the target image (for example, the frame immediately after it), and calculates a second vector V2 for the specified pixel.
  • The motion detection unit 13 tracks the second vector by repeating this operation.
  • Another example of a specific method of setting the search area in the motion detection unit 13 is as follows.
  • First, the motion detection unit 13 calculates first vectors in each of the target image N1 and the frame N2.
  • Next, the motion detection unit 13 derives second vectors for the target pixels in the target image N1 and the frame N2.
  • The motion detection unit 13 then sets a search area in the direction along the second vector V1 derived for the target pixel P1 that is the subject of vector tracking.
  • Finally, the motion detection unit 13 specifies, within the search area in the frame N2, the second vector V2 derived for the target pixel P2 corresponding to the target pixel P1, and thereby tracks the vector.
  • The search area may be set as a linear area along the second vector V1 starting from the target pixel P1 of the target image, as a band-shaped area obtained by thickening that linear area in the direction perpendicular to the second vector, or as an area of some other shape.
  • The search area may also be set, for example, using the range of a function whose domain is the second vector.
  • Furthermore, the search area is not limited to a continuous, connected, or linear one, and may be discontinuous, disconnected, or non-linear.
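  • As one concrete realization of such a search area, the sketch below builds a band-shaped boolean mask along the second vector; the segment length and half-width are illustrative parameters, not values from the original text.

```python
import numpy as np

def band_search_area(shape, p, v, length=20.0, half_width=2.0):
    """Boolean mask of a band-shaped search area: the line segment starting
    at target pixel p = (x, y) along second vector v = (vx, vy), thickened by
    half_width pixels in the perpendicular direction."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.asarray(v, float)
    d = d / (np.linalg.norm(d) + 1e-12)        # unit vector along the motion
    rx, ry = xs - p[0], ys - p[1]
    along = rx * d[0] + ry * d[1]              # coordinate along the band
    across = np.abs(-rx * d[1] + ry * d[0])    # perpendicular distance to the line
    return (along >= 0) & (along <= length) & (across <= half_width)
```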
  • The first vector may also be derived between adjacent frames.
  • The target pixel in the adjacent frame can be specified based on pixel characteristics of the target pixel (for example, luminance, color difference, or the difference in pixel value from adjacent pixels).
  • Since the motion detection system according to the present invention tracks the vector after setting a search area, motion detection can be performed without increasing the calculation cost.
  • The specific configuration of the classification and learning processing for motion detection by the motion detection unit 13 does not limit the present embodiment.
  • For example, any one of the following machine learning methods, or a combination thereof, may be used:
  • SVM (Support Vector Machine)
  • ILP (Inductive Logic Programming)
  • GP (Genetic Programming)
  • BN (Bayesian Network)
  • NN (Neural Network)
  • When a neural network is used, 3D data may be processed in advance for input to the neural network.
  • For such preprocessing, a technique such as data augmentation can be used.
  • A convolutional neural network (CNN) including convolution processing may also be used.
  • That is, one or more of the layers included in the neural network may be convolution layers that perform a convolution operation, applying a filter operation (product-sum operation) to the data input to that layer (a minimal sketch follows this list).
  • Processing such as padding may be used in combination, and an appropriately set stride width may be employed.
  • A multilayer or super-multilayer neural network having several tens to several thousands of layers may also be used.
  • The machine learning used in the above processing may be supervised learning or unsupervised learning.
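  • To make the filter (product-sum) operation of a convolution layer concrete, the following is a minimal, framework-free sketch; the single-channel input, zero padding, and parameter names are illustrative assumptions.

```python
import numpy as np

def conv2d(x, kernel, stride=1, padding=0):
    """Minimal single-channel convolution layer: slide the kernel over the
    (optionally zero-padded) input with the given stride and take the
    product-sum at each position."""
    if padding:
        x = np.pad(x, padding)                   # zero padding on both axes
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # product-sum (filter) operation
    return out
```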
  • The image acquisition unit 11, the vector derivation unit 12, the motion detection unit 13, the image generation unit 21, and the display unit 22 may be provided in separate devices, and information may be exchanged between these devices by wired or wireless communication.
  • The motion detection system 1a includes a terminal device (imaging device) 10a, a display device 20, and a server (motion detection device) 30.
  • The terminal device 10a includes the image acquisition unit 11 and a communication unit 14.
  • The display device 20 includes the image generation unit 21, the display unit 22, and a communication unit 23.
  • The server 30 may be configured to include a communication unit 31, a vector derivation unit 32, and a motion detection unit 33.
  • The operations of the vector derivation unit 32 and the motion detection unit 33 are the same as those of the vector derivation unit 12 and the motion detection unit 13 described in the first embodiment, respectively.
  • The server may be configured to manage the motion detection results for a plurality of image sets together with identification numbers of the image sets.
  • For example, a configuration may be employed in which an image set (video data) is managed for each subject ID. Time information indicating when each image was acquired may be further linked to it.
  • The server may further include a learning unit that performs machine learning based on the motion detection results.
  • For example, the learning unit functions as a learning device that receives the motion information detected by the motion detection unit 13 (or the first vector, the second vector, or the like), the ID of the subject, and the time information as inputs, and outputs classification information on the movement of the subject's body.
  • The control blocks of the motion detection device 10 (the image acquisition unit 11, the vector derivation unit 12, and the motion detection unit 13) may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software.
  • In the latter case, each of the motion detection device 10, the display device 20, an output device 20b described later (see Embodiment 4), and the server 30 can be configured using a computer (electronic computer) as shown in FIG. 8.
  • The motion detection device 10, the display device 20, the output device 20b, and the server 30 may each be configured as separate devices, or at least some of them may be configured as an integrated device.
  • For example, the motion detection device 10 and the output device 20b may be configured as an integrated device.
  • FIG. 8 is a block diagram illustrating the configuration of a computer 910 that can be used as the motion detection device 10, the display device 20, the output device 20b, or the server 30.
  • The computer 910 includes an arithmetic device 912, a main storage device 913, an auxiliary storage device 914, an input/output interface 915, and a communication interface 916, connected to one another via a bus 911.
  • The arithmetic device 912, the main storage device 913, and the auxiliary storage device 914 may be, for example, a CPU, a RAM (random access memory), and a storage such as a hard disk drive or flash memory, respectively.
  • The input/output interface 915 is connected to an input device 920 with which the user inputs various information to the computer 910 and to an output device 930 with which the computer 910 outputs various information to the user.
  • The input device 920 and the output device 930 may be built into the computer 910 or may be connected (externally attached) to the computer 910.
  • For example, the input device 920 may be a keyboard, a mouse, a touch sensor, or the like, and the output device 930 may be a display, a printer, a speaker, or the like.
  • A device having the functions of both the input device 920 and the output device 930, such as a touch panel in which a touch sensor and a display are integrated, may also be used.
  • The communication interface 916 is an interface through which the computer 910 communicates with external devices.
  • The recording medium that records information such as the program stored in the auxiliary storage device 914 may be a computer-readable "non-transitory tangible medium", such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit.
  • The main storage device 913 may be omitted as long as the computer can execute the program recorded on the recording medium without loading the program into the main storage device 913.
  • There may be one of each of the above devices (the arithmetic device 912, the main storage device 913, the auxiliary storage device 914, the input/output interface 915, the communication interface 916, the input device 920, and the output device 930), or there may be more than one of any of them.
  • The program may also be obtained from outside the computer 910.
  • In that case, the program may be obtained via an arbitrary transmission medium (such as a communication network or a broadcast wave).
  • The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • The motion of the target image to be detected includes not only the motion of a rigid body such as an automobile, a person, an animal, or a ball, but also the motion of a fluid such as a gas (for example, smoke) or a liquid (for example, water or oil).
  • That is, the object subjected to motion detection in each of the above embodiments and in the present embodiment includes both rigid bodies and fluids.
  • In the present embodiment, an example will be described in which the target image contains a fluid such as a gas or a liquid and the motion detection system detects the movement of the fluid.
  • FIG. 9 is a block diagram showing the schematic configuration of a motion detection system 1b (an example of a characteristic detection device and a fluid detection device) according to the present embodiment.
  • The motion detection system 1b includes a motion detection device 10 and an output device 20b.
  • The operations of the image acquisition unit 11, the vector derivation unit 12, and the motion detection unit 13 of the motion detection device 10 are the same as those described in the first embodiment.
  • The output device 20b includes an image generation unit 21, a display unit 22, a specifying unit 24, a detection unit 25, and an output unit 26.
  • The operations of the image generation unit 21 and the display unit 22 of the output device 20b are the same as those described in the first embodiment.
  • The specifying unit 24 analyzes the detection result of the motion detection unit 13 and specifies at least one of a characteristic of an object included in the target image and a characteristic of the motion of that object.
  • The object included in the target image may be a rigid body such as a person, an animal, a car, or a ball, or may be a fluid such as a gas or a liquid.
  • A characteristic of an object refers to a characteristic property or appearance of the object.
  • A characteristic of an object is, for example, a person's facial expression, clothing, the shape of the object, or the viscosity of the object.
  • A characteristic of the motion of an object is, for example, the predicted future moving direction or moving speed of the object.
  • When specifying a person's facial expression as a characteristic, the specifying unit 24 specifies the facial expression and/or a change in the facial expression based on the image analysis result of the target image and the detection result of the motion detection unit 13.
  • When specifying clothing as a characteristic, the specifying unit 24 specifies the shape of the clothing and/or the movement of the clothing based on the image analysis result of the target image and the detection result of the motion detection unit 13.
  • The specifying unit 24 may also infer the material of the clothing from the specified shape and/or movement of the clothing.
  • When specifying a characteristic of the motion of an object, the specifying unit 24 analyzes the motion detected by the motion detection unit 13 and predicts the motion of the object included in the target image. More specifically, for example, the specifying unit 24 may use the moving direction and moving speed indicated by the second vector derived by the motion detection unit 13 as the predicted moving direction and moving speed of the object. Alternatively, for example, the specifying unit 24 may predict the motion of the object by analyzing the change over time of the moving direction and moving speed indicated by the second vector derived by the motion detection unit 13.
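  • A minimal sketch of this prediction is given below, under the assumption that the second vector can be read as a per-frame displacement and that its change over time can be treated as a constant acceleration; the function names and both motion models are illustrative choices, not the patented method itself.

```python
import numpy as np

def predict_position(p, v2, n_frames=1):
    """Use the direction and magnitude of the second vector v2 as the
    object's per-frame displacement and extrapolate linearly from p."""
    return np.asarray(p, float) + n_frames * np.asarray(v2, float)

def predict_with_trend(p, v2_prev, v2_now, n_frames=1):
    """Also use the change of the second vector over time, treated here as
    a constant per-frame acceleration (an illustrative modeling choice)."""
    v = np.asarray(v2_now, float)
    a = v - np.asarray(v2_prev, float)
    return np.asarray(p, float) + n_frames * v + 0.5 * n_frames**2 * a
```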
  • When the object is a fluid, the specifying unit 24 can measure the viscosity of the fluid using the detection result of the motion detection unit 13.
  • The object whose viscosity is measured is, for example, cement or ice cream.
  • The specifying unit 24 acquires the second vector derived by the motion detection unit 13 and measures the viscosity of the object based on the moving speed of the object indicated by the second vector.
  • The detection unit 25 detects the fluid included in the target image using the detection result of the motion detection unit 13.
  • For example, the detection unit 25 may analyze the detection result of the motion detection unit 13 and specify the region where motion is detected as the region where the object is located.
  • The output unit 26 outputs information indicating the specification result of the specifying unit 24 and the detection result of the detection unit 25.
  • The output of information by the output unit 26 may be performed, for example, by outputting data to an externally connected device, or by transmitting data to another device via a communication network.
  • The information may also be output, for example, as data representing an image sent to the display unit, or as sound or voice.
  • FIG. 10 is a diagram illustrating target images in a moving image on which motion detection was performed.
  • Here, a moving image capturing a gas such as smoke is used.
  • (a) to (d) of FIG. 10 are frames included in the moving image.
  • (b) of FIG. 10 is a frame a fixed time (for example, 20 seconds) after the frame of (a) of FIG. 10.
  • (c) of FIG. 10 is a frame a fixed time (for example, 20 seconds) after the frame of (b) of FIG. 10.
  • (d) of FIG. 10 is a frame a fixed time (for example, 20 seconds) after the frame of (c) of FIG. 10.
  • As shown in FIG. 10, the appearance of the captured gas such as smoke gradually changes over time.
  • FIG. 11 is an image showing the coordinates of the first vectors derived from the target images of FIG. 10. (a) to (d) of FIG. 11 are images showing the coordinates of the first vectors derived in the target images of (a) to (d) of FIG. 10, respectively.
  • In this example, the vector derivation unit 12 derives first vectors for all pixels included in each target image, performs the dot matching process between frames, and extracts the first vectors for which matching succeeds. That is, in the example of FIG. 11, the coordinates of the first vectors for which matching succeeded are displayed, and the coordinates of the first vectors for which matching did not succeed are not displayed. Note that the pixels for which the vector derivation unit 12 derives first vectors need not be all of the pixels included in the target image.
  • That is, the vector derivation unit 12 may derive first vectors from only some of the pixels included in the target image.
  • The image generation unit 21 generates, as a moving image corresponding to the moving image subject to motion detection, a moving image representing the change over time of the coordinates of the first vectors derived by the vector derivation unit 12.
  • (a) to (d) of FIG. 11 show examples of frames included in this moving image. This moving image allows the user to grasp the movement of a fluid such as smoke.
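  • A minimal sketch of generating such a moving image is given below: each frame is rendered as a binary image whose set pixels are the coordinates of the matched first vectors; the input format and names are illustrative assumptions.

```python
import numpy as np

def render_vector_coords(frames_dots, shape):
    """For each frame, render a binary image whose white pixels are the
    coordinates of the matched first vectors; the list of such images forms
    the moving image described above."""
    out = []
    for dots in frames_dots:                 # dots: (N, 2) array of (row, col)
        img = np.zeros(shape, dtype=np.uint8)
        if len(dots):
            r, c = dots[:, 0], dots[:, 1]
            img[r, c] = 255                  # mark each vector coordinate
        out.append(img)
    return out
```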
  • FIG. 12 shows images obtained by superimposing the images of FIG. 11 on the target images of FIG. 10. (a) to (d) of FIG. 12 are obtained by superimposing the images of (a) to (d) of FIG. 11 on the target images of (a) to (d) of FIG. 10, respectively.
  • FIG. 16 shows images illustrating, for comparison, the motion detection result for the target images of FIG. 10 obtained by a conventional optical flow detection method.
  • (a) to (d) of FIG. 16 are images showing the motion detected from the target images of (a) to (d) of FIG. 10, respectively.
  • According to the present embodiment, the accuracy of motion detection for a fluid such as smoke is improved.
  • FIG. 13 is a diagram illustrating target images in another moving image on which motion detection was performed.
  • (a) to (d) of FIG. 13 are frames included in the moving image.
  • The relationship between the frames shown in (a) to (d) is the same as in FIG. 10 described above.
  • FIG. 14 is an image showing the coordinates of the first vectors derived from the target images of FIG. 13.
  • FIG. 15 is an image in which the images representing the coordinates of the first vectors derived from the respective target images are superimposed on the target images of FIG. 13.
  • In this way, the motion detection unit 13 can easily track the motion of the moving image.
  • The output device 20b, or a device (not shown) that has received the data output from the output device 20b, provides various services to the user using the detected motion.
  • The services provided are, for example, fluid viscosity measurement, automatic driving control, watching services for children and the elderly, or evacuation support services in the event of a disaster.
  • For fluid viscosity measurement, the output device 20b outputs information indicating the viscosity of the fluid measured by the specifying unit 24.
  • For automatic driving control, the output device 20b analyzes, for example, the movement of an object (a person, a bicycle, a car, a ball, etc.) predicted by the specifying unit 24, and when a dangerous situation (such as a collision) is predicted, it may output a warning sound or a warning message, or display the warning message on the display unit 22.
  • For watching services, the output device 20b watches over the protected person by, for example, analyzing the motion of the person detected by the motion detection unit 13 or analyzing the facial expression or the like of the protected person specified by the specifying unit 24.
  • The output device 20b may output a warning sound or a warning message, or display a warning message on the display unit 22, when the analysis result suggests that the protected person is in a dangerous situation.
  • Although the captured moving image is the target of motion detection, the moving image itself need not be disclosed. Therefore, the motion detection system 1b can provide a watching service while protecting the privacy of the protected person.
  • As a modification, the motion detection unit 13 may select an arbitrary dot in the target image and search the adjacent frame for a dot whose vector differs from that of the arbitrary dot in direction and magnitude by no more than a predetermined condition (for example, a predetermined value).
  • As summarized above, a motion detection device according to one aspect of the present invention is a motion detection device that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the acquired target image; and a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit.
  • According to the above configuration, motion detection can be performed without increasing the calculation cost.
  • In the motion detection device, the motion detection unit sets a search area according to the characteristics of the vector it has derived and performs the tracking within the set search area.
  • According to the above configuration, motion detection can be performed without increasing the calculation cost.
  • In the motion detection device, the motion detection unit sets the search area in the direction along the vector it has derived.
  • According to the above configuration, motion detection can be performed without increasing the calculation cost.
  • In the motion detection device, the vector derivation unit derives a first vector indicating the gradient of the pixel value for at least one of the pixels included in the target image, and derives this first vector as the motion-related vector.
  • According to the above configuration, a suitable vector can be referred to when performing motion detection.
  • In the motion detection device, the vector derivation unit derives the first vector by projecting, onto the XY plane, the normal vector of the curved surface in the three-dimensional space in which the horizontal position, the vertical position, and the pixel value of each pixel of the target image correspond to the X coordinate, the Y coordinate, and the Z coordinate, respectively.
  • According to the above configuration, a suitable vector can be referred to when performing motion detection.
  • In the motion detection device, the vector derivation unit derives the first vector by normalizing the normal vector and projecting it onto the XY plane.
  • According to the above configuration, a suitable vector can be referred to when performing motion detection.
  • A motion detection system according to one aspect of the present invention is a motion detection system that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit; an image generation unit that generates an image relating to the motion detected by the motion detection unit; and a display unit that displays the image generated by the image generation unit.
  • According to the above configuration, motion detection can be performed without increasing the calculation cost.
  • A motion detection method according to one aspect of the present invention is a motion detection method for detecting the motion of a target image, and includes: an image acquisition step of acquiring the target image; a vector derivation step of deriving a motion-related vector from the target image acquired in the image acquisition step; and a motion detection step of performing motion detection by tracking the vector derived in the vector derivation step.
  • According to the above configuration, motion detection can be performed without increasing the calculation cost.
  • A characteristic detection device according to one aspect of the present invention includes the above motion detection device and a specifying unit that, using the detection result of the motion detection unit, specifies at least one of a characteristic of an object included in the target image and a characteristic of the movement of that object.
  • In the characteristic detection device, the specifying unit predicts the motion of the object using the detection result of the motion detection unit.
  • According to the above configuration, the motion of the object can be predicted without increasing the calculation cost.
  • In the characteristic detection device, the target image includes an image of a fluid, and the specifying unit specifies at least one of a characteristic of the fluid and a characteristic of the movement of the fluid.
  • In the characteristic detection device, the specifying unit specifies the viscosity of the fluid using the detection result of the motion detection unit.
  • According to the above configuration, the viscosity of the object can be measured without increasing the calculation cost.
  • A fluid detection device according to one aspect of the present invention includes the above motion detection device and a detection unit that detects a fluid contained in the target image using the detection result of the motion detection unit.
  • According to the above configuration, the fluid can be detected without increasing the calculation cost.
  • The motion detection device, the characteristic detection device, and the fluid detection device may each be realized by a computer. In this case, a control program that causes a computer to realize the motion detection device, the characteristic detection device, or the fluid detection device by operating the computer as each unit included in the device, and a computer-readable recording medium on which the control program is recorded, also fall within the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a motion detection device with which motion detection can be carried out without increasing the computation cost. This motion detection device is equipped with an image acquisition unit for acquiring a subject image, a vector derivation unit for deriving a motion-related vector from the subject image acquired by the image acquisition unit, and a motion detection unit for carrying out motion detection by tracking a vector derived by the vector derivation unit.

Description

Motion detection device, characteristic detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium
The present invention relates to a motion detection device, a characteristic detection device, a fluid detection device, a motion detection system, a motion detection method, a program, and a recording medium.
Optical flow refers to motion in an image, such as that of objects, surfaces, and edges, caused by the relative movement of the observer (camera) and the outside world. It is known to be useful to primates, including humans, for perceiving the observer's own motion, for recognizing the shape, distance, and motion of objects, and for controlling self-motion. It is also useful in engineering fields related to image processing and navigation, where it is used for motion detection, object segmentation, collision-time estimation, brightness, motion-compensated coding, parallax measurement, and the like.
In recent years, the detection of moving objects using optical flow has been studied. For example, Patent Document 1 discloses a moving object detection device with improved accuracy in detecting feature points in dark areas.
[Patent Document 1] Japanese Patent Application Laid-Open No. 2015-076633 (published April 20, 2015)
However, the above-described moving object detection device has the problem that its calculation cost is high.
An object of the present invention is to provide a motion detection device capable of suitably performing motion detection while suppressing the calculation cost.
In order to solve the above problem, a motion detection device according to one aspect of the present invention is a motion detection device that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; and a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit.
In order to solve the above problem, a motion detection system according to one aspect of the present invention is a motion detection system that performs motion detection on a target image, and includes: an image acquisition unit that acquires the target image; a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit; an image generation unit that generates an image relating to the motion detected by the motion detection unit; and a display unit that displays the image generated by the image generation unit.
In order to solve the above problem, a motion detection method according to one aspect of the present invention is a motion detection method that performs motion detection on a target image, and includes: an image acquisition step of acquiring the target image; a vector derivation step of deriving a motion-related vector from the target image acquired in the image acquisition step; and a motion detection step of performing motion detection by tracking the vector derived in the vector derivation step.
In order to solve the above problem, a characteristic detection device according to one aspect of the present invention includes the above motion detection device and a specifying unit that, using the detection result of the motion detection unit, specifies at least one of a characteristic of an object included in the target image and a characteristic of the motion of that object.
In order to solve the above problem, a fluid detection device according to one aspect of the present invention includes the above motion detection device and a detection unit that detects a fluid contained in the target image using the detection result of the motion detection unit.
According to one aspect of the present invention, motion detection can be performed without increasing the calculation cost.
FIG. 1 is a block diagram illustrating the main configuration of a motion detection system according to a first embodiment of the present invention.
FIG. 2 is a flowchart showing the flow of processing of the motion detection system.
FIG. 3 shows images illustrating a method of deriving vectors in a target image: (a) shows a target image for motion detection, (b) shows a curved surface in the three-dimensional space spanned by the pixel coordinates and pixel values of the target image, (c) shows vectors derived in the target image, and (d) shows the inter-frame difference of the vectors of the target image.
FIG. 4 is an example of a target image in a moving image that is the target of motion detection.
FIG. 5 is an image showing the coordinates of the first vectors derived from the target image of FIG. 4.
FIG. 6 is an image showing line segments connecting target pixels derived from the frames of FIG. 4.
FIG. 7 is a block diagram illustrating the main configuration of a motion detection system according to a second embodiment of the present invention.
FIG. 8 is a block diagram illustrating the configuration of a computer that can be used as a server or a terminal.
FIG. 9 is a block diagram illustrating the main configuration of a motion detection system according to a fourth embodiment of the present invention.
FIG. 10 is an example of a target image in a moving image that is the target of motion detection.
FIG. 11 is an image showing the coordinates of the first vectors derived from the target image of FIG. 10.
FIG. 12 is an image obtained by superimposing the image of FIG. 11 on the target image of FIG. 10.
FIG. 13 is an example of a target image in a moving image that is the target of motion detection.
FIG. 14 is an image showing the coordinates of the first vectors derived from the target image of FIG. 13.
FIG. 15 is an image obtained by superimposing the image of FIG. 14 on the target image of FIG. 13.
FIG. 16 is an image illustrating the motion detection result of a conventional optical flow detection method for the target image of FIG. 10.
[Embodiment 1]
Hereinafter, an embodiment of the present invention will be described in detail.
(Motion detection system)
FIG. 1 is a diagram illustrating the schematic configuration of a motion detection system 1 according to the present embodiment. As shown in FIG. 1, the motion detection system 1 includes a motion detection device 10 and a display device 20.
(Motion detection device)
The motion detection device 10 includes an image acquisition unit 11, a vector derivation unit 12, and a motion detection unit 13.
The image acquisition unit 11 acquires, from a moving image captured by an imaging unit (not shown), target images that are the individual frames of the moving image. The image acquisition unit 11 outputs the acquired target images to the vector derivation unit 12.
The vector derivation unit 12 sets one of the frames output by the image acquisition unit 11 as the target image and derives a motion-related vector (first vector) from that target image. The vector derivation unit 12 outputs the derived vector of the target image to the motion detection unit 13. The vector derivation unit 12 likewise derives a motion-related vector (first vector) in at least one other frame included in the moving image and outputs that vector to the motion detection unit 13. The at least one other frame is preferably temporally adjacent to the target image (in other words, the frame immediately before or immediately after the target image), but this does not limit the present embodiment. For example, the at least one other frame may be a frame temporally separated from the target image by a predetermined number of frames. Where no confusion arises, the at least one other frame may also be referred to as a target image. The method by which the vector derivation unit 12 derives motion-related vectors will be described later.
The motion detection unit 13 detects the motion of the moving image by tracking the motion-related vectors derived by the vector derivation unit 12. More specifically, the motion detection unit 13 derives a vector (second vector) related to the motion between the target image and a frame temporally adjacent to it (also called an adjacent frame) from the difference between the vector (first vector) derived in the target image and the vector (first vector) derived in the adjacent frame. The motion detection unit 13 detects the motion of the moving image with reference to the derived motion-related vector. The motion detection unit 13 can also track the motion of the moving image by tracking the second vector over a plurality of frames.
The motion detection unit 13 may be configured to set a search area according to the characteristics of the vector (second vector) it has derived and to track the motion vector within the set search area. The search area is an area of the search space used for tracking and may take the direction of motion into account. The method for setting the search area will be described later.
The motion detection unit 13 may also be configured to select, from the vectors derived by the vector derivation unit 12, vectors related to a specific object and to perform motion detection by tracking the selected vectors. The specific object, which does not limit the present embodiment, is an object whose motion is easy to track, such as the lights in a captured image of an automobile. The method of identifying such specific objects does not limit the present embodiment either; for example, objects can be identified using techniques such as pattern matching or edge detection.
 (Display device)
 The display device 20 includes an image generation unit 21 and a display unit 22.
 The image generation unit 21 acquires the vectors derived by the vector derivation unit 12 and generates an image based on those vectors. The image generation unit 21 also acquires the motion-related vectors derived by the motion detection unit 13 and generates an image based on them (an image representing the motion). The image generated by the image generation unit 21 is output to the display unit 22.
 The display unit 22 displays the image output from the image generation unit 21.
 (Motion detection method)
 FIG. 2 is a flowchart illustrating an example of the processing flow of the motion detection system 1.
 <Step S1>
 The image acquisition unit 11 acquires target images, which are the individual frames of the moving image subject to motion detection.
 <Step S2>
 The vector derivation unit 12 derives, from the target image acquired by the image acquisition unit 11, a first vector, which is a vector related to motion.
 <Step S3>
 The motion detection unit 13 detects the motion of the moving image, that is, the second vector, with reference to the first vector derived by the vector derivation unit 12. As described above, the motion detection unit 13 can also track the motion of the moving image by tracking the second vector over a plurality of frames.
 <Step S4>
 The image generation unit 21 generates an image representing the motion-related vector, with reference to the motion-related vector derived by the motion detection unit 13.
 <Step S5>
 The display unit 22 displays the image generated by the image generation unit 21.
 In the processing flow described above, the image generation unit 21 may, instead of generating an image from the motion-related vector derived by the motion detection unit 13, generate an image from the motion-related vector derived by the vector derivation unit 12.
 A more specific motion detection method is described below by way of example.
 In step S1 described above, the image acquisition unit 11 acquires the moving image subject to motion detection and a moving image obtained by inverting the contrast of that moving image. From these two moving images, the image acquisition unit 11 acquires target images, which are their individual frames.
 In step S2 described above, the vector derivation unit 12 determines the number of dots at which the first vector related to motion is to be derived in the target image. The vector derivation unit 12 places half of the determined number of dots at random positions in the target image (first frame) acquired from the original moving image, and places the remaining half at random positions in the target image (first frame) acquired from the contrast-inverted moving image. The vector derivation unit 12 then derives the first vector related to motion at the pixels of the target image (first frame) where dots have been placed.
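 As an illustrative aid, the following is a minimal Python/NumPy sketch of one way such dot placement could be implemented; the dot count, the function name, and the 8-bit contrast inversion (255 minus the frame) are assumptions made for the example, not values fixed by this disclosure.

    import numpy as np

    def init_dots(frame, n_dots=1000, rng=None):
        # Place n_dots/2 dots at random pixels of the frame and n_dots/2 at
        # random pixels of its contrast-inverted copy (hypothetical helper).
        rng = np.random.default_rng() if rng is None else rng
        h, w = frame.shape[:2]
        inverted = 255 - frame  # contrast inversion for 8-bit images (assumption)
        half = n_dots // 2
        dots_orig = np.stack([rng.integers(0, h, half), rng.integers(0, w, half)], axis=1)
        dots_inv = np.stack([rng.integers(0, h, half), rng.integers(0, w, half)], axis=1)
        return (frame, dots_orig), (inverted, dots_inv)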
 In step S3 described above, the motion detection unit 13 derives the second vector by taking the time derivative (inter-frame difference) of the first vector derived by the vector derivation unit 12.
 The motion detection unit 13 repeats the same operation for the frame adjacent to the target image (the frame immediately after the first frame), and derives the second vector in all frames.
 After deriving the second vector in all frames, the motion detection unit 13 performs dot matching between frames. Specifically, the motion detection unit 13 selects an arbitrary dot in the target image (first frame) and searches the adjacent frame for a dot whose vector has the same direction and magnitude. Here, the motion detection unit 13 sets, as the search area in the adjacent frame, the region on the straight line that starts at the arbitrary dot in the target image and runs along that dot's vector.
 The motion detection unit 13 matches the arbitrary dot with the dot detected in the adjacent frame. If a plurality of dots whose vectors have the same direction and magnitude as the arbitrary dot are detected in the adjacent frame, the motion detection unit 13 matches the arbitrary dot with the detected dot closest to it.
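 A minimal sketch of this matching rule follows, assuming dots are given as (y, x) positions with one 2-D vector per dot; the tolerance parameter and function name are illustrative.

    import numpy as np

    def match_dot(dot_pos, dot_vec, next_dots, next_vecs, tol=1e-6):
        # Among the dots of the adjacent frame, find those whose vector has the
        # same direction and magnitude as dot_vec, and return the nearest one.
        same = np.all(np.abs(next_vecs - dot_vec) <= tol, axis=1)
        if not same.any():
            return None  # unmatched dots are later deleted
        candidates = next_dots[same]
        dists = np.linalg.norm(candidates - np.asarray(dot_pos), axis=1)
        return candidates[np.argmin(dists)]  # closest matching dot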
 The motion detection unit 13 tracks the motion of the moving image by performing dot matching in all frames. The motion detection unit 13 deletes all dots that could not be matched.
 The image generation unit 21 generates an image of the motion-related vectors by visualizing the dots matched by the motion detection unit 13. The display unit 22 displays the image generated by the image generation unit 21.
 In the specific example above, the motion detection system 1 does not derive motion-related vectors at every pixel of the target image but only at the randomly placed dots, so motion detection can be performed without increasing the calculation cost.
 (Vector derivation method)
 Next, the method of deriving the motion-related vectors described above is explained in more detail with reference to FIG. 3. FIG. 3 shows the vector derivation method for a target image: (a) shows the target image for motion detection, (b) shows the curved surface spanned in three-dimensional space by the pixel coordinates and pixel values of the target image, (c) shows the vectors derived in the target image, and (d) shows the inter-frame difference of the vectors of the target image.
 (Derivation step 1)
 First, in this embodiment, the vector derivation unit 12 derives, for at least one of the pixels included in the target image, a first vector indicating the gradient of the pixel value.
 More specifically, the vector derivation unit 12 considers, as shown in FIG. 3(b), the curved surface representing the pixel values of the target image in the three-dimensional space spanned by the two-dimensional plane (XY plane) of pixel coordinates and the Z axis of pixel values. That is, the horizontal position, vertical position, and pixel value of each pixel of the target image correspond to the X, Y, and Z coordinates of that three-dimensional space, respectively. As the pixel value assigned to the Z axis, for example, a luminance value, a chrominance value, one of the color components such as RGB, or a combination thereof can be used.
 (Derivation step 2)
 Next, the vector derivation unit 12 derives, at each pixel, the normal vector of the curved surface in the three-dimensional space.
 Next, the vector derivation unit 12 normalizes the derived normal vector and projects the normalized normal vector onto the XY plane, thereby deriving the first vector for each pixel, as shown in FIG. 3(c). The X component n_x and the Y component n_y of the first vector derived by the vector derivation unit 12 are expressed as follows:

    n_x = -(∂I/∂x) / √((∂I/∂x)² + (∂I/∂y)² + 1)
    n_y = -(∂I/∂y) / √((∂I/∂x)² + (∂I/∂y)² + 1)

 Here, I(x, y, t) is the luminance of the pixel located at coordinates (x, y) at time t. The time t can be read as the frame number (also called the Picture Order Count) assigned to each frame, and the partial derivatives with respect to x and y can be read as finite differences with respect to x and y.
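 A minimal Python/NumPy sketch of this derivation for one frame follows: the frame is treated as the surface z = I(x, y), np.gradient supplies the finite differences, and the normal (-I_x, -I_y, 1) is normalized before being projected onto the XY plane. The sign convention of the normal is an assumption, as is the function name.

    import numpy as np

    def first_vector(frame):
        # Derive the first vector (nx, ny) at every pixel: the normalized
        # normal of the luminance surface projected onto the XY plane.
        I = frame.astype(np.float64)
        Iy, Ix = np.gradient(I)              # finite differences along y and x
        norm = np.sqrt(Ix**2 + Iy**2 + 1.0)  # length of the normal (-Ix, -Iy, 1)
        nx, ny = -Ix / norm, -Iy / norm      # projection onto the XY plane
        return nx, ny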
 (Derivation step 3)
 Next, as shown in FIG. 3(d), the motion detection unit 13 derives a vector indicating the difference between two frames (also called the second vector) from the difference between the first vector of the target image obtained by the vector derivation unit 12 and the first vector of the frame temporally adjacent to the target image. In other words, the second vector is derived by taking the time derivative (time difference) of the X component n_x and the Y component n_y of the first vector.
 The motion detection results of the motion detection system 1 according to this embodiment are described below with reference to FIGS. 4 to 6.
 FIG. 4 shows target images from a moving image on which motion detection was performed. FIGS. 4(a) to 4(c) are temporally consecutive frames of the moving image.
 FIG. 5 shows part of the first vectors derived from the target images of FIG. 4. FIGS. 5(a) to 5(c) show the coordinates of the first vectors derived in the target images of FIGS. 4(a) to 4(c), respectively. In the example shown in FIG. 5, the first vector is derived for some, but not all, of the pixels included in the target image.
 FIG. 6 shows line segments connecting the target pixel P1 derived from the first frame of FIG. 4 and the target pixel P2 derived from the second frame. FIG. 6(a) shows the line segments derived from FIGS. 5(a) and 5(b), and FIG. 6(b) shows the line segments derived from FIGS. 5(b) and 5(c).
 As is clear from FIG. 6, the motion detection system 1 according to this embodiment can suitably perform motion detection by tracking the motion-related vectors derived by the motion detection unit 13.
 (Search area setting method)
 An example of a specific method by which the motion detection unit 13 sets the search area is as follows.
 The motion detection unit 13 sets a search area in the direction along the second vector V1 derived for the target pixel P1 in the target image N1. In that search area of the frame N2 adjacent to the target image (for example, the frame immediately after it), the motion detection unit 13 identifies the pixel P2 corresponding to the target pixel and calculates the second vector V2 for the identified pixel. By repeating this operation, the motion detection unit 13 tracks the second vector.
 Another example of a specific method by which the motion detection unit 13 sets the search area is as follows.
 The motion detection unit 13 calculates the first vector in each of the target image N1 and the frame N2, and derives the second vector at the target pixels of the target image N1 and the frame N2. The motion detection unit 13 sets a search area in the direction along the second vector V1 derived for the target pixel P1 whose vector is to be tracked. In that search area of the frame N2, the motion detection unit 13 identifies the second vector V2 derived for the target pixel P2 corresponding to the target pixel P1, and tracks the vector.
 Here, as one example, the search area may be set as a linear region along the second vector V1 starting from the target pixel P1 of the target image, as a band-shaped region obtained by widening that linear region in the direction perpendicular to the second vector, or as a region of any other shape. As another example, the search area may be set using the value range of a function whose domain is the second vector. The search area is not limited to continuous, connected, or linear regions; it may be discontinuous, disconnected, or nonlinear.
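 As a minimal sketch of one such choice, the following Python function builds a band-shaped search mask along the second vector; the maximum search length and band half-width are illustrative parameters, not values given in this disclosure.

    import numpy as np

    def band_search_mask(shape, p1, v1, max_len=30, half_width=2):
        # Mark as searchable the pixels within half_width of the ray that
        # starts at p1 (y, x) and runs along the second vector v1 (vy, vx).
        mask = np.zeros(shape, dtype=bool)
        vy, vx = v1
        norm = np.hypot(vy, vx)
        if norm == 0:
            return mask
        uy, ux = vy / norm, vx / norm  # unit direction of the second vector
        for t in range(max_len + 1):
            cy, cx = p1[0] + t * uy, p1[1] + t * ux
            y0, y1 = int(cy) - half_width, int(cy) + half_width + 1
            x0, x1 = int(cx) - half_width, int(cx) + half_width + 1
            mask[max(y0, 0):min(y1, shape[0]), max(x0, 0):min(x1, shape[1])] = True
        return mask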
 Note that the coordinates of the target pixel P1 in the target image and the coordinates of the target pixel P2 in the adjacent frame may in general differ, but this does not limit the present embodiment; the first vector may also be derived at target pixels with the same coordinates in the target image and the adjacent frame.
 The target pixel in the adjacent frame can be identified on the basis of pixel characteristics of the target pixel (for example, luminance, chrominance, or the difference in pixel value from adjacent pixels).
 In this way, the motion detection system according to the present invention tracks the first vector after setting a search area, so motion detection can be performed without increasing the calculation cost.
 (Additional notes on machine learning)
 The specific configuration of the classification and learning processing for motion detection by the motion detection unit 13 does not limit this embodiment; for example, any of the following machine learning methods, or a combination thereof, can be used.
 ・Support vector machine (SVM)
 ・Clustering
 ・Inductive logic programming (ILP)
 ・Genetic programming (GP)
 ・Bayesian network (BN)
 ・Neural network (NN)
 When a neural network is used, 3D data may be processed in advance for input to the network. For such processing, techniques such as data augmentation can be used in addition to arranging the data into one-dimensional or multi-dimensional arrays.
 When a neural network is used, a convolutional neural network (CNN) including convolution processing may be used. More specifically, a convolution layer that performs a convolution operation may be provided as one or more of the layers included in the neural network, and a filter operation (product-sum operation) may be performed on the input data of that layer. When performing the filter operation, processing such as padding may be used in combination, and an appropriately set stride width may be employed.
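 A minimal sketch of such a convolution layer's filter operation (product-sum) with zero padding and a configurable stride width, written in plain NumPy for illustration; practical systems would typically use an optimized deep learning framework instead.

    import numpy as np

    def conv2d(x, kernel, stride=1, padding=1):
        # Product-sum (filter) operation of a convolution layer, with
        # zero padding and a configurable stride width.
        x = np.pad(x, padding)
        kh, kw = kernel.shape
        oh = (x.shape[0] - kh) // stride + 1
        ow = (x.shape[1] - kw) // stride + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = x[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[i, j] = np.sum(patch * kernel)  # product-sum over the window
        return out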
 A multilayer or very deep neural network with several tens to several thousands of layers may also be used.
 The specific configuration of the classification and learning processing for motion detection by the motion detection unit 13 does not limit this embodiment; for example, the machine learning used in the above processing may be supervised learning or unsupervised learning.
 [Embodiment 2]
 Another embodiment of the present invention is described below. For convenience of description, members having the same functions as those described in the above embodiment are denoted by the same reference signs, and their description is not repeated.
 The image acquisition unit 11, the vector derivation unit 12, the motion detection unit 13, the image generation unit 21, and the display unit 22 may be provided in separate devices, and the exchange of information between these devices may be performed by wired or wireless communication.
 Specifically, as shown in FIG. 7, the motion detection system 1a may include a terminal device (imaging device) 10a, the display device 20, and a server (motion detection device) 30, where the terminal device 10a includes the image acquisition unit 11 and a communication unit 14, the display device 20 includes the image generation unit 21, the display unit 22, and a communication unit 23, and the server 30 includes a communication unit 31, a vector derivation unit 32, and a motion detection unit 33. The operations of the vector derivation unit 32 and the motion detection unit 33 are the same as those of the vector derivation unit 12 and the motion detection unit 13 described in Embodiment 1.
 The server may also be configured to manage the motion detection results for a plurality of image sets together with identification numbers of those image sets. As a more specific example, when the body movements of a plurality of subjects are detected by this system, an image set (video data) may be managed for each subject's ID. Furthermore, time information indicating when the images were acquired may also be linked to them.
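 A minimal sketch of such server-side bookkeeping; the dataclass fields and the keying by subject ID describe one possible layout and are illustrative assumptions.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MotionRecord:
        image_set_id: str      # identification number of the image set
        motion_vectors: list   # e.g. second vectors tracked over frames
        acquired_at: datetime  # time information linked to the image set

    # one list of records per subject ID (hypothetical layout)
    records_by_subject: dict[str, list[MotionRecord]] = {}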
 The server may further include a learning unit that performs machine learning based on the motion detection results. As one example, the learning unit functions as a learning device that takes as input the motion information detected by the motion detection unit 13 (or the first vector, the second vector, etc.), the subject's ID, and the time information, and outputs classification information about the subject's body movement.
 As the learning unit, various methods can be used, as described under (Additional notes on machine learning) in Embodiment 1.
 [Embodiment 3]
 The control blocks of the motion detection device 10 (the image acquisition unit 11, the vector derivation unit 12, and the motion detection unit 13) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software. In the latter case, each of the motion detection device 10, the display device 20, an output device 20b described later (see Embodiment 4), and the server 30 can be configured using a computer as shown in FIG. 8. The motion detection device 10, the display device 20, the output device 20b, and the server 30 may each be configured as separate devices, or at least some of them may be configured as an integrated device. For example, the motion detection device 10 and the output device 20b may be configured as an integrated device.
 FIG. 8 is a block diagram illustrating the configuration of a computer 910 usable as the motion detection device 10, the display device 20, the output device 20b, or the server 30. The computer 910 includes an arithmetic device 912, a main storage device 913, an auxiliary storage device 914, an input/output interface 915, and a communication interface 916, connected to one another via a bus 911. The arithmetic device 912, the main storage device 913, and the auxiliary storage device 914 may be, for example, a CPU, a RAM (random access memory), and storage such as a hard disk drive or flash memory, respectively. The input/output interface 915 is connected to an input device 920 with which the user inputs various information to the computer 910 and an output device 930 with which the computer 910 outputs various information to the user. The input device 920 and the output device 930 may be built into the computer 910 or may be connected (externally attached) to it. For example, the input device 920 may be a keyboard, a mouse, or a touch sensor, and the output device 930 may be a display, a printer, or a speaker. A device having the functions of both the input device 920 and the output device 930, such as a touch panel integrating a touch sensor and a display, may also be applied. The communication interface 916 is an interface through which the computer 910 communicates with external devices.
 The auxiliary storage device 914 stores various programs for operating the computer 910 as the motion detection device 10, the display device 20, the output device 20b, or the server 30. The arithmetic device 912 loads these programs from the auxiliary storage device 914 into the main storage device 913 and executes the instructions they contain, thereby causing the computer 910 to function as the units of the motion detection device 10, the display device 20, the output device 20b, or the server 30. The recording medium of the auxiliary storage device 914 on which information such as programs is recorded may be any computer-readable "non-transitory tangible medium", for example a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit. The main storage device 913 may be omitted if the computer can execute the program recorded on the recording medium without loading it into the main storage device 913. Each of the above devices (the arithmetic device 912, the main storage device 913, the auxiliary storage device 914, the input/output interface 915, the communication interface 916, the input device 920, and the output device 930) may be provided singly or in plurality.
 The programs may also be obtained from outside the computer 910, in which case they may be obtained via any transmission medium (a communication network, broadcast waves, etc.). The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the programs are embodied by electronic transmission.
 The present invention is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention.
 [Embodiment 4]
 Another embodiment of the present invention is described below. For convenience of description, members having the same functions as those described in the above embodiments are denoted by the same reference signs, and their description is not repeated.
 In each of the embodiments described above and in this embodiment, the motion of the target image to be detected includes not only the motion of rigid bodies such as automobiles, people, animals, and balls, but also the motion of fluids such as gases (for example, smoke) and liquids (for example, water and oil). Thus, the objects subject to motion detection in these embodiments include rigid bodies and fluids. This embodiment mainly describes the case in which the target image contains a fluid such as a gas or a liquid and the detection system detects the motion of that fluid.
 FIG. 9 is a block diagram showing the schematic configuration of a motion detection system 1b (an example of a characteristic detection device and a fluid detection device) according to this embodiment. As shown in FIG. 9, the motion detection system 1b includes the motion detection device 10 and an output device 20b. The operations of the image acquisition unit 11, the vector derivation unit 12, and the motion detection unit 13 of the motion detection device 10 are the same as those described in Embodiment 1.
 The output device 20b includes the image generation unit 21, the display unit 22, a specifying unit 24, a detection unit 25, and an output unit 26. The operations of the image generation unit 21 and the display unit 22 of the output device 20b are the same as those described in Embodiment 1.
 The specifying unit 24 analyzes the detection results of the motion detection unit 13 and specifies at least one of the characteristics of an object included in the target image and the characteristics of that object's motion. The object included in the target image may be a rigid body such as a person, an animal, an automobile, or a ball, or may be a fluid such as a gas or a liquid. A characteristic of an object is a characteristic property or appearance of that object, for example a person's facial expression, clothing, the object's shape, or the object's viscosity. A characteristic of an object's motion is, for example, the predicted future direction of movement of the object or the object's movement speed.
 For example, when specifying a person's facial expression as a characteristic, the specifying unit 24 specifies the expression and/or changes in the expression from the image analysis results of the target image and the detection results of the motion detection unit 13. When specifying clothing, the specifying unit 24 specifies the shape of the clothing and/or its movement from the image analysis results of the target image and the detection results of the motion detection unit 13. The specifying unit 24 may also specify the material of the clothing from the specified shape and/or movement of the clothing.
 When specifying the predicted motion of an object as a characteristic, the specifying unit 24 analyzes the motion detected by the motion detection unit 13 and predicts the motion of the object included in the target image. More specifically, for example, the specifying unit 24 may use the direction and speed of movement indicated by the second vector derived by the motion detection unit 13 as the predicted direction and speed of movement of the object. The specifying unit 24 may also predict the motion of the object by analyzing how the direction and speed of movement indicated by the second vector change over time.
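 A minimal sketch of this kind of extrapolation, treating the second vector as a per-frame displacement; the linear-extrapolation assumption and the function name are illustrative.

    def predict_position(pos, second_vec, n_frames=1):
        # Linearly extrapolate an object's position n_frames ahead, using the
        # second vector as the displacement per frame (illustrative assumption).
        y, x = pos
        vy, vx = second_vec
        return (y + n_frames * vy, x + n_frames * vx)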
 When measuring viscosity, the specifying unit 24 measures the viscosity of a fluid using the detection results of the motion detection unit 13. An object whose viscosity is measured is, for example, cement or ice cream. For example, the specifying unit 24 acquires the second vector derived by the motion detection unit 13 and measures the viscosity of the object from the movement speed indicated by the second vector.
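 The disclosure does not give a concrete speed-to-viscosity mapping, so the following sketch simply assumes a hypothetical monotonic calibration fitted in advance (slower mean flow taken to indicate higher viscosity); both the calibration function and its parameters are placeholders.

    import numpy as np

    def estimate_viscosity(second_vecs, calib=lambda speed: 1.0 / max(speed, 1e-6)):
        # Estimate viscosity from the mean speed of the second vectors using a
        # hypothetical pre-fitted calibration function (placeholder).
        speeds = np.linalg.norm(np.asarray(second_vecs, dtype=float), axis=1)
        return calib(speeds.mean())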
 The detection unit 25 detects the fluid included in the target image using the detection results of the motion detection unit 13. For example, the detection unit 25 may analyze the detection results of the motion detection unit 13 and identify a region in which motion was detected as the region where the object is located.
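 A minimal sketch of such region detection, thresholding the magnitude of the second-vector field; the threshold value is an illustrative parameter.

    import numpy as np

    def detect_fluid_region(dnx, dny, threshold=0.05):
        # Mark pixels whose second vector has a magnitude above the threshold
        # as belonging to the region where motion (e.g. a fluid) was detected.
        magnitude = np.hypot(dnx, dny)
        return magnitude > threshold  # boolean mask of the moving region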
 The output unit 26 outputs information indicating the specifying results of the specifying unit 24 and the detection results of the detection unit 25. The output unit 26 may output the information, for example, by outputting data to an externally connected device, or by transmitting data to another device via a communication network. The information may also be output, for example, by outputting data representing an image to the display unit, or by sound or voice.
 FIG. 10 illustrates target images from a moving image on which motion detection was performed. In the example of FIG. 10, a moving image capturing a gas such as smoke is used. FIGS. 10(a) to 10(d) are frames included in the moving image. Specifically, FIG. 10(b) is a frame a fixed time (for example, 20 seconds) after the frame of FIG. 10(a), FIG. 10(c) is a frame a fixed time (for example, 20 seconds) after the frame of FIG. 10(b), and FIG. 10(d) is a frame a fixed time (for example, 20 seconds) after the frame of FIG. 10(c). As shown in FIGS. 10(a) to 10(d), the appearance of the captured gas, such as smoke, changes gradually over time.
 FIG. 11 shows the coordinates of the first vectors derived from the target images of FIG. 10. FIGS. 11(a) to 11(d) show the coordinates of the first vectors derived in the target images of FIGS. 10(a) to 10(d), respectively. In this example, the vector derivation unit 12 derives the first vector for all pixels included in the target image, performs dot matching between frames, and extracts the first vectors for which matching succeeded. That is, in the example of FIG. 11, the coordinates of the first vectors for which matching succeeded are displayed, and those for which matching did not succeed are not displayed. The pixels for which the vector derivation unit 12 derives the first vector need not be all the pixels included in the target image; the vector derivation unit 12 may derive the first vector from only some of the pixels.
 In this embodiment, the image generation unit 21 generates, as a moving image corresponding to the moving image subject to motion detection, a moving image representing the temporal change of the coordinates of the first vectors derived by the vector derivation unit 12 (hereinafter referred to as a "motion detection image"). FIGS. 11(a) to 11(d) are examples of frames included in this moving image. From this moving image, the user can grasp the movement of a fluid such as smoke.
 FIG. 12 shows images obtained by superimposing the images of FIG. 11 on the target images of FIG. 10. FIGS. 12(a) to 12(d) superimpose the images of FIGS. 11(a) to 11(d) on the target images of FIGS. 10(a) to 10(d), respectively.
 FIG. 16 shows motion detection results for the target images of FIG. 10 obtained by a conventional optical flow detection method. FIGS. 16(a) to 16(d) show the motion detected from the target images of FIGS. 10(a) to 10(d), respectively. As is clear from a comparison of FIG. 16 with FIG. 11, this embodiment achieves higher accuracy in detecting the motion of a fluid such as smoke.
 FIG. 13 illustrates target images from another moving image on which motion detection was performed. FIGS. 13(a) to 13(d) are frames included in the moving image; the relationship between the frames shown in (a) to (d) is the same as that of FIG. 10 described above.
 FIG. 14 shows the coordinates of the first vectors derived from the target images of FIG. 13. FIG. 15 shows the target images of FIG. 13 with images representing the coordinates of the first vectors derived from each target image superimposed on them.
 When an image capturing a fluid such as a gas or a liquid is used as the target image, distinct feature points may be absent from the target image. In such cases, it has sometimes been difficult for conventional optical flow methods to detect motion from the target image. In this embodiment, by contrast, the motion detection unit 13 can easily track the motion of the moving image even when no feature points are apparent in the target image. This embodiment therefore makes it possible to extract and visualize the motion of fluids such as smoke and water as a person would perceive it.
 The output device 20b, or a device (not shown) that acquires the data output from the output device 20b, provides various services to users using the detected motion, for example fluid viscosity measurement, automatic driving control, watching services for children and the elderly, or evacuation support services in the event of a disaster. In the case of fluid viscosity measurement, for example, the output device 20b outputs information indicating the viscosity of the fluid measured by the specifying unit 24.
 When an automatic driving control service is provided, the output device 20b may, for example, analyze the motion of objects (people, bicycles, automobiles, balls, etc.) predicted by the specifying unit 24 and, when a dangerous situation (such as a collision) is estimated to arise, output a warning sound or warning message or display a warning message on the display unit 22.
 When a watching service is provided, the person to be watched over (a child, an elderly person, a pet, etc.) is filmed, and motion detection is performed on the captured moving image. The output device 20b watches over the person by, for example, analyzing the person's motion detected by the motion detection unit 13 or analyzing the person's facial expression and the like specified by the specifying unit 24. When the analysis results indicate that the person is estimated to be in a dangerous situation, the output device 20b may output a warning sound or warning message or display a warning message on the display unit 22. In this case, the captured moving image serves as the target image for motion detection, but the moving image does not need to be disclosed. The detection system 1b can therefore provide a watching service while protecting the privacy of the person being watched over.
 In Embodiment 1 described above, as an example of the dot matching process performed by the motion detection unit 13, a process was illustrated in which an arbitrary dot is selected in the target image (first frame) and the adjacent frame is searched for a dot whose vector has the same direction and magnitude. The dot matching process is not limited to that shown in the above embodiment. For example, the dot matching process may be performed by the motion detection unit 13 selecting an arbitrary dot in the target image and searching the frame adjacent to that frame for a dot whose differences in vector direction and magnitude satisfy a predetermined condition (for example, being at or below predetermined thresholds).
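 A minimal sketch of this relaxed matching predicate, comparing direction and magnitude separately against illustrative tolerances; the threshold values and function name are assumptions for the example.

    import numpy as np

    def vectors_match(v1, v2, ang_tol=0.2, mag_tol=0.5):
        # Return True when two dot vectors agree in direction (radians) and
        # magnitude within the given tolerances (illustrative thresholds).
        ang = abs(np.arctan2(v1[0], v1[1]) - np.arctan2(v2[0], v2[1]))
        ang = min(ang, 2 * np.pi - ang)  # wrap the angular difference
        mag = abs(np.hypot(*v1) - np.hypot(*v2))
        return ang <= ang_tol and mag <= mag_tol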
 [Summary]
 A motion detection device according to one aspect of the present invention is a motion detection device that performs motion detection on a target image, and includes an image acquisition unit that acquires the target image, a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit, and a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit.
 According to this configuration, motion detection can be performed without increasing the calculation cost.
 In a motion detection device according to one aspect of the present invention, the motion detection unit sets a search area according to the characteristics of the vector detected by the motion detection unit, and tracks the motion-related vector within the set search area.
 According to this configuration, motion detection can be performed without increasing the calculation cost.
 In a motion detection device according to one aspect of the present invention, the motion detection unit sets the search area in a direction along the vector detected by the motion detection unit.
 According to this configuration, motion detection can be performed without increasing the calculation cost.
 In a motion detection device according to one aspect of the present invention, the vector derivation unit derives, for at least one of the pixels included in the target image, a first vector indicating the gradient of the pixel value, and derives a second vector indicating the inter-frame difference of the first vector as the motion-related vector.
 According to this configuration, a suitable vector can be referenced when performing motion detection.
 In a motion detection device according to one aspect of the present invention, the vector derivation unit derives the first vector by projecting onto the XY plane the normal vector of the curved surface whose X, Y, and Z coordinates are, respectively, the horizontal position, the vertical position, and the pixel value of each pixel of the target image.
 According to this configuration, a suitable vector can be referenced when performing motion detection.
 In a motion detection device according to one aspect of the present invention, the vector derivation unit derives the first vector by normalizing the normal vector and then projecting it onto the XY plane.
 According to this configuration, a suitable vector can be referenced when performing motion detection.
 A motion detection system according to one aspect of the present invention is a motion detection system that performs motion detection on a target image, and includes an image acquisition unit that acquires the target image, a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit, a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit, an image generation unit that generates, with reference to the motion detected by the motion detection unit, an image representing the motion, and a display unit that displays the image generated by the image generation unit.
 According to this configuration, motion detection can be performed without increasing the calculation cost.
 A motion detection method according to one aspect of the present invention is a motion detection method for performing motion detection on a target image, and includes an image acquisition step of acquiring the target image, a vector derivation step of deriving a motion-related vector from the target image acquired in the image acquisition step, and a motion detection step of performing motion detection by tracking the vector derived in the vector derivation step.
 According to this configuration, motion detection can be performed without increasing the calculation cost.
 A characteristic detection device according to one aspect of the present invention includes the above motion detection device and further includes a specifying unit that specifies, using the detection results of the motion detection unit, at least one of a characteristic of an object included in the target image and a characteristic of the motion of the object.
 According to this configuration, at least one of the characteristics of an object and the characteristics of its motion can be specified without increasing the calculation cost.
 In a characteristic detection device according to one aspect of the present invention, the specifying unit predicts the motion of the object using the detection results of the motion detection unit.
 According to this configuration, the motion of an object can be predicted without increasing the calculation cost.
 In a characteristic detection device according to one aspect of the present invention, the target image includes an image of a fluid, and the specifying unit specifies at least one of a characteristic of the fluid and a characteristic of the motion of the fluid.
 According to this configuration, at least one of the characteristics of an object and the characteristics of its motion can be specified without increasing the calculation cost.
 In a characteristic detection device according to one aspect of the present invention, the specifying unit specifies the viscosity of the fluid using the detection results of the motion detection unit.
 According to this configuration, the viscosity of an object can be measured without increasing the calculation cost.
 A fluid detection device according to one aspect of the present invention includes the above motion detection device and a detection unit that detects a fluid included in the target image using the detection results of the motion detection unit.
 According to this configuration, a fluid can be detected without increasing the calculation cost.
 The motion detection device, the characteristic detection device, and the fluid detection device according to the respective aspects of the present invention may each be realized by a computer. In this case, a control program that causes the computer to realize the motion detection device, the characteristic detection device, or the fluid detection device by operating the computer as the units of the respective device, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
 REFERENCE SIGNS LIST
 1 motion detection system
 10 motion detection device
 11 image acquisition unit
 12 vector derivation unit
 13 motion detection unit
 20 display device
 21 image generation unit
 22 display unit
 23 communication unit
 24 specifying unit
 25 detection unit
 26 output unit
Claims (19)

 1. A motion detection device that performs motion detection on a target image, comprising:
    an image acquisition unit that acquires the target image;
    a vector derivation unit that derives a motion-related vector from the target image acquired by the image acquisition unit; and
    a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit.
 2. The motion detection device according to claim 1, wherein the motion detection unit sets a search area according to the characteristics of the vector detected by the motion detection unit and tracks the motion-related vector within the set search area.
 3. The motion detection device according to claim 2, wherein the motion detection unit sets the search area in a direction along the vector detected by the motion detection unit.
 4. The motion detection device according to any one of claims 1 to 3, wherein the vector derivation unit derives, for at least one of the pixels included in the target image, a first vector indicating a gradient of the pixel value, and derives a second vector indicating an inter-frame difference of the first vector as the motion-related vector.
 5. The motion detection device according to claim 4, wherein the vector derivation unit derives the first vector by projecting onto an XY plane the normal vector of a curved surface whose X, Y, and Z coordinates are, respectively, the horizontal position, the vertical position, and the pixel value of each pixel of the target image.
 6. The motion detection device according to claim 5, wherein the vector derivation unit derives the first vector by normalizing the normal vector and then projecting it onto the XY plane.
  7.  A motion detection system that performs motion detection on a target image, the motion detection system comprising:
     an image acquisition unit that acquires the target image;
     a vector derivation unit that derives a vector related to motion from the target image acquired by the image acquisition unit;
     a motion detection unit that performs motion detection by tracking the vector derived by the vector derivation unit;
     an image generation unit that generates, with reference to the motion detected by the motion detection unit, an image related to the motion; and
     a display unit that displays the image generated by the image generation unit.
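A minimal sketch of the image generation and display units of the claimed system, assuming Matplotlib as the display back end; the sampling `step` and arrow color are arbitrary illustrative choices, not from the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_motion_overlay(frame: np.ndarray, vectors: np.ndarray, step: int = 12) -> None:
    """Image generation unit + display unit: draw the detected motion
    vector field as arrows over the grayscale frame."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]  # subsample the field for legibility
    plt.imshow(frame, cmap="gray")
    plt.quiver(xs, ys, vectors[ys, xs, 0], vectors[ys, xs, 1],
               color="red", angles="xy")
    plt.axis("off")
    plt.show()
```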
  8.  A motion detection method for performing motion detection on a target image, the method comprising:
     an image acquisition step of acquiring the target image;
     a vector derivation step of deriving a vector related to motion from the target image acquired in the image acquisition step; and
     a motion detection step of performing motion detection by tracking the vector derived in the vector derivation step.
  9.  A program for causing a computer to function as the motion detection device according to claim 1, the program causing the computer to function as the image acquisition unit, the vector derivation unit, and the motion detection unit.
  10.  A computer-readable recording medium on which the program according to claim 9 is recorded.
  11.  A characteristic detection device comprising:
     the motion detection device according to any one of claims 1 to 6; and
     a specifying unit that specifies, using a detection result of the motion detection unit, at least one of a characteristic of an object included in the target image and a characteristic of motion of the object.
  12.  The characteristic detection device according to claim 11, wherein the specifying unit predicts the motion of the object using the detection result of the motion detection unit.
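Claim 12 leaves the prediction method open. As a naive, purely illustrative stand-in (not the claimed method), the sketch below extrapolates each tracked point linearly from its last two positions.

```python
import numpy as np

def predict_next_positions(prev_points, curr_points):
    """Constant-velocity prediction: next = curr + (curr - prev)."""
    prev = np.asarray(prev_points, dtype=float)
    curr = np.asarray(curr_points, dtype=float)
    return curr + (curr - prev)
```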
  13.  The characteristic detection device according to claim 11 or 12, wherein the target image includes an image of a fluid, and the specifying unit specifies at least one of a characteristic of the fluid and a characteristic of motion of the fluid.
  14.  The characteristic detection device according to claim 13, wherein the specifying unit specifies a viscosity of the fluid using the detection result of the motion detection unit.
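How the viscosity would be specified is not spelled out in the claims. The sketch below is only one conceivable proxy, assumed here for illustration and not the claimed method: in a more viscous fluid, bulk motion tends to damp out faster, so the exponential decay rate of the mean motion magnitude across frames can serve as a crude, relative viscosity indicator.

```python
import numpy as np

def crude_viscosity_proxy(vector_fields):
    """Illustrative only: fit an exponential decay rate to the mean
    motion magnitude over a sequence of (H, W, 2) vector fields."""
    mags = [np.mean(np.linalg.norm(v, axis=-1)) for v in vector_fields]
    t = np.arange(len(mags))
    # least-squares slope of log-magnitude vs. time (negative = decay)
    slope = np.polyfit(t, np.log(np.asarray(mags) + 1e-12), 1)[0]
    return -slope  # larger value = faster damping = (relatively) more viscous
```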
  15.  A fluid detection device comprising:
     the motion detection device according to any one of claims 1 to 6; and
     a detection unit that detects a fluid included in the target image using a detection result of the motion detection unit.
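Claim 15 does not say how the detection result is turned into a fluid detection. One crude illustrative possibility, assumed here rather than taken from the disclosure, is to flag regions whose motion field is strong but directionally incoherent, a pattern more typical of fluids than of rigid objects.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fluid_mask(vectors, mag_thresh=0.05, coherence_thresh=0.7, win=5):
    """Flag pixels with strong but directionally incoherent local motion."""
    mag = np.linalg.norm(vectors, axis=-1)
    mean_vx = uniform_filter(vectors[..., 0], size=win)
    mean_vy = uniform_filter(vectors[..., 1], size=win)
    mean_mag = uniform_filter(mag, size=win)
    # ratio near 1: locally aligned (rigid-like); near 0: swirling (fluid-like)
    coherence = np.hypot(mean_vx, mean_vy) / (mean_mag + 1e-12)
    return (mean_mag > mag_thresh) & (coherence < coherence_thresh)
```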
  16.  A program for causing a computer to function as the characteristic detection device according to claim 11, the program causing the computer to function as the image acquisition unit, the vector derivation unit, the motion detection unit, and the specifying unit.
  17.  A computer-readable recording medium on which the program according to claim 16 is recorded.
  18.  A program for causing a computer to function as the fluid detection device according to claim 15, the program causing the computer to function as the image acquisition unit, the vector derivation unit, the motion detection unit, and the detection unit.
  19.  A computer-readable recording medium on which the program according to claim 18 is recorded.
PCT/JP2019/028948 2018-07-24 2019-07-24 Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium WO2020022362A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020532430A JPWO2020022362A1 (en) 2018-07-24 2019-07-24 Motion detection device, characteristic detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-138717 2018-07-24
JP2018138717 2018-07-24

Publications (1)

Publication Number Publication Date
WO2020022362A1 (en)

Family

ID=69181689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/028948 WO2020022362A1 (en) 2018-07-24 2019-07-24 Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium

Country Status (2)

Country Link
JP (1) JPWO2020022362A1 (en)
WO (1) WO2020022362A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000322581A (en) * 1999-05-14 2000-11-24 Fujitsu Ltd Moving object detecting method
JP2011175599A (en) * 2010-02-25 2011-09-08 Canon Inc Image processor, and processing method and program thereof
JP2012073997A (en) * 2010-09-01 2012-04-12 Ricoh Co Ltd Object tracking device, object tracking method, and program thereof
WO2018123202A1 (en) * 2016-12-28 2018-07-05 シャープ株式会社 Moving-image processing device, display device, moving-image processing method, and control program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUZUKI ET AL.: "Novel method of extracting motion from natural movies", JOURNAL OF NEUROSCIENCE METHODS, vol. 291, 1 November 2017 (2017-11-01), pages 51 - 60, XP055680628 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021250901A1 (en) * 2020-06-12 2021-12-16 Nec Corporation Intention detection device, intention detection method computer-readable storage medium
JP7396517B2 (en) 2020-06-12 2023-12-12 日本電気株式会社 Intention detection device, intention detection method and program

Also Published As

Publication number Publication date
JPWO2020022362A1 (en) 2021-08-02

Similar Documents

Publication Publication Date Title
CN107431786B (en) Image processing apparatus, image processing system, and image processing method
JP6406241B2 (en) Information processing system, information processing method, and program
US10872262B2 (en) Information processing apparatus and information processing method for detecting position of object
Kale et al. Moving object tracking using optical flow and motion vector estimation
JP5102410B2 (en) Moving body detection apparatus and moving body detection method
JP6814673B2 (en) Movement route prediction device and movement route prediction method
CN104103030B (en) Image analysis method, camera apparatus, control apparatus and control method
KR101839827B1 (en) Smart monitoring system applied with recognition technic of characteristic information including face on long distance-moving object
JP2009510541A (en) Object tracking method and object tracking apparatus
JP2007317062A (en) Person recognition apparatus and method
US20120020523A1 (en) Information creation device for estimating object position and information creation method and program for estimating object position
US20110069155A1 (en) Apparatus and method for detecting motion
JP2010123019A (en) Device and method for recognizing motion
JP6043933B2 (en) Sleepiness level estimation device, sleepiness level estimation method, and sleepiness level estimation processing program
JP7266599B2 (en) Devices, systems and methods for sensing patient body movement
Ryan et al. Real-time multi-task facial analytics with event cameras
Dileep et al. Suspicious human activity recognition using 2d pose estimation and convolutional neural network
JP2019040306A (en) Information processing device, information processing program, and information processing method
WO2020022362A1 (en) Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium
JP6405606B2 (en) Image processing apparatus, image processing method, and image processing program
JP7263094B2 (en) Information processing device, information processing method and program
JP6939065B2 (en) Image recognition computer program, image recognition device and image recognition method
JP2011192220A (en) Device, method and program for determination of same person
JP6555940B2 (en) Subject tracking device, imaging device, and method for controlling subject tracking device
WO2020020436A1 (en) Method and system for object tracking in image sequences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19842013; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2020532430; Country of ref document: JP)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19842013; Country of ref document: EP; Kind code of ref document: A1)