CN109426811B - Image analysis method - Google Patents

Image analysis method

Info

Publication number
CN109426811B
CN109426811B (application CN201810993233.4A)
Authority
CN
China
Prior art keywords
frame
data
time
specific
position data
Prior art date
Legal status
Active
Application number
CN201810993233.4A
Other languages
Chinese (zh)
Other versions
CN109426811A (en)
Inventor
原田崇志
都筑裕二
Current Assignee
Shinano Kenshi Co Ltd
Original Assignee
Shinano Kenshi Co Ltd
Priority date
Filing date
Publication date
Application filed by Shinano Kenshi Co Ltd filed Critical Shinano Kenshi Co Ltd
Publication of CN109426811A publication Critical patent/CN109426811A/en
Application granted granted Critical
Publication of CN109426811B publication Critical patent/CN109426811B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention relates to an image analysis method capable of accurately tracking a specific part in an image while reducing the complexity of the data processing needed to extract, as specific time data, the time at which the specific part starts or stops moving in the image. In the method, position data of the specific part of the image within a frame area is tracked over a predetermined time range, and the time data at which the specific part stops or starts moving is extracted. An arithmetic unit (24) calculates time data and position data for each frame, extracts the specific position arrival frame at which the specific part is estimated to stop or start moving, and extracts the time data of that frame as the specific time data.

Description

Image analysis method
Technical Field
The present invention relates to an image analysis method, and more particularly to an image analysis method for tracking a specific part of image data shown on a display device such as a monitor and extracting, as specific time data, the time at which the specific part starts or stops moving on the display device.
Background
Images need to be analyzed in various fields. For example, the technique disclosed in patent document 1 (Japanese Patent Laid-Open No. 2017-33390) can track a specific ball with high accuracy even when a plurality of balls of the same shape appear in an image, such as during a live sports broadcast. Patent document 2 (Japanese Patent Laid-Open No. 2016-207140) discloses a technique for obtaining positional information of a person in a video. Patent document 3 (Japanese Patent Laid-Open No. 2015-170874) discloses a large-scale management system in which video transmitted from a plurality of network cameras is analyzed in real time, a moving body such as a person or a car is detected, and an alarm is automatically issued to a manager.
Prior art literature
Patent literature
Patent document 1: Japanese Patent Laid-Open No. 2017-33390
Patent document 2: Japanese Patent Laid-Open No. 2016-207140
Patent document 3: Japanese Patent Laid-Open No. 2015-170874
Disclosure of Invention
Technical problem to be solved by the invention
However, when tracking a specific part in an image and detecting the time at which the specific part starts or stops moving in the image, the usual approach is to divide the image into frames and analyze the motion of the specific part. In this case, accurate tracking of the specific part requires a high frame rate, but raising the frame rate increases the amount of data processing needed to detect the movement start time or movement stop time of the specific part and makes the processing more complex. Conversely, lowering the frame rate reduces the amount and complexity of that data processing, but the movement start time or movement stop time of the specific part can then no longer be tracked or detected with high accuracy.
Technical solution adopted for solving the technical problem
Accordingly, an object of the present invention is to provide an image analysis method that can accurately track a specific part in an image while reducing the complexity of the data processing required to extract, as specific time data, the time at which the specific part starts or stops moving in the image.
That is, the present invention relates to an image analysis method that uses an image analysis apparatus including an identification unit for identifying each frame of an image, a marking unit for marking a specific part of the image, and an arithmetic unit, tracks position data of the specific part within a frame area over a predetermined time range, and extracts, as specific time data, the time data of the frame at which the specific part starts or stops moving in the frame area. The arithmetic unit calculates the time data and the position data of the specific part in the frame area for each frame, extracts, based on the position data, the frames estimated to be those at which the specific part starts or stops moving in the frame area, and extracts the maximum time data or the minimum time data among the time data of those frames as the specific time data.
Thus, the specific part in the image can be tracked with high precision, and the complexity of data processing when detecting the time when the specific part starts to move or stops moving in the image can be reduced.
The present invention also relates to an image analysis method that uses an image analysis apparatus including an identification unit for identifying each frame of an image, a marking unit for marking the specific part of the image, and an arithmetic unit, tracks position data of the specific part within a frame area over a predetermined time range, and extracts, as specific time data, the time data of the frame at which the specific part starts or stops moving in the frame area. The arithmetic unit calculates the time data and the position data of the specific part in the frame area for each frame; extracts, based on the position data, the frame estimated to be the one at which the specific part starts or stops moving in the frame area as a specific position arrival frame; extracts the maximum or minimum time data of the specific position arrival frame together with its position data in the frame area as specific position coordinates; calculates an approximation curve of the time data of the frames and their position data in the frame area over a time range after or before the time data of the specific position arrival frame; and extracts, as the specific time data, the time data at the intersection of the approximation curve and the position data of the specific position arrival frame in the frame area.
Thus, the specific part in the image can be tracked with high precision, and the complexity of data processing when detecting the time when the specific part starts to move or stops moving in the image can be reduced.
Further, it is preferable that the arithmetic unit regard the position data in the frame area at the minimum time or the maximum time within the predetermined time range as the position data in the frame area of the specific position arrival frame. In this case, the position data in the frame area at the minimum or maximum time within the predetermined time range is assumed to be data in the stopped state.
Since the specific position arrival frame is then easily determined, the complexity of the image analysis performed when extracting the time at which the specific part in the image starts or stops moving on the display device can be reduced.
Further, it is preferable that the position data in the frame area at the maximum or minimum time within the predetermined time range be an average value of the position data in the frame area of the frames within a prescribed time span counted from the maximum or minimum time.
Thereby, the reliability of the position data in the frame area at the maximum time or the minimum time within the predetermined time range is improved.
Further, it is preferable that the arithmetic unit use, as the position data in the frame area of the specific position arrival frame, the value obtained by multiplying the position data in the frame area at the maximum or minimum time within the predetermined time range by a predetermined ratio.
Since the specific position arrival frame is then easily determined, the complexity of the image analysis performed when extracting the specific time data at which a specific part in the image starts or stops moving can be reduced.
Effects of the invention
According to the image analysis method of the present invention, a specific part in an image can be tracked accurately, and the complexity of the data processing performed when extracting, as specific time data, the time at which the specific part starts or stops moving in the image can be reduced.
Drawings
Fig. 1 is a schematic configuration diagram showing an image analysis device according to embodiment 1.
Fig. 2 is a schematic process flow diagram showing the video analysis method in embodiment 1.
Fig. 3 is a graph showing a relationship between time data and coordinate data of a specific part in embodiment 1.
Fig. 4 is a schematic process flow diagram showing the video analysis method in embodiment 2.
Fig. 5 is a graph showing a relationship between time data and coordinate data of a specific part in embodiment 2.
Fig. 6 is a graph showing a relationship between time data and coordinate data of a specific part in the modified embodiment.
Detailed Description
The image analysis method according to the present invention will be described based on the embodiments.
(embodiment 1)
As shown in fig. 1, the image analysis device 100 used in the present embodiment includes: a high-speed camera 10 serving as the identification unit, which identifies each frame of the image; and a personal computer 20 having a marking unit 22, which marks a specific part of the image captured by the high-speed camera 10, and an arithmetic unit 24, typified by a CPU, which operates on the image data of the specific part marked by the marking unit 22.
The storage unit 26 of the personal computer 20 stores the video data VD captured by the high-speed camera 10. An image analysis program VAGM is installed in advance in the storage unit 26, and the analyst can execute the image analysis method according to the present embodiment by operating the data input unit 28, typified by a mouse, keyboard, or the like, in accordance with the image analysis program VAGM. Here, an embodiment will be described in which the image analysis apparatus 100 applies the image analysis method of the present invention to the performance judgment and inspection of a head mounted display 30 that displays virtual reality images.
A head mounted display 30 (hereinafter simply referred to as the display 30) that displays virtual reality images is worn on the user's head so that the user's field of view consists only of the image shown on the display 30. The display 30 changes the image content shown on its display screen 30a according to the user's motion, giving the user the sense that the virtual space is real. If the lag between the user's motion and the corresponding change of the image shown on the display 30 is large, the user's experience suffers. The image analysis method of the present invention is therefore used to confirm whether the time lag from the user's actual motion to the change of the image shown on the display 30 is within an allowable range.
Specifically, the image analysis method of the present invention uses the high-speed camera 10 to capture the image shown on the display screen 30a of the display 30, and the personal computer 20 to analyze the image data captured by the high-speed camera 10. An acceleration sensor (not shown) is mounted on the display 30, and the acceleration data AD measured by the acceleration sensor can be transmitted to the personal computer 20. The display 30 is held by a holder (not shown) that can rotate about three axes, namely the x-, y-, and z-axes. The arithmetic unit 24 of the personal computer 20 stores the acceleration data AD transmitted from the acceleration sensor in the storage unit 26 in association with the acceleration measurement time data ACTD. The above constitutes the analysis data collection step shown in fig. 2.
Next, the image analysis method according to the present embodiment will be described with reference to fig. 2. The analyst uses the data input unit 28 of the personal computer 20 to have the marking unit 22 mark a characteristic portion of the image as the specific part (step 1). The arithmetic unit 24 tracks the marked portion (hereinafter referred to as the specific part) in each frame of the video within a predetermined time range. For each frame, the arithmetic unit 24 extracts the specific part position data PLD (position data indicating the position of the specific part within the frame region on the display screen 30a of the display 30) and the specific part time data PTD (time data) at that moment, and stores the specific part position data PLD in the storage unit 26 paired with the specific part time data PTD (step 2).
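As a rough Python sketch of step 2, the per-frame pairing of position and time data could look like the following; the frame source, the frame rate, and the track_marker function standing in for the tracking done by the marking unit 22 are placeholder assumptions, not part of the embodiment.

```python
from typing import Callable, List, Sequence, Tuple

def extract_position_time(frames: Sequence, frame_rate: float,
                          track_marker: Callable[[object], float],
                          t0: float = 0.0) -> List[Tuple[float, float]]:
    """Return (PTD, PLD) pairs: frame time and tracked position of the marked part."""
    samples = []
    for i, frame in enumerate(frames):
        ptd = t0 + i / frame_rate      # specific part time data PTD, in seconds
        pld = track_marker(frame)      # specific part position data PLD (e.g. a y-coordinate in pixels)
        samples.append((ptd, pld))
    return samples
```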
Based on the acceleration data AD and its time data, the acceleration measurement time data ACTD, the arithmetic unit 24 extracts, for each frame, the acceleration measurement time data ACTD and the rotational accelerations in the x-axis, y-axis, and z-axis directions (step 3). Processing is performed in advance so that the specific part time data PTD extracted together with the specific part position data PLD and the acceleration measurement time data ACTD share a common reference time (reference point of the time axis). The order of step 2 and step 3 may also be swapped. The graph in fig. 3 is obtained by plotting the specific part position data PLD against the specific part time data PTD extracted by the arithmetic unit 24 within the predetermined time range. The vertical axis of the graph in fig. 3 represents the specific part position data PLD (position data), and the horizontal axis represents the specific part time data PTD (time data).
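Step 3 amounts to putting the sensor samples on the frame time axis. A minimal sketch, assuming the acceleration data AD is available as an (N, 3) array sampled at the times ACTD and that both time series already share the reference time, might use simple linear interpolation:

```python
import numpy as np

def accel_per_frame(frame_times, actd, accel_xyz):
    """Resample the rotational accelerations about x, y, z onto the frame time stamps.

    actd must be increasing and share the reference time with frame_times;
    accel_xyz has shape (N, 3), one column per axis.
    """
    actd = np.asarray(actd, dtype=float)
    accel_xyz = np.asarray(accel_xyz, dtype=float)
    return np.stack(
        [np.interp(frame_times, actd, accel_xyz[:, k]) for k in range(3)],
        axis=1,
    )
```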
Next, for each frame, the arithmetic unit 24 extracts as a stop estimated position a position at which the displacement of the specific part within the frame region is within a predetermined range (a threshold value stored in advance in the storage unit 26). The threshold can be set as appropriate. In the present embodiment, as shown in the graph of fig. 3, the displacement of the specific part within the frame region becomes 0 in the time range leading up to the final time (the curve is horizontal, indicating that the specific part has stopped within the frame region). Therefore, in the present embodiment, regardless of the threshold value, the specific part position data PLD at which the curve becomes horizontal, that is, the position at which the movement of the specific part within the frame region is estimated to have stopped, is extracted as the stop estimated position and stored in the storage unit 26 as the specific part position data PLDS at the stop estimated position (step 4).
The arithmetic unit 24 then extracts and stores, as the specific position arrival frame PRF, the frame in which position data equal to the specific part position data PLDS at the stop estimated position is first detected (that is, the frame with the minimum time among the frames whose position data equals the specific part position data PLDS) (step 5). The arithmetic unit 24 extracts the specific part time data PTDS and the specific part position data PLDS in the specific position arrival frame PRF as the stop position coordinates SPCD of the specific part, and stores them in the storage unit 26 (step 6).
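Steps 4 through 6 can be summarized as a short search over the (time, position) pairs. The sketch below assumes the curve is flat at the final time, as in fig. 3, and uses an explicit tolerance in place of the threshold stored in the storage unit 26.

```python
def stop_position_coordinates(samples, tol=1.0):
    """samples: list of (PTD, PLD) pairs ordered by time.

    Takes the position at the final time as the stop estimated position PLDS and
    returns the stop position coordinates SPCD = (PTDS, PLDS) of the earliest
    frame whose position already matches PLDS within the tolerance.
    """
    plds = samples[-1][1]              # stop estimated position (curve is flat at the end)
    for ptd, pld in samples:           # scan from the minimum time upward
        if abs(pld - plds) <= tol:     # first frame at the stop position: arrival frame PRF
            return ptd, plds
    return samples[-1][0], plds
```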
The arithmetic unit 24 compares the stop position coordinates SPCD of the specific part with the time data calculated in step 3 and the rotational accelerations in the x-axis, y-axis, and z-axis directions, calculates the delay time DT between a change in the motion of the user wearing the display 30 and the corresponding change of the image shown on the display 30, and stores the delay time DT in the storage unit 26 (step 7). In the present embodiment, it suffices to take the difference between the time data at which all the rotational accelerations become 0 and the specific part time data PTDS of the stop position coordinates SPCD. The arithmetic unit 24 then determines whether the delay time DT is equal to or smaller than the allowable value PV stored (set) in advance in the storage unit 26 (step 8); if the delay time DT is equal to or smaller than the allowable value PV, the arithmetic unit 24 issues a pass determination for the display 30 (step 9), and if the delay time DT exceeds the allowable value PV, it issues a fail determination for the display 30 (step 10), ending the processing flow of the present embodiment (end).
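Steps 7 through 10 then reduce to one subtraction and one comparison. The following sketch is one possible realization; the epsilon used to decide that the rotational accelerations have reached 0 is an assumed parameter, not a value taken from the embodiment.

```python
import numpy as np

def judge_display(frame_times, accel_frame_xyz, ptds, pv, eps=1e-3):
    """Return (DT, passed): delay time and pass/fail against the allowable value pv.

    accel_frame_xyz: per-frame rotational accelerations about x, y, z, shape (N, 3).
    The headset is taken to have stopped at the first frame after the last sample
    whose magnitude still exceeds eps.
    """
    frame_times = np.asarray(frame_times, dtype=float)
    mags = np.max(np.abs(np.asarray(accel_frame_xyz, dtype=float)), axis=1)
    moving = np.flatnonzero(mags > eps)
    if moving.size == 0:
        return 0.0, True                   # no motion detected at all in the window
    stop_idx = min(moving[-1] + 1, len(frame_times) - 1)
    dt = ptds - frame_times[stop_idx]      # delay time DT between head stop and image stop
    return dt, dt <= pv
```

With pv set to, say, 0.02 s, the display passes only when the on-screen image settles within 20 ms of the headset stopping.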
In this way, in the graph of the specific part position data and specific part time data extracted within the predetermined time range, the stop position coordinates SPCD can be extracted by examining the position data and time data only within a limited range. The stop position coordinates SPCD (that is, the specific time data) can therefore be extracted in an extremely short time compared with the related art. In other words, whether the product performance of the display 30 used for virtual reality is acceptable can be judged accurately and in a short time.
(embodiment 2)
In this embodiment, structures identical to those in embodiment 1 are given the reference numerals used in embodiment 1, and their detailed description is omitted here. As shown in fig. 4, the image analysis method in the present embodiment is the same from the analysis data collection step through step 3; the processing from step 4 onward differs.
Specifically, as shown in fig. 5, the arithmetic unit 24 extracts and stores, as the specific position arrival frame PPRF, the frame whose position data has the same value as the specific part position data PPLD (position data) at the final time of the predetermined time range and whose time data is the smallest (step 4). The arithmetic unit 24 extracts the specific position arrival time data PPTD and the specific position arrival position data PPLD in the specific position arrival frame PPRF as the specific position coordinates PPCD at the time the specific position is reached, and stores them in the storage unit 26 (step 5). Next, the arithmetic unit 24 calculates an approximation curve NCV of the time data of the frames in the time range before the time data of the specific position arrival frame PPRF and their position data in the frame region (step 6). A known method may be used to calculate the approximation curve NCV from the time data and position data of the frames, so a detailed description is omitted. As shown in fig. 5, the approximation curve NCV may be a straight line, that is, a curve with an infinite radius of curvature.
The arithmetic unit 24 calculates the intersection point of the calculated approximation curve NCV and the specific part position data PPLD of the specific position coordinates PPCD (step 7), and extracts the specific part time data PPTD2 and the specific part position data PPLD at the calculated intersection as the stop position coordinates SPCD, storing them in the storage unit 26 (step 8). Next, the arithmetic unit 24 compares the stop position coordinates SPCD of the specific part with the time data calculated in step 3 and the rotational accelerations in the x-axis, y-axis, and z-axis directions, calculates the delay time DT between a change in the motion of the user wearing the display 30 and the corresponding change of the image shown on the display 30, and stores the delay time DT in the storage unit 26 (step 9). The delay time DT may be calculated in the same way as in embodiment 1, so a detailed description is omitted here.
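Because the description leaves the fitting method open, the sketch below uses an ordinary least-squares line as the approximation curve NCV; the tolerance for matching the final position and the fallback when too few points precede the arrival frame are assumptions.

```python
import numpy as np

def stop_time_by_curve(times, positions, tol=1.0):
    """Embodiment-2-style estimate: fit a line to the frames before the arrival
    frame PPRF and intersect it with the level PPLD reached at the final time."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)
    ppld = p[-1]                                        # position at the final time
    ppidx = int(np.argmax(np.abs(p - ppld) <= tol))     # earliest frame already at that position (PPRF)
    if ppidx < 2:
        return t[ppidx], ppld                           # too few points to fit a line
    slope, intercept = np.polyfit(t[:ppidx], p[:ppidx], 1)  # approximation curve NCV (a line here)
    if slope == 0.0:
        return t[ppidx], ppld                           # flat fit: fall back to the arrival frame time
    pptd2 = (ppld - intercept) / slope                  # intersection of NCV with the level PPLD
    return pptd2, ppld                                  # stop position coordinates SPCD
```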
The arithmetic unit 24 determines whether the delay time DT is equal to or smaller than the allowable value PV stored (set) in advance in the storage unit 26 (step 10), and if so, issues a pass determination for the display 30 (step 11). Conversely, if the delay time DT exceeds the allowable value PV, the arithmetic unit 24 issues a fail determination for the display 30 (step 12) and ends the processing flow of the present embodiment (end).
In the present embodiment, the data processing for calculating the stop position coordinates SPCD (that is, the specific time data) of the specific part is more complicated than in embodiment 1, but the amount of data processing can still be reduced sufficiently compared with the conventional technique. In addition, according to the present embodiment, the stop position coordinates SPCD (specific time data) of the specific part can be calculated with higher accuracy than in embodiment 1.
In the above embodiments, the specific part position data at the stop estimated position is the position data at the final time of the predetermined time range over which the specific part is tracked, but the invention is not limited to this form. For example, the value obtained by multiplying the position data at the maximum time (final time) of the predetermined tracking range by a predetermined ratio (stored in advance in the storage unit 26) may be used as the position data of the stop estimated position within the frame region.
Instead of step 4 in each embodiment, the arithmetic unit 24 may perform a stop estimation position determination step, that is, determine for each frame whether the displacement of the specific part within the frame region is within a predetermined threshold range. When the determination in this step is affirmative, the specific part is estimated to have stopped and the stop estimated position is determined. When the determination is negative, the stop estimation position determination step is simply repeated until the determination becomes affirmative.
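This per-frame determination loop can be sketched as follows, with the threshold argument standing in for the value stored in the storage unit 26.

```python
def first_stop_estimated_frame(samples, threshold):
    """Alternative step 4: return the first frame whose frame-to-frame displacement
    is within the threshold; the check simply repeats frame by frame until then."""
    for (_, p_prev), (t, p) in zip(samples, samples[1:]):
        if abs(p - p_prev) <= threshold:   # affirmative determination: stop estimated here
            return t, p
    return None                            # never judged to have stopped in the tracked range
```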
In the above embodiments, the time data of the frame at which the specific part stops moving within the frame area was extracted as the specific time data, but the present invention is not limited to this. For example, as shown in fig. 6, the arithmetic unit 24 may take the frame at which the specific part starts moving from a stopped state within the frame area as the movement start position frame, and use the position data of its coordinates MPCD as PPLD.
The arithmetic unit 24 then takes the time data at that moment as PPTD2 and calculates the approximation curve NCV of the specific part time data and specific part position data after PPTD2. The arithmetic unit 24 may then substitute the position data PPLD of the movement start position frame into the calculated approximation curve NCV and extract the specific time data PPTD. In this form, the "minimum time" and "maximum time" of embodiment 2 are replaced with the earlier time data and the later time data, respectively.
The above embodiments and the modified embodiment assume that the position data of the specific part within the frame region at the minimum or maximum time of the predetermined time range is data in which the specific part is stopped, but the present invention is not limited to this form. This is because, when the high-speed camera 10 is used as the identification unit as in the above embodiments, there are cases in which the specific part appears to a human observer to have stopped even though it has not stopped completely.
For example, the arithmetic unit 24 may also operate as follows: within a small-variation span of a specific time range that includes the maximum time of the predetermined time range, a frame in which the rate of position change of the specific part per unit time is at or below a preset value is regarded as the specific position arrival frame PRF, and the arithmetic unit 24 uses the coordinates at that moment as the stop position coordinates SPCD. Likewise, within a small-variation span of a specific time range that includes the minimum time of the predetermined time range, the arithmetic unit 24 may regard a frame in which the change of the specific part is equal to or greater than a preset rate of position change per unit time as the movement start position frame MPCD. After determining the movement start position frame MPCD, the arithmetic unit 24 can extract the specific time data PPTD by the same procedure as in the modified embodiment shown in embodiment 2 and fig. 6.
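A sketch of this rate-based variant is given below; it scans the whole tracked range rather than only the small-variation spans described above, and the two rate limits are assumed preset values.

```python
def classify_by_rate(samples, stop_rate, start_rate):
    """Flag the first frame whose per-unit-time position change reaches or exceeds
    start_rate as the movement start position frame MPCD, and the first subsequent
    frame whose change falls to or below stop_rate as the arrival frame PRF."""
    prf = mpcd = None
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        rate = abs(p1 - p0) / (t1 - t0)    # position change per unit time
        if mpcd is None and rate >= start_rate:
            mpcd = (t1, p1)
        if prf is None and mpcd is not None and rate <= stop_rate:
            prf = (t1, p1)
    return prf, mpcd
```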
Furthermore, the modifications described in the specification and other known configurations may be combined as appropriate with the configurations of the above embodiments.

Claims (4)

1. An image analysis method,
characterized in that, using an image analysis device provided with a marking unit for marking a specific part of an image and an operation unit, the image analysis method comprises the steps of tracking position data of the specific part marked by the marking unit within a frame area within a predetermined time range, and extracting, as specific time data, time data of the frame at which the specific part starts moving or stops moving in the frame area, wherein
the operation unit calculates the time data and the position data of the specific part in the frame area for each of the frames,
extracts, for each frame, based on the position data of the specific part in the frame area, the frame estimated to be the one at which the specific part starts moving or stops moving in the frame area as a specific position arrival frame, and extracts the maximum time data or the minimum time data of the specific position arrival frame and its position data in the frame area as specific position coordinates,
calculates an approximation curve of the time data of the frames and their position data within the frame area over a time range after the time data of the specific position arrival frame or before the time data of the specific position arrival frame, and
extracts, as the specific time data, the time data at the intersection of the approximation curve and the position data of the specific position arrival frame within the frame area.
2. The image analysis method according to claim 1, wherein
the operation unit regards the position data in the frame area at a minimum time or a maximum time within the predetermined time range as the position data in the frame area of the specific position arrival frame.
3. The image analysis method according to claim 1 or 2, wherein
the position data in the frame area at the maximum time or the minimum time within the predetermined time range is an average value of the position data in the frame area of the frames within a prescribed time span from the maximum time or the minimum time.
4. The image analysis method according to claim 1 or 2, wherein
the operation unit uses, as the position data in the frame area of the specific position arrival frame, the value obtained by multiplying the position data in the frame area at the maximum time or the minimum time within the predetermined time range by a predetermined ratio.
CN201810993233.4A 2017-08-30 2018-08-29 Image analysis method Active CN109426811B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017166060A JP6630322B2 (en) 2017-08-30 2017-08-30 Video analysis method
JP2017-166060 2017-08-30

Publications (2)

Publication Number Publication Date
CN109426811A (en) 2019-03-05
CN109426811B (en) 2023-06-30

Family

ID=65514737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810993233.4A Active CN109426811B (en) 2017-08-30 2018-08-29 Image analysis method

Country Status (2)

Country Link
JP (1) JP6630322B2 (en)
CN (1) CN109426811B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007315892A (en) * 2006-05-25 2007-12-06 Aisin Seiki Co Ltd Obstacle detector and reception time estimation method
JP2008142543A (en) * 2006-12-07 2008-06-26 Toshiba Corp Three-dimensional image processing device, and x-ray diagnosis apparatus
CN101662587A (en) * 2008-08-29 2010-03-03 佳能株式会社 Image pick-up apparatus and tracking method therefor
JP2011254289A (en) * 2010-06-02 2011-12-15 Toa Corp Moving body locus display device, and moving body locus display program
JP2012099976A (en) * 2010-10-29 2012-05-24 Keyence Corp Video tracking apparatus, video tracking method and video tracking program
JP2013081145A (en) * 2011-10-05 2013-05-02 Toyota Central R&D Labs Inc Optical communication device and program
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
WO2016098720A1 (en) * 2014-12-15 2016-06-23 コニカミノルタ株式会社 Image processing device, image processing method, and image processing program
WO2017104372A1 (en) * 2015-12-18 2017-06-22 株式会社リコー Image processing apparatus, image processing system, image processing method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108027652B (en) * 2015-09-16 2021-06-22 索尼公司 Information processing apparatus, information processing method, and recording medium


Also Published As

Publication number Publication date
JP2019047215A (en) 2019-03-22
JP6630322B2 (en) 2020-01-15
CN109426811A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
US10417503B2 (en) Image processing apparatus and image processing method
JP6525453B2 (en) Object position estimation system and program thereof
Son et al. Integrated worker detection and tracking for the safe operation of construction machinery
US10445887B2 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
JP6159179B2 (en) Image processing apparatus and image processing method
US8823779B2 (en) Information processing apparatus and control method thereof
CN103677274B (en) A kind of interaction method and system based on active vision
US20170351924A1 (en) Crowd Monitoring System
CN102156537A (en) Equipment and method for detecting head posture
CN111309144B (en) Method and device for identifying injection behavior in three-dimensional space and storage medium
Pundlik et al. Time to collision and collision risk estimation from local scale and motion
CN111488775B (en) Device and method for judging degree of visibility
EP2476999B1 (en) Method for measuring displacement, device for measuring displacement, and program for measuring displacement
Boltes et al. Influences of extraction techniques on the quality of measured quantities of pedestrian characteristics
JP2021060868A (en) Information processing apparatus, information processing method, and program
CN114155557B (en) Positioning method, positioning device, robot and computer-readable storage medium
CN110991292A (en) Action identification comparison method and system, computer storage medium and electronic device
CN112597903B (en) Electric power personnel safety state intelligent identification method and medium based on stride measurement
CN109426811B (en) Image analysis method
JP2002259984A (en) Motion analyzing device and motion analyzing method
CN116860153A (en) Finger interaction track analysis method, system and storage medium
CN104602094B (en) Information processing method and electronic equipment
Fernández et al. Automated Personnel Digital Twinning in Industrial Workplaces
JP4552018B2 (en) Moving image processing apparatus and moving image processing method
JP2019197278A (en) Image processing apparatus, method of controlling image processing apparatus, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant