WO2016031573A1 - Image processing device, image processing method, program, and recording medium - Google Patents

Image processing device, image processing method, program, and recording medium

Info

Publication number
WO2016031573A1
WO2016031573A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
still image
still
motion
noted
Prior art date
Application number
PCT/JP2015/072818
Other languages
English (en)
Japanese (ja)
Inventor
学斌 胡
Original Assignee
FUJIFILM Corporation (富士フイルム株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation (富士フイルム株式会社)
Publication of WO2016031573A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, a program and a recording medium for extracting and outputting still image data from moving image data.
  • A moving image may include a best-shot scene that cannot be captured (or is difficult to capture) as a still photograph, such as the moment a child blows out a candle at a birthday party, that is, a scene that well represents the motion of a person captured in the moving image.
  • On the other hand, moving images also include scenes in which persons move little, scenes of low importance, scenes with poor composition, scenes with poor image quality, and the like.
  • Patent Document 1 relates to a person-action search device that enables moving image data to be quickly played back from the recorded position at which a particular person is recorded.
  • In this device, a representative image of a recognized person is extracted, and a bird's-eye-view image is created by superimposing on the representative image a tracking line, that is, the movement trajectory of the virtual center of gravity of the person image from the moment it appears in the photographed image until it disappears from the photographed image.
  • Patent Document 2 relates to a technique for extracting a representative frame that clearly represents a video included in the moving image data from the moving image data.
  • In this technique, the frame image having the largest evaluation value output by a face state determination unit is extracted as the representative frame image.
  • An object of the present invention is to solve these problems of the prior art and to provide an image processing apparatus, an image processing method, a program, and a recording medium capable of automatically extracting and outputting, from moving image data, still image data of a still image corresponding to a best-shot scene.
  • the present invention provides a still image data extraction unit that extracts still image data of a plurality of frames from moving image data;
  • An attention person detection unit that detects an attention person who is a person to be processed from each of a plurality of still images corresponding to still image data of a plurality of frames;
  • a movement trajectory detection unit for tracking the movement of the person of interest in the moving image corresponding to the moving image data to detect the movement locus of the person of interest based on the detection result of the person of interest in the plurality of still images;
  • The present invention provides an image processing apparatus including: a motion analysis unit that analyzes the motion of the person of interest in the moving image based on the movement trajectory of the person of interest and calculates, for each of the plurality of still images, an evaluation value for the motion of the person of interest; and a still image data output unit that outputs, from among the still image data of the plurality of frames, still image data of a still image whose evaluation value for the motion of the person of interest is equal to or greater than a threshold value.
  • Preferably, the apparatus further comprises a person-of-interest registration unit for registering, as a registered person, a person to be processed among the persons photographed in the moving image.
  • the person-of-interest detection unit detects a person who matches the registered person or a person whose similarity is equal to or more than a threshold value as a person-of-interest from among each of the plurality of still images.
  • Preferably, the person-of-interest detection unit extracts the faces of persons from each of the plurality of still images, performs main-person determination on the extracted face images, and identifies, from among the extracted persons, the person determined to be the main person as the person of interest.
  • the target person detection unit further detects the face area of the target person in the still image
  • Preferably, based on the face region of the person of interest, the movement trajectory detection unit sets detection regions at arbitrary positions, corresponding to the face region of the person of interest in the still image of the current frame, in the still image of the frame following the current frame, compares each detection region with the face region of the person of interest in the still image of the current frame to compute a similarity, and tracks the movement of the person of interest in the moving image by detecting to which detection region of the still image of the next frame the face region of the person of interest has moved.
  • the motion trajectory detection unit preferably tracks the movement of the person of interest for each of the regions obtained by dividing the region of the upper body of the person of interest into a predetermined number.
  • Preferably, the movement trajectory detection unit generates an integral image of the still image of the next frame and, using the generated integral image, sequentially computes the sum of the luminance values of all pixels included in the detection region at each of a plurality of positions in the still image of the next frame.
  • Preferably, the movement trajectory detection unit tracks the movement of the person of interest using the mean shift method.
  • Preferably, the motion analysis unit defines movement trajectories for motions of the person of interest in advance, analyzes the motion of the person of interest by detecting, from the movement trajectory detected by the movement trajectory detection unit, portions similar to a predefined movement trajectory, and calculates an evaluation value for the motion of the person of interest according to the type of motion.
  • the motion analysis unit analyzes the motion of the target person based on the motion history image of the target person as the motion trajectory of the target person, and calculates an evaluation value for the motion of the target person.
  • the target person detection unit further detects the position of the target person in the still image, the size of the target person in the still image, and the region of the target person in the still image
  • Preferably, the movement trajectory detection unit further detects the length of the movement trajectory of the person of interest and the movement pattern of the person of interest.
  • Preferably, the apparatus further comprises an importance determination unit that determines the importance of each of the plurality of still images based on at least one of the length of the movement trajectory of the person of interest, the position of the person of interest in the still image, and the size of the person of interest in the still image, and calculates, for each of the plurality of still images, an evaluation value of the importance based on the determined importance;
  • a composition analysis unit that analyzes the quality of the composition of each of the plurality of still images based on at least one of the position of the person of interest in the still image, the size of the person of interest in the still image, and the movement pattern of the person of interest, and calculates, for each of the still images, an evaluation value of the composition based on the quality of the analyzed composition; and
  • an image quality determination unit that determines the image quality of each of the plurality of still images based on the region of the person of interest in the still image, and calculates, for each of the plurality of still images, an evaluation value of the image quality based on the determined image quality.
  • In this case, it is preferable that the still image data output unit outputs still image data of one or more still images whose overall evaluation value, combining the evaluation value for the motion of the person of interest with at least one of the evaluation value of importance, the evaluation value of composition, and the evaluation value of image quality, is equal to or greater than a threshold value.
  • Preferably, the composition analysis unit defines movement patterns of the person of interest in advance, detects, from the movement trajectory detected by the movement trajectory detection unit, portions in which the person of interest is moving in a predefined movement pattern, analyzes the composition of the still images corresponding to those portions as good, and calculates the evaluation value of the composition of still images analyzed as good to be higher than that of still images not analyzed as good.
  • Preferably, the person-of-interest detection unit further detects the orientation of the face of the person of interest in the still image, and the apparatus further comprises a top-bottom correction unit that, based on the detected face orientation, corrects the top-bottom orientation of the still image corresponding to the still image data output from the still image data output unit so that it matches the top-bottom orientation of the photographing device at the time the moving image was captured.
  • Further, the present invention provides an image processing method comprising: a step in which the still image data extraction unit extracts still image data of a plurality of frames from moving image data; and a step in which the person-of-interest detection unit detects a person of interest, who is a person to be processed, from each of a plurality of still images corresponding to the still image data of the plurality of frames;
  • the motion locus detection unit detects the movement locus of the target person by tracking the movement of the target person in the moving image corresponding to the moving image data based on the detection result of the target person in the plurality of still images;
  • a step in which the motion analysis unit analyzes the motion of the person of interest in the moving image based on the movement trajectory of the person of interest and, for each of the plurality of still images, calculates an evaluation value for the motion of the person of interest based on the analyzed motion;
  • and a step in which the still image data output unit outputs, from among the still image data of the plurality of frames, still image data of still images whose evaluation value for the motion of the person of interest is equal to or greater than a threshold value.
  • the motion trajectory detection unit detects the length of the motion trajectory of the person of interest and the movement pattern of the person of interest;
  • the importance determination unit determines the importance of each of the plurality of still images based on at least one of the length of the motion locus of the target person, the position of the target person in the still image, and the size of the target person in the still image.
  • the composition analysis unit analyzes the quality of each of the plurality of still images based on at least one of the position of the target person in the still image, the size of the target person in the still image, and the movement pattern of the target person. Calculating an evaluation value of the composition based on the quality of the analyzed composition for each of a plurality of still images;
  • a step in which the image quality determination unit determines the image quality of each of the plurality of still images based on the region of the person of interest in the still image, and calculates, for each of the plurality of still images, an evaluation value of the image quality based on the determined image quality;
  • and a step in which the still image data output unit outputs still image data of one or more still images whose overall evaluation value, combining the evaluation value for the motion of the person of interest with at least one of the evaluation value of importance, the evaluation value of composition, and the evaluation value of image quality, is equal to or greater than a threshold value.
  • Preferably, the method further includes a step in which the top-bottom correction unit, based on the face orientation of the person of interest detected by the person-of-interest detection unit, corrects the top-bottom orientation of the still image corresponding to the still image data output from the still image data output unit so that it matches the top-bottom orientation of the photographing device at the time the moving image was captured.
  • the present invention also provides a program for causing a computer to execute the steps of the image processing method described above.
  • the present invention also provides a computer readable recording medium having recorded thereon a program for causing a computer to execute the steps of the image processing method described above.
  • According to the present invention, a best-shot scene is automatically detected from a moving image, and still image data of a still image corresponding to the best-shot scene can be output from among the still image data of the plurality of frames extracted from the moving image data.
  • FIG. 1 is a block diagram of an embodiment showing the configuration of the image processing apparatus of the present invention.
  • In FIGS. 2(A) to 2(C), the left side of each is a conceptual diagram showing an example of the movement trajectory of the person of interest, and the right side is a conceptual diagram showing an example of the motion history image of the person of interest.
  • FIG. 3(A) is a conceptual diagram showing an example of a still image rotated 90° to the left, and FIG. 3(B) is a conceptual diagram showing the still image of FIG. 3(A) with its top-bottom orientation corrected by rotating it 90° to the right.
  • FIG. 1 is a block diagram of an embodiment showing the configuration of the image processing apparatus of the present invention.
  • the image processing apparatus 10 shown in the figure automatically detects a scene of a best shot from a moving image, and outputs still image data of a still image corresponding to the scene of the best shot.
  • As shown in FIG. 1, the image processing apparatus 10 includes a person-of-interest registration unit 12, a still image data extraction unit 14, a person-of-interest detection unit 16, a movement trajectory detection unit 18, a motion analysis unit 20, an importance determination unit 22, a composition analysis unit 24, an image quality determination unit 26, a still image data output unit 28, and a top-bottom correction unit 30.
  • the target person registration unit 12 registers a target person to be processed as a registered person among persons photographed in a moving image corresponding to moving image data.
  • the notable person registration unit 12 can register, for example, a person designated by the user among persons photographed in a moving image as a registered person.
  • the focused person registration unit 12 can register an image of a registered person (a face image or the like for specifying the focused person).
  • the still image data extraction unit 14 extracts still image data of a plurality of frames from the moving image data.
  • the still image data extraction unit 14 can extract, for example, still image data of all frames (each frame) of moving image data.
  • Alternatively, still image data may be extracted at a fixed frame interval, for example, one frame out of every two frames.
  • still image data of a frame of an arbitrary section of a moving image corresponding to moving image data may be extracted.
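For illustration only (not part of the patent disclosure), the frame extraction described above, whether every frame, every fixed number of frames, or an arbitrary section, reduces to choosing frame indices; the function name and parameters below are hypothetical:

```python
def sample_frame_indices(total_frames, interval=1, start=0, end=None):
    """Return the indices of frames to extract: every frame when
    interval == 1, every `interval`-th frame otherwise, optionally
    restricted to the section [start, end)."""
    if end is None:
        end = total_frames
    return list(range(start, min(end, total_frames), interval))

# Extract one frame out of every two from a 10-frame clip.
print(sample_frame_indices(10, interval=2))  # → [0, 2, 4, 6, 8]
```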
  • The person-of-interest detection unit 16 detects the person of interest, who is the person to be processed, from each of the plurality of still images corresponding to the still image data of the plurality of frames extracted from the moving image data by the still image data extraction unit 14.
  • For example, the person-of-interest detection unit 16 detects the presence or absence of persons in each of the plurality of still images and compares images of the detected persons (for example, their face images) with the image of the registered person registered in the person-of-interest registration unit 12, thereby identifying, among the detected persons, a person who matches or is similar to the registered person (a person whose similarity is equal to or greater than a threshold value) as the person of interest.
  • Alternatively, the person-of-interest detection unit 16 can extract the faces of persons from each of the plurality of still images, perform central-person determination on the extracted face images, and identify, from among the extracted persons, the person determined to be the central person as the person of interest.
  • In the central-person determination, same-person determination processing is performed on the plurality of face images, and the face images are classified into image groups each containing the face images of the same person. Subsequently, one or more of the persons classified into the image groups are determined to be the main character, and one or more persons highly related to the main character among the remaining persons are determined to be important persons. The person corresponding to each image group can be identified based on the face images of the registered persons registered in the person-of-interest registration unit 12.
  • For example, the person with the largest number of detected face images can be determined to be the main character, and a person other than the main character who appears together with the main character in a large number of still images can be determined to be an important person.
  • Alternatively, the distance between the face image of the main character and the face images of persons other than the main character photographed in the same still image may be calculated, and persons whose face images are close to the main character's may be determined to be important persons.
  • Important persons may also be determined based on one or both of the difference between the shooting date/time information of still images in which the main character appears and that of still images in which persons other than the main character appear, and the difference between the corresponding shooting position information.
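For illustration only, the similarity-threshold matching of detected persons against the registered person could be sketched as follows; the face feature vectors and the threshold value are hypothetical, since the patent does not specify a particular face descriptor:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_person_of_interest(registered_vec, detected_vecs, threshold=0.8):
    """Return indices of detected persons whose similarity to the
    registered person is equal to or greater than the threshold."""
    return [i for i, v in enumerate(detected_vecs)
            if cosine_similarity(registered_vec, v) >= threshold]

reg = [1.0, 0.0, 1.0]
detected = [[1.0, 0.1, 0.9],   # close to the registered person
            [0.0, 1.0, 0.0]]   # a different person
print(find_person_of_interest(reg, detected))  # → [0]
```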
  • The person-of-interest detection unit 16 can further detect, in the still image, the position of the person of interest, the size of the person of interest, the region of the person of interest, the upper-body region of the person of interest, the position and size of the face of the person of interest, the face region of the person of interest, the orientation of the face of the person of interest, and the like.
  • Since methods for detecting a person, a person's face, and the like in a still image are known, their detailed description is omitted here; the specific detection method, including the method of detecting the person of interest, is not limited in any way.
  • The movement trajectory detection unit 18 tracks the movement of the person of interest in the moving image corresponding to the moving image data based on the detection results of the person-of-interest detection unit 16 in the plurality of still images, and thereby detects the movement trajectory of the person of interest. By detecting the movement trajectory, the movement trajectory detection unit 18 can also detect the length of the movement trajectory of the person of interest, the movement pattern of the person of interest, and the like.
  • As the movement trajectory of the person of interest, for example, a line representing the trajectory of movement of a region of interest (ROI) such as the face region, as shown on the left side of FIGS. 2(A) to 2(C), can be used, or a motion history image (MHI) as shown on the right side of FIGS. 2(A) to 2(C) can be used.
  • The motion history image represents the motion history of the person of interest, for example, by changing the color at regular time intervals.
  • Based on the face region of the person of interest, the movement trajectory detection unit 18, for example, sets detection regions at arbitrary positions, corresponding to the face region of the person of interest in the still image of the current frame, in the still image of the next frame, computes the similarity between each detection region and the face region of the person of interest in the still image of the current frame, and tracks the movement of the person of interest in the moving image by detecting to which detection region of the still image of the next frame the face region of the person of interest has moved.
  • In this case, the upper-body region of the person of interest is divided into a fixed number of regions, for example four, and the movement of the person of interest is tracked in the same way for each of the five regions in total, which can improve the tracking success rate.
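The frame-to-frame tracking described above can be sketched as an exhaustive similarity search over detection regions; here the similarity measure is the sum of squared differences on a small synthetic image, whereas a real implementation would restrict the search to positions near the previous face region:

```python
import numpy as np

def track_region(prev_face, next_frame):
    """Find the position in next_frame whose region is most similar
    (lowest sum of squared differences) to prev_face."""
    h, w = prev_face.shape
    H, W = next_frame.shape
    best, best_pos = None, None
    for y in range(H - h + 1):          # scan every candidate position
        for x in range(W - w + 1):
            candidate = next_frame[y:y + h, x:x + w]
            ssd = float(np.sum((candidate - prev_face) ** 2))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Synthetic example: a 2x2 "face" patch has moved to position (3, 4).
face = np.array([[9.0, 8.0], [7.0, 6.0]])
frame = np.zeros((6, 8))
frame[3:5, 4:6] = face
print(track_region(face, frame))  # → (3, 4)
```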
  • In this case, generating an integral image of the still image of the next frame (that is, of each frame) and calculating the sums of luminance values using the generated integral image can reduce the amount of calculation and speed up the processing.
  • The integral image is, assuming that the pixel coordinates of the still image increase from left to right and from top to bottom, an image in which the pixel at each coordinate holds the integral (sum) of the luminance values from the top-left pixel down to the pixel at that coordinate.
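This definition can be sketched directly in NumPy: once the integral image is built, the sum of luminance values inside any detection region is obtained from at most four lookups, regardless of the region's size (a minimal sketch, not the patent's implementation):

```python
import numpy as np

def integral_image(img):
    """Each pixel holds the sum of luminance values from the top-left
    pixel down to and including that pixel."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def region_sum(ii, top, left, bottom, right):
    """Sum over the region [top, bottom] x [left, right] (inclusive)
    using four lookups into the integral image ii."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)

img = np.arange(16).reshape(4, 4)  # luminance values 0..15
ii = integral_image(img)
print(region_sum(ii, 1, 1, 2, 2))  # sum of [[5, 6], [9, 10]] → 30
```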
  • The tracking method is not limited to the use of an integral image for reducing the amount of calculation and speeding up the processing; various methods, for example, the mean shift method, can be used. Since the mean shift method is also known, its detailed description is omitted.
  • The motion analysis unit 20 analyzes the motion of the person of interest in the moving image based on the movement trajectory detected by the movement trajectory detection unit 18, for example, the movement trajectory of a region of interest such as the face region, and calculates, for each of the plurality of still images, an evaluation value for the motion of the person of interest based on the analyzed motion.
  • For example, the motion analysis unit 20 defines movement trajectories for motions of the person of interest in advance, for example, the movement trajectory of a running person, and analyzes the motion of the person of interest by detecting, from the movement trajectory detected by the movement trajectory detection unit 18, portions similar to a predefined trajectory. It can then calculate an evaluation value for the motion of the person of interest according to the type of motion, for example, assigning a particular evaluation value when the motion is a running motion.
  • Alternatively, the motion analysis unit 20 can analyze the motion of the person of interest based on the motion history image shown on the right side of FIGS. 2(A) to 2(C) as the movement trajectory of the person of interest, and calculate the evaluation value for the motion of the person of interest.
  • By analyzing the motion of the person of interest based on the motion history image, the motion analysis unit 20 can recognize, for example, that the person of interest is running from right to left as shown on the right of FIG. 2(A), that the person of interest is standing still and moving only the right hand as shown on the right of FIG. 2(B), or that the person of interest is picking up something that has fallen on the ground as shown on the right of FIG. 2(C). It can then calculate the evaluation value for the motion based on whether the person of interest is moving, at which position, in which direction, and so on.
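For illustration, a motion history image can be sketched with the standard MHI update rule: pixels where motion occurred take the current timestamp, and entries older than a fixed duration fade out (the patent's color-coded rendering is not reproduced here):

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Standard MHI update: moving pixels take the current timestamp,
    stale pixels (older than `duration`) are reset to zero."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (mhi < timestamp - duration)] = 0
    return mhi

# A 1x4 strip: a person moves one pixel to the right each frame.
mhi = np.zeros(4)
masks = [np.array([True, False, False, False]),
         np.array([False, True, False, False]),
         np.array([False, False, True, False])]
for t, mask in enumerate(masks, start=1):
    mhi = update_mhi(mhi, mask, timestamp=t, duration=2)
print(mhi.tolist())  # recent positions are larger → [1.0, 2.0, 3.0, 0.0]
```

Reading the gradient of values (1 → 2 → 3) recovers the direction of movement, which is how the MHI supports the recognition examples above.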
  • The importance determination unit 22 determines the importance of each of the plurality of still images based on at least one of the length of the movement trajectory of the person of interest, the position of the person of interest in the still image, and the size of the person of interest in the still image, and calculates, for each of the plurality of still images, an evaluation value of the importance based on the determined importance.
  • For example, the importance determination unit 22 determines that still images corresponding to scenes in the moving image in which the movement trajectory of the person of interest is long have high importance. It also determines that still images in which the person of interest is photographed in the central portion, and still images in which the person of interest is photographed large (the size of the person of interest is equal to or greater than a threshold value), have high importance. The higher the importance, the higher the calculated evaluation value of the importance.
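As an illustrative scoring rule only (the patent does not specify formulas or weights), the importance evaluation could combine the three cues like this:

```python
def importance_score(trajectory_len, center_dist, person_size,
                     frame_diag, long_trajectory=100.0):
    """Hypothetical importance evaluation in [0, 1]: a long trajectory,
    a person near the image center, and a large person score higher.
    `center_dist` is the distance from the person to the image center;
    `person_size` is the fraction of the frame the person occupies."""
    s_traj = min(trajectory_len / long_trajectory, 1.0)
    s_center = 1.0 - min(center_dist / (frame_diag / 2), 1.0)
    s_size = min(person_size, 1.0)
    return (s_traj + s_center + s_size) / 3.0

# A centered, large person with a long trajectory scores near 1.
print(round(importance_score(120, 0, 0.9, frame_diag=200), 2))  # → 0.97
```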
  • The composition analysis unit 24 analyzes the quality of the composition of each of the plurality of still images based on at least one of the position of the person of interest in the still image, the size of the person of interest in the still image, and the movement pattern of the person of interest, and calculates, for each of the plurality of still images, an evaluation value of the composition based on the quality of the analyzed composition.
  • For example, the composition analysis unit 24 analyzes the composition of still images in which the person of interest is photographed in the central portion, or in which the person of interest is photographed large (the size of the person of interest is equal to or greater than a threshold value), as better than the composition of still images in which the person of interest is not photographed in the central portion or is not photographed large. The evaluation value of the composition of still images analyzed as good can then be calculated to be higher than that of still images not analyzed as good.
  • Alternatively, the composition analysis unit 24 defines movement patterns of the person of interest in advance, for example, a pattern in which the person of interest moves from the left end to the right end of the moving image, and detects, from the movement trajectory detected by the movement trajectory detection unit 18, portions in which the person of interest is moving in a predefined pattern. It then analyzes the composition of the still images corresponding to those portions as good, and calculates their evaluation value of the composition to be higher than that of still images not analyzed as good.
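A hypothetical sketch of such a composition rule follows; the thresholds and score values are illustrative, not taken from the patent:

```python
def composition_score(center_dist, person_size, frame_diag,
                      matches_defined_pattern, size_threshold=0.3):
    """Hypothetical composition evaluation: a still image is analyzed
    as 'good' when the person of interest is near the center or is
    photographed large, with a bonus when the person is moving in a
    predefined movement pattern (e.g. left edge to right edge)."""
    good = center_dist <= frame_diag * 0.1 or person_size >= size_threshold
    score = 1.0 if good else 0.5
    if matches_defined_pattern:
        score += 0.5
    return score

print(composition_score(5, 0.4, 200, True))    # centered, large, on-pattern → 1.5
print(composition_score(80, 0.1, 200, False))  # off-center, small → 0.5
```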
  • The image quality determination unit 26 determines the image quality of each of the plurality of still images based on the region of the person of interest in the still image, for example, a region of interest such as the face region, and calculates, for each of the plurality of still images, an evaluation value of the image quality based on the determined image quality.
  • A still image extracted from a moving image may or may not have high image quality depending on the compression method of the moving image data.
  • In addition, blurring may occur in the still image due to defocusing or camera shake, and the luminance, color tone, contrast, and the like may not be appropriate.
  • When such problems are absent, the image quality determination unit 26 determines that the image quality of the still image is good, and for still images determined to have good image quality, the evaluation value of the image quality is calculated to be higher as the image quality is better.
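One common sharpness proxy, not stated in the patent, is the variance of a Laplacian filter over the region of the person of interest: blurred regions have little high-frequency content and therefore score low:

```python
import numpy as np

def laplacian_variance(region):
    """Variance of a 4-neighbour Laplacian over a grayscale region;
    higher values indicate a sharper (less blurred) image."""
    r = np.asarray(region, dtype=float)
    lap = (-4 * r[1:-1, 1:-1] + r[:-2, 1:-1] + r[2:, 1:-1]
           + r[1:-1, :-2] + r[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))       # high-frequency detail
flat = np.ones((32, 32)) * 0.5     # no detail, like a heavily blurred patch
print(laplacian_variance(sharp) > laplacian_variance(flat))  # → True
```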
  • The still image data output unit 28 outputs, from among the still image data of the plurality of frames extracted from the moving image data by the still image data extraction unit 14, still image data of still images corresponding to the best-shot scene, that is, still images whose evaluation value for the motion of the person of interest, or whose overall evaluation value combining the evaluation value for the motion of the person of interest with at least one of the evaluation values of importance, composition, and image quality, is equal to or greater than a threshold value.
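The output step could be sketched as follows; the combination rule and threshold are hypothetical, since the patent only requires that the overall evaluation value be equal to or greater than a threshold:

```python
def select_best_shots(frames, threshold=2.0):
    """Each frame is a dict of evaluation values; the overall value is
    the motion evaluation plus any optional evaluations present."""
    selected = []
    for i, f in enumerate(frames):
        overall = (f["motion"] + f.get("importance", 0)
                   + f.get("composition", 0) + f.get("quality", 0))
        if overall >= threshold:
            selected.append(i)
    return selected

frames = [
    {"motion": 0.9, "importance": 0.8, "composition": 0.7, "quality": 0.9},
    {"motion": 0.2, "importance": 0.3, "composition": 0.4, "quality": 0.5},
]
print(select_best_shots(frames))  # only the first frame reaches 3.3 ≥ 2.0 → [0]
```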
  • Based on the orientation of the face of the person of interest detected by the person-of-interest detection unit 16, the top-bottom correction unit 30 corrects the top-bottom orientation of the still image corresponding to the still image data output from the still image data output unit 28 so that it matches the top-bottom orientation of the photographing device at the time the moving image was captured.
  • FIG. 3A is a conceptual view of an example showing a still image rotated 90 degrees to the left.
  • Such a still image is obtained, for example, when the photographing device is rotated 90° to the right while capturing the moving image.
  • In this case, the top-bottom correction unit 30 rotates the still image shown in FIG. 3(A) by 90° to the right so that the top and bottom of the still image match the top and bottom of the photographing device at the time the moving image was captured, and can thereby correct the top-bottom orientation of the still image as shown in FIG. 3(B).
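The correction itself is a 90° rotation of the pixel array. With NumPy, `np.rot90` rotates counter-clockwise by default, so a rightward (clockwise) rotation uses `k=-1`:

```python
import numpy as np

def rotate_right_90(img):
    """Rotate a still image 90 degrees to the right (clockwise), as in
    the top-bottom correction of the FIG. 3 example."""
    return np.rot90(img, k=-1)

img = np.array([[1, 2],
                [3, 4]])
print(rotate_right_90(img).tolist())  # → [[3, 1], [4, 2]]
```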
  • When two or more persons of interest are registered in the person-of-interest registration unit 12, the person-of-interest detection unit 16 can detect each of the two or more persons of interest from the plurality of still images and sequentially identify which detected person is which person of interest. In this case, the movement trajectory detection unit 18, the motion analysis unit 20, the importance determination unit 22, the composition analysis unit 24, the image quality determination unit 26, the still image data output unit 28, and the top-bottom correction unit 30 sequentially process each of the persons of interest.
  • First, a person designated by the user is registered as the person of interest by the person-of-interest registration unit 12 (step S1).
  • Subsequently, the still image data extraction unit 14 extracts, for example, still image data of all the frames from the moving image data (step S2). That is, as shown in FIG. 5, still images of all the frames are extracted from the moving image.
  • Subsequently, the person of interest registered in the person-of-interest registration unit 12 is detected by the person-of-interest detection unit 16 from each of the still images of all the frames extracted by the still image data extraction unit 14 (step S3).
  • As a result, the person of interest is identified in each of the still images of all the frames, and, as indicated by the frame shown in the figure, the position of the person of interest, the size of the person of interest, the area of the person of interest, and so on are detected in each of the still images of all the frames.
  • Subsequently, the motion locus detection unit 18 tracks the movement of the person of interest in the moving image, for example, the movement of the region of interest indicated by the frame in the figure, and thereby detects the motion locus of the person of interest (step S4).
  • By representing the locus of movement of a region of interest such as a face region as a line, a motion-history image of the person of interest as shown on the right of (A) to (C) can be obtained.
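As an illustrative sketch of this step (the function names and the `(x, y, w, h)` box format are assumptions of the sketch, not the disclosure), the motion locus can be represented as the sequence of centers of the tracked region of interest, and its length as the length of the polyline joining those centers:

```python
import math

def motion_locus(boxes):
    """Center of the tracked region-of-interest box (x, y, w, h) in
    each frame; the sequence of centers is the motion locus."""
    return [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]

def locus_length(locus):
    """Total length of the polyline joining successive centers, i.e.
    how far the person of interest moved over the tracked frames."""
    return sum(math.dist(p, q) for p, q in zip(locus, locus[1:]))
```

Drawing this polyline over a frame would yield a motion-history image of the kind described above.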
  • Subsequently, the motion analysis unit 20 analyzes the motion of the person of interest in the moving image based on the motion locus of the person of interest detected by the motion locus detection unit 18. Then, for each of the still images of all the frames, an evaluation value for the motion of the person of interest is calculated based on the analyzed motion of the person of interest (step S5-1).
  • In addition, the importance degree determination unit 22 determines the importance of each of the still images of all the frames based on the length of the motion locus of the person of interest, the position of the person of interest in the still image, and the size of the person of interest. Then, for each of the still images of all the frames, the evaluation value of the importance is calculated based on the determined importance (step S5-2).
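One way the three cues named above could be combined is a weighted score; the function below is a hypothetical sketch only — the weights, normalizations, and the name `importance_score` are assumptions, as the disclosure does not specify the formula.

```python
import math

def importance_score(locus_len, center, person_area, frame_size,
                     w_len=0.5, w_pos=0.3, w_size=0.2):
    """Hypothetical importance evaluation from locus length, position
    of the person in the frame, and person size (weights assumed)."""
    fw, fh = frame_size
    # Closeness of the person to the frame center: 1.0 at the center,
    # falling to 0.0 at the corners.
    dx = abs(center[0] - fw / 2.0) / (fw / 2.0)
    dy = abs(center[1] - fh / 2.0) / (fh / 2.0)
    pos = 1.0 - min(1.0, math.hypot(dx, dy) / math.sqrt(2.0))
    # Fraction of the frame covered by the person, capped at 1.0.
    size = min(1.0, person_area / float(fw * fh))
    # Saturating term: a longer motion locus (more movement) counts more.
    length = locus_len / (locus_len + 1.0)
    return w_len * length + w_pos * pos + w_size * size
```

Under these assumptions, a large, centered person of interest with the same amount of movement scores higher than a small one in a corner.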
  • The composition analysis unit 24 analyzes the quality of the composition of each of the still images of all the frames based on the position of the person of interest in the still image, the size of the person of interest, and the movement pattern of the person of interest. Then, for each of the still images of all the frames, the evaluation value of the composition is calculated based on the analyzed quality of the composition (step S5-3).
  • The image quality determination unit 26 determines the image quality of each of the still images of all the frames based on the area of the person of interest in the still image. Then, for each of the still images of all the frames, the evaluation value of the image quality is calculated according to the determined image quality, in the case of the present embodiment the degree of blurring (step S5-4). For example, blurring of the region of interest indicated by the frame in FIG. 6 is determined, and the evaluation value of the image quality is calculated to be lower as the degree of blurring becomes larger.
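One common way to realize such a blur determination (an illustrative choice for this sketch, not stated in the disclosure) is the variance of the discrete Laplacian over the region of interest: sharp edges produce a high variance, while blurred regions produce a low one, so the image quality evaluation value can be made to fall as the variance falls.

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour discrete Laplacian of a grayscale
    image given as a list of rows; low values indicate blur."""
    vals = []
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            # 4-neighbour Laplacian: sum of neighbours minus 4x centre.
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A flat (fully blurred) patch yields a variance of zero, while a high-contrast patch such as a checkerboard yields a large one, so comparing the variance against a tuned threshold gives a simple blur decision.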
  • The order in which the evaluation value for the motion of the person of interest, the evaluation value of the importance, the evaluation value of the composition, and the evaluation value of the image quality are calculated is not limited in any way; they can be calculated in any order. These evaluation values can also be calculated in parallel, that is, simultaneously.
  • Subsequently, the still image data output unit 28 outputs still image data of at least one still image based on the comprehensive evaluation value (step S6).
  • FIG. 7 is a graph of an example showing the comprehensive evaluation value of each of the still images of all the frames extracted from the moving image.
  • the vertical axis of the figure represents the comprehensive evaluation value of each still image, and the horizontal axis represents time (frame).
  • In the section of the moving image in which the person of interest is detected by the person-of-interest detection unit 16 and the motion locus of the person of interest is detected by the motion locus detection unit 18, still image data of the still images whose comprehensive evaluation value is equal to or greater than a threshold is output.
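A minimal sketch of this selection step follows; the equal weighting is an assumption of the sketch, since the disclosure does not fix how the four evaluation values are combined into the comprehensive evaluation value.

```python
def comprehensive_values(motion, importance, composition, quality,
                         weights=(0.25, 0.25, 0.25, 0.25)):
    """Per-frame weighted sum of the four evaluation values
    (weights are an illustrative assumption)."""
    wm, wi, wc, wq = weights
    return [wm * m + wi * i + wc * c + wq * q
            for m, i, c, q in zip(motion, importance, composition, quality)]

def best_shot_frames(values, threshold):
    """Indices of frames whose comprehensive value reaches the
    threshold; these correspond to the still images to output."""
    return [idx for idx, v in enumerate(values) if v >= threshold]
```

With a threshold of, say, 0.5, only the frames whose combined score reaches 0.5 are selected as best-shot candidates.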
  • Finally, the top-bottom correction unit 30 corrects the top and bottom of the still image so that they become the same as the top and bottom of the imaging device when the moving image was captured (step S7).
  • In this manner, the image processing apparatus 10 can, for example, automatically detect a best-shot scene in the moving image based on the comprehensive evaluation value including the evaluation value for the motion of the person of interest in the moving image, the evaluation value of the importance of the still image, the evaluation value of the composition, and the evaluation value of the image quality, and can extract the still image data of the still images corresponding to that best-shot scene from the still image data of all the frames extracted from the moving image data.
  • each component of the device may be configured by dedicated hardware, or each component may be configured by a programmed computer.
  • the method of the present invention can be implemented, for example, by a program for causing a computer to execute each of the steps. It is also possible to provide a computer readable recording medium in which the program is recorded.
  • the present invention is basically as described above.
  • the present invention has been described above in detail, but the present invention is not limited to the above embodiment, and it goes without saying that various improvements and changes may be made without departing from the spirit of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

 The invention concerns an image processing device in which a person-of-interest detection unit detects a person of interest in each of a plurality of still images corresponding to still image data of a plurality of frames extracted from moving image data. A motion locus detection unit detects the motion locus by tracking the movement of the person of interest in the moving images based on the detection results for the person of interest. A motion analysis unit analyzes the actions of the person of interest in the moving images based on the motion locus and, for each of the plurality of still images, calculates evaluation values for the actions of the person of interest based on those actions. In addition, a still image data output unit outputs, from among the still image data of the plurality of frames, the still image data of still images whose evaluation value for the actions of the person of interest is equal to or greater than a threshold value.
PCT/JP2015/072818 2014-08-29 2015-08-12 Image processing device, image processing method, program, and recording medium WO2016031573A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-176402 2014-08-29
JP2014176402A JP2016052013A (ja) Image processing apparatus, image processing method, program, and recording medium

Publications (1)

Publication Number Publication Date
WO2016031573A1 true WO2016031573A1 (fr) 2016-03-03

Family

ID=55399470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/072818 WO2016031573A1 (fr) 2014-08-29 2015-08-12 Image processing device, image processing method, program, and recording medium

Country Status (2)

Country Link
JP (1) JP2016052013A (fr)
WO (1) WO2016031573A1 (fr)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023894B (zh) 2016-08-09 2019-01-22 Shenzhen China Star Optoelectronics Technology Co., Ltd. Driving method and driving system for reducing image sticking on an AMOLED display
JP6737247B2 (ja) * 2017-07-20 2020-08-05 Kyocera Document Solutions Inc. Image processing apparatus, image processing method, and image processing program
JP6778398B2 (ja) * 2017-07-20 2020-11-04 Kyocera Document Solutions Inc. Image forming apparatus, image forming method, and image forming program
EP4047923A4 (fr) 2020-12-22 2022-11-23 Samsung Electronics Co., Ltd. Dispositif électronique comprenant une caméra et son procédé
KR20220090054A (ko) * 2020-12-22 2022-06-29 Samsung Electronics Co., Ltd. Electronic device comprising a camera and method therefor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009200713A (ja) * 2008-02-20 2009-09-03 Sony Corp 画像処理装置、画像処理方法、プログラム
WO2010116715A1 (fr) * 2009-04-07 2010-10-14 パナソニック株式会社 Dispositif de prise d'image, procédé de prise d'image, programme et circuit intégré
JP2011175599A (ja) * 2010-02-25 2011-09-08 Canon Inc 画像処理装置、その処理方法及びプログラム
JP2011239021A (ja) * 2010-05-06 2011-11-24 Nikon Corp 動画像生成装置、撮像装置および動画像生成プログラム


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010974A1 (fr) * 2018-07-12 2020-01-16 腾讯科技(深圳)有限公司 Procédé et dispositif de traitement d'images, support lisible par ordinateur et dispositif électronique
US11282182B2 (en) 2018-07-12 2022-03-22 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable medium, and electronic device

Also Published As

Publication number Publication date
JP2016052013A (ja) 2016-04-11

Similar Documents

Publication Publication Date Title
WO2016031573A1 (fr) Image processing device, image processing method, program, and recording medium
US9734612B2 (en) Region detection device, region detection method, image processing apparatus, image processing method, program, and recording medium
US10417773B2 (en) Method and apparatus for detecting object in moving image and storage medium storing program thereof
KR101071352B1 Apparatus and method for object tracking based on a pan-tilt-zoom camera using a coordinate map
JP6482195B2 (ja) Image recognition apparatus, image recognition method, and program
JP6389801B2 (ja) Image processing apparatus, image processing method, program, and recording medium
JP4373840B2 (ja) Moving object tracking method, moving object tracking program and recording medium therefor, and moving object tracking apparatus
JP6579950B2 (ja) Image analysis apparatus, program, and method for detecting a person appearing in an image captured by a camera
JP6362085B2 (ja) Image recognition system, image recognition method, and program
JP4764172B2 (ja) Method for detecting moving object candidates by image processing, moving object detection method for detecting a moving object from moving object candidates, moving object detection apparatus, and moving object detection program
JP6389803B2 (ja) Image processing apparatus, image processing method, program, and recording medium
CN105374051B (zh) Lens-shake-resistant video moving object detection method for smart mobile terminals
JP2012212373A (ja) Image processing apparatus, image processing method, and program
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
KR20170053807A (ko) Method for detecting a target object in an image with a dynamic background
JP4913801B2 (ja) Apparatus and method for identifying occluding object images
JP2008288684A (ja) Person detection apparatus and program
JP6548788B2 (ja) Image processing apparatus, image processing method, program, and recording medium
JP2008287594A (ja) Specific motion determination apparatus, reference data generation apparatus, specific motion determination program, and reference data generation program
JP2002342762A (ja) Object tracking method
González et al. Single object long-term tracker for smart control of a ptz camera
JP6798609B2 (ja) Video analysis apparatus, video analysis method, and program
KR20160000533A (ko) Method and system for multiple object detection and tracking using local feature points for providing object information in augmented reality
JP5539565B2 (ja) Imaging apparatus and subject tracking method
CN113033350B (zh) Pedestrian re-identification method based on overhead-view images, storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15836773

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15836773

Country of ref document: EP

Kind code of ref document: A1