WO2018139810A1 - Sensing apparatus for calculating position information of object in motion, and sensing method using same


Info

Publication number
WO2018139810A1
WO2018139810A1 (PCT/KR2018/000897)
Authority
WO
WIPO (PCT)
Prior art keywords
difference
image
images
moving object
position information
Prior art date
Application number
PCT/KR2018/000897
Other languages
French (fr)
Korean (ko)
Inventor
박현진
Original Assignee
Golfzon Co., Ltd. (주식회사 골프존)
Priority date
Filing date
Publication date
Application filed by Golfzon Co., Ltd. (주식회사 골프존)
Publication of WO2018139810A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B 69/00 - Training appliances or apparatus for special sports
    • A63B 69/36 - Training appliances or apparatus for special sports for golf
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/254 - Analysis of motion involving subtraction of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B 2220/00 - Measuring of physical parameters relating to sporting activity
    • A63B 2220/10 - Positions
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B 2220/00 - Measuring of physical parameters relating to sporting activity
    • A63B 2220/80 - Special sensors, transducers or devices therefor
    • A63B 2220/806 - Video cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Definitions

  • The present invention relates to a sensing device that continuously acquires and analyzes images of a moving object, extracts the object from each image, and calculates the object's position information from the images, and to a sensing method using the same.
  • More particularly, it relates to a sensing device, and a sensing method using the same, that acquire and analyze images of a golf club and a golf ball while a user hits the ball with a golf swing, and calculate sensing data on the motion of the golf club and the golf ball.
  • Technology that calculates the position information of a moving object by analyzing its images is mainly used as the sensing component of the various simulators and devices that let users enjoy popular sports such as baseball, soccer, basketball, and golf in simulated, interactive form, indoors or at a specific place.
  • For example, when a user swings a golf club and hits a golf ball, it is possible to calculate where the moving golf club and golf ball are located in every frame in which they are captured.
  • From the position information calculated for each frame, various kinematic quantities such as the trajectory, speed, direction angle, and elevation angle of the club and the golf ball can be calculated and used to produce analysis information about the user's golf swing, or, in a virtual golf simulation system such as screen golf, to implement a simulation image of the golf ball in flight.
  • In images acquired continuously of a moving object, the object appears at a different position in every frame while the background portion is almost fixed and does not change from frame to frame. If a difference image is generated by a difference operation on the pixel values (brightness values) of corresponding pixels in two such images, the background portion is removed in the difference image and only the moving-object portion remains. This difference technique is widely used to extract moving objects from images.
  • FIG. 1 is a diagram for explaining the difference image technique as described above.
  • FIG. 1 shows the difference image Diff(ta, tb) of an image (frame_ta) captured at time ta and an image (frame_tb) captured at time tb.
  • The difference operation takes the difference of the pixel values (brightness values) of pixels at the same position in the two images. If the difference between the pixel values at the same position is greater than 0, that difference becomes the pixel value (brightness value) of the corresponding pixel in the difference image; if the difference is less than or equal to zero, it is replaced by zero.
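The ordinary difference operation described above can be sketched as follows (a minimal NumPy illustration, not code from the patent; the array contents are invented for the example):

```python
import numpy as np

def difference_image(frame_a, frame_b):
    """Ordinary difference image: where frame_a is brighter than frame_b,
    the positive difference becomes the pixel value; otherwise it is 0."""
    diff = frame_a.astype(np.int16) - frame_b.astype(np.int16)
    return np.where(diff > 0, diff, 0).astype(np.uint8)

# A bright "moving object" pixel over a static background of value 100
frame_ta = np.array([[100, 180, 100]], dtype=np.uint8)
frame_tb = np.array([[100, 100, 160]], dtype=np.uint8)
result = difference_image(frame_ta, frame_tb)  # background cancels to 0,
                                               # the object remains as 80
```

Note that the third pixel, which is brighter in frame_tb than in frame_ta, is clipped to zero; this clipping is exactly what damages objects darker than the background, as discussed below.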
  • FIG. 1 illustrates a difference operation performed on images of size 14 × 13, that is, 14 pixels in the x-axis direction and 13 pixels in the y-axis direction.
  • The purpose of this difference operation is to extract the moving object (part A) from the frame_ta image.
  • When the difference operation is performed on all pixels of the two images using the image (frame_tb) at time tb, the difference image Diff(ta, tb) shown in FIG. 1 is generated.
  • For example, pixel (1,1) of the difference image Diff(ta, tb) appears black because its pixel value becomes 0.
  • Object A can be extracted fairly accurately from the difference image if the reflectivity of the moving object under the lighting is always constant and no other variables exist.
  • In practice, however, the reflectance of the light may differ from frame to frame, so the pixel values of the pixels that make up the object may vary between frames, and other objects may also be captured in the image.
  • In the difference image Diff(ta, tb) of the frame_ta and frame_tb images in FIG. 1, it can be seen that object A is substantially damaged by the difference operation and appears as A′.
  • Moreover, the object may have pixel values that are similar to or darker than the background portion of the image. In such cases it is very difficult to extract an object like a golf club from the image by the difference image technique, and even when it is extracted, the accuracy of the position information calculated from it inevitably deteriorates.
  • In contrast, the shape and size of golf balls are standardized and most are brightly colored, so a golf ball can be extracted from the image fairly accurately by the difference image technique even if some pixels of the ball portion are lost in the process.
  • Because the size is standardized, the contour can also be fitted fairly accurately.
  • Korean Patent Application No. 10-2011-0025149; Korean Patent Application No. 10-2011-0111875
  • Japanese Patent Application Laid-Open No. 2005-210666, and the like.
  • The present invention solves the conventional problems described above by providing a sensing device, and a sensing method using the same, that can extract a moving object of unspecified shape, size, material, and reflectivity, such as a golf club, fairly accurately and stably from images captured of its motion by a simple method, and calculate the position information of the moving object.
  • A sensing method of a sensing device for calculating position information of a moving object includes: continuously acquiring images at an angle of view covering the moving object; generating, for each of the successively acquired images, a difference operation image with respect to a reference image, and generating a difference operation image of each two successive difference operation images; and calculating position information of the moving object from each difference operation image of the two successive difference operation images.
  • The step of generating the difference operation image of the two successive difference operation images may include determining the image of one frame among the continuously acquired images as the reference image.
  • The step of generating the difference operation image of each two successive difference operation images may include performing image processing so that the difference operation with the reference image, applied to each of the successively acquired images, retains pixels whose pixel values are brighter than the background portion of each image as well as pixels whose pixel values are darker than it.
  • The step of generating the difference operation image of the two successive difference operation images may include: generating an absolute value difference image whose pixel values are the absolute values of the differences between the pixel value of each pixel in each successively acquired image and the pixel value at the corresponding position in the reference image; and generating a difference-difference image by a difference operation on each two successive absolute value difference images, in which a difference between the pixel values of corresponding pixels greater than zero is taken as the pixel value and a difference less than zero is set to zero.
  • A sensing method of a sensing device for calculating position information of a moving object comprises: continuously acquiring images at an angle of view covering the moving object; performing image processing on each successively acquired image, which contains a background portion, a position-moving object appearing at a new position in every frame, and a movement-position overlapping portion whose changing positions overlap across successive images, so as to extract the position-moving object and the movement-position overlapping portion and remove the background portion; extracting the moving object by removing the movement-position overlapping portion through a difference operation on each two consecutive image-processed images; and calculating position information of the extracted object.
  • The step of performing image processing to remove the background portion may comprise image processing that, by taking the absolute value of the difference operation with the reference image for each successively acquired image, includes both pixels whose pixel values are brighter than the background portion of each image and pixels whose pixel values are darker than it.
  • A sensing method of a sensing device for calculating position information of a golf club moving in accordance with a user's golf swing includes: continuously acquiring images at an angle of view covering the user's golf swing; generating an absolute value difference image by taking the absolute value of a difference operation with a reference image for each of the successively acquired images; generating a difference-difference image by performing a difference operation on each two consecutive absolute value difference images among the generated absolute value difference images; and calculating position information of the moving golf club from each generated difference-difference image.
  • A sensing device for calculating position information of a moving object includes: a camera unit that continuously acquires images at an angle of view covering the moving object;
  • an image processing unit that performs image processing to extract, for each successively acquired image, a difference operation image with respect to a reference image and then a difference operation image of each two successive difference operation images;
  • and an information calculating unit configured to calculate position information of the moving object from each difference operation image of the two successive difference operation images extracted by the image processing unit.
  • The image processing unit may determine the image of one frame among the continuously acquired images as the reference image, generate absolute value difference images whose pixel values are the absolute values of the differences from the reference image for each of the continuously acquired images, and generate a difference-difference image through the difference operation of each two successive absolute value difference images.
  • A sensing device for calculating position information of a golf club moving in accordance with a user's golf swing includes: a camera unit that continuously acquires images at an angle of view covering the user's golf swing; an image processing unit that generates an absolute value difference image by taking the absolute value of a difference operation with a reference image for each successively acquired image, and generates a difference-difference image by performing a difference operation on each two consecutive absolute value difference images among the generated absolute value difference images; and an information calculating unit configured to calculate position information of the moving golf club from each generated difference-difference image.
  • The sensing device for calculating position information of a moving object and the sensing method using the same according to the present invention extract a moving object of unspecific shape, size, material, reflectivity, and the like, such as a golf club, fairly accurately and reliably, without attaching a special marker or additional equipment, by generating a difference operation image of each two consecutive difference operation images, and thus have the effect of calculating accurate position information from the extraction.
  • FIG. 1 is a diagram for explaining a difference image technique, which is generally used in an image processing technology.
  • FIG. 2 is a block diagram illustrating a configuration of a sensing device according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a process according to a sensing method of a sensing device according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of images continuously acquired by a camera unit of a sensing apparatus according to an embodiment of the present invention.
  • FIG. 5 illustrates a process of generating an absolute difference image using the images illustrated in FIG. 4.
  • FIG. 6 is a diagram illustrating a process of generating a difference-difference image using the absolute value difference images shown in FIG. 5.
  • FIG. 2 is a block diagram illustrating a configuration of a sensing device according to an embodiment of the present invention.
  • The present invention was completed in the course of developing a sensing device that analyzes images captured when a user hits a golf ball with a golf swing of a golf club, extracts the golf ball and the golf club from the images, and calculates their position information.
  • The present invention provides a method for accurately and stably extracting an object, such as a golf club, that is difficult to extract from an image by conventional image processing techniques. Using the sensing device and sensing method according to the present invention, not only a standardized object such as a golf ball but also an object that is otherwise difficult to extract can be effectively extracted through image processing and its position information accurately calculated.
  • Hereinafter, a golf club will be described as the main example of the above-mentioned 'moving object', but, as described above, the present invention can of course be applied equally to any object that is difficult to extract through image processing, not only a golf club.
  • As shown in FIG. 2, the sensing device includes a camera unit 100 and a sensing processor 200, and the sensing processor 200 includes an image processor 210 and an information calculator 220.
  • The camera unit 100 is configured to continuously acquire images at an angle of view covering the moving object. In order to calculate position information of the moving object in three-dimensional space, the camera unit 100 preferably comprises a plurality of cameras with different viewing angles that acquire images of the same object, for example a first camera 110 and a second camera 120 as illustrated in FIG. 2, synchronized with each other in a stereo configuration.
  • Because the plurality of cameras 110 and 120 of the camera unit 100 are synchronized with each other in a stereo configuration, the 2D information of an object extracted from the image acquired through the first camera 110 and from the image acquired through the second camera 120 for the same object can be converted into 3D information.
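The patent does not spell out how the 2D-to-3D conversion is performed; as a hedged sketch, one common approach for two synchronized, calibrated cameras is linear (DLT) triangulation. The function name and the toy projection matrices below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3-D point from matching 2-D image points seen by two
    synchronized cameras with known 3x4 projection matrices (linear DLT)."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector for the smallest singular value
    return X[:3] / X[3]           # de-homogenize

# Two toy cameras: identical intrinsics, the second shifted along the x axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
truth = np.array([0.0, 0.0, 5.0, 1.0])           # ground-truth 3-D point
p1, p2 = P1 @ truth, P2 @ truth
pt1, pt2 = p1[:2] / p1[2], p2[:2] / p2[2]        # projected 2-D observations
recovered = triangulate(P1, P2, pt1, pt2)        # close to [0, 0, 5]
```

In practice the projection matrices would come from a prior camera calibration of the stereo pair.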
  • FIG. 2 illustrates the case where the plurality of cameras 110 and 120 of the camera unit 100 acquire images of a user U swinging a golf club GC, but the present invention is not limited thereto and can be used in various sports simulation systems, for example when the user swings a baseball bat.
  • The sensing processor 200 may be configured to include an image processing unit 210, which collects images from each of the cameras 110 and 120 of the camera unit 100 and performs predetermined image processing to extract the object, and an information calculating unit 220, which calculates three-dimensional position information and the like from the two-dimensional position information of the object extracted from the images.
  • The sensing processor 200 extracts the moving object from each of the images collected by the cameras 110 and 120 of the camera unit 100, calculates the position information of the object, and transmits the position information of the object to the client 300.
  • The client 300 may then perform its own functions, such as computing new information or producing analysis information using the received position information of the object.
  • For example, when the client 300 is implemented as a simulator used for a screen golf system, it receives the position information of the golf ball and the golf club from the sensing processing unit 200 and can implement a simulation image of the trajectory of the ball flying on a virtual golf course using that position information.
  • When the client 300 is implemented as a golf swing analysis device, it can receive the position information of the golf ball and the golf club from the sensing processing unit 200 and use it to provide analysis information on the user's golf swing, a diagnosis of the swing, and lesson information for correcting it.
  • The image processing unit 210 processes the images so as to extract, for each image continuously acquired by the camera unit 100, a difference operation image with respect to the reference image and then a difference operation image of each two successive difference operation images.
  • The information calculating unit 220 may be configured to calculate position information of the moving object from each difference operation image of the two consecutive difference operation images extracted by the image processing unit.
  • The information calculating unit 220 may be included in the sensing device, but is not limited thereto and may instead be included in the client 300. That is, the sensing device may acquire the images, process them, and extract the moving object, and then transmit the extracted information to the information calculating unit of the client, which calculates the position information of the extracted moving object and various information derived from it.
  • First, the camera unit photographs the golf swing with the golf club at a predetermined angle of view and continuously acquires images (S10).
  • The continuously acquired images are transferred to the image processing unit of the sensing processing unit, and the image processing unit selects the reference image for the difference operation, according to preset criteria, from among the continuously acquired images (S11).
  • The reference image is the image used in the difference operation to remove the background portion from the images from which the moving object of interest is to be extracted.
  • The reference image is preferably an image containing only the background portion, without the moving object of interest, or at least an image in which the moving object of interest appears at a position that does not overlap with the positions at which it appears in the other images.
  • For example, the image at time ta or the image at time tn may be selected as the reference image for performing the difference operation on each of the images acquired at times t1, t2, and so on.
  • The images at times t1, t2, etc., on which the difference operation with the reference image is to be performed will be referred to as 'target images' for convenience.
  • Which image to select as the reference image can be determined according to what the moving object is and how it moves. For example, when a golf club moving during a golf swing is to be extracted from the target images, the position at which the golf club appears in the target images should preferably not overlap the position at which it appears in the reference image; in the above example, therefore, the image at time tn, a time far from the target images, can be preset in the sensing processing unit as the reference image.
  • The difference operation according to the present invention is not the ordinary difference operation described in the background art, but a method using a difference operation of difference operations, in which the difference operation is performed twice. More specifically, 'absolute value difference operation images' are first generated by performing an absolute value difference operation, and a 'difference operation' is then performed again on them to generate a difference-difference image, thereby providing a method of accurately extracting the object of interest.
  • Regarding steps S12 and S13 of FIG. 3: as described above, when the image processor of the sensing processor has selected the reference image (S11), it generates, for each of the continuously acquired images (target images), an absolute value difference image whose pixel values are the absolute values of the differences from the reference image (S12), and generates a difference-difference image through the difference operation of each two consecutive absolute value difference images (S13).
  • The absolute value difference operation is an operation that takes the absolute value of the difference between the pixel value (brightness value) of a pixel in the target image and the pixel value (brightness value) of the pixel at the corresponding position in the reference image.
  • An image whose pixel values are these absolute values will be referred to as an 'absolute value difference operation image'.
  • The image generated by performing the difference operation again on the absolute value difference operation images described above will be referred to as a 'difference-difference image'.
  • The difference-difference image generated by performing the difference operation twice excludes the interfering elements around the object of interest, that is, the moving object to be extracted, and the object of interest appears fairly accurately in it. Therefore, the object of interest can easily be extracted from the difference-difference image.
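Steps S12 and S13 can be sketched as follows (a minimal NumPy illustration with invented pixel values; one-row "images" stand in for real frames):

```python
import numpy as np

def abs_diff(target, reference):
    """S12: absolute value difference image of a target frame vs. the reference."""
    return np.abs(target.astype(np.int16) - reference.astype(np.int16))

def diff_of_diff(absdiff_curr, absdiff_next):
    """S13: clipped difference of two consecutive absolute value difference
    images; negative values become zero, removing parts shared by both."""
    d = absdiff_curr - absdiff_next
    return np.where(d > 0, d, 0)

# Background 100 everywhere; a dark object of interest occupies a new column
# in each target frame; the reference frame tn contains it at the last column.
frame_t1 = np.array([[100, 30, 100, 100]], dtype=np.uint8)
frame_t2 = np.array([[100, 100, 30, 100]], dtype=np.uint8)
frame_tn = np.array([[100, 100, 100, 30]], dtype=np.uint8)  # reference

ad1 = abs_diff(frame_t1, frame_tn)   # [[0, 70, 0, 70]]: object and Mn survive
ad2 = abs_diff(frame_t2, frame_tn)   # [[0, 0, 70, 70]]
dod = diff_of_diff(ad1, ad2)         # [[0, 70, 0, 0]]: only the t1 object remains
```

The column shared with the reference (the counterpart of Mn) cancels in the subtraction, and frame_t2's object goes negative and is clipped to zero, mirroring how the interference elements are removed in FIG. 6.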
  • For the portion corresponding to the moving object in each difference-difference image generated as described above, a contour is determined using edge information, a specific position such as the center point or center of gravity of the determined contour is found, and that position can then be calculated as the position information of the moving object (S14).
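As a simple stand-in for the contour-based step S14 (the threshold value and function name are assumptions for illustration; the patent's edge-based contour fitting is not reproduced here), the center of gravity of the remaining blob can be computed directly:

```python
import numpy as np

def object_position(dod_image, threshold=10):
    """Center of gravity (x, y) of the bright region remaining in a
    difference-difference image, or None if nothing exceeds the threshold."""
    ys, xs = np.nonzero(dod_image > threshold)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

blob = np.zeros((5, 5), dtype=np.uint8)
blob[1:3, 2:4] = 90                   # a 2x2 bright blob
center = object_position(blob)        # (2.5, 1.5)
```

The per-frame (x, y) positions produced this way are what the information calculating unit would then convert into 3D position information and kinematic quantities.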
  • Referring to FIGS. 4 to 6, a specific example of the generation of absolute value difference images and difference-difference images by the sensing method of the sensing device according to an embodiment of the present invention will be described.
  • FIG. 4 illustrates an example of images continuously acquired by a camera unit of a sensing apparatus according to an embodiment of the present invention
  • FIG. 5 illustrates a process of generating an absolute value difference image using the images shown in FIG. 4.
  • FIG. 6 illustrates a process of generating a difference-difference image by using the absolute difference images shown in FIG. 5.
  • The images shown in FIGS. 4 to 6 are not images acquired by actual photographing, but images created artificially in order to explain the sensing method of the sensing apparatus according to the present invention more effectively.
  • The images continuously acquired by the camera unit are images acquired at times t1, t2, t3 ... tn, respectively.
  • FIG. 4 shows (a) the image (Frame_t1) acquired at time t1, (b) the image (Frame_t2) acquired at time t2, (c) the image (Frame_t3) acquired at time t3, and (d) the image (Frame_tn) acquired at time tn, which is somewhat distant from t1, t2, t3.
  • The images shown in (a) to (c) of FIG. 4 (Frame_t1, Frame_t2, Frame_t3, etc.) are target images, and Frame_tn, the image shown in (d), is the reference image.
  • The same background portion BG is present in all of the continuously acquired images.
  • The objects of interest (M1, M2, M3 ... Mn), corresponding to the moving object to be detected, are present at different positions in each image as the object moves over the times t1 to tn at which the images are acquired (some slight overlap may occur, but, depending on the speed of the object and the shooting speed of the camera, it is desirable that they exist at different positions in every frame).
  • Here, the user's body, the golf mat, the rubber tee, and the like correspond to the movement-position overlapping portion, whose movement changes overlap between frames, while the golf club corresponds to the object of interest, whose position changes do not overlap from frame to frame.
  • FIG. 5 shows the process of generating the absolute value difference operation images, corresponding to step S12 of FIG. 3, using the target images and the reference image shown in (a) to (d) of FIG. 4.
  • FIG. 5(a) shows the case where the absolute value difference image AbsDiff(t1, tn) is generated by the absolute value difference operation between the target image Frame_t1 at time t1 and the reference image Frame_tn,
  • FIG. 5(b) shows the case where the absolute value difference image AbsDiff(t2, tn) is generated by the absolute value difference operation between the target image Frame_t2 at time t2 and the reference image Frame_tn,
  • and FIG. 5(c) shows the case where the absolute value difference image AbsDiff(t3, tn) is generated by the absolute value difference operation between the target image Frame_t3 at time t3 and the reference image Frame_tn.
  • Since the absolute value difference operation takes the absolute value of the difference between the pixel values (brightness values) of the target image and the reference image, the larger that difference, the higher the pixel value (brightness value) in the absolute value difference operation image.
  • Therefore, as shown in FIGS. 5(a) to 5(c), the background portion BG, where the pixel values of the target image and the reference image are substantially the same, can be almost entirely removed, while the objects of interest (M1, M2, M3), the movement-position overlapping portions (V1, V2, V3, Vn), and the unnecessary position-moving object Mn from the reference image appear fairly intact in the absolute value difference images.
  • For example, the absolute value difference image AbsDiff(t1, tn) is generated by the absolute value difference operation between the target image Frame_t1 and the reference image Frame_tn. In AbsDiff(t1, tn), the background portion BG is mostly erased; the object of interest M1 and the unnecessary position-moving object Mn, which exist at different positions, appear almost intact thanks to the absolute value of the difference operation; and a substantial part of the movement-position overlapping portion VO1 (the part where V1 and Vn overlap) also remains.
  • A part of the movement-position overlapping portion VO1 may actually be erased by the absolute value difference operation where the pixel values of V1 and Vn are the same in the region where they overlap; that is, although the drawings show it appearing intact, parts of it may in fact be erased.
  • With the absolute value difference operation described above, both the pixels of the object of interest with high pixel values and those with low pixel values appear completely in the absolute value difference operation image. Moreover, pixels with low pixel values can acquire high pixel values so that they are clearly distinguished from their surroundings (that is, parts that are dark in the target image appear bright in the absolute value difference image).
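This dark-appears-bright property can be seen numerically (invented pixel values for illustration):

```python
import numpy as np

# Background brightness 100; a dark object pixel of value 20 in the target
target = np.array([[100, 20, 100]], dtype=np.int16)
reference = np.array([[100, 100, 100]], dtype=np.int16)

absolute = np.abs(target - reference)   # [[0, 80, 0]]
# The background cancels to 0 and the dark object pixel becomes bright (80);
# an ordinary clipped difference (20 - 100 -> 0) would have lost it entirely.
```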
  • A difference-difference image is generated by performing a difference operation on the absolute value difference images AbsDiff(t1, tn), AbsDiff(t2, tn), AbsDiff(t3, tn), and so on, generated respectively in FIGS. 5(a) to 5(c); this will be described with reference to FIG. 6.
  • Here, the difference operation means the ordinary difference operation described in the background art, and the difference-difference images can be obtained as shown in FIG. 6.
  • In the absolute value difference images, besides the objects of interest M1, M2, M3, the movement-position overlapping portions (VO1, VO2, VO3) and the unnecessary position-moving object Mn are all interference elements to be removed. Such interference elements can be eliminated by the difference operation of two consecutive absolute value difference images.
  • AbsDiff (t2, tn) of the absolute value difference image AbsDiff (t1, tn) is shown.
  • the moving position overlapping portions VO1 and VO2 which are almost identical to the moving object Mn, which are the same portions on the two images, are almost eliminated, and the positions are not overlapped.
  • the object of interest M1 is preserved, and the portion of M2 that has a negative value by the difference operation disappears on the difference-difference image (DOD (t1, t2
  • likewise, through the difference operation of the absolute value difference image AbsDiff(t2, tn) against AbsDiff(t3, tn), the position-moving object Mn, which exists identically on both images, and the movement-position overlapping portions VO2 and VO3, which are almost the same, are almost all erased; the object of interest M2, whose position does not overlap, is preserved, and the portion of M3 that takes a negative value in the difference operation has its pixel value replaced by 0 and disappears on the difference-of-difference image DOD(t2, t3).
  • the movement-position overlapping portions VO1 and VO2, or VO2 and VO3, do not overlap completely, so the difference operations of two successive absolute difference images can leave traces of them on the difference-of-difference images DOD(t1, t2) and DOD(t2, t3).
  • for example, the movement-position overlapping portion VO1 on the absolute difference image AbsDiff(t1, tn) and the movement-position overlapping portion VO2 on the absolute difference image AbsDiff(t2, tn) are not completely identical, and pixels at corresponding positions in the overlapping portions may have different pixel values; in reality, therefore, a number of small, unidentifiable noise portions other than the NZ portions shown on each difference-of-difference image in FIG. 6 may remain.
  • although only the NZ portion is shown as noise in FIG. 6, other noise may in fact exist in addition to the NZ portion; such noise (the NZ portion and other noise portions) can be removed by applying a commonly used noise reduction technique.
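As one illustration of such a noise reduction step, the sketch below removes isolated specks from a difference-of-difference image using only NumPy. This is only one of many commonly used techniques (the document does not name a specific one; median filtering or morphological opening would serve equally well), and the array contents are hypothetical:

```python
import numpy as np

def suppress_isolated_pixels(img, min_neighbors=2):
    """Zero out nonzero pixels with fewer than `min_neighbors` nonzero
    pixels in their 3x3 neighborhood. A simple stand-in for the
    'commonly used noise reduction technique' the text refers to."""
    nz = (img > 0).astype(np.int32)
    padded = np.pad(nz, 1)
    # Count nonzero neighbors (excluding the pixel itself) for every pixel.
    neighbors = sum(
        np.roll(np.roll(padded, dy, axis=0), dx, axis=1)[1:-1, 1:-1]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    out = img.copy()
    out[(nz == 1) & (neighbors < min_neighbors)] = 0
    return out

# Hypothetical difference-of-difference image: a preserved object blob
# plus one isolated residual speck of NZ-style noise.
dod = np.zeros((6, 6), dtype=np.uint8)
dod[1:3, 1:3] = 80   # 2x2 blob: the preserved object of interest
dod[4, 4] = 30       # isolated speck: residual noise
clean = suppress_isolated_pixels(dod)
print(clean[1, 1], clean[4, 4])  # 80 0
```

The blob survives because each of its pixels has several nonzero neighbors, while the isolated speck has none and is erased.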
  • the contour of the extracted part is specified using its edge information, and the center point or the center of gravity of the part specified by the contour can be specified as a feature point and computed as the position information.
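The center-of-gravity computation mentioned above can be sketched as follows (a minimal NumPy example; the extracted part and its values are hypothetical):

```python
import numpy as np

def center_of_gravity(region):
    """Brightness-weighted center of gravity of an extracted part,
    usable as its feature point / position information."""
    ys, xs = np.nonzero(region)
    weights = region[ys, xs].astype(np.float64)
    cx = float((xs * weights).sum() / weights.sum())
    cy = float((ys * weights).sum() / weights.sum())
    return cx, cy

# Hypothetical extracted part: a uniform 3x3 blob whose center
# lies at x = 3, y = 2.
part = np.zeros((6, 6), dtype=np.uint8)
part[1:4, 2:5] = 100
cx, cy = center_of_gravity(part)
print(cx, cy)  # 3.0 2.0
```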
  • the sensing apparatus for calculating position information of a moving object and the sensing method using the same according to the present invention can be used in industrial fields related to golf practice, such as analysis of ball striking according to a golf swing, and in the so-called screen golf industry, where users can enjoy a virtual golf game through golf simulation based on virtual reality.

Abstract

The purpose of the present invention is to provide a sensing apparatus and a sensing method which can simply, accurately, and stably extract a moving object — such as a golf club, whose shape, size, material, reflectivity, and the like cannot be specified — from images captured while the object is moving. The sensing method, performed by the sensing apparatus for calculating position information of a moving object according to the present invention, comprises the steps of: successively acquiring images at an angle of view facing the moving object; generating difference calculation images between each of the successively acquired images and a reference image, and generating difference calculation images between each two successive difference calculation images; and calculating position information of the moving object from each of the difference calculation images between each two successive difference calculation images.

Description

Sensing apparatus for calculating position information of a moving object, and sensing method using the same
The present invention relates to a sensing apparatus that continuously acquires and analyzes images of a moving object to extract the object from each image and calculate its position information therefrom, and to a sensing method using the same. For example, when a user holds a golf club, makes a golf swing, and hits a golf ball, the apparatus acquires and analyzes images of the golf club and the golf ball to calculate sensing data on the motion of the golf ball and the golf club.
Technology that analyzes images of a moving object to calculate its position information is mainly used, in the form of a sensing apparatus, in various simulators and devices that allow popular sports such as baseball, soccer, basketball, and golf to be enjoyed indoors or at a specific place as an interactive sports simulation.
If it is possible to calculate where a moving object is located in every captured frame — for example, the moving golf club and golf ball when a user swings a golf club and hits a golf ball — the position information calculated for each frame can be used to derive various kinematic data such as the trajectory, speed, direction angle, and launch angle of the golf club and golf ball. This can in turn be used to produce analysis information on the user's golf swing, or to implement a simulation image of a flying golf ball in a virtual golf simulation system such as so-called screen golf.
In images acquired continuously for a moving object, the moving object typically appears at a different position in every frame, while the background hardly changes from frame to frame. Therefore, if a difference image is generated by a difference calculation on the pixel values (brightness values) of the pixels of one frame against another, the background is removed in the difference image and only the moving object remains. For this reason, the difference image technique is widely used to extract a moving object from images.
FIG. 1 is a diagram for explaining the difference image technique described above, showing the difference image Diff(ta, tb) of an image frame_ta captured at time ta against an image frame_tb captured at time tb.
The difference operation is performed on the pixel values (brightness values) of pixels at the same position: if the difference between the pixel values of the pixels at the same position in the two images is greater than 0, that difference becomes the pixel value of the corresponding pixel in the difference image; if the difference is negative, it is replaced by 0.
FIG. 1 illustrates the case where the difference operation is performed on 14 x 13 images, with 14 pixels in the x-axis direction and 13 pixels in the y-axis direction.
The purpose of this difference operation is to extract the moving object (part A) from the frame_ta image. If a difference operation is performed on all pixels of the two images using the image frame_tb at time tb, the difference image Diff(ta, tb) shown in FIG. 1 is generated.
As shown in FIG. 1, if, for example, pixel (1,1) of the frame_ta image and pixel (1,1) of the frame_tb image each have a pixel value of 20, pixel (1,1) of the difference image Diff(ta, tb) has a pixel value of 0 and appears black.
If, for example, the pixel value of pixel (2,7) of the frame_ta image is 50 and the pixel value of pixel (2,7) of the frame_tb image is 20, pixel (2,7) of the difference image Diff(ta, tb) has a pixel value of 50 - 20 = 30.
If, for example, the pixel value of pixel (6,10) of the frame_ta image is 40 and the pixel value of pixel (6,10) of the frame_tb image is 100, the difference operation yields 40 - 100 = -60, a negative value; the -60 is therefore replaced by 0, and pixel (6,10) of the difference image Diff(ta, tb) has a pixel value of 0 and appears black.
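The conventional difference operation and the three worked examples above can be sketched as follows (a minimal NumPy example; the frames are illustrative arrays indexed as [y, x], not actual captured images):

```python
import numpy as np

def difference_image(frame_a, frame_b):
    """Conventional difference operation: per-pixel difference of
    brightness values, with negative results replaced by 0."""
    diff = frame_a.astype(np.int32) - frame_b.astype(np.int32)
    return np.clip(diff, 0, None).astype(np.uint8)

# Illustrative 14 x 13 frames reproducing the worked examples of FIG. 1.
frame_ta = np.zeros((13, 14), dtype=np.uint8)
frame_tb = np.zeros((13, 14), dtype=np.uint8)
frame_ta[1, 1], frame_tb[1, 1] = 20, 20     # equal values -> 0 (black)
frame_ta[7, 2], frame_tb[7, 2] = 50, 20     # 50 - 20 = 30
frame_ta[10, 6], frame_tb[10, 6] = 40, 100  # 40 - 100 = -60 -> 0

diff = difference_image(frame_ta, frame_tb)
print(diff[1, 1], diff[7, 2], diff[10, 6])  # 0 30 0
```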
If the difference image of the frame_ta image against the frame_tb image is obtained in this way, object A can be extracted fairly accurately through the difference image, provided that the reflectivity of the moving object under the lighting is always constant and no other variables are present.
However, because the object is moving, its reflectivity under the lighting can differ from frame to frame, so the pixel values of the pixels constituting the object can change in every frame; another object may also intrude, or some moving object may be captured in the image. Due to such variables, object A often cannot be extracted accurately through the difference image. In FIG. 1, the difference image Diff(ta, tb) of the frame_ta image against the frame_tb image shows that object A is substantially damaged by the difference operation and appears as part A'.
In particular, due to the object's color, its reflectivity under the lighting, differences in ambient illuminance, shading from the lighting, the object's material, and so on, the object may have pixel values in the image that are nearly the same as, or even darker than, the background. Extracting a golf club from an image by the difference image technique is therefore very difficult, and even when it can be extracted, the position information calculated from it inevitably suffers from very low accuracy.
Unlike a golf club, a golf ball has a standardized shape and size and is usually brightly colored, so it can be extracted from an image quite accurately by the difference image technique; even if some pixels of the golf ball portion are lost in the difference image process, its contour can be fitted fairly accurately because its shape and size are standardized.
A golf club, however, is not only very difficult to extract by the difference image technique, but even when extracted, it is very difficult to process with the image processing techniques used for golf ball extraction as described above.
To solve this problem, techniques have appeared in which specific markers are attached to the shaft and head of the golf club and the position of the golf club is specified by finding those markers in the image. These have the fatal drawback that the user must use a specific golf club with specific markers attached when practicing golf or playing a virtual golf game; moreover, even with markers attached, the markers may not appear fully or may be occluded in the image depending on the golf swing, making it difficult to specify the exact position of the golf club.
Prior art documents related to the present invention include Korean Patent Application No. 10-2011-0025149, Korean Patent Application No. 10-2011-0111875, and Japanese Laid-Open Patent Publication No. 2005-210666.
The present invention is intended to solve the conventional problems described above and to provide a sensing apparatus for calculating position information of a moving object, and a sensing method using the same, which can extract a moving object — an object with an unspecifiable shape, size, material, reflectivity, and the like, such as a golf club — from captured images quite accurately and stably by a simple method.
A sensing method of a sensing apparatus for calculating position information of a moving object according to an embodiment of the present invention comprises the steps of: continuously acquiring images at an angle of view facing the moving object; generating, for the difference calculation images of each of the continuously acquired images against a reference image, a difference calculation image of each pair of consecutive difference calculation images; and calculating position information of the moving object from each of the difference calculation images of the consecutive difference calculation images.
Preferably, the step of generating the difference calculation images of the consecutive difference calculation images includes determining an image of one frame among the continuously acquired images as the reference image.
Preferably, the step of generating the difference calculation images of the consecutive difference calculation images includes: generating, by a difference calculation of each of the continuously acquired images against the reference image, difference calculation images that include both pixels having pixel values brighter than the background portion of each image and pixels having pixel values darker than the background portion; and performing a difference calculation on each pair of consecutive difference calculation images among the generated difference calculation images to generate a difference-of-difference image.
Preferably, the step of generating the difference calculation images of the consecutive difference calculation images includes: generating an absolute value difference calculation image whose pixel values are the absolute values of the differences between the pixel value of each pixel in each of the continuously acquired images and the pixel value of the pixel at the corresponding position in the reference image; and performing, on two consecutive absolute value difference calculation images, a difference calculation in which the difference between the pixel values of corresponding pixels becomes the pixel value if it is greater than 0 and is set to 0 otherwise, thereby generating a difference-of-difference image.
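The two-stage computation just described — an absolute value difference against the reference image, followed by a clipped difference of consecutive absolute difference images — can be sketched as follows (a minimal NumPy example; the frames, the static background value, and the object positions are hypothetical):

```python
import numpy as np

def abs_diff(frame, reference):
    """Absolute value difference image: |frame - reference| per pixel,
    so parts darker than the background also survive (as bright pixels)."""
    d = frame.astype(np.int32) - reference.astype(np.int32)
    return np.abs(d).astype(np.uint8)

def diff_of_diff(abs_diff_a, abs_diff_b):
    """Difference-of-difference image: per-pixel difference of two
    consecutive absolute difference images, negatives replaced by 0."""
    d = abs_diff_a.astype(np.int32) - abs_diff_b.astype(np.int32)
    return np.clip(d, 0, None).astype(np.uint8)

# Hypothetical frames: a uniform static background, and an object of
# interest darker than the background that moves between frames.
reference = np.full((8, 8), 50, dtype=np.uint8)
frame_t1 = reference.copy(); frame_t1[2, 2] = 10   # dark object at t1
frame_t2 = reference.copy(); frame_t2[2, 5] = 10   # same object at t2

a1 = abs_diff(frame_t1, reference)  # dark object appears bright: |10-50| = 40
a2 = abs_diff(frame_t2, reference)
dod = diff_of_diff(a1, a2)          # keeps the t1 object, removes parts shared by both
print(dod[2, 2], dod[2, 5])  # 40 0
```

Note how the object's t2 position would go negative in the subtraction and is therefore zeroed, leaving only the t1 position in the difference-of-difference image.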
Meanwhile, a sensing method of a sensing apparatus for calculating position information of a moving object according to another embodiment of the present invention comprises the steps of: continuously acquiring images at an angle of view facing the moving object; performing image processing on each of the continuously acquired images — each of which contains a background portion, a position-moving object appearing at a new position in every frame, and a movement-position overlapping portion where the changing positions of motion overlap between consecutive images — so as to extract the position-moving object and the movement-position overlapping portion and remove the background portion; extracting, from the image-processed images, the position-moving object corresponding to the moving object by removing the movement-position overlapping portion through a difference calculation between two consecutive images; and calculating position information of the extracted moving object.
Preferably, the step of performing image processing to remove the background portion includes performing the image processing such that, through the absolute value of the difference calculation of each of the continuously acquired images against the reference image, both pixels having pixel values brighter than the background portion of each image and pixels having pixel values darker than the background portion are included.
Meanwhile, a sensing method of a sensing apparatus for calculating position information of a golf club moving according to a user's golf swing according to an embodiment of the present invention comprises the steps of: continuously acquiring images at an angle of view facing the user's golf swing; generating absolute value difference calculation images by taking the absolute value of the difference calculation of each of the continuously acquired images against a reference image; generating difference-of-difference images by performing a difference calculation on each pair of consecutive absolute value difference calculation images among the generated absolute value difference calculation images; and calculating position information of the moving golf club from each of the generated difference-of-difference images.
Meanwhile, a sensing apparatus for calculating position information of a moving object according to an embodiment of the present invention comprises: a camera unit for continuously acquiring images at an angle of view facing the moving object; an image processing unit for performing image processing to extract, for the difference calculation images of each of the continuously acquired images against a reference image, a difference calculation image of each pair of consecutive difference calculation images; and an information calculation unit for calculating position information of the moving object from each of the difference calculation images of the consecutive difference calculation images extracted by the image processing unit.
Preferably, the image processing unit is configured to determine an image of one frame among the continuously acquired images as the reference image, to generate absolute value difference calculation images whose pixel values are the absolute values of the values obtained by the difference calculation of each of the continuously acquired images against the reference image, and to generate difference-of-difference images through the difference calculation of two consecutive absolute value difference calculation images.
Meanwhile, a sensing apparatus for calculating position information of a golf club moving according to a user's golf swing according to an embodiment of the present invention comprises: a camera unit for continuously acquiring images at an angle of view facing the user's golf swing; an image processing unit for generating absolute value difference calculation images by taking the absolute value of the difference calculation of each of the continuously acquired images against a reference image, and for performing image processing to generate difference-of-difference images by performing a difference calculation on each pair of consecutive absolute value difference calculation images among the generated absolute value difference calculation images; and an information calculation unit for calculating position information of the moving golf club from each of the generated difference-of-difference images.
The sensing apparatus for calculating position information of a moving object and the sensing method using the same according to the present invention extract a moving object quite accurately and stably — without the attachment of special markers or separate equipment, even for an object with an unspecifiable shape, size, material, reflectivity, and the like, such as a golf club — by generating, for the difference calculation images of each of the continuously acquired images against a reference image, a difference calculation image of each pair of consecutive difference calculation images, so that accurate position information can be calculated therefrom.
FIG. 1 is a diagram for explaining the difference image technique generally used in the field of image processing.
FIG. 2 is a block diagram showing the configuration of a sensing apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart showing a process according to a sensing method of a sensing apparatus according to an embodiment of the present invention.
FIG. 4 is a diagram showing an example of images continuously acquired by the camera unit of a sensing apparatus according to an embodiment of the present invention.
FIG. 5 is a diagram showing a process of generating absolute value difference calculation images using the images shown in FIG. 4.
FIG. 6 is a diagram showing a process of generating difference-of-difference images using the absolute value difference calculation images shown in FIG. 5.
More specific details of the sensing apparatus for calculating position information of a moving object and the sensing method using the same according to the present invention will be described with reference to the drawings.
First, the configuration of a sensing apparatus according to an embodiment of the present invention will be described with reference to FIG. 2, which is a block diagram showing that configuration.
The present invention was completed in the course of developing a sensing apparatus that analyzes images captured while a user swings a golf club and hits a golf ball, extracts the golf ball and the golf club from those images, and calculates their position information. In particular, it provides a method of accurately and stably extracting an object that is very difficult to extract from images by conventional image processing techniques, such as a golf club. Because even such hard-to-extract objects can be extracted effectively, the sensing apparatus and sensing method according to the present invention can be used to effectively extract a moving object through image processing and accurately calculate its position information, not only for standardized objects such as golf balls but also for any unspecified object.
Hereinafter, the 'moving object' will be described mainly using a golf club as an example, but as noted above, the present invention can equally be applied not only to a golf club but also to any object that is difficult to extract through image processing.
As shown in FIG. 2, a sensing apparatus according to an embodiment of the present invention includes a camera unit 100 and a sensing processing unit 200, and the sensing processing unit 200 may be configured to include an image processing unit 210 and an information calculation unit 220.
The camera unit 100 is configured to continuously acquire images at an angle of view facing the moving object. To calculate position information of the moving object in three-dimensional space, the camera unit 100 is preferably configured as a plurality of cameras that each acquire images of the same target from different viewing angles — for example, as shown in FIG. 2, a first camera 110 and a second camera 120 synchronized with each other in a stereo arrangement.
With the plurality of cameras 110 and 120 of the camera unit 100 synchronized with each other in a stereo arrangement as described above, the two-dimensional information of an object extracted from the image acquired through the first camera 110 and from the image acquired through the second camera 120 can be converted into three-dimensional information.
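The 2D-to-3D conversion from two synchronized cameras can be illustrated with a standard linear (DLT) triangulation, sketched below. This is only one common way to perform the conversion — the document does not specify the method — and the projection matrices P1 and P2, which would come from a calibration step, are hypothetical here:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    projections in two synchronized, calibrated cameras.
    P1, P2 are 3x4 projection matrices (assumed known from calibration)."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy stereo setup: identity intrinsics, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])          # 3D point, homogeneous
p1h, p2h = P1 @ X_true, P2 @ X_true
pt1, pt2 = p1h[:2] / p1h[2], p2h[:2] / p2h[2]    # its 2D projections
X_est = triangulate(P1, P2, pt1, pt2)
print(np.round(X_est, 6))  # [0. 0. 5.]
```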
FIG. 2 shows the case where the plurality of cameras 110 and 120 of the camera unit 100 acquire images of a user U holding a golf club GC and making a golf swing, but the present invention is not limited to this and can be used, for example, in various sports simulation systems, such as when a user makes a baseball swing.
The sensing processing unit 200 may be configured to include an image processing unit 210 that collects images from the cameras 110 and 120 of the camera unit 100 and performs predetermined image processing to extract the object, and an information calculation unit 220 that calculates three-dimensional position information and the like from the two-dimensional position information of the object extracted from the images.
The sensing processing unit 200 extracts the moving object from each of the images collected through the cameras 110 and 120 of the camera unit 100, calculates position information of the object, and transmits the calculated information to a client 300, so that the client 300 can perform its own functions, such as calculating new information or analysis information using the received position information of the object.
For example, when the client 300 is implemented as a simulator used in a screen golf system, it can receive the position information of the golf ball and the golf club from the sensing processing unit 200 and use it to implement a simulation image of the trajectory of the ball flying on a virtual golf course.
또한, 상기 클라이언트(300)를 골프스윙 분석장치로서 구현하는 경우, 센싱처리부(200)로부터 골프공과 골프클럽의 위치 정보를 전송받아 이를 이용하여 사용자의 골프스윙에 대한 분석 정보, 스윙의 문제점 진단 및 이를 해결하기 위한 레슨정보 등을 제공하도록 구현할 수 있다.In addition, in the case of implementing the client 300 as a golf swing analysis device, receiving the position information of the golf ball and the golf club from the sensing processing unit 200 using the analysis information on the user's golf swing, the diagnosis of the swing and It can be implemented to provide lesson information for solving this.
The image processor 210 is configured to perform image processing so as to extract, from the difference images obtained between each of the images continuously acquired by the camera unit 100 and a reference image, the difference image of each pair of consecutive difference images, and the information calculator 220 may be configured to calculate the position information of the moving object from each of the difference images of the two consecutive difference images extracted by the image processor.
As shown in FIG. 2, the information calculator 220 may be configured to be included in the sensing device, but the invention is not limited thereto, and it may instead be configured to be included in the client 300. That is, the sensing device may carry out everything up to acquiring the images and extracting the moving object through image processing, and transmit the extracted information to the information calculator of the client, where the position information of the extracted moving object and various information based on it are calculated.
Meanwhile, a sensing method of the sensing device according to an embodiment of the present invention will be described with reference to the flowchart shown in FIG. 3.
First, the camera unit photographs, at a predetermined angle of view, the user hitting the ball with a golf swing, thereby continuously acquiring images (S10).
The continuously acquired images are transferred to the image processor of the sensing processor, and the image processor selects, according to preset criteria, a reference image for the difference operation from among the continuously acquired images (S11).
The reference image is an image used to remove the background portion of the images that will be used in the difference operation for extracting the moving object to be detected (the object of interest).
The reference image may be selected as an image containing only the background, without the moving object of interest; at the least, even if the moving object of interest appears in the reference image, it is preferable to select an image in which the object appears at a position that does not overlap its positions in the other images.
For example, when images of the moving object are continuously acquired at times ta < ... < t1 < t2 < ... < tn, and the images acquired at times t1, t2, ... are used to extract the object of interest, the image at time ta or the image at time tn may be selected as the reference image against which each of the images acquired at t1, t2, ... is differenced.
Here, the images at times t1, t2, ..., on which the difference operation with the reference image is performed, will be referred to as 'target images' for convenience.
Which image is selected as the reference image may be decided case by case, depending on what the moving object is and how it moves. For example, when a golf club moving during a golf swing is to be extracted from the target images, it is preferable that the position where the golf club appears in the target images does not overlap the position where it appears in the reference image; therefore, in the example above, the sensing processor may be preset to select as the reference image the image at time tn, a time point considerably distant from the target images.
Meanwhile, once the reference image has been selected as described above, the object of interest is extracted through the difference operation between each target image and the reference image. However, the difference operation according to the present invention is not the ordinary difference operation described in the background art, but a method using a difference of differences, in which the difference operation is performed twice: more specifically, an 'absolute difference operation' is performed to generate 'absolute difference images', and a 'difference operation' is then performed again on those images to generate a 'difference-of-difference image', thereby providing a method of accurately extracting the object of interest.
This is shown in steps S12 and S13 of FIG. 3. As described above, once the image processor of the sensing processor has selected the reference image (S11), the image processor generates, for each continuously acquired image (target image), an absolute difference image whose pixel values are the absolute values of its difference with the reference image (S12), and then generates a difference-of-difference image through the difference operation between two consecutive absolute difference images (S13).
That is, the 'absolute difference operation' is an operation that takes the absolute value of the difference between the pixel value (brightness value) of a pixel in the target image and the pixel value (brightness value) of the pixel at the corresponding position in the reference image; an image whose pixel values are these absolute differences will be called an 'absolute difference image'.
The image generated by performing the difference operation again on the absolute difference images will be called a 'difference-of-difference image'.
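As an illustrative sketch only (the array shapes and values are hypothetical, not part of the disclosure), assuming the frames are 8-bit grayscale NumPy arrays, the absolute difference operation of step S12 can be written as:

```python
import numpy as np

def abs_diff_image(target, reference):
    """Absolute difference image of a target frame against the reference
    frame: each pixel value is |target - reference|.
    Casting to a signed type first avoids uint8 wrap-around."""
    diff = target.astype(np.int16) - reference.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

# Toy 1x2 frames (hypothetical values): background 100, one object pixel.
target = np.array([[100, 30]], dtype=np.uint8)      # object darker than background
reference = np.array([[100, 100]], dtype=np.uint8)  # background only
result = abs_diff_image(target, reference)          # background -> 0, object -> 70
```

Note that a pixel darker than the background (30 against 100) still yields a high value (70); this is the property the text relies on for keeping both the bright and the dark parts of the object of interest.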
In the difference-of-difference image generated by performing the difference operation twice as described above, the elements that interfere with the object of interest, i.e., with the moving object to be extracted, are excluded and the object of interest appears quite accurately; therefore, the object of interest can easily be extracted from the difference-of-difference image.
For the portion corresponding to the moving object in each difference-of-difference image generated as described above, the contour is determined using edge information, and a specific point of the determined contour, such as its center point or center of gravity, is found and calculated as the position information of the moving object (S14).
Extracting the portion corresponding to the moving object from the image is the most important part; once that portion has been extracted, calculating position information from it may be done using conventionally available techniques.
Hereinafter, the generation of the 'absolute difference image' and of the 'difference-of-difference image' by the sensing method of the sensing device according to an embodiment of the present invention will be described using concrete examples with reference to FIGS. 4 to 6.
FIG. 4 shows an example of images continuously acquired by the camera unit of the sensing device according to an embodiment of the present invention, FIG. 5 shows the process of generating absolute difference images using the images shown in FIG. 4, and FIG. 6 shows the process of generating difference-of-difference images using the absolute difference images shown in FIG. 5.
The images shown in FIGS. 4 to 6 are not images actually acquired by photographing, but images created artificially to explain the sensing method of the sensing device according to the present invention more effectively; their composition has been simplified for convenience of explanation and understanding.
As shown in FIG. 4, the images continuously acquired by the camera unit are those acquired at times t1, t2, t3, ..., tn: FIG. 4(a) shows the image acquired at time t1 (Frame_t1), (b) the image acquired at time t2 (Frame_t2), (c) the image acquired at time t3 (Frame_t3), and (d) the image acquired at time tn (Frame_tn), a time point somewhat distant from t1, t2, t3, ....
Here, the images shown in FIGS. 4(a) to (c) (Frame_t1, Frame_t2, Frame_t3, etc.) are defined as the target images, and the image Frame_tn shown in (d) as the image selected as the reference image.
As shown in FIGS. 4(a) to (d), the same background portion BG is present in all of the continuously acquired images.
The objects of interest M1, M2, M3, ..., Mn, corresponding to the moving object to be detected, move over the interval t1 to tn during which the images are acquired, and thus appear at different positions in each image (some slight overlap may occur, but depending on the speed of the object and the frame rate of the camera it is generally desirable that the object appear at a different position in every frame).
In addition, it is assumed that each image contains motion-overlap portions V1, V2, V3, ..., Vn, which move during the interval t1 to tn but exhibit overlapping changes of motion, for example because they do not move much, merely shake, or move in place.
In FIGS. 4(a) to (d), it can be seen that the motion-overlap portions V1, V2, V3, ..., Vn exhibit such overlapping changes of motion.
For example, in the case of images continuously acquired while a user performs a golf swing with a golf club, the user's body, the golf mat, the rubber tee, and the like correspond to motion-overlap portions exhibiting overlapping motion, while the golf club corresponds to the object of interest, whose position changes without overlap in every frame.
FIG. 5 shows the process of generating the absolute difference images, corresponding to step S12 of FIG. 3, using the target images and the reference image shown in FIGS. 4(a) to (d).
FIG. 5(a) shows the case where the absolute difference image AbsDiff(t1,tn) is generated by the absolute difference operation between the target image at time t1 (Frame_t1) and the reference image (Frame_tn); FIG. 5(b) the case where the absolute difference image AbsDiff(t2,tn) is generated by the absolute difference operation between the target image at time t2 (Frame_t2) and the reference image (Frame_tn); and FIG. 5(c) the case where the absolute difference image AbsDiff(t3,tn) is generated by the absolute difference operation between the target image at time t3 (Frame_t3) and the reference image (Frame_tn).
As described above, since the absolute difference operation takes the absolute value of the difference between the pixel values (brightness values) of the target image and the reference image, the greater the difference between the pixel values, the higher the pixel value (brightness value) in the absolute difference image.
Accordingly, the background portion BG, in which the pixel values of the target image and the reference image are substantially the same, can be removed entirely, as shown in FIGS. 5(a) to (c), while the objects of interest M1, M2, M3, the motion-overlap portions V1, V2, V3, Vn, and the unwanted position-shifted object Mn in the target and reference images appear fairly intact in the absolute difference images.
In the example shown in FIG. 5(a), the absolute difference image AbsDiff(t1,tn) is generated by the absolute difference operation between the target image Frame_t1 and the reference image Frame_tn. In AbsDiff(t1,tn), the background portion BG is mostly erased; the object of interest M1 and the unwanted position-shifted object Mn, which lie at different positions, appear almost intact owing to the absolute value of the difference; and a motion-overlap portion VO1 (the region where V1 and Vn largely overlap) is also present.
In this case, within the region where V1 and Vn overlap each other, the parts of the motion-overlap portion VO1 whose pixel values are equal may be erased by the absolute difference operation. That is, although it is drawn as appearing intact in the figure, in many cases part of it will appear erased.
The matters relating to the object of interest M2, the unwanted position-shifted object Mn, and the motion-overlap portion VO2 in the generation of the absolute difference image AbsDiff(t2,tn) from the target image Frame_t2 and the reference image Frame_tn shown in FIG. 5(b), and those relating to the object of interest M3, the unwanted position-shifted object Mn, and the motion-overlap portion VO3 in the generation of the absolute difference image AbsDiff(t3,tn) from the target image Frame_t3 and the reference image Frame_tn shown in FIG. 5(c), are the same as described for FIG. 5(a).
Accordingly, through the absolute difference operation described above, unlike the ordinary difference operation, the pixels constituting the object of interest appear fairly intact in the absolute difference image whether they have high or low pixel values; moreover, the parts with low pixel values are given high pixel values so that they stand out clearly from their surroundings (that is, parts that appeared dark in the target image can be made to appear bright in the absolute difference image).
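The contrast with an ordinary difference operation can be seen in a small sketch (the array values are hypothetical): clamping negative results to zero discards object pixels darker than the background, while the absolute difference keeps them:

```python
import numpy as np

def onesided_diff(target, reference):
    """Ordinary difference with negative results clamped to 0,
    shown only for contrast with the absolute difference."""
    d = target.astype(np.int16) - reference.astype(np.int16)
    return np.clip(d, 0, 255).astype(np.uint8)

def abs_diff(target, reference):
    """Absolute difference: keeps dark-on-bright object pixels too."""
    d = target.astype(np.int16) - reference.astype(np.int16)
    return np.abs(d).astype(np.uint8)

bg = np.full((1, 3), 200, dtype=np.uint8)            # bright background
target = np.array([[200, 40, 200]], dtype=np.uint8)  # object pixel darker than bg
lost = onesided_diff(target, bg)  # object pixel clamped away (all zeros)
kept = abs_diff(target, bg)       # object pixel preserved (value 160)
```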
Here, as the background is erased by the absolute difference operation, almost everything in the image other than the objects of interest M1, M2, M3, the unwanted position-shifted object Mn, and the motion-overlap portions VO1, VO2, VO3 takes a pixel value of 0 and would appear black; in the figures, however, for convenience of explanation and understanding, all parts with a pixel value of 0 are shown in white.
The absolute difference images AbsDiff(t1,tn), AbsDiff(t2,tn), AbsDiff(t3,tn), etc., generated in FIGS. 5(a) to (c) are subjected to a further difference operation to generate the difference-of-difference images, which will be described with reference to FIG. 6.
As described with reference to FIG. 5, the absolute difference operation produces the absolute difference images AbsDiff(t1,tn), AbsDiff(t2,tn), AbsDiff(t3,tn), and so on; performing the difference operation (here meaning the ordinary difference operation) on each pair of consecutive absolute difference images yields the difference-of-difference images shown in FIG. 6.
Looking at the absolute difference images AbsDiff(t1,tn), AbsDiff(t2,tn), and AbsDiff(t3,tn) shown in FIG. 5, the objects of interest are M1, M2, and M3, while the motion-overlap portions VO1, VO2, VO3 and the unwanted position-shifted object Mn are all interfering elements that should be removed.
Such interfering elements can be removed by the difference operation between two consecutive absolute difference images. As shown in FIG. 6(a), through the difference operation of AbsDiff(t1,tn) against AbsDiff(t2,tn), the position-shifted object Mn, which is identical in both images, and the motion-overlap portions VO1 and VO2, which are nearly identical, are almost entirely erased; the object of interest M1, whose position does not overlap, is preserved; and the portion M2, which takes negative values under the difference operation, has its pixel values replaced with 0 and disappears from the difference-of-difference image DOD(t1,t2|tn).
Similarly, as shown in FIG. 6(b), through the difference operation of AbsDiff(t2,tn) against AbsDiff(t3,tn), the position-shifted object Mn, identical in both images, and the motion-overlap portions VO2 and VO3, nearly identical, are almost entirely erased; the object of interest M2, whose position does not overlap, is preserved; and the portion M3, which takes negative values under the difference operation, has its pixel values replaced with 0 and disappears from the difference-of-difference image DOD(t2,t3|tn).
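Under the same assumptions as before (8-bit grayscale NumPy arrays; all values hypothetical), the difference operation of step S13, with negative results replaced by zero as described, can be sketched as:

```python
import numpy as np

def dod_image(absdiff_prev, absdiff_next):
    """Difference-of-difference image, e.g. DOD(t1,t2|tn).

    Ordinary signed difference of two consecutive absolute difference
    images; negative results are replaced with 0, so the later object
    position (e.g. M2) vanishes while the earlier one (M1) survives."""
    diff = absdiff_prev.astype(np.int16) - absdiff_next.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy 1x4 absolute difference images: M1 at index 1, M2 at index 2,
# and a shared interfering element (e.g. Mn) at index 3.
a1 = np.array([[0, 70, 0, 50]], dtype=np.uint8)  # AbsDiff(t1,tn)
a2 = np.array([[0, 0, 70, 50]], dtype=np.uint8)  # AbsDiff(t2,tn)
dod = dod_image(a1, a2)  # M1 kept, M2 clamped to 0, shared element cancels
```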
Here, the motion-overlap portions (VO1 and VO2, or VO2 and VO3) that do not coincide completely under the difference operation of the two consecutive absolute difference images may leave very small remnants in the difference-of-difference images DOD(t1,t2|tn) and DOD(t2,t3|tn), which become noise NZ.
The difference-of-difference operation does not, however, remove the motion-overlap portions (VO1 and VO2, or VO2 and VO3) so cleanly that exactly the part marked NZ is all that remains.
For example, the motion-overlap portion VO1 in the absolute difference image AbsDiff(t1,tn) and the motion-overlap portion VO2 in the absolute difference image AbsDiff(t2,tn) are not completely identical apart from the NZ part: there may be regions where the pixel values at corresponding positions in the two motion-overlap portions differ in magnitude, so in practice various large and small noise components that cannot be specified in advance may remain in addition to the NZ parts shown in each difference-of-difference image of FIG. 6.
That is, in this specification, for simplicity of explanation, only the NZ part is shown as noise in FIG. 6, but in practice other noise may exist in addition to the NZ part; such noise (the NZ part and the other noise parts) can be removed by applying commonly used noise removal techniques.
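As one example of such a commonly used technique, a minimal pure-NumPy sketch of morphological opening (erosion followed by dilation) is shown below; the 3x3 kernel size and the binary-mask encoding are illustrative choices, not taken from the disclosure:

```python
import numpy as np

def binary_open(mask, k=3):
    """Morphological opening (erosion then dilation) with a k x k square
    structuring element, as a generic small-noise remover: remnants
    thinner than k pixels (like NZ) are erased, while larger object
    regions get their shape restored by the dilation step."""
    pad = k // 2

    def erode(m):
        p = np.pad(m, pad, constant_values=1)
        out = np.ones_like(m)
        for dy in range(k):
            for dx in range(k):
                out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    def dilate(m):
        p = np.pad(m, pad, constant_values=0)
        out = np.zeros_like(m)
        for dy in range(k):
            for dx in range(k):
                out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    return dilate(erode(mask))
```

Opening suppresses foreground specks smaller than the kernel without eroding the overall shape of the object of interest, which matches the goal described above.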
Therefore, when a difference-of-difference image is obtained through the difference operation of two consecutive absolute difference images as described above, the object of interest appears fairly intact in that image, so that it can be accurately identified through appropriate image processing.
After extracting the object of interest, i.e., the moving object, from each difference-of-difference image as described above, the contour of the corresponding portion can be identified using edge information or the like, and a feature point of the region defined by the contour, such as its center point or center of gravity, can be identified and calculated as the position information.
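A minimal sketch of this step, assuming the object region is isolated by a simple intensity threshold (the threshold value and function name are illustrative, not from the disclosure):

```python
import numpy as np

def object_position(dod, threshold=30):
    """Locate the object of interest in a difference-of-difference image
    as the intensity-weighted center of gravity of the pixels above a
    threshold. Returns (row, col) or None if no object pixels remain."""
    ys, xs = np.nonzero(dod > threshold)
    if ys.size == 0:
        return None
    w = dod[ys, xs].astype(np.float64)
    return (float((ys * w).sum() / w.sum()),
            float((xs * w).sum() / w.sum()))

# Toy 5x5 difference-of-difference image with a single bright object pixel.
dod = np.zeros((5, 5), dtype=np.uint8)
dod[2, 3] = 100
pos = object_position(dod)  # center of gravity at (2.0, 3.0)
```

In practice this two-dimensional position from each camera would feed the information calculator 220, which the text describes as deriving three-dimensional position information from the two-dimensional coordinates.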
Therefore, even an object whose shape, size, material, reflectivity, and the like cannot be specified in advance, such as a golf club, can be extracted quite accurately through the generation of the absolute difference images and difference-of-difference images according to the present invention as described above, so that the position information of the moving object can be calculated accurately, stably, and by a simple method.
The sensing device for calculating position information of a moving object and the sensing method using the same according to the present invention can be used in industrial fields related to golf practice, in which ball-striking analysis based on a golf swing is performed, and in the so-called screen golf industry, in which a virtual-reality-based golf simulation is rendered as images so that the user can enjoy a virtual round of golf.

Claims (10)

  1. A sensing method of a sensing device for calculating position information of a moving object, the method comprising:
    continuously acquiring images at an angle of view facing the moving object;
    for the difference images obtained between each of the continuously acquired images and a reference image, generating a difference image of each pair of consecutive difference images; and
    calculating position information of the moving object from each of the difference images of the two consecutive difference images.
  2. The sensing method of claim 1, wherein the generating of the difference image of each pair of consecutive difference images comprises:
    determining an image of one frame among the continuously acquired images as the reference image.
  3. The sensing method of claim 1, wherein the generating of the difference image of each pair of consecutive difference images comprises:
    generating, through the difference operation between each of the continuously acquired images and the reference image, difference images that include both pixels having pixel values brighter than the background portion of each image and pixels having pixel values darker than the background portion; and
    generating difference-of-difference images by performing a difference operation on each pair of consecutive difference images among the generated difference images.
  4. The sensing method of claim 1, wherein the generating of the difference image of each pair of consecutive difference images comprises:
    generating, for each of the continuously acquired images, an absolute difference image whose pixel values are the absolute values of the differences between the pixel value of each pixel and the pixel value of the pixel at the corresponding position in the reference image; and
    generating a difference-of-difference image by performing, on two consecutive absolute difference images, a difference operation that takes the difference of the pixel values of each pair of corresponding pixels as the pixel value if the difference is greater than 0, and takes 0 as the pixel value if it is less than 0.
  5. A sensing method of a sensing device for calculating position information of a moving object, the method comprising:
    continuously acquiring images at an angle of view facing the moving object;
    performing image processing on each of the continuously acquired images, each of which includes a background portion, a position-shifting object appearing at a new position in every frame, and a motion-overlap portion whose changing positions overlap across consecutive images, so as to extract the position-shifting object and the motion-overlap portion and to remove the background portion;
    extracting, from the processed images, the position-shifting object corresponding to the moving object by removing the motion-overlap portion through the difference operation of two consecutive images; and
    calculating position information of the extracted moving object.
  6. The method of claim 5, wherein performing the image processing to remove the background portion comprises:
    processing each of the successively acquired images, by taking the absolute value of a difference operation against a reference image, so that the result includes both pixels having pixel values brighter than the background portion of the image and pixels having pixel values darker than the background portion.
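A minimal numeric illustration of why the absolute value is taken: a signed, zero-clipped difference would drop object pixels darker than the background, while the absolute difference keeps both the brighter and the darker pixels. The values below are arbitrary examples, not taken from the patent:

```python
import numpy as np

# Hypothetical 2x2 example: a uniform background of 128 with one object
# pixel brighter (200) and one darker (60) than the background.
background = np.full((2, 2), 128, dtype=np.int16)
frame = np.array([[200, 128],
                  [ 60, 128]], dtype=np.int16)

signed = np.clip(frame - background, 0, None)  # the darker pixel is lost (0)
absolute = np.abs(frame - background)          # both pixels are kept
```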
  7. A sensing method of a sensing device for calculating position information of a golf club moving according to a user's golf swing, the method comprising:
    acquiring images successively at an angle of view looking toward the user's golf swing;
    generating an absolute-value difference image for each of the successively acquired images by taking the absolute value of a difference operation against a reference image;
    generating difference-of-difference images by performing a difference operation on each pair of consecutive absolute-value difference images among the generated absolute-value difference images; and
    calculating position information of the moving golf club from each of the generated difference-of-difference images.
  8. A sensing device for calculating position information of a moving object, comprising:
    a camera unit that successively acquires images at an angle of view looking toward the moving object;
    an image processing unit that performs image processing to extract, from the difference images of a reference image against each of the successively acquired images, a difference image of each pair of two consecutive difference images; and
    an information calculation unit that calculates position information of the moving object from each of the difference images of two consecutive difference images extracted by the image processing unit.
  9. The device of claim 8, wherein the image processing unit is configured to:
    determine an image of one frame among the successively acquired images as the reference image, generate absolute-value difference images whose pixel values are the absolute values of the values obtained by the difference operation of the reference image against each of the successively acquired images, and generate a difference-of-difference image through a difference operation on two consecutive absolute-value difference images.
  10. A sensing device for calculating position information of a golf club moving according to a user's golf swing, comprising:
    a camera unit that successively acquires images at an angle of view looking toward the user's golf swing;
    an image processing unit that generates absolute-value difference images by taking the absolute value of a difference operation of a reference image against each of the successively acquired images, and performs image processing to generate difference-of-difference images by performing a difference operation on each pair of consecutive absolute-value difference images among the generated absolute-value difference images; and
    an information calculation unit that calculates position information of the moving golf club from each of the generated difference-of-difference images.
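The claims do not specify how the information calculation unit derives position information from a difference-of-difference image. One simple possibility, shown here purely as an assumption, is the centroid of the pixels that survive the operation:

```python
import numpy as np

def object_position(dd_image, threshold=30):
    """Estimate the moving object's (x, y) position as the centroid of the
    pixels that survive the difference-of-difference operation.
    `threshold` is a hypothetical noise cutoff, not taken from the claims."""
    ys, xs = np.nonzero(dd_image > threshold)
    if xs.size == 0:
        return None  # no moving object detected in this frame pair
    return float(xs.mean()), float(ys.mean())
```

Applied to each difference-of-difference image in turn, this would yield one position estimate per consecutive frame pair, from which a club trajectory could be assembled.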
PCT/KR2018/000897 2017-01-25 2018-01-19 Sensing apparatus for calculating position information of object in motion, and sensing method using same WO2018139810A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0012196 2017-01-25
KR1020170012196A KR101932525B1 (en) 2017-01-25 2017-01-25 Sensing device for calculating information on position of moving object and sensing method using the same

Publications (1)

Publication Number Publication Date
WO2018139810A1 true WO2018139810A1 (en) 2018-08-02

Family

ID=62978049

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/000897 WO2018139810A1 (en) 2017-01-25 2018-01-19 Sensing apparatus for calculating position information of object in motion, and sensing method using same

Country Status (3)

Country Link
KR (1) KR101932525B1 (en)
TW (1) TWI647424B (en)
WO (1) WO2018139810A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114317675A (en) * 2022-01-06 2022-04-12 福州大学 Detection method and system for qualitatively and quantitatively detecting bacteria on different wound surfaces based on machine learning

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102256260B1 (en) * 2020-02-10 2021-05-27 (주) 알디텍 Smart camera sensor for screen golf
KR102439549B1 (en) * 2020-09-14 2022-09-02 주식회사 골프존 Device for sensing golf swing and method for sensing impact position on club head using the same

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003111880A (en) * 2001-10-04 2003-04-15 Ditect:Kk Golf swing analyzing system and image processing method
JP2003296741A (en) * 2002-03-29 2003-10-17 Toshiba Corp Method, program and device for cutting object image
KR101078954B1 (en) * 2011-03-22 2011-11-01 (주) 골프존 Apparatus for virtual golf simulation, and sensing device and method used to the same
JP2012212373A (en) * 2011-03-31 2012-11-01 Casio Comput Co Ltd Image processing device, image processing method and program
JP2013118876A (en) * 2011-12-06 2013-06-17 Dunlop Sports Co Ltd Diagnosing method of golf swing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1158270A1 (en) * 2000-05-24 2001-11-28 Seiko Epson Corporation Mesuring system for sports events
EP3057016A1 (en) * 2007-02-14 2016-08-17 NIKE Innovate C.V. Collection and display of athletic information
KR20130050405A (en) * 2011-11-07 2013-05-16 오수미 Method for determining temporal candidate in inter prediction mode
CN104567919A (en) * 2013-10-12 2015-04-29 北京航天计量测试技术研究所 Device for calibrating dynamic measurement errors of photogrammetric system and application method thereof
CN103713148A (en) * 2013-12-13 2014-04-09 柳州市蓝海科技有限公司 Golf simulation system ball detection device

Also Published As

Publication number Publication date
KR20180087748A (en) 2018-08-02
TWI647424B (en) 2019-01-11
KR101932525B1 (en) 2018-12-27
TW201827788A (en) 2018-08-01

Similar Documents

Publication Publication Date Title
WO2012128574A2 (en) Virtual golf simulation device and sensing device and method used in same
WO2013043021A2 (en) System and method for photographing moving subject by means of camera, and acquiring actual movement trajectory of subject based on photographed image
WO2014109545A1 (en) Apparatus and method for sensing ball in motion
WO2017204571A1 (en) Camera sensing apparatus for obtaining three-dimensional information of object, and virtual golf simulation apparatus using same
WO2012108699A2 (en) Virtual golf simulation apparatus and method
WO2012128568A2 (en) Virtual golf simulation device and sensing device and method used in same
WO2016200208A1 (en) Device and method for sensing moving ball
CN107543530B (en) Method, system, and non-transitory computer-readable recording medium for measuring rotation of ball
WO2015080431A1 (en) Golf simulator and golf simulation method
WO2018139810A1 (en) Sensing apparatus for calculating position information of object in motion, and sensing method using same
WO2011081471A2 (en) Virtual golf simulation apparatus providing putting guide
CN107871120A (en) Competitive sports based on machine learning understand system and method
WO2014109544A1 (en) Device and methdo for sensing moving ball and method for processing ball image for spin calculation of moving ball
WO2018030673A1 (en) Device for calculating flight information of ball, and method for calculating flight information of ball and recording medium readable by computing device for recording same
WO2018030656A1 (en) Interactive virtual reality baseball game device and method for controlling virtual baseball game by same
JP7076955B2 Methods, systems, and non-transitory computer-readable recording media for correcting the brightness of the ball's image.
WO2017160057A1 (en) Screen golf system, method for implementing image for screen golf, and computer-readable recording medium for recording same
WO2020054954A1 (en) Method and system for providing real-time virtual feedback
WO2011081470A2 (en) Apparatus and method for virtual golf simulation imaging sub display and replay display
WO2014189315A1 (en) Golf practice system for providing golf swing, server, and method for processing information about golf swing using same
CN109001484A (en) The detection method and device of rotation speed
WO2012128566A2 (en) Virtual golf simulation device and method, and sensing device and method used in same
WO2012134209A2 (en) Virtual golf simulation apparatus and method
KR20090112538A (en) Apparatus for obtaining golf images using illumination control, and golf practice system based on image processing using it
WO2018097612A1 Sensing device for calculating information on user's golf shot and sensing method using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18744280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18744280

Country of ref document: EP

Kind code of ref document: A1