WO2020099369A1 - A method for movement analysis and related portable electronic device - Google Patents

A method for movement analysis and related portable electronic device

Info

Publication number
WO2020099369A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
indicators
item
images
obtaining
Prior art date
Application number
PCT/EP2019/080953
Other languages
French (fr)
Inventor
João Carlos Prazeres FIGUEIRAS
Thomas Veje FLINTEGAARD
Original Assignee
Mopare Aps
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mopare Aps filed Critical Mopare Aps
Publication of WO2020099369A1 publication Critical patent/WO2020099369A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • the present disclosure pertains to the field of movement analysis.
  • the present disclosure relates to a method for movement analysis and related portable electronic device.
  • the present disclosure provides a method, performed by a portable electronic device.
  • the method comprises obtaining one or more images including a first image of an item.
  • the method comprises obtaining one or more indicators based on the first image, wherein the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image.
  • the method comprises comparing the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images.
  • the method comprises determining a comparison result based on the comparison.
  • the method may comprise communicating the comparison result.
  • a portable electronic device comprising a memory module, a processor module, and an interface, wherein the portable electronic device is configured to perform any of the steps described in relation to the methods performed by the portable electronic device.
  • a portable electronic device is capable of providing a comparison result between images reflecting a movement. This is particularly advantageous in the field of image processing for performance comparison (e.g. in re-education of limbs, e.g. in training of athletes, e.g. in assessing performance of animals or performance comparison of machines (which may not necessarily be able to communicate status)).
  • the present disclosure provides a portable electronic device which enables an objective and quantifiable assessment of movement of an item in comparison with a reference movement.
  • the technique and portable electronic device disclosed herein are also scalable and adaptable to include reference models from any source.
  • the portable electronic device disclosed herein is also scalable and adaptable to assess any type of movement.
  • the portable electronic device disclosed herein enables comparison of movement estimations of any dimensions (e.g. 2D or 3D) without additional sensors (e.g. sensors external to the portable electronic device).
  • Figs. 1A-1B schematically illustrate a first exemplary image and an exemplary reference image processed according to the present disclosure.
  • Figs. 2A-2B schematically illustrate exemplary indicators and comparison results according to the present disclosure.
  • Fig. 3 is a flow-chart of an exemplary method 100 according to the disclosure.
  • Fig. 4 is a block diagram illustrating an exemplary portable electronic device according to the disclosure.
  • Figs. 5A-5D schematically illustrate an exemplary video processed according to the present disclosure.
  • the term "item" refers to a physical object, an article or a thing having material existence.
  • the term "item" may refer to a human being or an animal, or a part thereof such as a limb.
  • the item may be an inert item or an inert thing.
  • the item may be an object carried by an individual.
  • the item may be under test with an audience such as a training team.
  • an item may for example be a robot and/or a robot part (e.g. a robotic part of a machine).
  • an item may comprise any one or more of: a human being, a body part, an animal, an animal body part, a robot, a robot part, an object, and an object part.
  • the item may comprise any combination thereof.
  • Figs. 1A-1B schematically illustrate an exemplary first image 10 and an exemplary reference image 20 processed according to the present disclosure.
  • Fig. 1A shows an exemplary first image 10 and a reference image 20.
  • the first image 10 is an image of a football player A performing a movement, or a mathematical representation of such an image.
  • the reference image 20 is an image of a football player B performing a movement. It may be envisaged that the footballer A wants to perform a similar kick as footballer B.
  • the method disclosed herein, when initiated, permits a user to record a video of a physical movement (e.g. a sports move).
  • the video is then processed according to the method disclosed herein in order to detect the movement in the video via the indicators disclosed herein.
  • the extracted indicator(s) representative of the movement is subsequently compared to a reference model, which may be embodied by a database hosting data that represents a plurality of reference movements.
  • the reference movements may be selected to represent the highest possible skills that, for example, elite references have in doing a particular movement. For example, in a football movement, the reference could be a movement from a professional football player or from a mathematical model.
  • Fig. 1B shows exemplary indicators according to this disclosure.
  • Fig. 1B shows an exemplary reference image 21 and an exemplary first image 11.
  • the first image 11 is processed according to this disclosure and one or more of the following indicators are obtained based on the first image 11 to characterize the movement: 11A, 11B, 11C, 11D, 11E, 11F, 11G, 11H, 11I, 11J, 11K, 11L, 11M, 11N, 11O, 11P, 11Q, 11R, 11S, 11T, 11U, 11V, 11W, 11X, 11Y, 11Z, 11AA, 11AB, 11AC, 11AD, 11AE.
  • the reference image 21 may be associated with a first set of reference indicators indicative of reference movement features of footballer B in the reference image 21.
  • the first set of reference indicators may comprise one or more of the following indicators: 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK.
  • the reference image 21 may be processed according to this disclosure and one or more of the following indicators are obtained based on the reference image to characterize the movement: 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK.
  • an indicator may comprise a point indicator and/or a vertex indicator.
  • the reference image 21 shows indicators 21A (a point indicator) and 21B (a vertex indicator).
  • reference image 21 and first image 11 are not taken from the same view point.
  • the present disclosure provides, in one or more embodiments, performing a geometric transformation (e.g. a translation of the one or more indicators of the first image 11, a rotation of the one or more indicators of the first image 11, and/or a scaling of the one or more indicators of the first image 11) so as to enable an improved comparison with the reference image 21.
  • a reference model may be selected based on its similarity with the movement characterized in the first image.
  • the indicator(s) extracted from the first image (e.g. 11A, 11B, 11C, 11D, 11E, 11F, 11G, 11H, 11I, 11J, 11K, 11L, 11M, 11N, 11O, 11P, 11Q, 11R, 11S, 11T, 11U, 11V, 11W, 11X, 11Y, 11Z, 11AA, 11AB, 11AC, 11AD, 11AE) or from a video are compared to reference indicator(s) (e.g. 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK).
  • the indicators from the first image may be compared with reference indicators based on a point-to-point comparison. In one or more exemplary methods, the indicators from the first image may be compared based on extrapolation of indicator(s) in image 11 into image 21. In one or more exemplary methods, an indicator comprises a point-cloud representation.
  • Figs. 2A-2B schematically illustrate exemplary indicators and comparison results according to the present disclosure.
  • Fig. 2A shows a reference representation 40 of the movement characterized in reference image 21 and a first representation 30 of the movement characterized in first image 11.
  • Fig. 2B shows a reference representation 41 of the movement characterized in reference image 21 and a first representation 31 of the movement characterized in first image 11, as well as a comparison result shown as a result user interface object 50.
  • the result user interface object 50 may be displayed on a display of the portable electronic device, optionally together with one or more of: the reference representation 41, the first representation 31, the first image, and the reference image.
  • a comparison intends to determine the similarity between the movements characterized in image 21 and image 11 respectively (illustrated in reference representations 40, 41 and first representations 30, 31) and to determine a comparison result. It may be envisaged that the higher the comparison result, the higher the similarity to the reference movement, and therefore the higher the skill level.
  • Fig. 3 is a flow-chart of an exemplary method 100 according to the disclosure.
  • the method 100 is performed by a portable electronic device.
  • the method 100 may be performed by an electronic device comprising a housing and a capture module arranged in the housing.
  • the portable electronic device may be one or more of: a mobile phone, a wireless device, and a tablet.
  • the capture module may comprise a camera module, and/or an infrared, IR, detector module, and/or a sound sensor module.
  • the portable electronic device comprises one of: a camera module, or an IR detector module.
  • the method 100 may be a method for movement analysis and/or comparison.
  • the method 100 comprises obtaining S102 one or more images including a first image of an item.
  • the one or more images may form part of a set of images comprising a first image and/or a second image; and optionally a third image.
  • an image may be taken of an item; the item is thus captured in the image.
  • the image represents an item. In other words, the item is in the image.
  • Obtaining S102 may comprise capturing the one or more images.
  • Obtaining S102 may comprise generating the one or more images.
  • Obtaining S102 may comprise obtaining any media, e.g. a video, a sound, and/or an audio file.
  • obtaining S102 may be carried out to capture a real world scene which has one or more modalities.
  • obtaining S102 one or more images including a first image may comprise obtaining image data indicative of the image, wherein the image data comprises first image data indicative of a first image.
  • the image data may characterize a dimension of the item.
  • the image data may be related to two dimensions and/or three dimensions.
  • Obtaining S102 may comprise obtaining image data comprising a first derivative indicative of a difference between two images and associating the first derivative to an additional dimension indicative of the movement (e.g. a 3rd dimension indicative of the movement), as sketched below.
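As an illustration of this derivative-based additional dimension, the following is a minimal sketch assuming NumPy and grayscale frames; the function name and array shapes are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def with_motion_channel(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # Stack two grayscale frames of shape (H, W) with their first derivative,
    # so the difference acts as an additional dimension indicative of movement.
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    return np.stack([a, b, b - a], axis=-1)  # shape (H, W, 3)
```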
  • the first image of an item may for example be of a robot part or of an animal.
  • the first image of an item may for example be of an individual or of a part of the animal such as a body part.
  • the first image of an item may be of any object, such as a golf club or an independently moving object.
  • obtaining S102 the one or more images including the first image comprises capturing S102A the one or more images using a capture module of the electronic device and/or using one or more external camera modules of one or more external devices.
  • obtaining S102 the one or more images including the first image may comprise capturing the first image, a second image, a third image.
  • obtaining S102 the one or more images including the first image may comprise capturing a video, e.g. a first video, a second video.
  • obtaining S102 the one or more images including the first image may comprise capturing a first sequence of images, and/or a second sequence of images.
  • a capture module may comprise a camera module, an infrared, IR, detector module, and/or a sound sensor module (e.g. an ultra-sound sensor module).
  • obtaining S102 the one or more images including the first image comprises receiving S102B the one or more images from one or more external devices, e.g. via a communication system, e.g. a wireless communication system, e.g. an Internet based communication system, e.g. local communication system (e.g. a bus communication system, e.g. a port-based communication system).
  • obtaining S102 the one or more images including the first image comprises combining S102C the one or more images from the one or more external devices to obtain an aggregated representation of a movement of the item.
  • the method 100 comprises obtaining S104 one or more indicators based on the first image.
  • obtaining S104 one or more indicators may comprise generating the indicators based on the first image.
  • obtaining S104 one or more indicators may comprise extracting the indicators based on the first image.
  • the indicator may be indicative of a position of the item in the image.
  • the first indicator of the first image comprises a first vertex of a part of the item, and/or a first position indicator and/or a timestamp of the first image, and/or a first contour indicator of the first image.
  • the first indicator of the first image comprises a first point-cloud representation of the first image.
  • obtaining S104 the one or more indicators based on the first image comprises identifying S104A, based on the first image, one or more vertices and/or one or more contour indicators.
  • the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image.
  • a movement feature may refer to a feature indicative of movement of the item, such as a feature indicative of a position of the item.
  • a movement feature may be representative of a feature of the movement detected across the one or more images.
  • Obtaining S104 one or more indicators may be performed based on one or more schemes e.g. pose estimation and/or skeletal detection techniques.
  • Obtaining S104 one or more indicators may be performed based on extracting data from the one or more schemes into e.g. a 3-dimensional matrix:
  • Movement (:, :, :) = (body vertex, cartesian coordinate, time) (1)
  • wherein Movement (:, :, :) is a vector representation of an indicator of a position representative of a movement in the first image.
  • the cartesian coordinate may be related to the position of the item in the image.
  • obtaining S104 one or more indicators may comprise modelling a movement of an item (e.g. a body, e.g. a body part, e.g. a head, e.g. a head position at time S):
  • Movement (item, :, S) = (pixel J, pixel K) (2)
  • wherein Movement (:, :, :) is a vector representation of an indicator of a position representative of a movement in the first image,
  • pixel J is the pixel number of a first position of the item in the image, and
  • pixel K is the pixel number of a second position of the item in the image.
  • For example, the item may be a head.
  • In one or more exemplary methods, the vector Movement (:, :, :) comprises an additional parameter indicative of image depth. A sketch of assembling such a matrix from per-image pose estimates follows below.
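By way of example, the indicators of equations (1) and (2) could be collected as follows. This is a minimal sketch assuming NumPy; `estimate_pose` is a hypothetical placeholder for any pose-estimation or skeletal-detection routine and is not defined by the disclosure.

```python
import numpy as np

def build_movement_matrix(frames, estimate_pose, num_vertices: int) -> np.ndarray:
    # Movement(:, :, :) indexed as (body vertex, cartesian coordinate, time),
    # per equation (1); each column holds (pixel J, pixel K), per equation (2).
    movement = np.zeros((num_vertices, 2, len(frames)))
    for t, frame in enumerate(frames):
        movement[:, :, t] = estimate_pose(frame)  # (num_vertices, 2) pixel coords
    return movement
```

A third coordinate (e.g. an image-depth estimate) could be appended where available, matching the additional depth parameter mentioned above.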
  • obtaining S104 the one or more indicators based on the first image comprises adjusting S104B the one or more indicators for comparison.
  • adjusting S104B the one or more indicators for comparison comprises performing S104BB a geometrical transformation.
  • the geometrical transformation comprises one or more of: scaling the one or more indicators, performing a translation of the one or more indicators, and performing a rotation of the one or more indicators. For example, performing S104BB a geometrical transformation may be performed on the one or more indicators of the first image and/or the first set of reference indicators.
  • the scaling of the one or more indicators may be performed by dividing all the cartesian coordinates by the largest distance between any two vertexes belonging to a moment in time captured by a video or a sequence of images. This may be performed separately for each video or image sequence, as well as for a reference image or a reference video.
  • the translation can be performed by aligning the centres of mass of both matrices.
  • the orientation can be performed by nonlinear optimization on a target function that minimizes the distance between the two matrices.
  • adjusting S104B the one or more indicators for comparison comprises aligning S104BC the time of the one or more indicators with the time of the set of reference indicators.
  • aligning S104BC the time of the one or more indicators with the time of the set of reference indicators may be performed by convolving the two videos.
  • the method 100 may comprise applying one or more of: scaling, a translation, and a rotation, in combination with aligning S104BC time. This allows manipulation (e.g. transformation) of a plurality of reference models, including time dependency. A sketch of these adjustments follows below.
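The adjustments above might be sketched as follows, assuming NumPy and the Movement matrix of equation (1). The rotation step (nonlinear optimization of a distance-minimizing target function) is omitted for brevity, and cross-correlation of per-frame motion energy is one possible reading of "convolving the two videos".

```python
import numpy as np

def scale_movement(movement: np.ndarray) -> np.ndarray:
    # Divide all cartesian coordinates by the largest distance between any
    # two vertexes belonging to the same moment in time (per video/sequence).
    largest = 0.0
    for t in range(movement.shape[2]):
        pts = movement[:, :, t]  # (vertices, coordinates)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        largest = max(largest, float(dists.max()))
    return movement / largest

def translate_movement(movement: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Align the centres of mass of both matrices.
    offset = reference.mean(axis=(0, 2)) - movement.mean(axis=(0, 2))
    return movement + offset[None, :, None]

def time_lag(movement: np.ndarray, reference: np.ndarray) -> int:
    # Estimate the time offset by cross-correlating per-frame motion energy.
    a = np.linalg.norm(np.diff(movement, axis=2), axis=(0, 1))
    b = np.linalg.norm(np.diff(reference, axis=2), axis=(0, 1))
    return int(np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1))
```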
  • the method 100 comprises comparing S106 the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images.
  • a reference item is an item similar to the item of the first image. For example, if the item of the first image is a footballer, the reference item is a footballer. Items and reference items may be organized in categories to facilitate the comparison.
  • one or more indicators of the first image are compared to the reference model, such as to one or more reference indicators. Stated differently, the one or more indicators are compared to the first set of reference indicators.
  • the method comprises obtaining S116 the reference model.
  • obtaining S116 may comprise obtaining the reference model based on a mathematical model.
  • Obtaining S116 may comprise obtaining the reference model based on aggregated videos. This may lead to higher accuracy or higher 3D model precision.
  • Obtaining S116 may comprise obtaining the reference model based on a single video.
  • Obtaining S116 the reference model may comprise selecting the reference model.
  • a reference model may comprise a reference image and/or a reference video, and/or a reference library of media.
  • the reference model may be based on a theoretical model (e.g. partially or entirely).
  • the reference model may be based on computation of one or more data sets.
  • the reference model may be developed based on machine learning techniques applied to a plurality of images of an image library. This allows comparison of pose estimations of different videos (e.g. individual videos, models of movement or aggregated estimated movement, mathematical model of movement).
  • the method 100 comprises determining S108 a comparison result based on the comparison.
  • the comparison result may comprise an indicator representative of the difference between the indicator of the first image and the first set of reference indicators or of the similarity between the indicator of the first image and the first set of reference indicators.
  • the comparison result may be expressed as a percentage, e.g. as a score.
  • determining S108 a comparison result based on the comparison comprises calculating S108B a first distance parameter between the first indicator and a reference indicator of the first set of reference indicators.
  • a first distance parameter may comprise a distance vector. This may allow distinguishing a comparison result for each item part, for a first item (e.g. a first body part, e.g. a leg) and a second item (e.g. a second body part, e.g. a foot).
  • a first distance parameter may comprise a matrix distance, which can be calculated as set out below.
  • a translation operation may comprise a translation operation with a vector, e.g. a positive vector or a negative vector.
  • a translation operation with a negative vector may result in a symmetric mapping. This enables for example to compare right-foot movement and left-foot movement.
  • scaling can be performed by dividing the cartesian coordinates by the largest distance between any two vertexes belonging to the same moment in time in the image(s).
  • the translation can be performed by aligning the centres of mass of both matrices.
  • the orientation can be performed by nonlinear optimization on a target function that minimizes the distance between the two matrices.
  • the first distance parameter may be calculated in the following manner, e.g.:
  • DP1 = distance( Movement (:, :, :), Reference (:, :, :) ) (3)
  • wherein DP1 denotes a first distance parameter, Movement (:, :, :) is obtained from equation (1) or (2) and is a vector representation of an indicator of a position representative of a movement in the first image, and Reference (:, :, :) is a vector representation of a reference indicator of a position representative of a movement in a reference image.
  • determining S108 the comparison result based on the comparison comprises applying S108A a mapping based on the first distance parameter.
  • the mapping may comprise an exponential mapping, a finite mapping and/or a mapping with a countable scale.
  • Applying S108A a mapping based on the first distance parameter may be performed based on a decay parameter (e.g. used to control the decay of higher distances to comparison result of zero (e.g. the smallest possible results)).
  • the comparison result may be determined in the following manner, e.g.:
  • comparison result = mapping (-K x DP1) (4)
  • wherein K denotes a decay parameter,
  • DP1 is the first distance parameter, and
  • mapping denotes a mapping applied (e.g. a finite mapping, e.g. an exponential mapping). A sketch combining equations (3) and (4) follows below.
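A minimal sketch of equations (3) and (4), assuming NumPy, an exponential mapping, and a mean Euclidean distance as one possible instantiation of DP1 (the disclosure leaves the exact distance open: a distance vector or a matrix distance):

```python
import numpy as np

def first_distance_parameter(movement: np.ndarray, reference: np.ndarray) -> float:
    # One possible DP1 (equation (3)): mean Euclidean distance between
    # corresponding vertices of the adjusted Movement and Reference matrices.
    return float(np.linalg.norm(movement - reference, axis=1).mean())

def comparison_result(movement: np.ndarray, reference: np.ndarray,
                      decay: float = 1.0) -> float:
    # Equation (4): mapping(-K x DP1), here instantiated with an exponential
    # mapping and expressed as a percentage score; the decay parameter K
    # controls how fast large distances decay towards the smallest result.
    dp1 = first_distance_parameter(movement, reference)
    return 100.0 * float(np.exp(-decay * dp1))
```

To compare e.g. a right-foot movement with a left-foot reference, one cartesian axis of Movement could first be negated, which may correspond to the symmetric mapping described above.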
  • the method 100 comprises communicating S110 the comparison result.
  • communicating S110 may comprise communicating the comparison result in real time as the images are obtained.
  • communicating S110 the comparison result may comprise displaying the comparison result via an interface (e.g. a display) of the portable electronic device.
  • communicating S110 comprises displaying, on a display of the portable electronic device, a user interface object representative of the comparison result, e.g. in real-time.
  • communicating S110 may comprise transmitting to an external device (e.g. another portable electronic device, and/or a server device).
  • communicating S110 may comprise outputting a haptic feedback representative of the comparison result and/or a sound representative of the comparison result and/or a light indicator representative of the comparison result.
  • the method may comprise displaying or communicating the one or more images, the one or more indicators, and/or the one or more reference indicators, which may be performed at any operation of the disclosed method, such as between steps S104 and S106, such as between S106 and S108, or such as after S102 (e.g. S102A, S102B and/or S102C).
  • the one or more images include a second image, wherein the first image and the second image are part of a series of images forming a video.
  • the steps S104, S106, and S108 may be performed image per image.
  • the method 100 comprises performing S112, for the second image: the step of obtaining S104 one or more indicators, the step of comparing S106 the indicators, and the step of determining S108 the comparison result, and determining S114 a global comparison result for the video, as sketched below.
  • the steps S104, S106, and/or S108 may be performed iteratively, e.g. image per image, e.g. image sequence per image sequence, e.g. video per video.
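The per-image iteration and the global comparison result S114 might be aggregated as follows. A plain mean over frames is an assumption (the disclosure also allows weighting factors per part of the image), as is an equal number of frames after time alignment.

```python
import numpy as np

def global_comparison_result(movement: np.ndarray, reference: np.ndarray,
                             decay: float = 1.0) -> float:
    # Repeat the comparison image per image, then aggregate over the video.
    scores = []
    for t in range(movement.shape[2]):
        dp1 = np.linalg.norm(movement[:, :, t] - reference[:, :, t],
                             axis=1).mean()
        scores.append(100.0 * np.exp(-decay * dp1))
    return float(np.mean(scores))
```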
  • obtaining S116 the reference model comprises receiving S116A or capturing one or more reference images and generating S116B one or more sets of reference indicators based on the one or more reference images.
  • the method 100 comprises identifying S118 the item in the first image.
  • a first item part of the item and a second item part of the item may be identified.
  • an image may comprise a plurality of items comprising a first item and a second item.
  • identifying S118 may comprise identifying a first item and a second item.
  • the first item and/or the second item may be taken into account in S104 to obtain indicators thereof, e.g. a first set of indicators related to the first item and/or a second set of indicators related to the second item.
  • identifying S118 the item may be performed at initialization, in configuration of the process.
  • identifying S118 the item may be based on an initial configuration of the method 100.
  • the method comprises completing the image data set by applying a predictive data model to the image data set.
  • the completed image data set may be associated with an accuracy parameter, e.g. a probability of prediction accuracy.
  • an item may comprise one or more parts, wherein the indicator is generated based on the one or more parts of the item.
  • an item may comprise a first item and a second item.
  • determining S108 the comparison result comprises determining a total comparison result by applying one or more factors to comparison results for each part of the image.
  • the method 100 comprises providing S120 feedback to a user based on the comparison result.
  • the feedback is to provide information on how to impact (e.g. improve) the comparison result.
  • the method 100 may comprise providing feedback to a user based on the comparison result, e.g. via a display of the portable electronic device.
  • providing S120 feedback to a user may comprise outputting a haptic feedback representative of the feedback and/or a sound representative of the feedback and/or a light indicator representative of the feedback.
  • the method 100 may comprise comparing a first comparison result and a second comparison result and identifying the highest comparison result amongst the first comparison result and the second comparison result.
  • the present disclosure relates to an electronic device comprising a housing and a capture module arranged in the housing.
  • the electronic device may be one or more of: a mobile phone, a wireless device, and a tablet.
  • the electronic device is configured to perform method 100.
  • Fig. 4 shows a block diagram illustrating an exemplary portable electronic device 200 of the present disclosure.
  • the present disclosure relates to a portable electronic device 200 comprising a memory module 201 , a processor module 202, a capture module 205 and an interface 203.
  • the portable electronic device 200 is configured to perform any of the steps disclosed in Fig. 3.
  • the portable electronic device 200 may comprise a mobile phone, a tablet, a camera.
  • the capture module 205 may comprise a camera module, an infrared, IR, detector module, and/or a sound sensor module.
  • the interface 203 is configured to communicate with a server device using wired and/or wireless communications systems.
  • the portable electronic device 200 is configured to obtain one or more images including a first image of an item (e.g. via the interface module 203 and/or the capture module 205).
  • the portable electronic device 200 is configured to obtain one or more indicators based on the first image (e.g. via the processor module 202 and/or the interface module 203).
  • the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image.
  • the portable electronic device 200 is configured to compare (e.g. via the processor module 202) the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images.
  • the portable electronic device 200 is configured to determine, e.g. via the processor module 202, a comparison result based on the comparison.
  • the portable electronic device 200 is configured to communicate, e.g. via the interface module 203, the comparison result.
  • the interface module 203 may comprise a display module configured to display a user interface object representative of the comparison result.
  • the portable electronic device 200 is configured to perform any of the operations S102, S104, S106, S108, S110, and optionally S112, S114, S116, S118, S120.
  • the processor module 202 is optionally configured to perform any of the operations disclosed in Fig. 3.
  • the operations of the portable electronic device 200 may be embodied in the form of executable logic routines (e.g., lines of code, software programs, etc.) that are stored on a non-transitory computer readable medium (e.g., the memory module 201) and are executed by the processor module 202.
  • the operations of the portable electronic device 200 may be considered a method that the portable electronic device is configured to carry out.
  • While the described functions and operations may be implemented in software, such functionality may also be carried out via dedicated hardware or firmware, or some combination of hardware, firmware and/or software.
  • the memory module 201 may be one or more of a buffer, a flash memory, a hard drive, a removable media, a volatile memory, a non-volatile memory, a random access memory (RAM), or other suitable device.
  • the memory module 201 may include a non-volatile memory for long term data storage and a volatile memory that functions as system memory for the processor module 202.
  • the memory module 201 may exchange data with the processor module 202 over a data bus. Control lines and an address bus between the memory module 201 and the processor module 202 also may be present (not shown in Fig. 4).
  • the memory module 201 is considered a non-transitory computer readable medium.
  • the memory module 201 may be configured to store a reference model in a part of the memory.
  • Figs. 5A-5D schematically illustrate an exemplary video processed according to the present disclosure.
  • Fig. 5A shows an exemplary reference video 50 (or sequence of reference images) and an exemplary first video 60 (or sequence of images).
  • Fig. 5B shows exemplary indicators according to this disclosure.
  • Fig. 5B shows an exemplary reference video formed by a sequence of four reference images 51, 52, 53, 54.
  • the reference image 51 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 51 to characterize the movement of the baseball player and the bat forming the item: 51A, 51B, 51C, 51D, 51E, 51F, 51G, 51H, 51I, 51J, 51K, 51L, 51M, 51N, 51O, 51P, 51Q, 51R, 51S, 51T, 51U, 51V, 51W, 51X,
  • reference images 51, 52, 53, 54 each illustrate the item comprising a baseball player and a bat, indicated by indicators 51S, 52AA, 53AE, 54C respectively.
  • a reference image 52 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 52 to characterize the movement of the baseball player and the bat forming the item: 52A, 52B, 52C, 52D, 52E, 52F, 52G, 52H, 52I, 52J, 52K, 52L, 52M, 52N, 52O, 52P, 52Q, 52R, 52S, 52T, 52U, 52V, 52W, 52X,
  • a reference image 53 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 53 to characterize the movement of the baseball player and the bat forming the item: 53A, 53B, 53C, 53D, 53E, 53F,
  • a reference image 54 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 54 to characterize the movement of the baseball player and the bat forming the item: 54A, 54B, 54C, 54D, 54E, 54F,
  • Figs. 5C-5D show an exemplary first video 60 comprising a first image 61, a second image 62, a third image 63, a fourth image 64, and a fifth image 65. It is noted that images 61, 62, 63, 64, 65 each illustrate the item comprising a baseball player and a bat, indicated by indicators 61AB, 62E, 63D, 64V, 65F respectively.
  • the first image 61 is processed according to this disclosure and one or more of the following indicators are obtained based on the first image 61 to characterize the movement of the baseball player and the bat forming the item: 61A, 61B, 61C, 61D, 61E, 61F, 61G, 61H, 61I, 61J,
  • the second image 62 is processed according to this disclosure and one or more of the following indicators are obtained based on the second image 62 to characterize the movement of the baseball player and the bat forming the item: 62A, 62B, 62C, 62D, 62E, 62F, 62G, 62H, 62I, 62J, 62K, 62L, 62M, 62N, 62O, 62P, 62Q, 62R, 62S, 62T, 62U, 62V, 62W, 62X, 62Y, 62Z, 62AA.
  • the third image 63 is processed according to this disclosure and one or more of the following indicators are obtained based on the third image 63 to characterize the movement of the baseball player and the bat forming the item: 63A, 63B, 63C, 63D, 63E, 63F, 63G, 63H, 63I, 63J, 63K, 63L, 63M, 63N, 63O, 63P, 63Q, 63R, 63S, 63T, 63U, 63V, 63W, 63X, 63Y, 63Z, 63AA, 63AB, 63AC.
  • the fourth image 64 is processed according to this disclosure and one or more of the following indicators are obtained based on the fourth image 64 to characterize the movement of the baseball player and the bat forming the item: 64A, 64B, 64C, 64D, 64E, 64F, 64G, 64H, 64I, 64J, 64K, 64L, 64M, 64N, 64O, 64P, 64Q, 64R, 64S, 64T, 64U, 64V, 64W, 64X, 64Y, 64Z, 64AA, 64AB, 64AC.
  • the fifth image 65 is processed according to this disclosure and one or more of the following indicators are obtained based on the fifth image 65 to characterize the movement of the baseball player and the bat forming the item: 65A, 65B, 65C, 65D, 65E, 65F, 65G, 65H, 65I, 65J, 65K, 65L, 65M, 65N, 65O, 65P, 65Q, 65R, 65S, 65T, 65U, 65V, 65W, 65X, 65Y, 65Z, 65AA, 65AB, 65AC.
  • It is noted that the reference video 50 and the first video 60 do not include the same number of images, and that the movement performed in the reference video 50 does not have the same timing as the movement performed in the first video 60. It is noted that the reference video 50 and the first video 60 are not taken from the same point of view.
  • the portable electronic device disclosed herein adjusts (as in step S104B) the one or more indicators for comparison by aligning (e.g. in step S104BC) the time of the one or more indicators with the time of the set of reference indicators.
  • the portable electronic device disclosed herein performs a geometric transformation, e.g. performing a translation of the one or more indicators of images of the first video 60 (e.g. a geometric translation of a joint time-point optimization), in order to compare with the reference indicators of the reference images of the reference video to determine the comparison result.
  • Figs. 1A-5D comprise some modules or operations which are illustrated with a solid line and some modules or operations which are illustrated with a dashed line.
  • the modules or operations which are comprised in a solid line are modules or operations which are comprised in the broadest example embodiment.
  • the modules or operations which are comprised in a dashed line are example embodiments which may be comprised in, or a part of, or are further modules or operations which may be taken in addition to, the modules or operations of the solid line example embodiments. It should be appreciated that these operations need not be performed in the order presented. Furthermore, it should be appreciated that not all of the operations need to be performed.
  • the exemplary operations may be performed in any order and in any combination.
  • a computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc.
  • program modules may include routines, programs, objects, components, data structures, etc. that perform specified tasks or implement specific abstract data types.
  • Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.

Abstract

The present disclosure provides a method, performed by a portable electronic device. The method comprises obtaining one or more images including a first image of an item. The method comprises obtaining one or more indicators based on the first image, wherein the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image. The method comprises comparing the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images. The method comprises determining a comparison result based on the comparison. The method may comprise communicating the comparison result.

Description

A METHOD FOR MOVEMENT ANALYSIS AND RELATED PORTABLE ELECTRONIC DEVICE
The present disclosure pertains to the field of movement analysis. The present disclosure relates to a method for movement analysis and related portable electronic device.
BACKGROUND
Analysis and evaluation of movements have usually been performed by an individual. It is difficult to objectively evaluate the quality of a movement due to the subjective consideration of the individual. For instance, when a patient goes through re-education of one of his limbs, it may be difficult for the medical team to evaluate objectively the progress of the movement of a limb.
Similarly, when a horse is being evaluated for purchase based on movement quality, it is difficult to obtain an objective quantifiable assessment of the movement quality of the horse. In the sports field, when an amateur football player practices a pass or a dribbling technique to try to emulate a professional football player, it is difficult for the football player to evaluate how close his technique is to the level of the professional football player.
There is a need for a technique that allows objective and quantifiable assessment of movement of an item, which is also scalable.
SUMMARY
Accordingly, there is a need for methods and devices which overcome, mitigate or address the shortcomings mentioned in the background and achieve an objective and quantifiable assessment of movement of an item, which is also scalable.
The present disclosure provides a method, performed by a portable electronic device. The method comprises obtaining one or more images including a first image of an item. The method comprises obtaining one or more indicators based on the first image, wherein the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image. The method comprises comparing the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images. The method comprises determining a comparison result based on the comparison. The method may comprise communicating the comparison result.
Further, a portable electronic device is provided. A portable electronic device comprising a memory module, a processor module, and an interface, wherein the portable electronic device is configured to perform any of the steps described in relation to the methods performed by the portable electronic device.
It is an advantage of the present disclosure that a portable electronic device is capable of providing a comparison result between images reflecting a movement. This is particularly advantageous in the field of image processing for performance comparison (e.g. in re-education of limbs, e.g. in training of athletes, e.g. in assessing performance of animals or performance comparison of machines (which may not necessarily be able to communicate status)).
Further, the present disclosure provides a portable electronic device which enables an objective and quantifiable assessment of movement of an item in comparison with a reference movement. The technique and portable electronic device disclosed herein are also scalable and adaptable to include reference models from any source. The portable electronic device disclosed herein is also scalable and adaptable to assess any type of movement. The portable electronic device disclosed herein enables comparison of movement estimations of any dimensions (e.g. 2D or 3D) without additional sensors (e.g. sensors external to the portable electronic device).
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features and advantages of the present disclosure will become readily apparent to those skilled in the art by the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:
Figs. 1A-1 B schematically illustrates a first exemplary image and an exemplary reference image processed according to the present disclosure,
Figs. 2A-2B schematically illustrates exemplary indicators and comparison results according to the present disclosure,
Fig. 3 is a flow-chart of an exemplary method 100 according to the disclosure, and
Fig. 4 is a block diagram illustrating an exemplary portable electronic device according to the disclosure, and
Fig. 5A-5D schematically illustrates an exemplary video processed according to the present disclosure.
DETAILED DESCRIPTION
Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described.
As used herein, the term "item" refers to a physical object, an article or a thing having material existence. In one or more embodiments, the term "item" may refer to a human being or an animal, or a part thereof such as a limb. The item may be an inert item or an inert thing. The item may be an object carried by an individual. The item may be under test with an audience such as a training team. In one or more exemplary methods, an item may for example be a robot and/or a robot part (e.g. a robotic part of a machine). Stated differently, an item may comprise any one or more of: a human being, a body part, an animal, an animal body part, a robot, a robot part, an object, and an object part. The item may comprise any combination thereof.
The figures are schematic and simplified for clarity, and they merely show details which are essential to the understanding of the invention, while other details have been left out. Throughout, the same reference numerals are used for identical or corresponding parts.
Figs. 1A-1B schematically illustrate an exemplary first image 10 and an exemplary reference image 20 processed according to the present disclosure. Fig. 1A shows an exemplary first image 10 and a reference image 20. The first image 10 is an image of a football player A performing a movement, or a mathematical representation of such an image. The reference image 20 is an image of a football player B performing a movement. It may be envisaged that the footballer A wants to perform a similar kick as footballer B.
In an example, the method disclosed herein, when initiated, permits a user to record a video of a physical movement (e.g. a sports move). For example, once the user ends recording the video, the video is processed according to the method disclosed herein in order to detect the movement in the video via the indicators disclosed herein. The extracted indicator(s) representative of the movement is subsequently compared to a reference model, which may be embodied by a database hosting data that represents a plurality of reference movements. The reference movements may be selected to represent the highest possible skills that, for example, elite references have in doing a particular movement. For example, in a football movement, the reference could be a movement from a professional football player or from a mathematical model.
Fig. 1B shows exemplary indicators according to this disclosure. Fig. 1B shows an exemplary reference image 21 and an exemplary first image 11. For example, the first image 11 is processed according to this disclosure and one or more of the following indicators are obtained based on the first image 11 to characterize the movement: 11A, 11B, 11C, 11D, 11E, 11F, 11G, 11H, 11I, 11J, 11K, 11L, 11M, 11N, 11O, 11P, 11Q, 11R, 11S, 11T, 11U, 11V, 11W, 11X, 11Y, 11Z, 11AA, 11AB, 11AC, 11AD, 11AE. For example, the reference image 21 may be associated with a first set of reference indicators indicative of reference movement features of footballer B in the reference image 21. The first set of reference indicators may comprise one or more of the following indicators: 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK.
For example, the reference image 21 may be processed according to this disclosure and one or more of the following indicators are obtained based on the reference image to characterize the movement: 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK.
In one or more exemplary methods, an indicator may comprise a point indicator and/or a vertex indicator. The reference image 21 shows indicators 21A (a point indicator) and 21B (a vertex indicator).
It may be noted that reference image 21 and first image 11 are not taken from the same view point. The present disclosure provides, in one or more embodiments, performing a geometric transformation (e.g. a translation of the one or more indicators of the first image 11, a rotation of the one or more indicators of the first image 11, and/or a scaling of the one or more indicators of the first image 11) so as to enable an improved comparison with the reference image 21.
A reference model may be selected based on its similarity with the movement characterized in the first image. The indicator(s) extracted from the first image (e.g. 11A, 11B, 11C, 11D, 11E, 11F, 11G, 11H, 11I, 11J, 11K, 11L, 11M, 11N, 11O, 11P, 11Q, 11R, 11S, 11T, 11U, 11V, 11W, 11X, 11Y, 11Z, 11AA, 11AB, 11AC, 11AD, 11AE) or from a video are compared to reference indicator(s) (e.g. 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, 21I, 21J, 21K, 21L, 21M, 21N, 21O, 21P, 21Q, 21R, 21S, 21T, 21U, 21V, 21W, 21X, 21Y, 21Z, 21AA, 21AB, 21AC, 21AD, 21AE, 21AF, 21AG, 21AH, 21AI, 21AJ, 21AK). In one or more exemplary methods, the indicators from the first image may be compared with reference indicators based on a point-to-point comparison. In one or more exemplary methods, the indicators from the first image may be compared based on extrapolation of indicator(s) in image 11 into image 21. In one or more exemplary methods, an indicator comprises a point-cloud representation.
Figs. 2A-2B schematically illustrate exemplary indicators and comparison results according to the present disclosure. Fig. 2A shows a reference representation 40 of the movement characterized in reference image 21 and a first representation 30 of the movement characterized in first image 11. Fig. 2B shows a reference representation 41 of the movement characterized in reference image 21 and a first representation 31 of the movement characterized in first image 11, as well as a comparison result shown as a result user interface object 50. The result user interface object 50 may be displayed on a display of the portable electronic device, optionally together with one or more of: the reference representation 41, the first representation 31, the first image, and the reference image.
A comparison intends to determine the similarity between the movements characterized in image 21 and image 11 respectively (illustrated in reference representations 40, 41 and first representations 30, 31) and to determine a comparison result. It may be envisaged that the higher the comparison result, the higher the similarity to the reference movement, and therefore the higher the skill level.
It may be envisaged as illustrated in Figs. 2A-2B that the comparison is performed based on indicators and reference indicators, and displayed without the respective images.
Fig. 3 is a flow-chart of an exemplary method 100 according to the disclosure. The method 100 is performed by a portable electronic device. The method 100 may be performed by an electronic device comprising a housing and a capture module arranged in the housing. The portable electronic device may be one or more of: a mobile phone, a wireless device, and a tablet. The capture module may comprise a camera module, and/or an infrared, IR, detector module, and/or a sound sensor module. In a preferred embodiment, the portable electronic device comprises one of: a camera module, or an IR detector module. The method 100 may be a method for movement analysis and/or comparison.
The method 100 comprises obtaining S102 one or more images including a first image of an item. For example, the one or more images may form part of a set of images comprising a first image and/or a second image; and optionally a third image. For example, an image may be taken of an item, the item is thus captured in the image. The image represents an item. In other words, the item is in the image. Obtaining S102 may comprise capturing the one or more images. Obtaining S102 may comprise generating the one or more images. Obtaining S102 may comprise obtaining any media, e.g. a video, a sound, and/or an audio file. For example, obtaining S102 may be carried out to capture a real world scene which has one or more modalities.
In other words, obtaining S102 one or more images including a first image may comprise obtaining image data indicative of the image, wherein the image data comprises first image data indicative of a first image. The image data may characterize a dimension of the item. The image data may be related to two dimensions and/or three dimensions. Obtaining S102 may comprise obtaining image data comprising a first derivative indicative of a difference between two images and associating the first derivative to an additional dimension indicative of the movement (e.g. a 3rd dimension indicative of the movement). In one or more exemplary methods, the first image of an item may for example be of a robot part or of an animal. In one or more exemplary methods, the first image of an item may for example be of an individual or of a part of the animal such as a body part. In one or more exemplary methods, the first image of an item may be of any object, such as a golf club or an independently moving object.
In one or more exemplary methods, obtaining S102 the one or more images including the first image comprises capturing S102A the one or more images using a capture module of the electronic device and/or using one or more external camera modules of one or more external devices. For example, obtaining S102 the one or more images including the first image may comprise capturing the first image, a second image, a third image. For example, obtaining S102 the one or more images including the first image may comprise capturing a video, e.g. a first video, a second video. For example, obtaining S102 the one or more images including the first image may comprise capturing a first sequence of images, and/or a second sequence of images. A capture module may comprise a camera module, an infrared, IR, detector module, and/or a sound sensor module (e.g. an ultra-sound sensor module).
In one or more exemplary methods, obtaining S102 the one or more images including the first image comprises receiving S102B the one or more images from one or more external devices, e.g. via a communication system, e.g. a wireless communication system, e.g. an Internet based communication system, e.g. local communication system (e.g. a bus communication system, e.g. a port-based communication system).
In one or more exemplary methods, obtaining S102 the one or more images including the first image comprises combining S102C the one or more images from the one or more external devices to obtain an aggregated representation of a movement of the item.
The method 100 comprises obtaining S104 one or more indicators based on the first image. In other words, obtaining S104 one or more indicators may comprise generating the indicators based on the first image. Also, obtaining S104 one or more indicators may comprise extracting the indicators based on the first image. The indicator may be indicative of a position of the item in the image. In one or more exemplary methods, the first indicator of the first image comprises a first vertex of a part of the item, and/or a first position indicator and/or a timestamp of the first image, and/or a first contour indicator of the first image. In one or more exemplary methods, the first indicator of the first image comprises a first point-cloud representation of the first image.
In one or more exemplary methods, obtaining S104 the one or more indicators based on the first image comprises identifying S104A, based on the first image, one or more vertices and/or one or more contour indicators.
In one or more exemplary methods, the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image. A movement feature may refer to a feature indicative of movement of the item, such as a feature indicative of a position of the item. In one or more exemplary methods, a movement feature may be representative of a feature of the movement detected across the one or more images.
Obtaining S104 one or more indicators may be performed based on one or more schemes, e.g. pose estimation and/or skeletal detection techniques.
Obtaining S104 one or more indicators may be performed by extracting data from the one or more schemes into e.g. a 3-dimensional matrix:
Movement(:, :, :) = (body vertex, cartesian coordinate, time) (1)
Wherein Movement(:, :, :) is a vector representation of an indicator of a position representative of a movement in the first image, and the cartesian coordinate may be related to the position of the item in the image.
Obtaining S104 one or more indicators may comprise modelling a movement of an item (e.g. a body, e.g. a body part, e.g. a head, e.g. a head position at time S):
Movement(item, :, S) = (pixel J, pixel K) (2)
Wherein Movement(item, :, S) is the vector representation of an indicator of the position of the item at time S, pixel J is the pixel number of a first position of the item in the image, and pixel K is the pixel number of a second position of the item in the image. For example, the item may be a head.
In one or more exemplary methods, the vector Movement (:, :, :) comprises an additional parameter indicative of image depth.
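A minimal sketch of arranging per-frame keypoints into the 3-dimensional matrix of equation (1) could look as follows (NumPy assumed; names illustrative):

    import numpy as np

    def build_movement_matrix(frames_keypoints: list) -> np.ndarray:
        # Each list entry: array of shape (num_vertices, num_coordinates).
        # num_coordinates may be 2, or 3 when a depth parameter is available.
        # Stacking along the last axis gives Movement[vertex, coordinate, time].
        return np.stack(frames_keypoints, axis=-1)  # shape (V, C, T)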
In one or more exemplary methods, obtaining S104 the one or more indicators based on the first image comprises adjusting S104B the one or more indicators for comparison. In one or more exemplary methods, adjusting S104B the one or more indicators for comparison comprises performing S104BB a geometrical transformation. In one or more exemplary methods, the geometrical transformation comprises one or more of: scaling the one or more indicators, performing a translation of the one or more indicators, and performing a rotation of the one or more indicators. For example, performing S104BB a geometrical transformation may be performed on the one or more indicators of the first image and/or on the first set of reference indicators. In one or more exemplary methods, scaling the one or more indicators may be performed by dividing all the cartesian coordinates by the largest distance between any two vertices belonging to the same moment in time captured by a video or a sequence of images. This may be performed separately for each video or image sequence, as well as for a reference image or a reference video. In one or more exemplary methods, the translation may be performed by aligning the centres of mass of both matrices. In one or more exemplary methods, the rotation may be performed by nonlinear optimization of a target function that minimizes the distance between the two matrices. In one or more exemplary methods, adjusting S104B the one or more indicators for comparison comprises aligning S104BC time of the one or more indicators with the time of the set of reference indicators. For example, aligning S104BC time of the one or more indicators with the time of the set of reference indicators may be performed by convolving the two videos. The method 100 may comprise applying one or more of: scaling, a translation, a rotation, in combination with aligning S104BC time. This allows manipulation (e.g. transformation) of a plurality of reference models, including time dependency.
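A sketch of the scaling, translation and time-alignment adjustments described above, assuming NumPy and SciPy and a Movement matrix shaped as in equation (1), might read as follows; the rotation by nonlinear optimization is omitted for brevity:

    import numpy as np
    from scipy.spatial.distance import pdist

    def normalise(movement: np.ndarray) -> np.ndarray:
        # movement has shape (vertices, coordinates, time), as in equation (1).
        # Scale: divide by the largest vertex-to-vertex distance at any single
        # moment in time; translate: put the centre of mass at the origin.
        scale = max(pdist(movement[:, :, t]).max()
                    for t in range(movement.shape[2]))
        scaled = movement / scale
        return scaled - scaled.mean(axis=0, keepdims=True)

    def time_offset(a: np.ndarray, b: np.ndarray) -> int:
        # Summarize each movement as mean per-frame displacement, then
        # estimate the frame offset by cross-correlation of the two signals.
        def speed(m):
            return np.linalg.norm(np.diff(m, axis=2), axis=1).mean(axis=0)
        corr = np.correlate(speed(a), speed(b), mode="full")
        return int(corr.argmax()) - (speed(b).size - 1)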
The method 100 comprises comparing S106 the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images. A reference item is an item similar to the item of the first image. For example, if the item of the first image is a footballer, the reference item is a footballer. Items and reference items may be organized in categories to facilitate the comparison.
For example, one or more indicators of the first image are compared to the reference model, such as to one or more reference indicators. Stated differently, the one or more indicators are compared to the first set of reference indicators. In one or more exemplary methods, the method comprises obtaining S116 the reference model. In one or more exemplary methods, obtaining S116 may comprise obtaining the reference model based on a mathematical model. Obtaining S116 may comprise obtaining the reference model based on aggregated videos. This may lead to higher accuracy or higher 3D model precision. Obtaining S116 may comprise obtaining the reference model based on a single video. Obtaining S116 the reference model may comprise selecting the reference model. A reference model may comprise a reference image, and/or a reference video, and/or a reference library of media. For example, the reference model may be based on a theoretical model (e.g. partially or entirely). For example, the reference model may be based on computation of one or more data sets. For example, the reference model may be developed based on machine learning techniques applied to a plurality of images of an image library. This allows comparison of pose estimations of different videos (e.g. individual videos, models of movement or aggregated estimated movement, mathematical models of movement).
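As an illustrative sketch of obtaining S116 a reference model from aggregated videos, each normalized Movement matrix may be resampled to a common number of frames and then averaged; NumPy is assumed, and the function names are illustrative:

    import numpy as np

    def resample_time(movement: np.ndarray, num_frames: int) -> np.ndarray:
        # Linearly resample Movement(V, C, T) to a common frame count
        t_old = np.linspace(0.0, 1.0, movement.shape[2])
        t_new = np.linspace(0.0, 1.0, num_frames)
        out = np.empty(movement.shape[:2] + (num_frames,))
        for v in range(movement.shape[0]):
            for c in range(movement.shape[1]):
                out[v, c] = np.interp(t_new, t_old, movement[v, c])
        return out

    def aggregate_reference(movements: list, num_frames: int = 100) -> np.ndarray:
        # Average several resampled Movement matrices into one reference model
        resampled = [resample_time(m, num_frames) for m in movements]
        return np.mean(np.stack(resampled, axis=0), axis=0)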
The method 100 comprises determining S108 a comparison result based on the comparison. The comparison result may comprise an indicator representative of the difference between the indicator of the first image and the first set of reference indicators, or of the similarity between the indicator of the first image and the first set of reference indicators. The comparison result may be expressed as a percentage, e.g. as a score. In one or more exemplary methods, determining S108 a comparison result based on the comparison comprises calculating S108B a first distance parameter between the first indicator and a reference indicator of the first set of reference indicators. In one or more exemplary methods, a first distance parameter may comprise a distance vector. This may allow distinguishing a comparison result for each item part, e.g. for a first item part (e.g. a first body part, e.g. a leg) and a second item part (e.g. a second body part, e.g. a foot).
In an example where the disclosed technique is applied to compare a movement characterized in the one or more images to a reference movement modelled in the reference model, a first distance parameter comprises a matrix distance which can be calculated. To properly compare, one or more of the following operations may be applied: translation, rotation, scaling and/or time alignment. For example, a translation operation may comprise a translation with a vector, e.g. a positive vector or a negative vector. A translation operation with a negative vector may result in a symmetric mapping. This enables, for example, comparison of right-foot movement and left-foot movement. For example, scaling can be performed by dividing the cartesian coordinates by the largest distance between any two vertices belonging to the same moment in time in the image(s). This may be performed separately for each video or image sequence, as well as for a reference image or a reference video. In one or more exemplary methods, the translation can be performed by aligning the centres of mass of both matrices. In one or more exemplary methods, the rotation can be performed by nonlinear optimization of a target function that minimizes the distance between the two matrices. The first distance parameter may be calculated in the following manner, e.g.:
DP1 = distance(Movement(:, :, :), Reference(:, :, :)) (3)
Wherein DP1 denotes a first distance parameter, Movement(:, :, :) is obtained from equation (1) or (2) and is a vector representation of an indicator of a position representative of a movement in the first image, and Reference(:, :, :) is a vector representation of a reference indicator of a position representative of a movement in a reference image.
In one or more exemplary methods, determining S108 the comparison result based on the comparison comprises applying S108A a mapping based on the first distance parameter. For example, the mapping may comprise an exponential mapping, a finite mapping and/or a mapping with a countable scale. Applying S108A a mapping based on the first distance parameter may be performed based on a decay parameter (e.g. used to control the decay of higher distances towards a comparison result of zero, e.g. the smallest possible result). For example, the comparison result may be determined in the following manner, e.g.:
Comparison_result = mapping(-K × DP1) (4)
Wherein K denotes a decay parameter, DP1 is the first distance parameter, and mapping denotes the mapping applied (e.g. a finite mapping, e.g. an exponential mapping).
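A minimal sketch of equations (3) and (4), assuming NumPy, a Frobenius-norm matrix distance and an exponential mapping (other distances and mappings are equally permitted by the disclosure), could be:

    import numpy as np

    def comparison_result(movement: np.ndarray, reference: np.ndarray,
                          K: float = 1.0) -> float:
        # Equation (3): matrix distance between movement and reference
        dp1 = np.linalg.norm(movement - reference)
        # Equation (4): exponential mapping onto a 0-100 percentage score
        return 100.0 * float(np.exp(-K * dp1))

With K = 1, a distance of zero maps to a comparison result of 100 %, and larger distances decay towards zero, as controlled by the decay parameter.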
The method 100 comprises communicating S110 the comparison result. For example, communicating S110 may comprise communicating the comparison result in real time as the images are obtained. Communicating S110 the comparison result may comprise displaying the comparison result via an interface (e.g. a display) of the portable electronic device. In one or more exemplary methods, communicating S110 comprises displaying, on a display of the portable electronic device, a user interface object representative of the comparison result, e.g. in real-time. In one or more exemplary methods, communicating S110 may comprise transmitting the comparison result to an external device (e.g. another portable electronic device and/or a server device). In one or more exemplary methods, communicating S110 may comprise outputting a haptic feedback representative of the comparison result, and/or a sound representative of the comparison result, and/or a light indicator representative of the comparison result.
In one or more exemplary methods, the method may comprise displaying or communicating the one or more images, the one or more indicators, and/or the one or more reference indicators, which may be performed at any operation of the disclosed method, such as between steps S104 and S106, between steps S106 and S108, or after step S102 (e.g. S102A, S102B and/or S102C).
In one or more exemplary methods, the one or more images include a second image, wherein the first image and the second image are part of a series of images forming a video. In one or more exemplary methods, the steps S104, S106, and S108 may be performed image per image.
In one or more exemplary methods, the method 100 comprises performing S112, for the second image: the step of obtaining S104 one or more indicators, the step of comparing S106 the indicators, and the step of determining S108 the comparison result; and determining S114 a global comparison result for the video. For example, the steps S104, S106, and/or S108 may be performed iteratively, e.g. image per image, e.g. image sequence per image sequence, e.g. video per video.
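A sketch of this iterative, image-per-image scoring, reusing the comparison_result sketch given with equations (3) and (4) and assuming the frame counts match after time alignment:

    def global_comparison(frames, reference_frames, K: float = 1.0) -> float:
        # Score each image against its reference counterpart, then average
        # the per-image scores into a global comparison result for the video.
        scores = [comparison_result(f, r, K)
                  for f, r in zip(frames, reference_frames)]
        return sum(scores) / len(scores)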
In one or more exemplary methods, obtaining S116 the reference model comprises receiving S116A or capturing one or more reference images and generating S116B one or more sets of reference indicators based on the one or more reference images.
In one or more exemplary methods, the method 100 comprises identifying S118 the item in the first image. For example, a first item part of the item and a second item part of the item may be identified. For example, an image may comprise a plurality of items comprising a first item and a second item, and identifying S118 may comprise identifying the first item and the second item. For example, the first item and/or the second item may be taken into account in S104 to obtain indicators thereof, e.g. a first set of indicators related to the first item and/or a second set of indicators related to the second item. For example, identifying S118 the item may be performed at initialization, as part of the configuration of the process. For example, identifying S118 the item may be based on an initial configuration of the method 100.
In one or more exemplary methods, when an image data set is incomplete (e.g. when an image is of poor quality), the method comprises completing the image data set by applying a predictive data model to the image data set. For example, the completed image data set may be associated with an accuracy parameter, e.g. a probability of prediction accuracy.
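One simple, illustrative stand-in for such a predictive data model marks missing coordinates as NaN and fills them by linear interpolation over time, returning the fraction of observed values as a crude accuracy parameter (NumPy assumed; names illustrative):

    import numpy as np

    def complete_missing(movement: np.ndarray):
        # movement has shape (vertices, coordinates, time); NaN marks
        # missing or poor-quality values in the image data set.
        out = movement.copy()
        frames = np.arange(out.shape[2])
        missing = float(np.isnan(out).mean())
        for v in range(out.shape[0]):
            for c in range(out.shape[1]):
                series = out[v, c]
                known = ~np.isnan(series)
                if known.any() and not known.all():
                    out[v, c] = np.interp(frames, frames[known], series[known])
        # Completed data set plus an accuracy-style parameter
        return out, 1.0 - missing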
In one or more exemplary methods, an item may comprise one or more parts, wherein the indicator is generated based on the one or more parts of the item. In one or more exemplary methods, an item may comprise a first item and a second item.
In one or more exemplary methods, determining S108 the comparison result comprises determining a total comparison result by applying one or more factors to comparison results for each part of the image.
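A sketch of such a weighted total, where the per-part factors are illustrative assumptions rather than prescribed values:

    def total_comparison(part_results: dict, factors: dict) -> float:
        # Apply one or more factors to the per-part comparison results,
        # e.g. weighting a leg more heavily than a foot.
        weighted = sum(factors[part] * result
                       for part, result in part_results.items())
        return weighted / sum(factors.values())

    # Example: total_comparison({"leg": 80.0, "foot": 60.0},
    #                           {"leg": 2.0, "foot": 1.0})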
In one or more exemplary methods, the method 100 comprises providing S120 feedback to a user based on the comparison result. In one or more exemplary methods, the feedback is to provide information on how to impact (e.g. improve) the comparison result. In one or more exemplary methods, the method 100 may comprise providing feedback to a user based on the comparison result, e.g. via a display of the portable electronic device. In one or more exemplary methods, providing S120 feedback to a user may comprise outputting a haptic feedback representative of the feedback and/or a sound representative of the feedback and/or a light indicator representative of the feedback.
In one or more exemplary methods, the method 100 may comprise comparing a first comparison result and a second comparison result and identifying the highest comparison result amongst the first comparison result and the second comparison result.
The present disclosure relates to an electronic device comprising a housing and a capture module arranged in the housing. The electronic device may be one or more of: a mobile phone, a wireless device, and a tablet. The electronic device is configured to perform the method 100.
Fig. 4 shows a block diagram illustrating an exemplary portable electronic device 200 of the present disclosure. The present disclosure relates to a portable electronic device 200 comprising a memory module 201, a processor module 202, a capture module 205 and an interface 203. The portable electronic device 200 is configured to perform any of the steps disclosed in Fig. 3. The portable electronic device 200 may comprise a mobile phone, a tablet, and/or a camera.
The capture module 205 may comprise a camera module, an infrared, IR, detector module, and/or a sound sensor module.
The interface 203 is configured to communicate with a server device using wired and/or wireless communications systems.
The portable electronic device 200 is configured to obtain one or more images including a first image of an item (e.g. via the interface module 203 and/or the capture module 205).
The portable electronic device 200 is configured to obtain one or more indicators based on the first image (e.g. via the processor module 202 and/or the interface module 203). The one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image.
The portable electronic device 200 is configured to compare (e.g. via the processor module 202) the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images.
The portable electronic device 200 is configured to determine, e.g. via the processor module 202, a comparison result based on the comparison.
The portable electronic device 200 is configured to communicate, e.g. via the interface module 203, the comparison result. The interface module 203 may comprise a display module configured to display a user interface object representative of the comparison result.
The portable electronic device 200 is configured to perform any of the operations S102, S104, S106, S108, S110, and optionally S112, S114, S116, S118, S120.
The processor module 202 is optionally configured to perform any of the operations disclosed in Fig. 3. The operations of the portable electronic device 200 may be embodied in the form of executable logic routines (e.g., lines of code, software programs, etc.) that are stored on a non-transitory computer readable medium (e.g., the memory module 201) and are executed by the processor module 202. Furthermore, the operations of the portable electronic device 200 may be considered a method that the portable electronic device is configured to carry out. Also, while the described functions and operations may be implemented in software, such functionality may as well be carried out via dedicated hardware or firmware, or some combination of hardware, firmware and/or software.
The memory module 201 may be one or more of a buffer, a flash memory, a hard drive, a removable media, a volatile memory, a non-volatile memory, a random access memory (RAM), or other suitable device. In a typical arrangement, the memory module 201 may include a non-volatile memory for long term data storage and a volatile memory that functions as system memory for the processor module 202. The memory module 201 may exchange data with the processor module 202 over a data bus. Control lines and an address bus between the memory module 201 and the processor module 202 also may be present (not shown in Fig. 4). The memory module 201 is considered a non-transitory computer readable medium.
The memory module 201 may be configured to store a reference model in a part of the memory.
Figs. 5A-5D schematically illustrate an exemplary video processed according to the present disclosure.
Fig. 5A shows an exemplary reference video 50 (or sequence of reference images) and an exemplary first video 60 (or sequence of images).
Fig. 5B shows exemplary indicators according to this disclosure. Fig. 5B shows an exemplary reference video formed by a sequence of four reference images 51, 52, 53, 54.
For example, the reference image 51 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 51 to characterize the movement of the baseball player and the bat forming the item: 51A, 51B, 51C, 51D, 51E, 51F, 51G, 51H, 51I, 51J, 51K, 51L, 51M, 51N, 51O, 51P, 51Q, 51R, 51S, 51T, 51U, 51V, 51W, 51X, 51Y, 51Z.
It is noted that reference images 51, 52, 53, 54 each illustrate the item comprising a baseball player and a bat, indicated by indicators 51S, 52AA, 53AE, 54C respectively.
For example, a reference image 52 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 52 to characterize the movement of the baseball player and the bat forming the item: 52A, 52B, 52C, 52D, 52E, 52F, 52G, 52H, 52I, 52J, 52K, 52L, 52M, 52N, 52O, 52P, 52Q, 52R, 52S, 52T, 52U, 52V, 52W, 52X, 52Y, 52Z, 52AA.
For example, a reference image 53 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 53 to characterize the movement of the baseball player and the bat forming the item: 53A, 53B, 53C, 53D, 53E, 53F, 53G, 53H, 53I, 53J, 53K, 53L, 53M, 53N, 53O, 53P, 53Q, 53R, 53S, 53T, 53U, 53V, 53W, 53X, 53Y, 53Z, 53AA, 53AB, 53AC, 53AD, 53AE.
For example, a reference image 54 is processed according to this disclosure and one or more of the following indicators are obtained based on the reference image 54 to characterize the movement of the baseball player and the bat forming the item: 54A, 54B, 54C, 54D, 54E, 54F, 54G, 54H, 54I, 54J, 54K, 54L, 54M, 54N, 54O, 54P, 54Q, 54R, 54S, 54T, 54U, 54V, 54W, 54X, 54Y, 54Z.
Figs. 5C-5D show an exemplary first video 60 comprising a first image 61, a second image 62, a third image 63, a fourth image 64, and a fifth image 65. It is noted that images 61, 62, 63, 64, 65 each illustrate the item comprising a baseball player and a bat, indicated by indicators 61AB, 62E, 63D, 64V, 65F respectively.
For example, the first image 61 is processed according to this disclosure and one or more of the following indicators are obtained based on the first image 61 to characterize the movement of the baseball player and the bat forming the item: 61A, 61B, 61C, 61D, 61E, 61F, 61G, 61H, 61I, 61J, 61K, 61L, 61M, 61N, 61O, 61P, 61Q, 61R, 61S, 61T, 61U, 61V, 61W, 61X, 61Y, 61Z, 61AA, 61AB.
For example, the second image 62 is processed according to this disclosure and one or more of the following indicators are obtained based on the second image 62 to characterize the movement of the baseball player and the bat forming the item: 62A, 62B, 62C, 62D, 62E, 62F, 62G, 62H, 62I, 62J, 62K, 62L, 62M, 62N, 62O, 62P, 62Q, 62R, 62S, 62T, 62U, 62V, 62W, 62X, 62Y, 62Z, 62AA.
For example, the third image 63 is processed according to this disclosure and one or more of the following indicators are obtained based on the third image 63 to characterize the movement of the baseball player and the bat forming the item: 63A, 63B, 63C, 63D, 63E, 63F, 63G, 63H, 63I, 63J, 63K, 63L, 63M, 63N, 63O, 63P, 63Q, 63R, 63S, 63T, 63U, 63V, 63W, 63X, 63Y, 63Z, 63AA, 63AB, 63AC.
For example, the fourth image 64 is processed according to this disclosure and one or more of the following indicators are obtained based on the fourth image 64 to characterize the movement of the baseball player and the bat forming the item: 64A, 64B, 64C, 64D, 64E, 64F, 64G, 64H, 64I, 64J, 64K, 64L, 64M, 64N, 64O, 64P, 64Q, 64R, 64S, 64T, 64U, 64V, 64W, 64X, 64Y, 64Z, 64AA, 64AB, 64AC.
For example, the fifth image 65 is processed according to this disclosure and one or more of the following indicators are obtained based on the fifth image 65 to characterize the movement of the baseball player and the bat forming the item: 65A, 65B, 65C, 65D, 65E, 65F, 65G, 65H, 65I, 65J, 65K, 65L, 65M, 65N, 65O, 65P, 65Q, 65R, 65S, 65T, 65U, 65V, 65W, 65X, 65Y, 65Z, 65AA, 65AB, 65AC.
It is noted that the reference video 50 and the first video 60 do not include the same number of images, and that the movement performed in the reference video 50 does not have the same timing as the movement performed in the first video 60. It is noted that the reference video 50 and the first video 60 are not taken from the same point of view.
The portable electronic device disclosed herein adjusts (as in step S104B) the one or more indicators for comparison by aligning (e.g. in step S104BC) the time of the one or more indicators with the time of the set of reference indicators. For example, the portable electronic device disclosed herein performs a geometrical transformation, e.g. a translation of the one or more indicators of the images of the first video 60 (e.g. a geometric translation with a joint time-point optimization), in order to compare them with the reference indicators of the reference images of the reference video 50 and determine the comparison result.
The use of the terms "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. does not imply any particular order, but they are included to identify individual elements. Moreover, the use of these terms does not denote any order or importance; rather, they are used to distinguish one element from another. Note that the words "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. are used here and elsewhere for labelling purposes only and are not intended to denote any specific spatial or temporal ordering. Furthermore, the labelling of a first element does not imply the presence of a second element and vice versa.
It may be appreciated that Figs. 1-5D comprise some modules or operations which are illustrated with a solid line and some modules or operations which are illustrated with a dashed line. The modules or operations which are drawn with a solid line are comprised in the broadest example embodiment. The modules or operations which are drawn with a dashed line are example embodiments which may be comprised in, or a part of, or are further modules or operations which may be taken in addition to, the modules or operations of the solid-line example embodiments. It should be appreciated that these operations need not be performed in the order presented. Furthermore, it should be appreciated that not all of the operations need to be performed. The exemplary operations may be performed in any order and in any combination.
It is to be noted that the word "comprising" does not necessarily exclude the presence of other elements or steps than those listed.
It is to be noted that the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements.
It should further be noted that any reference signs do not limit the scope of the claims, that the exemplary embodiments may be implemented at least in part by means of both hardware and software, and that several "means", "units" or "devices" may be represented by the same item of hardware.
The various exemplary methods, devices, nodes and systems described herein are described in the general context of method steps or processes, which may be implemented in one aspect by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), etc. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform specified tasks or implement specific abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
Although features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents.

Claims

1. A method, performed by a portable electronic device, the method comprising:
- obtaining (S102) one or more images including a first image of an item;
- obtaining (S104) one or more indicators based on the first image, wherein the one or more indicators comprise a first indicator, and the first indicator is indicative of a movement feature of the item of the first image;
- comparing (S106) the one or more indicators with a reference model comprising a first set of reference indicators indicative of reference movement features of a reference item in one or more reference images;
- determining (S108) a comparison result based on the comparison; and
- communicating (S110) the comparison result.
2. The method according to claim 1, wherein the one or more images include a second image, wherein the first image and the second image are part of a series of images forming a video.
3. The method according to any of the previous claims, the method comprising: obtaining (S116) the reference model.
4. The method according to claim 3, wherein obtaining (S116) the reference model comprises receiving (S116A) or capturing one or more reference images and generating (S116B) one or more sets of reference indicators based on the one or more reference images.
5. The method according to any of the previous claims, wherein obtaining (S102) the one or more images including the first image comprises capturing (S102A) the one or more images using a capture module of the electronic device and/or using one or more external camera modules of one or more external devices.
6. The method according to any of the previous claims, wherein obtaining (S102) the one or more images including the first image comprises receiving (S102B) the one or more images from one or more external devices.
7. The method according to any of claims 5-6, wherein obtaining (S102) the one or more images including the first image comprises combining (S102C) the one or more images from the one or more external devices to obtain an aggregated representation of a movement.
8. The method according to any of the previous claims, wherein obtaining (S104) the one or more indicators based on the first image comprises identifying (S104A), based on the first image, one or more vertices and/or one or more contour indicators.
9. The method according to any of the previous claims, wherein the first indicator of the first image comprises a first vertex of a part of the item, and/or a first position indicator and/or a timestamp of the first image.
10. The method according to any of the previous claims, wherein obtaining (S104) the one or more indicators based on the first image comprises adjusting (S104B) the one or more indicators for comparison.
11. The method according to claim 10, wherein adjusting (S104B) the one or more indicators for comparison comprises performing (S104BB) a geometrical transformation, wherein the geometrical transformation comprises one or more of: scaling the one or more indicators, performing a translation of the one or more indicators, and performing a rotation of the one or more indicators.
12. The method according to any of claims 10-11, wherein adjusting (S104B) the one or more indicators for comparison comprises aligning (S104BC) time of the one or more indicators with the time of the set of reference indicators.
13. The method according to any of the previous claims, wherein determining (S108) a comparison result based on the comparison comprises calculating (S108B) a first distance parameter between the first indicator and a reference indicator of the first set of reference indicators.
14. The method according to claim 13, wherein determining (S108) the comparison result based on the comparison comprises applying (S108A) a mapping based on the first distance parameter.
15. The method according to any of the previous claims, the method comprising: providing (S120) feedback to a user based on the comparison result, wherein the feedback is to provide information on how to improve the comparison result.
16. A portable electronic device comprising a memory module, a processor module, and an interface, wherein the portable electronic device is configured to perform any of the steps in claims 1-15.