CN115239939A - Method and device for simulating action track of object - Google Patents

Method and device for simulating action track of object

Info

Publication number
CN115239939A
CN115239939A (Application CN202211013373.3A)
Authority
CN
China
Prior art keywords
area
action
image
contour
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211013373.3A
Other languages
Chinese (zh)
Inventor
黄莘扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AU Optronics Corp
Original Assignee
AU Optronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AU Optronics Corp filed Critical AU Optronics Corp
Publication of CN115239939A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/23: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on positionally close patterns or neighbourhood relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for simulating the action trajectory of an object, which comprises capturing multiple frames of images from a film; determining a plurality of contour coordinates of a first tracked object in each image according to a first feature of the first tracked object; identifying a first frame shape according to the contour coordinates; generating a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulating the action trajectory of a target object according to the vector, wherein the target object is associated with the first tracked object; and digitizing the action trajectory and plotting it to the corresponding coordinates of a data visualization graph. The invention also provides an apparatus for simulating the action trajectory of an object.

Description

Method and device for simulating action track of object
Technical Field
The invention relates to the technology of image tracking; more particularly, the present invention relates to a method and apparatus for simulating the action trajectory of a particular object.
Background
With the development of artificial intelligence, image recognition and object detection technologies (such as the object detection algorithm YOLO, You Only Look Once) have improved greatly and are applied in fields such as autonomous driving, smart medical care, and face recognition. With machine learning techniques, an AI can continuously learn the features of a specific object from a training set, so that it can quickly and accurately capture the target object in different images and keep tracking it while it moves.
Object detection techniques are currently used for object tracking, such as identifying vehicles on roads, classifying vehicle types, and continuously tracking vehicles as they move. However, when the object to be tracked has no fixed shape (such as an air flow or a water column), an AI cannot recognize the shape of the object, learn its features from a training set, or even sense its existence in the image, so conventional object tracking techniques are difficult to apply to such objects.
Disclosure of Invention
An objective of the present invention is to provide a method and apparatus for simulating an action trajectory of an object, which can track a target object and further simulate the trajectory or range of its action (e.g. jetting water or air).
An objective of the present invention is to provide a method and an apparatus for simulating an action trajectory of an object, which can estimate, in a simple manner, the range of the image affected by the object and present the estimated range to a user.
The method for simulating the action trajectory of an object comprises capturing multiple frames of images from a film; determining a plurality of contour coordinates of a first tracked object in each image according to a first feature of the first tracked object; identifying a first frame shape according to the contour coordinates; generating a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulating the action trajectory of a target object according to the vector, wherein the target object is associated with the first tracked object; and digitizing the action trajectory and plotting it to the corresponding coordinates of a data visualization graph. By tracking a tangible object (such as a glove or a spray gun), the method can simulate the action trajectory associated with it (such as the direction and range of an air jet) and thereby estimate the distribution range of an intangible substance (such as air or water). From the data visualization, the user can easily distinguish, within a fixed area, the region that has been acted upon (e.g., cleaned) from the region that has not.
The apparatus for simulating the action trajectory of an object comprises at least one memory and at least one processor coupled to the memory. The processor is configured to: determine a plurality of contour coordinates of a first tracked object in each image according to a first feature of the first tracked object; identify a first frame shape according to the contour coordinates, wherein the first frame shape surrounds the contour coordinates of the first tracked object; generate a vector according to the relative relationship connecting a reference coordinate to the center-point coordinate of the first frame shape, and simulate the action trajectory of a target object according to the vector, wherein the target object may be identical to the first tracked object or directly or indirectly connected to it; and digitize the action trajectory and plot it to the corresponding coordinates of a data visualization graph. With this apparatus, a tangible object (such as a glove or a spray gun) can be tracked to simulate the action trajectory of the target object (such as the direction and range of an air jet), and the distribution range of an intangible substance (such as air or water) can thereby be estimated. From the data visualization, the user can easily distinguish, within a fixed area, the region that has been acted upon (e.g., cleaned) from the region that has not.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for simulating an object action trajectory according to an embodiment of the invention.
Fig. 2A is a schematic diagram of capturing an image according to an embodiment of the invention.
Fig. 2B is a schematic diagram of a captured image including a region of interest (ROI) according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an image and an identification result thereof according to another embodiment of the invention.
FIG. 4 is a diagram illustrating a simulated object action trajectory according to an embodiment of the invention.
FIG. 5A is a graphical representation of action trajectory values in accordance with an embodiment of the present invention.
FIG. 5B is a graphical illustration of action trajectory values in accordance with a further embodiment of the present invention.
Fig. 5C is a schematic diagram of a data visualization graph according to an embodiment of the invention.
FIG. 6 is a diagram illustrating a simulated object action trajectory when multiple target objects are included according to another embodiment of the present invention.
FIG. 7 is a diagram illustrating an apparatus for simulating an action trajectory of an object according to still another embodiment of the present invention.
Wherein the reference numerals are as follows:
100 … flowchart
101 … step
103 … step
105 … step
107 … step
108 … step
109 … step
201 … image
202 … region of interest (ROI)
203 … first tracking object
204 … subtracted partial image
205 … first frame shape
207 … reference coordinates
303 … inverted image
305 … error recognition frame
307 … correctly recognizes the frame
403 … coordinates of center point of first frame
405 … vector
407a … coordinate
407b … coordinate
407c … coordinate
500 … data visualization graph
503 … reference index
503a … instantaneous coverage
503b … trajectory index
700 … device
710 … memory
720 … processor
721 … image capturing unit
722 … image identification module
723 … computing unit
725 … data conversion unit
730 … display unit
740 … image source
T … action trajectory
R … range
A … spread angle
V_N … variable (accumulated value)
Detailed Description
As used herein below, "about", "approximately", or "substantially" includes the stated value and the average value within an acceptable range of deviation of the stated value, as determined by one of ordinary skill in the art, taking into account the measurement in question and the errors associated with the measurement (i.e., the limitations of the measurement system). For example, "about" may mean within one or more standard deviations of the stated value, or within ± 30%, ± 20%, ± 10%, or ± 5%. Further, the acceptable range of deviation or standard deviation for "about", "approximately", or "substantially" may be chosen according to optical properties, etching properties, or other properties, rather than applying one standard deviation to all properties.
It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a "first element," "component," "region," "layer" or "portion" discussed below could be termed a second element, component, region, layer or portion without departing from the teachings herein.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention relates to simulating the action trajectory of a target object (such as a spray gun) to evaluate the action range of an intangible or invisible substance (such as air or water). For example, when cleaning equipment in a factory, operations may be performed with devices such as water-jet or air spray guns; although conventional image recognition technology can recognize a tangible object (for example, a spray gun), it is difficult for it to recognize the distribution range of the water column or air flow emitted from that spray gun. The invention is therefore directed to providing the trajectory of the jet of air or water emitted by the spray gun, by means of which the cleaning of the equipment can be assessed and the cleaned and uncleaned areas can be quickly distinguished. The present disclosure is described in detail below through various embodiments. It should be understood that the embodiments are merely illustrative, to enable those skilled in the art to understand the disclosure; modifications may be made to them without departing from the spirit of the present disclosure, and the present disclosure is intended to cover such modifications.
Referring to fig. 1 and fig. 2A, a method for simulating an object action trajectory according to an embodiment of the present invention is illustrated. At step 101, the method includes capturing multiple frames of images 201 from a film. The film records the target object (e.g., a spray gun) and the range in which the target object may act (e.g., the equipment to be cleaned). In a preferred embodiment, the multiple frames of images 201 are captured continuously; in other embodiments, the images 201 may be captured at intervals, for example every 0.5 seconds over a period of time. The capture interval may be longer (e.g., every second) or shorter (e.g., every 0.3 seconds) as required. The interval affects the accuracy of the simulated action trajectory: the longer the interval, the larger the timing difference between successive images 201, and the more discontinuous, and hence less accurate, the simulated trajectory.
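By way of illustration, interval-based frame capture might be sketched as follows in Python with OpenCV (the 0.5-second interval and the FPS fallback are assumptions for the example, not part of the disclosure):

```python
import cv2

def capture_frames(video_path: str, interval_s: float = 0.5):
    """Grab one frame every interval_s seconds from a film."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(1, round(fps * interval_s))    # frames to skip between captures
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```

A shorter interval_s trades computation for a smoother, more continuous simulated trajectory, mirroring the trade-off described above.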
Continuing with the embodiment and referring to fig. 2B, capturing the multiple frames of images 201 from the film may further include capturing only a user-defined region of interest (ROI) 202 (the region shown by oblique lines). For example, when a user wants to know whether a piece of equipment is currently clean, only the portion of the film covering that equipment needs to be captured as the image 201. Extracting the ROI 202 may further include morphological processing of the extracted ROI 202, such as dilation to increase its area; this avoids missing objects that lie too close to the boundary of the ROI 202 or slightly beyond it. Alternatively, the image 201 may be processed by logical operations to obtain a ROI 202 with a specific shape; for example, a hollow frame-shaped ROI 202 may be obtained by subtracting a partial image 204 from the image 201.
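A minimal sketch of these two ROI operations, assuming binary uint8 masks and an illustrative kernel size:

```python
import cv2
import numpy as np

def dilated_roi(roi_mask: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Dilate a binary ROI mask so objects near its boundary are not missed."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(roi_mask, kernel, iterations=1)

def hollow_frame_roi(outer_mask: np.ndarray, inner_mask: np.ndarray) -> np.ndarray:
    """Logical subtraction: the outer region minus the inner region leaves
    a hollow frame-shaped ROI."""
    return cv2.bitwise_and(outer_mask, cv2.bitwise_not(inner_mask))
```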
It should be noted that the image 201 may be a file in any format (e.g., JPEG), and image processing (e.g., dimensionality reduction such as downscaling) may be performed on the image 201 to improve processing efficiency and lower the required computing power. The image 201 may also be converted from the RGB color model to the HSV color model, which facilitates subsequent color-based recognition and lets the user adjust colors more intuitively when setting color-related features. The above descriptions of the format and processing of the image 201 are exemplary; the image 201 may be given any other format or processing that does not conflict with the present disclosure.
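A possible preprocessing step under these suggestions, assuming OpenCV (whose in-memory channel order is BGR) and a 0.5 downscale factor chosen purely for illustration:

```python
import cv2

def preprocess(frame, scale: float = 0.5):
    """Downscale to cut computation, then convert BGR to HSV for color filtering."""
    small = cv2.resize(frame, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
```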
The method for simulating the action trajectory of the object according to an embodiment of the present invention is further described with reference to fig. 2A. At step 103, the method includes determining the contour coordinates of a first tracked object 203 in the image 201 based on a first feature. The first tracked object 203 may be a tangible object, such as a glove, with a particular first feature (e.g., the color red). According to the first feature, the first tracked object 203 can be identified in the image 201 (e.g., by color filtering), and a plurality of contour coordinates of its location can be obtained. In various embodiments, the first feature can be set by the user (for example, to a different color such as blue or green). At step 105, the method may identify a first frame 205 surrounding the contour coordinates (i.e., the range enclosed by the first frame 205 encompasses all the contour coordinates). It should be noted that in this embodiment the first frame 205 is the smallest area surrounding the contour coordinates, but in different embodiments the first frame 205 may have a different extent (e.g., twice the area surrounding the contour coordinates), and the invention is not limited thereto.
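A sketch of this color-filter-then-box step, assuming OpenCV 4 and HSV input; the red thresholds are illustrative (a real red filter often also needs a second hue band near 180), and in practice the first feature is user-configurable:

```python
import cv2
import numpy as np

LOWER_RED = np.array([0, 120, 70])     # illustrative HSV bounds for a red feature
UPPER_RED = np.array([10, 255, 255])

def find_first_frame(hsv_image):
    """Filter by the first feature (color), then box the contour coordinates."""
    mask = cv2.inRange(hsv_image, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)  # assume the object dominates
    return cv2.boundingRect(largest), largest     # (x, y, w, h) is the first frame
```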
Another embodiment of the method may include performing morphological image processing (e.g., erosion, dilation) on the identified contour coordinates of the first tracked object 203 to eliminate noise, restore broken contours, and so on, so that the contour coordinates come closer to the real shape of the first tracked object 203. Thereafter, the area enclosed by the contour coordinates may be calculated and compared with the estimated area of the first tracked object 203 (i.e., the area the first tracked object 203 should present in the image 201), and contour coordinates outside an error range (e.g., +/- 5%) are excluded. Similarly, the area of the first frame 205 can be calculated and compared with the estimated area of the first frame 205 (i.e., the area of the frame that should surround the first tracked object 203 in the image 201); any first frame 205 outside the error range (e.g., +/- 5%) is found, and the contour coordinates corresponding to it are excluded. Specifically, referring to fig. 3, the left side of fig. 3 is an exemplary image 201 and the right side is a schematic diagram of its recognition result. When the scene in the image 201 contains a smooth object such as metal, the first tracked object 203 may cast an inverted image 303 (a reflection) on the smooth object, and recognition may then identify both the first tracked object 203 and the inverted image 303 as the first tracked object 203 (the misidentification box 305 in the right figure shows the inverted image 303 incorrectly recognized as the first tracked object 203, while the correct recognition box 307 shows the first tracked object 203 correctly recognized), which degrades the recognition result. Since the reflection of an object (misidentification box 305) usually does not have the same complete shape as the object itself (correct recognition box 307), this embodiment excludes contour coordinates whose area falls outside the error range, thereby avoiding mistaking the inverted image 303 of the first tracked object 203 for the first tracked object 203 itself.
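These two refinements might look as follows; the kernel size, expected area, and +/- 5% tolerance are placeholders:

```python
import cv2
import numpy as np

def clean_mask(mask: np.ndarray) -> np.ndarray:
    """Opening removes speckle noise; closing restores broken contours."""
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def filter_by_area(contours, expected_area: float, tolerance: float = 0.05):
    """Drop candidates (e.g., reflections) whose area deviates from the
    expected object area by more than +/- tolerance."""
    lo, hi = expected_area * (1 - tolerance), expected_area * (1 + tolerance)
    return [c for c in contours if lo <= cv2.contourArea(c) <= hi]
```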
A method for simulating an object action trajectory according to an embodiment of the present invention is further described with reference to figs. 2A-2B and fig. 4. At step 107, the method includes generating a vector 405 based on the relative relationship between the reference coordinate 207 and the center-point coordinate 403 of the first frame 205. In this embodiment, the reference coordinate 207 is a predetermined fixed coordinate (e.g., the center coordinate of the image), and the vector 405 is generated by connecting the reference coordinate 207 to the center-point coordinate 403 of the first frame 205. The target object may be the first tracked object 203 itself (e.g., the spray gun), or, when the target object is difficult to identify and track (e.g., its features are not obvious or it is hidden), the target object may be an object directly or indirectly connected to the first tracked object 203 (e.g., the first tracked object 203 is a glove and the target object is the spray gun held by the glove). With this arrangement, even when the target object is difficult to identify and track (for example, when a cleaning person holds the spray gun in hand, the spray gun itself is small and hidden by the hand), the target object can still be tracked by identifying and tracking the first tracked object 203 directly or indirectly connected to it.
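The vector itself is simple arithmetic; a sketch (coordinates in pixels, image origin at the top-left):

```python
def action_vector(reference, frame_box):
    """Vector from the reference coordinate to the first frame's center point."""
    x, y, w, h = frame_box
    center = (x + w / 2.0, y + h / 2.0)
    vector = (center[0] - reference[0], center[1] - reference[1])
    return vector, center
```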
It should be noted that although the reference coordinate 207 is a fixed coordinate set in advance in this embodiment, in different embodiments the reference coordinate 207 may be determined by a second tracked object. The second tracked object includes a second feature (e.g., the color blue) that differs from the first feature, to avoid confusion during recognition; the second feature can likewise be set by the user. The second tracked object can be identified in the image 201 according to the second feature, its contour coordinates can be obtained, and a second frame surrounding those contour coordinates can be generated. The center point of the second frame then serves as the reference coordinate 207, and the vector 405 is generated by connecting this reference coordinate 207 to the center-point coordinate 403 of the first frame 205.
The method for simulating the action trajectory of the object according to the embodiment is further described with reference to fig. 4. Step 108 includes simulating an action trajectory T of the target object based on the vector 405. In this embodiment, simulating the action trajectory T of the target object uses, in addition to the vector 405, action trajectory parameters of the target object, which can be set by the user. The vector 405 is associated with the basic direction of action of the target object, and the action trajectory parameters are associated with the extent of the action trajectory T. For example, assuming the action trajectory T is a sector, the action trajectory parameters may include the range R of action of the target object (e.g., the straight-line distance the air or water sprayed by the spray gun can reach, i.e., the radius of the sector) and the spread angle A (i.e., the central angle of the sector). The vector 405 gives a preliminary simulation of the direction of action of the target object, and combining it with the action trajectory parameters (range R, spread angle A, etc.) makes the simulated action trajectory more accurate. It should be noted that although the action trajectory T is a sector in this embodiment, in different embodiments it may be simulated with other shapes (such as a triangle), and the invention is not limited thereto.
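A sketch of rasterizing such a fan-shaped trajectory, assuming the sector's apex sits at the first frame's center and opens along the vector; OpenCV fills an ellipse sector when the thickness is negative, and its angles run clockwise because the image y-axis points down:

```python
import cv2
import numpy as np

def sector_mask(shape, apex, vector, radius_r: int, spread_a_deg: float):
    """Fan-shaped action trajectory T: radius R, central (spread) angle A."""
    mask = np.zeros(shape[:2], np.uint8)
    heading = np.degrees(np.arctan2(vector[1], vector[0]))
    cv2.ellipse(mask, (int(apex[0]), int(apex[1])), (radius_r, radius_r), 0,
                heading - spread_a_deg / 2, heading + spread_a_deg / 2, 255, -1)
    return mask
```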
The following continues the description of a method for simulating an object action trajectory according to an embodiment of the present invention with reference to fig. 4, fig. 5A, fig. 5B, and fig. 5C. Step 109 includes digitizing the action trajectory T and plotting it to the corresponding coordinates of the data visualization graph 500. Referring to fig. 5A, the action trajectory T of the target object can be simulated as described above, and digitizing the action trajectory T may include, for example, assigning the same value to every coordinate covered by the action trajectory T in the same frame of image 201 (i.e., every point of the fan-shaped range has the same value), such as assigning the same value (e.g., 10) to the coordinates 407a, 407b, and 407c. After the multiple frames of images 201 have been processed in the same way, and since the distribution range of the action trajectory T may differ from frame to frame, a variable V_n representing the accumulated value at each coordinate point accumulates, over the timing of the frames of images 201, the values that each frame assigns at that coordinate. Thus, according to the distribution of the action trajectory T in the multiple frames of images 201, the variable V_n of each coordinate point may accumulate values of different sizes (e.g., in fig. 5A, V_n has the value 20 where two action trajectories T overlap), and the data visualization graph 500 can then present the degree to which each coordinate point is affected by the target object.
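A minimal accumulation sketch (the per-frame value 10 follows the example above; the masks are the per-frame sector rasters):

```python
import numpy as np

def accumulate_v(trajectory_masks, value: float = 10.0) -> np.ndarray:
    """V_n per coordinate: each frame adds `value` wherever its action
    trajectory covers; overlap across frames builds larger totals."""
    acc = np.zeros(trajectory_masks[0].shape, np.float32)
    for mask in trajectory_masks:
        acc[mask > 0] += value
    return acc
```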
Referring to FIG. 5B, in various embodiments, different coordinate points covered by the action trajectory T in the image 201 may be given different values. For example, when water or air is ejected from a nozzle, air resistance and gravity reduce its distribution density with increasing distance. Thus, coordinates closer to the target object may be given larger values (e.g., 40), and coordinates farther away may be given successively smaller values (i.e., coordinate 407a may be given a value greater than coordinate 407b, and coordinate 407b a value greater than coordinate 407c). As before, after all images 201 are processed in the same way, the values are accumulated at each coordinate point; this simulates, for example, that a water column acts on a region to a decreasing extent as distance increases when the target object is a water-jet spray gun. The above numerical settings are exemplary; other values may be used without departing from the spirit of the present invention, and the invention is not limited thereto.
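A distance-weighted variant might look as follows; the near value of 40 and the linear falloff are illustrative assumptions:

```python
import numpy as np

def distance_weighted(mask: np.ndarray, apex, near_value: float = 40.0):
    """Give nearer covered coordinates larger values, decreasing with distance."""
    out = np.zeros(mask.shape, np.float32)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return out
    dist = np.hypot(xs - apex[0], ys - apex[1])
    out[ys, xs] = near_value * (1.0 - dist / (dist.max() + 1e-9))
    return out
```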
The data visualization graph 500 represents values graphically through colors, blocks, lines, and the like. Referring to fig. 5C, in the present embodiment the value of each coordinate point may be presented as a thermodynamic diagram (heat map) according to its magnitude (i.e., coordinate points acted on more strongly by the target object are closer to red, and those acted on less are closer to purple).
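Rendering the accumulated values as a thermodynamic diagram can be sketched with a standard colormap (JET runs blue to red; the red-to-purple scale described above is analogous):

```python
import cv2
import numpy as np

def to_heatmap(acc: np.ndarray) -> np.ndarray:
    """Normalize accumulated values to 0-255 and color-map them."""
    norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)
```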
Continuing with fig. 5C, in this embodiment the data visualization graph 500 may present only the region of the ROI 202. For example, when the equipment the user wants to clean appears as a frame-shaped area in the image, the data visualization graph 500 may present the color distribution of that frame-shaped area only, so the user can quickly grasp the cleaning condition of the equipment. In addition, the data visualization graph 500 may include reference indices 503, such as an instantaneous coverage 503a and a trajectory index 503b of the action trajectory T. The instantaneous coverage 503a is obtained by calculating the area covered by the action trajectory T (i.e., acted upon by the target object) on the data visualization graph 500 and dividing it by the total area of the image 201 or ROI 202; the trajectory index 503b is obtained by calculating the total accumulated value of the action trajectory T on the data visualization graph 500 and dividing it by the total area of the image 201 or ROI 202. Since the instantaneous coverage 503a and the trajectory index 503b change with time and reflect the distribution of the action trajectory T, the user can judge the real-time action of the target object from these reference indices 503. The foregoing description of the reference indices 503 is merely exemplary; in different embodiments they may include other indices that provide reference information associated with the action trajectory T, such as the cumulative action time of the action trajectory T.
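Both reference indices reduce to simple ratios over the ROI; a sketch assuming acc holds the accumulated values and roi_mask marks the monitored area:

```python
import numpy as np

def reference_indices(acc: np.ndarray, roi_mask: np.ndarray):
    """Instantaneous coverage 503a and trajectory index 503b over the ROI."""
    total_area = float(np.count_nonzero(roi_mask))
    covered = np.count_nonzero((acc > 0) & (roi_mask > 0))
    coverage = covered / total_area                      # fraction acted upon
    track_index = float(acc[roi_mask > 0].sum()) / total_area
    return coverage, track_index
```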
Referring to fig. 6, in various embodiments a single frame of the image 201 may contain a plurality of first tracked objects 203 sharing the same first feature (e.g., red). According to the first feature, the first tracked objects 203 can be identified in the image 201, yielding several groups of contour coordinates (the coordinates of one group correspond to one first tracked object 203) and a plurality of first frames 205, each surrounding one group of contour coordinates. In the present embodiment, a plurality of vectors 405 can be generated from the reference coordinate 207 toward the center coordinates of the respective first frames 205, and a plurality of action trajectories T can be generated from these vectors 405, thereby simulating a situation in which multiple target objects act simultaneously. Specifically, when monitoring the cleaning of equipment there may be several cleaning tasks in progress at once (for example, several cleaning personnel spraying water jets simultaneously); with this configuration the action trajectories T of multiple spray guns can be simulated at the same time to reveal the real-time cleaning state of the equipment.
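Handling several tracked objects is a matter of keeping every qualifying contour instead of only the largest; min_area is an illustrative noise floor:

```python
import cv2

def all_first_frames(mask, min_area: float = 100.0):
    """One bounding box per tracked object sharing the first feature."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```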
Referring to fig. 7, an apparatus 700 for simulating the action trajectory of an object according to another embodiment of the present invention includes a memory 710, a processor 720 coupled to the memory, and a display unit 730. The memory 710 stores instructions; when executing these instructions, the processor 720 is configured to perform the method for simulating the action trajectory of an object of any of the embodiments described above. Specifically, the processor 720 includes an image capturing unit 721, an image recognition module 722, a calculating unit 723, and a data conversion unit 725. The image capturing unit 721 captures the multiple frames of images 201 from the image source 740 (i.e., the film containing the target object and the scene of its action range) and transmits them to the image recognition module 722. The image recognition module 722 determines the contour coordinates of the first tracked object 203 from each captured image 201 and identifies the first frame 205. Thereafter, the image recognition module 722 transmits the recognition result to the calculating unit 723, and the calculating unit 723 generates the vector 405 according to the relative relationship between the identified first frame 205 and the reference coordinate 207. Using the action trajectory parameters and the vector 405, the calculating unit 723 can simulate the action trajectory T of the target object in each frame of the image 201 and digitize it. A variable V_N representing the accumulated value at each coordinate point may be stored in a register, and the calculating unit 723 continuously accumulates V_N with the timing of the multiple frames of images 201 as it digitizes the action trajectories T. Finally, the data conversion unit 725 converts the value V_N of each coordinate point into image form (e.g., a color representing the value), so that each coordinate point has a corresponding image characteristic (e.g., a color), and transmits the signal to the display unit 730 for viewing by the user.
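Tying the units of fig. 7 together, a compact end-to-end sketch of capture, recognition, trajectory simulation, accumulation, and visualization; the color thresholds, 0.5-second interval, 200-pixel radius, 30-degree spread, and fixed reference coordinate are all placeholder assumptions:

```python
import cv2
import numpy as np

def run_pipeline(video_path: str, reference=(320, 240)):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, round(fps * 0.5))          # image capturing unit 721
    acc, i = None, 0                         # acc plays the role of V_N
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            if acc is None:
                acc = np.zeros(frame.shape[:2], np.float32)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # first feature
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:                     # image recognition module 722
                x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
                cx, cy = x + w / 2, y + h / 2            # center point 403
                ang = np.degrees(np.arctan2(cy - reference[1], cx - reference[0]))
                sector = np.zeros_like(acc, np.uint8)    # calculating unit 723
                cv2.ellipse(sector, (int(cx), int(cy)), (200, 200), 0,
                            ang - 15, ang + 15, 255, -1)
                acc[sector > 0] += 10.0      # digitized action trajectory T
        i += 1
    cap.release()
    if acc is None:
        raise ValueError("no frames read from the film")
    norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)     # data conversion unit 725
```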

Claims (24)

1. A method for simulating an action trajectory of an object, comprising:
capturing a plurality of frames of images from a film;
determining a plurality of contour coordinates of a first tracked object in each image according to a first feature of the first tracked object;
identifying a first frame shape according to the contour coordinates;
generating a vector according to the relative relation between a reference coordinate and the coordinate of the center point of the first frame shape, and simulating an action track of a target object according to the vector, wherein the target object is related to the first tracking object; and
the action trajectory is digitized and plotted to corresponding coordinates of a data visualization graph.
2. The method of claim 1, wherein the target object is the same as the first tracking object or is directly or indirectly connected to the first tracking object.
3. The method of claim 1, wherein the image capturing step comprises setting an area to be monitored, so that only the area to be monitored is captured from the film; and the step of plotting to the data visualization graph comprises generating the data visualization graph only within the area to be monitored.
4. The method of claim 3, wherein capturing the images further comprises dilating the area to be monitored so as to capture an expanded area to be monitored.
5. The method of claim 1, wherein determining the contour coordinates of the first tracked object in the image comprises dilating and eroding the contour of the first tracked object.
6. The method of claim 1, wherein simulating the action trajectory comprises simulating the action trajectory based on an action trajectory parameter and the vector.
7. The method of claim 1, further comprising:
calculating a contour area according to the contour coordinate;
comparing the outline area with an actual area of the first tracked object to generate a comparison result; and
excluding the contour coordinates outside the error range based on the comparison result.
8. The method of claim 1, further comprising:
calculating a first frame area of the first frame;
comparing the area of the first frame with an estimated area of the first frame to generate a comparison result; and
and excluding, according to the comparison result, any first frame shape outside an error range, and further excluding the contour coordinates corresponding to the excluded first frame shape.
9. The method of claim 1, wherein the step of plotting to the data visualization graph further comprises:
associating the action track with at least one reference index; and
merging the reference index into the data visualization graph.
10. The method of claim 1, further comprising:
determining coordinates of a center point of a second tracked object, wherein the second tracked object has a second characteristic different from the first characteristic of the first tracked object; and
setting the center point coordinate of the second tracked object as the reference coordinate.
11. The method of claim 1, wherein the step of digitizing the action trajectory comprises setting the same or different values according to different location coordinates of the action trajectory.
12. The method of claim 1, wherein the image can be reduced in dimensionality or converted from an RGB color model to an HSV color model.
13. An apparatus for simulating an action trajectory of an object, comprising:
at least one memory having a plurality of instructions stored thereon; and
at least one processor coupled to the memory, wherein, when executing the instructions, the processor is configured to:
capturing a plurality of frames of images from a film;
determining a plurality of contour coordinates of a first tracked object in each image according to a first feature of the first tracked object;
identifying a first frame shape according to the contour coordinates;
generating a vector according to the relative relation between a reference coordinate and the coordinate of the center point of the first frame shape, and simulating an action track of a target object according to the vector, wherein the target object is related to the first tracking object; and
the action trajectory is digitized and plotted to corresponding coordinates of a data visualization graph.
14. The apparatus of claim 13, wherein the target object is identical to the first tracking object or directly or indirectly connected to the first tracking object.
15. The apparatus of claim 13, wherein the processor is configured to set an area to be monitored when capturing the images, so that only the area to be monitored is captured from the film; and plotting to the data visualization graph comprises generating the data visualization graph only within the area to be monitored.
16. The apparatus of claim 15, wherein capturing the images further comprises dilating the area to be monitored so as to capture an expanded area to be monitored.
17. The apparatus of claim 13, wherein determining the contour coordinates of the first tracked object in the image comprises dilating and eroding the contour of the first tracked object.
18. The apparatus of claim 13, wherein simulating the action trajectory comprises simulating the action trajectory based on an action trajectory parameter and the vector.
19. The apparatus of claim 13, wherein the processor is further configured to:
calculating a contour area according to the contour coordinate;
comparing the outline area with an actual area of the first tracked object to generate a comparison result; and
excluding the contour coordinates outside the error range based on the comparison result.
20. The apparatus of claim 13, wherein the processor is further configured to:
calculating a first frame area of the first frame;
comparing the area of the first frame with an estimated area of the first frame to generate a comparison result; and
and excluding, according to the comparison result, any first frame shape outside an error range, and further excluding the contour coordinates corresponding to the excluded first frame shape.
21. The apparatus of claim 13, wherein the processor is further configured to:
associating the action track with at least one reference index; and
and merging the reference index into the data visualization graph.
22. The apparatus of claim 13, wherein the processor is further configured to:
determining coordinates of a center point of a second tracked object, wherein the second tracked object has a second characteristic different from the first characteristic of the first tracked object; and
setting the center point coordinate of the second tracked object as the reference coordinate.
23. The apparatus of claim 13, wherein the processor is configured, when digitizing the action trajectory, to set the same or different values according to different position coordinates of the action trajectory.
24. The apparatus of claim 13, wherein the image may be reduced in dimensionality or converted from an RGB color model to an HSV color model.
CN202211013373.3A | Priority: 2021-10-14 | Filed: 2022-08-23 | Method and device for simulating action track of object | Pending | CN115239939A (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
TW110138215A (TWI835011B) | 2021-10-14 | 2021-10-14 | A method and apparatus for simulating the acting track of an object
TW110138215 | 2021-10-14 | |

Publications (1)

Publication Number | Publication Date
CN115239939A | 2022-10-25

Family

ID=83681505

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211013373.3A | Method and device for simulating action track of object (Pending) | 2021-10-14 | 2022-08-23

Country Status (2)

Country Link
CN (1) CN115239939A (en)
TW (1) TWI835011B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI294107B (en) * 2006-04-28 2008-03-01 Univ Nat Kaohsiung 1St Univ Sc A pronunciation-scored method for the application of voice and image in the e-learning
TWI507919B (en) * 2013-08-23 2015-11-11 Univ Kun Shan Method for tracking and recordingfingertip trajectory by image processing
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
KR101744042B1 (en) * 2016-01-14 2017-06-07 주식회사 골프존뉴딘 Apparatus for base-ball practice, sensing device and sensing method used to the same and control method for the same
KR101912126B1 (en) * 2016-02-04 2018-10-29 주식회사 골프존뉴딘홀딩스 Apparatus for base-ball practice, sensing device and sensing method used to the same and control method for the same

Also Published As

Publication number Publication date
TW202316314A (en) 2023-04-16
TWI835011B (en) 2024-03-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination