CN115457666A - Method and system for identifying moving gravity center of living body object and computer readable storage medium - Google Patents


Info

Publication number
CN115457666A
CN115457666A
Authority
CN
China
Prior art keywords
gravity
center
motion
living
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211271932.0A
Other languages
Chinese (zh)
Inventor
杨世亮
刘子仪
肖何
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Tangmi Technology Co ltd
Original Assignee
Chengdu Tangmi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Tangmi Technology Co ltd
Priority to CN202211271932.0A
Publication of CN115457666A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention discloses a method and system for identifying the motion center of gravity of a living object, and a computer-readable storage medium, belonging to the technical field of pet photography. The method comprises the following steps: S1, acquiring raw data of a living object that has left the ground and is subject only to gravity, the raw data comprising video data and image data; S2, acquiring a reference image containing only the living object from the raw data; S3, acquiring from the reference image the horizontal relative velocity of each local body part of the living object at a first moment; S4, fitting these relative velocities to obtain the average horizontal velocity of the living object; S5, acquiring the two-dimensional image block in the reference image whose velocity equals the average velocity, the coordinates of this block being the motion center-of-gravity coordinates. The identified motion center of gravity is close to the actual center of gravity of the living object, and the amount of computation is greatly reduced.

Description

Method and system for identifying moving gravity center of living body object and computer readable storage medium
Technical Field
The invention belongs to the technical field of pet photography, relates to moving-object photography technology, and particularly relates to a method and system for identifying the motion center of gravity of a living object, and a computer-readable storage medium.
Background
One of the key technologies in photography is focusing accurately on the object being photographed. If the object is moving, the camera must keep following it during shooting, i.e., perform focus following. Shooting moving objects therefore places high demands on the photographer's focusing skill. Based on this need, automatic focus-following techniques have been developed and applied in many scenarios.
For focus-following photography of a moving object, two approaches are generally adopted. One is to identify and extract the target from the image acquired by the camera and calculate the object's position for feedback-type focus adjustment; the other is to actively adjust the focus according to a preset trajectory, for targets whose trajectory is known in advance.
However, when applied to living objects such as animals and pets, the feedback-type scheme based on target extraction and identification has the following defects:
(1) Feedback-type focus adjustment requires the camera system to have strong image-recognition computing power and fast focusing response, which is difficult to achieve when the object moves quickly or the focus-following requirements are high;
(2) When living bodies such as animals and pets are the targets, their shapes are not fixed (an animal looks very different when squatting, standing, running, jumping or curling up) and change frequently and rapidly, which makes target identification very difficult.
Disclosure of Invention
In order to solve the above-mentioned problems of the prior art, the present invention provides a method, a system and a computer readable storage medium for identifying the moving center of gravity of a living object.
In order to achieve the purpose, the invention adopts the technical scheme that:
a method for recognizing the moving gravity center of a living object is provided, which is characterized by comprising the following steps,
s1, acquiring original data of a living object which leaves the ground and is only subjected to gravity, wherein the original data comprises video data and image data;
s2, acquiring a reference image of only a living object in the original data;
s3, acquiring the relative speed of the local body of the living body object at a first moment in the horizontal direction from the reference image;
s4, fitting the relative speed of the local body at the first moment in the horizontal direction to obtain the average speed of the living body object in the horizontal direction;
and S5, acquiring a two-dimensional image block with the same average speed in the reference image, wherein the coordinate of the two-dimensional image block is the motion gravity center coordinate.
Preferably, in step S4,
the average horizontal velocity of the living object is
V = F(v1, v2, ..., vm)  (1)
wherein the local body parts of the living object include the head, eyes, front paws and rear paws;
wherein vm is the horizontal velocity of a local body part of the living object at the first moment, and m is an integer greater than 1;
wherein F() is the fitting function for the average velocity.
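The patent does not fix the form of the fitting function F. One natural choice consistent with the horizontal momentum conservation invoked later in the description (an assumption, not stated in the source) is a mass-weighted mean of the local-body velocities:

```latex
% Assumption: F is a mass-weighted mean; the patent leaves F unspecified.
% Horizontal momentum conservation of an airborne body of total mass M:
M V = \sum_{i=1}^{m} m_i v_i
\quad\Longrightarrow\quad
V = F(v_1, v_2, \dots, v_m) = \sum_{i=1}^{m} w_i v_i,
\qquad w_i = \frac{m_i}{M},\quad \sum_{i=1}^{m} w_i = 1
```

With equal weights this reduces to the plain arithmetic mean of the part velocities.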
Preferably, the surface hair feature of the living object at the position corresponding to the motion center-of-gravity coordinates in step S5 is taken as the motion center-of-gravity recognition object of the living object.
Preferably, before step S2 the raw data are uploaded to a cloud computer, and after step S5 the motion center-of-gravity coordinates computed by the cloud computer, together with the corresponding reference image, are returned to the local device.
Preferably, the motion center-of-gravity coordinates of the same living object are obtained from a plurality of captured videos.
Preferably, when the living object is completely stretched out, the distances S = {S1, S2, ..., Sn} between the motion center-of-gravity coordinates of the living object and its feature parts are calculated and recorded;
wherein the feature parts include the eyes, nose, ears, head, legs, belly and tail;
wherein Sn represents the distance from a local feature in the image data to the motion center of gravity, and n is an integer greater than 2.
Preferably, the motion barycentric coordinates of the living object are updated at intervals.
Preferably, after step S5 a motion center-of-gravity library is established, and the motion center-of-gravity information is stored in the library;
wherein the motion center-of-gravity information includes the motion center-of-gravity recognition object at a certain time and its corresponding reference image.
Preferably, time information is used as the storage label of the motion center-of-gravity information;
the time information comprises first time information and second time information;
the first time information is used as the first storage label of the motion center-of-gravity information, and the second time information as the second storage label.
Preferably, the first time information includes morning, afternoon, evening and early morning;
the second time information includes before-meal and after-meal times.
A motion center-of-gravity recognition system for a living object, comprising
an executable program capable of executing the above method for identifying the motion center of gravity of a living object.
A computer-readable storage medium,
for storing a computer program whose execution implements the above method for identifying the motion center of gravity of a living object.
The beneficial effects of the invention are as follows: a method, a system and a computer-readable storage medium for identifying the motion center of gravity of a living object are provided. By simplifying complex data when solving for the motion center of gravity, the amount of computation is greatly reduced, so identification can be achieved even on weak hardware, meeting the requirements of low-cost, high-volume hardware production.
Description of the drawings:
FIG. 1 is a flow chart of a method for identifying the motion center of gravity of a living object;
FIG. 2 is a flow chart of another method for identifying the motion center of gravity of a living object;
FIG. 3 is a schematic diagram of a method for identifying the motion center of gravity of a living object;
FIG. 4 is a schematic view of a motion center-of-gravity library;
FIG. 5 is a schematic view of another motion center-of-gravity library.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-5, the embodiments of the present invention are as follows:
example 1:
a method for recognizing the moving gravity center of a living object is characterized by comprising the following steps,
s1, acquiring original data of a living object which leaves the ground and is only subjected to gravity, wherein the original data comprises video data and image data;
s2, acquiring a reference image of only a living object in the original data;
s3, acquiring the relative speed of the local body of the living body object at a first moment in the horizontal direction from the reference image;
s4, fitting the relative speed of the local body at the first moment in the horizontal direction to obtain the average speed of the living object in the horizontal direction;
and S5, acquiring a two-dimensional image block with the same average speed in the reference image, wherein the coordinate of the two-dimensional image block is the motion gravity center coordinate.
With the continuous development of the Internet, large numbers of pet enthusiasts share information about their pets, including videos, pictures and text. Short videos spread very quickly and widely, and pet content gradually occupies a large share of them; in particular, pet owners want to watch videos and pictures of their own pets. Providing equipment for photographing one's own pets is therefore very meaningful.
When shooting with a camera installed at home, the problem of automatic focus following often arises: when shooting a moving object, the camera must refocus on it in time to keep the object sharp. In the prior art, focus following is divided into two modes, manual and automatic. A master-level photographer can achieve a good result by focusing manually on the moving object, but such photographers are few and this approach is too expensive.
One automatic mode is feedback focus following, in which the moving object is first identified, the distance between the object and the lens is then determined, and focusing is performed. The other automatic mode focuses along a fixed motion trajectory: the focus can be preset according to a known trajectory to achieve automatic focus following. However, this method presupposes that the trajectory is known in advance, whereas living objects such as pets move irregularly and their trajectories cannot be known in advance, so the method cannot accurately follow a living object, and it is difficult to shoot a sharp video of its motion.
When a living object moves rapidly, it is very likely that only a local body part moves: for example, a pet's head stays still while its body moves, or its body stays still while its head moves. In such cases the camera automatically adjusts the focal length, but because the pet's motion is irregular, the adjustment is very likely to overshoot, so the captured video exhibits unacceptable jitter and blurred images, greatly degrading the quality of the raw data and giving the user an extremely poor experience.
When a living object such as a pet (for example a cat or dog) moves, its motion as a whole body is simple; in particular, when it jumps off the ground it is acted on only by gravity, and no matter how its posture changes, its center of gravity follows the simple trajectory of a body under a single force. Therefore, if the unique center of gravity of the object can be identified and accurately acquired, and its coordinates applied to focus-following shooting, this becomes one of the key technologies for high-quality motion footage; in particular, focus-following shooting with accurate trajectory prediction can be achieved for airborne behaviors such as jumping.
In the present embodiment, as shown in fig. 1, a method for identifying the motion center of gravity of a living object is provided, comprising: S1, acquiring raw data of a living object that has left the ground and is subject only to gravity, the raw data comprising video data and image data; S2, acquiring a reference image containing only the living object from the raw data; S3, acquiring from the reference image the horizontal relative velocity of each local body part at a first moment; S4, fitting these velocities to obtain the average horizontal velocity of the living object; S5, acquiring the two-dimensional image block in the reference image whose velocity equals the average velocity, the coordinates of this block being the motion center-of-gravity coordinates. Since the number of pets in a user's home is limited, generally 1-2, and the pet is relatively fixed, the motion center of gravity of the living object can be learned from its daily motion: the motion center-of-gravity image is learned and stored, and the object is then tracked during shooting. The method only needs to compute the motion center of gravity of the user's own pet: motion videos of the fixed pet are captured and processed to obtain the motion center-of-gravity coordinates of the living object in the video data.
Compared with feedback-type automatic focus following, the method can predict the trajectory of the two-dimensional image block identified as the motion center of gravity; the predicted trajectory is close to the actual trajectory of the living object, so the picture shot while following focus on this block is sharp, avoiding blurring and jumping. The invention simplifies complex data when solving for the motion center of gravity, greatly reducing the amount of computation, so identification can be achieved on weak hardware, meeting the requirements of low-cost, high-volume hardware production.
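Step S5 is described only by its goal: find the image block whose horizontal velocity equals the fitted average V. A minimal sketch of one way to do this (an assumption; the patent gives no search procedure) scans fixed-size blocks over a dense per-pixel horizontal-velocity map of the reference image and returns the block whose mean velocity is closest to V:

```python
# Sketch of step S5 (assumed search strategy, not specified by the patent):
# exhaustively scan fixed-size blocks of a per-pixel horizontal-velocity
# map and keep the block whose mean velocity best matches V.

def find_gravity_block(velocity_map, V, block=4):
    """velocity_map: 2-D list of per-pixel horizontal velocities.
    Returns (row, col) of the top-left corner of the best-matching block."""
    rows, cols = len(velocity_map), len(velocity_map[0])
    best, best_err = None, float("inf")
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            # mean horizontal velocity over the block
            s = sum(velocity_map[r + i][c + j]
                    for i in range(block) for j in range(block))
            err = abs(s / (block * block) - V)
            if err < best_err:
                best, best_err = (r, c), err
    return best
```

A production system would replace the exhaustive scan with an optimized search, but the matching criterion (block mean velocity versus V) is the same.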
Example 2:
In step S4,
the average horizontal velocity of the living object is
V = F(v1, v2, ..., vm)  (1)
wherein the local body parts of the living object include the head, eyes, front paws and rear paws;
wherein vm is the horizontal velocity of a local body part of the living object at the first moment, and m is an integer greater than 1;
wherein F() is the fitting function for the average velocity.
When the living object jumps off the ground, it is acted on only by gravity; its center of gravity is unique, and its unique center-of-gravity trajectory can be determined from its motion, so the focal length of the camera can be determined and adjusted without picture jitter. Since the living object experiences no horizontal force, its horizontal momentum is conserved.
In the present embodiment, the relative horizontal velocity of each local body part at the first moment is computed, and the average horizontal velocity V of the living object is fitted with the fitting function according to conservation of horizontal momentum; V is the horizontal velocity of the center of gravity of the living object. In one embodiment, the living object is a cat, and its average horizontal velocity is V = F(v1, v2, v3), where v1 is the velocity of the cat's front paw at a certain moment, v2 the velocity of its rear paw, and v3 the velocity of its head.
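A minimal sketch of the fitting function F from equation (1). The patent does not specify F's form; a weighted mean is assumed here, with weights standing in for approximate mass fractions of the body parts (front paw, rear paw, head in the cat example):

```python
# Assumed form of F from equation (1): a weighted mean of the local-body
# horizontal velocities. Weights are hypothetical mass fractions; with no
# weights given, a plain arithmetic mean is used.

def fit_average_velocity(velocities, weights=None):
    """velocities: horizontal speeds v1..vm of local body parts at the
    first moment; weights: optional fractions summing to 1."""
    if weights is None:
        weights = [1.0 / len(velocities)] * len(velocities)
    return sum(w * v for w, v in zip(weights, velocities))
```

For the cat example, `fit_average_velocity([v1, v2, v3])` plays the role of V = F(v1, v2, v3).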
Example 3:
The surface hair feature of the living object at the position corresponding to the motion center-of-gravity coordinates in step S5 is taken as the motion center-of-gravity recognition object of the living object.
Since the center of gravity of a living object generally lies inside its body, and the data captured by a camera is a two-dimensional image from which the three-dimensional coordinates of the center of gravity are hard to locate, in the present embodiment the hair feature on the surface of the living object at the position corresponding to the motion center-of-gravity coordinates is taken as the recognition object. The hair feature can serve as an approximate recognition object in the two-dimensional image, greatly reducing the amount of computation for later center-of-gravity identification.
In one embodiment, the hair-feature image is used as a template; in subsequently captured raw data, the motion center of gravity of the living object is quickly found in each frame through template matching, enabling accurate focus following of the living object.
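The template-matching step can be sketched as follows. This is an illustrative sum-of-squared-differences matcher, not the patent's implementation; a production system would use an optimized matcher (for example OpenCV's matchTemplate):

```python
# Hedged sketch of template matching: locate the stored hair-feature
# patch in a new frame by minimizing the sum of squared differences (SSD)
# over all candidate positions.

def match_template(frame, template):
    """frame, template: 2-D lists of grey values.
    Returns (row, col) of the best match (top-left corner)."""
    th, tw = len(template), len(template[0])
    best, best_ssd = None, float("inf")
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            ssd = sum((frame[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_ssd:
                best, best_ssd = (r, c), ssd
    return best
```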
Example 4:
Before step S2 the raw data are uploaded to a cloud computer, and after step S5 the motion center-of-gravity coordinates computed by the cloud computer, together with the corresponding reference image, are returned to the local device.
In this embodiment, as shown in figs. 2-3, motion videos of the fixed pet are captured and uploaded to the cloud computer, which calculates the motion center-of-gravity coordinates and the corresponding reference image data; the motion center-of-gravity recognition object and the corresponding reference image obtained by the cloud computer are then returned to the local device. The locally captured raw data then need no complex fitting to locate the motion center of gravity of the living object: the returned recognition object is used directly as the search target. This greatly reduces the amount of later computation, and the recognition speed is also greatly improved.
Example 5:
the motion barycentric coordinates of the same living subject are obtained by shooting a plurality of video data.
In this embodiment, due to the partial distortion of the two-dimensional image, there may be a case that there is more than one two-dimensional image block with the same speed as the average speed V in the original data, some of the two-dimensional image blocks do not correspond to the motion center of gravity, and it is necessary to perform filtering through a plurality of video data to filter out the two-dimensional image blocks that do not correspond to the motion center of gravity.
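The multi-video filtering can be sketched as a set intersection: only candidate blocks consistent with every video survive. How candidate blocks are identified across videos is not specified by the patent; here it is assumed each candidate carries an identifier (for example its hair feature):

```python
# Sketch of Example 5's filtering (assumed mechanism): intersect the
# candidate centre-of-gravity blocks found in each video, keeping only
# candidates present in all of them.

def filter_gravity_candidates(candidate_sets):
    """candidate_sets: one collection of candidate block identifiers per
    video. Returns the candidates consistent with every video."""
    result = set(candidate_sets[0])
    for s in candidate_sets[1:]:
        result &= set(s)
    return result
```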
Example 6:
when the living object is completely unfolded, calculating and recording the distance S = { S1, S2,. Once, sn } between the motion barycentric coordinates of the living object and the characteristic part of the living object;
wherein the characteristic parts comprise eyes, nose, ears, head, legs, belly and tail;
wherein Sn represents a distance from a local feature in the image data to the center of gravity of the motion, and n is an integer greater than 2.
When the hair characteristics or the fur characteristics of the surfaces of the living objects are close, it is difficult to identify the center of gravity by the hair characteristics or the fur characteristics of the surfaces of the living objects, such as a solid living object, a hairless cat, and the like. In this embodiment, the moving center of gravity is determined by the relative position of the living object, when the living object is jumping, in many cases, when the living object is jumping in the air, for example, a cat, the body is completely unfolded for a long time, including two front legs, two rear legs, a head, and the like, all of which are extended, and at this time, the distance S = { S1, S2.., sn } between the coordinates of the moving center of gravity of the living object and the characteristic portion of the living object is calculated and recorded, so as to construct the relative relationship between the moving center of gravity and the posture of the living object, determine the hair feature corresponding to the moving center of gravity of the cat in the completely unfolded state, and then use the hair feature as the object for focusing. It should be noted that the living subject is a cat as an example in this embodiment, but the living subject is not limited to a cat.
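The distance signature S can be sketched directly from its definition. Coordinates are 2-D image coordinates; the feature names below follow the patent's list, while the dictionary layout is an illustrative choice:

```python
import math

# Sketch of Example 6: record the distance from the motion centre of
# gravity to each feature part (eyes, nose, ears, head, legs, belly,
# tail) while the subject is fully stretched out.

def distance_signature(gravity_xy, features):
    """features: {name: (x, y)} in image coordinates.
    Returns S as {name: Euclidean distance to the centre of gravity}."""
    gx, gy = gravity_xy
    return {name: math.hypot(x - gx, y - gy)
            for name, (x, y) in features.items()}
```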
Example 7:
the motion barycentric coordinates of the living object are updated at intervals.
In the embodiment, since the shape, weight and the like of the pet change with time, the coordinates of the motion center of gravity of the pet also change, when the actual center of gravity of the pet changes, if the coordinates of the motion center of gravity of the pet are not updated, an unclear video image is obtained in a shooting process, and therefore, the updating of the motion center of gravity is very valuable. The definition of the shot image can be ensured by updating the motion barycentric coordinates of the living body object at intervals, and the updating time is set according to the growth rule of the living body object. In one embodiment, the interval time period is set according to the specific age bracket of the pet, taking the cat as an example, for a kitten, the updating time period is 10-15 days; for adult male cats and non-pregnant female cats, the renewal period is 25-40 days; for pregnant queens, the renewal period is 15-20 days.
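The update policy above can be sketched as a simple schedule. The interval ranges come from the patent; the concrete midpoint values and category names used here are illustrative assumptions:

```python
# Sketch of Example 7's update policy. Periods are midpoints of the
# patent's ranges (hypothetical choice); category names are assumptions.

UPDATE_PERIOD_DAYS = {
    "kitten": 12,           # patent: 10-15 days
    "adult": 32,            # patent: 25-40 days (adult males, non-pregnant females)
    "pregnant_female": 17,  # patent: 15-20 days
}

def days_until_update(category, days_since_last):
    """How many days remain before the centre of gravity must be refit."""
    period = UPDATE_PERIOD_DAYS[category]
    return max(0, period - days_since_last)
```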
Example 8:
After step S5, a motion center-of-gravity library is established, and the motion center-of-gravity information is stored in it;
wherein the motion center-of-gravity information includes the motion center-of-gravity recognition object at a certain time and its corresponding reference image.
In a real pet-owning household there is often more than one pet, so a plurality of living objects correspond to a plurality of motion centers of gravity. The motion center of gravity of one living object is unique, and mapping its three-dimensional coordinates into the two-dimensional image yields a corresponding hair feature; however, when the living object is in different postures or orientations, several sets of hair features correspond to the center of gravity in the camera's two-dimensional images. If only one set is used as the identification mark, it very likely cannot be recognized when the object changes posture, and the motion center of gravity cannot be located accurately in real time.
In the present embodiment, as shown in fig. 4, a motion center-of-gravity library is created and the motion center-of-gravity recognition objects from the image data are stored in it, so a more complete set of recognition objects can be kept. In another embodiment, as shown in fig. 5, each recognition object is stored together with its corresponding reference image, which reduces errors caused by similar-looking recognition objects and allows the motion center of gravity of the living object to be identified more accurately.
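A minimal sketch of the library of Example 8. The patent describes only what is stored (recognition object plus reference image), not the data structure; the class layout here is an assumption:

```python
# Sketch of the motion centre-of-gravity library (Example 8): each entry
# pairs a recognition object (hair-feature patch) with its reference
# image, so several posture-dependent sets can be kept per pet.

class MotionGravityLibrary:
    def __init__(self):
        self._entries = []

    def add(self, recognition_object, reference_image):
        self._entries.append((recognition_object, reference_image))

    def entries(self):
        """All stored (recognition object, reference image) pairs."""
        return list(self._entries)
```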
Example 9:
Time information is used as the storage label of the motion center-of-gravity information;
the time information comprises first time information and second time information;
the first time information is used as the first storage label of the motion center-of-gravity information, and the second time information as the second storage label.
Preferably, the first time information includes morning, afternoon, evening and early morning;
the second time information includes before-meal and after-meal times.
A motion center-of-gravity recognition system for a living object is also provided, comprising
an executable program capable of executing the above method for identifying the motion center of gravity of a living object.
A computer-readable storage medium,
for storing a computer program whose execution implements the above method for identifying the motion center of gravity of a living object.
In this embodiment, since the shape, weight and so on of a pet change over time, its motion center-of-gravity coordinates also change, so coordinates are collected for specific time periods according to the pet's daily schedule, and the time information is used as the storage label of the motion center-of-gravity information: the first time information is the first storage label and the second time information the second storage label. The time information and the corresponding motion center-of-gravity information are stored in the library, further classifying and managing the centers of gravity. When actually shooting the pet, the best motion center-of-gravity recognition object can be selected for focus following according to the pet's real-time state, which improves the quality of the footage after focus following and greatly improves focus-following accuracy.
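The two-level time-tag storage of Example 9 can be sketched as a lookup keyed by the tag pair. The tag values follow the patent (time of day; before or after a meal); the dictionary layout and method names are assumptions:

```python
# Sketch of Example 9: store centre-of-gravity entries under a pair of
# time tags so the best recognition object for the pet's current state
# can be looked up at shooting time.

class TimeTaggedGravityStore:
    def __init__(self):
        self._store = {}

    def put(self, first_tag, second_tag, gravity_info):
        """first_tag: e.g. 'morning'; second_tag: e.g. 'pre_meal'."""
        self._store[(first_tag, second_tag)] = gravity_info

    def get(self, first_tag, second_tag):
        """Return the stored entry, or None if no entry matches."""
        return self._store.get((first_tag, second_tag))
```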
In the description of the embodiments of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "center", "top", "bottom", "inner", "outer", and the like indicate an orientation or positional relationship.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "assembled" are to be construed broadly and may, for example, denote a fixed connection, a detachable connection, or an integral connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the description of the embodiments of the invention, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the embodiments of the present invention, it should be understood that "A-B" and "A to B" both denote the inclusive range between the two values: a range greater than or equal to A and less than or equal to B.
In the description of the embodiments of the present invention, the term "and/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Although embodiments of the present invention have been shown and described, those skilled in the art will appreciate that changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (12)

1. A method for recognizing the moving center of gravity of a living object, characterized by comprising the following steps:
S1, acquiring original data of a living object that has left the ground and is subject only to gravity, wherein the original data comprises video data and image data;
S2, acquiring a reference image from the original data that contains only the living object;
S3, acquiring, from the reference image, the horizontal relative velocity of each local body part of the living object at a first moment;
S4, fitting the horizontal relative velocities of the local body parts at the first moment to obtain the average horizontal velocity of the living object;
and S5, acquiring the two-dimensional image block in the reference image that has the same average velocity, the coordinates of that two-dimensional image block being the motion barycentric coordinates.
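As an illustrative sketch of steps S3 to S5, assume per-part and per-block horizontal velocities have already been estimated upstream (for example by optical flow). A plain arithmetic mean stands in for the fitting step S4, and step S5 picks the image block whose velocity best matches the fitted average. The tolerance `eps`, the mean as fitting function, and all names here are assumptions, not the patent's prescribed implementation.

```python
def fit_average_velocity(part_velocities):
    """S4: fit the local-body horizontal velocities to one average.
    A plain arithmetic mean stands in for the fitting function F()."""
    return sum(part_velocities) / len(part_velocities)

def find_gravity_center(block_velocities, v_avg, eps=0.05):
    """S5: return the coordinate of the image block whose horizontal
    velocity is (within eps) closest to the fitted average velocity."""
    best, best_err = None, eps
    for (x, y), v in block_velocities.items():
        err = abs(v - v_avg)
        if err <= best_err:
            best, best_err = (x, y), err
    return best

# Local-body velocities at the first moment (head, eyes, front paw, back paw).
v_parts = [1.9, 2.0, 2.2, 1.9]
v_avg = fit_average_velocity(v_parts)

# Per-block velocities keyed by block coordinate (made-up values).
blocks = {(10, 12): 1.4, (11, 12): 2.0, (12, 12): 2.6}
center = find_gravity_center(blocks, v_avg)
```

The physical intuition is that, in free flight, the center of mass moves at the body's average velocity while limbs move faster or slower around it, so the block matching the average marks the center of gravity.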
2. The method for recognizing the moving center of gravity of a living subject as claimed in claim 1, wherein in step S4,
the average horizontal velocity of the living subject is
V=F(v1,v2,...,vm) (1)
wherein the local body parts of the living subject comprise the head, eyes, front paws and back paws;
wherein vm is the horizontal velocity of the m-th local body part of the living subject at the first moment, and m is an integer greater than 1;
and F() is the fitting function yielding the average velocity.
3. The method for recognizing the moving center of gravity of a living subject as claimed in claim 2, wherein
the surface hair features of the living subject at the position corresponding to the motion barycentric coordinates obtained in step S5 are taken as the motion center-of-gravity recognition object of the living subject.
4. The method for recognizing the moving center of gravity of a living subject as claimed in claim 3, wherein
the original data is uploaded to a cloud computer before step S2, and the motion barycentric coordinates and corresponding reference images obtained by the cloud computer are returned to the local device after step S5.
5. The method for recognizing the moving center of gravity of a living subject as claimed in claim 4, wherein
the motion barycentric coordinates of the same living subject are obtained from a plurality of captured video data.
6. The method for recognizing the moving center of gravity of a living subject as claimed in claim 5, wherein after step S5,
when the living subject is fully extended, the distances S = {S1, S2, ..., Sn} between the motion barycentric coordinates of the living subject and its characteristic parts are calculated and recorded;
wherein the characteristic parts comprise the eyes, nose, ears, head, legs, stomach and tail;
and wherein Sn represents the distance from a local feature in the image data to the motion center of gravity, n being an integer greater than 2.
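The distance set S of claim 6 can be sketched as plain Euclidean distances from each characteristic part to the motion barycentric coordinate. The coordinates and feature positions below are made up for illustration; the claim itself does not fix the distance metric.

```python
import math

def feature_distances(center, features):
    """Compute S = {S1, ..., Sn}: the distance from each named
    characteristic part to the motion barycentric coordinate."""
    cx, cy = center
    return {name: math.hypot(x - cx, y - cy)
            for name, (x, y) in features.items()}

# Motion center of gravity and feature positions in image coordinates
# (illustrative values only).
center = (100, 100)
features = {"eyes": (103, 104), "nose": (100, 112), "tail": (60, 70)}
S = feature_distances(center, features)
```

Because the distances are recorded while the subject is fully extended, they give a per-animal body template that later frames can be checked against.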
7. The method for recognizing the moving center of gravity of a living subject as claimed in claim 6,
the motion barycentric coordinates of the living object are updated at intervals.
8. The method for recognizing the moving center of gravity of a living subject according to claim 7,
after step S5, a motion center-of-gravity library is established, and the motion center-of-gravity information is stored in the library;
wherein the motion center-of-gravity information includes the motion center-of-gravity recognition object at a given time and its corresponding reference image.
9. The method for recognizing the moving center of gravity of a living subject as claimed in claim 8,
the time information is used as a storage label of the motion gravity center information;
the time information comprises first time information and second time information;
and taking the first time information as a first storage label of the motion center of gravity information, and taking the second time information as a second storage label of the motion center of gravity information.
10. The method for recognizing the moving center of gravity of a living subject according to claim 9,
the first time information comprises morning, afternoon, evening and early morning;
the second time information comprises a pre-meal time and a post-meal time.
11. A moving center-of-gravity recognition system for a living subject, characterized by comprising
an executable program which, when executed, performs the method for recognizing the moving center of gravity of a living subject according to any one of claims 1 to 10.
12. A computer-readable storage medium, characterized by
storing a computer program which, when executed, implements the method for recognizing the moving center of gravity of a living subject according to any one of claims 1 to 10.
CN202211271932.0A 2022-10-18 2022-10-18 Method and system for identifying moving gravity center of living body object and computer readable storage medium Pending CN115457666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211271932.0A CN115457666A (en) 2022-10-18 2022-10-18 Method and system for identifying moving gravity center of living body object and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211271932.0A CN115457666A (en) 2022-10-18 2022-10-18 Method and system for identifying moving gravity center of living body object and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115457666A true CN115457666A (en) 2022-12-09

Family

ID=84310644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211271932.0A Pending CN115457666A (en) 2022-10-18 2022-10-18 Method and system for identifying moving gravity center of living body object and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115457666A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11893749B1 (en) * 2022-10-17 2024-02-06 Chengdu Tommi Technology Co., Ltd. Focus following method based on motion gravity center, storage medium and photographing system


Similar Documents

Publication Publication Date Title
US10769480B2 (en) Object detection method and system
US11887318B2 (en) Object tracking
CN115334249B (en) Focus following method based on motion center, storage medium and camera system
CN112055158B (en) Target tracking method, monitoring device, storage medium and system
WO2019157690A1 (en) Automatic image capturing method and device, unmanned aerial vehicle and storage medium
Chen et al. End-to-end learning of object motion estimation from retinal events for event-based object tracking
CN115348392B (en) Shooting method and system based on template material
Bultmann et al. Real-time multi-view 3D human pose estimation using semantic feedback to smart edge sensors
JP7143260B2 (en) Methods and systems for assisting users in creating and selecting images
CN110992378B (en) Dynamic updating vision tracking aerial photographing method and system based on rotor flying robot
WO2020042126A1 (en) Focusing apparatus, method and related device
CN115457666A (en) Method and system for identifying moving gravity center of living body object and computer readable storage medium
CN115565130A (en) Unattended system and monitoring method based on optical flow
Haggui et al. Human detection in moving fisheye camera using an improved YOLOv3 framework
CN115345901B (en) Animal motion behavior prediction method and system and camera system
Gibson et al. Quadruped gait analysis using sparse motion information
WO2012153868A1 (en) Information processing device, information processing method and information processing program
CN115294508B (en) Focus following method and system based on static space three-dimensional reconstruction and camera system
Micilotta Detection and tracking of humans for visual interaction
CN113691731B (en) Processing method and device and electronic equipment
Jiang Application of Rotationally Symmetrical Triangulation Stereo Vision Sensor in National Dance Movement Detection and Recognition
JP7277829B2 (en) Camera parameter estimation device, camera parameter estimation method and camera parameter estimation program
WO2022151507A1 (en) Movable platform and method and apparatus for controlling same, and machine-readable storage medium
CN113992845A (en) Image shooting control method and device and computing equipment
Matsuyama Cooperative Distributed Vision: Dynamic Integration of Visual Perception, Action, and

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination