CN117392707A - Tracking and positioning system based on image data and using method


Info

Publication number
CN117392707A
Authority
CN
China
Prior art keywords
data, image, key, tracking, positioning system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311561871.6A
Other languages
Chinese (zh)
Inventor
张璜
陈旻骋
韩小明
袁崇雯
沈颖彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Normal University filed Critical Hangzhou Normal University
Priority to CN202311561871.6A
Publication of CN117392707A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/50 Maintenance of biometric data or enrolment thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A tracking and positioning system based on image data, and a method of using it, comprise a plurality of cameras, an identity recognition server and a tracking and positioning system server. An image processing algorithm extracts direction data in five dimensions (eyes, head, shoulders, hands and feet), which strengthens the accuracy and soundness of judging a person's current direction and also improves the efficiency of that judgment. In addition, technical means such as floor-tile-sensed direction data and the person's action route are added, and big-data techniques assist in deriving the predicted movement direction of the key observer, so that the key observer's rotation and movement direction are predicted more accurately.

Description

Tracking and positioning system based on image data and using method
Technical Field
The invention belongs to the field of tracking and positioning, and in particular relates to a tracking and positioning system based on image data and a method of using it.
Background
Existing image-data tracking and positioning technology analyzes only a single dimension and a single set of image factors, so its capture of personnel actions and its prediction of the next movement direction deviate to a certain extent.
Disclosure of Invention
The invention aims to provide a tracking and positioning system based on image data and a method of using it, so as to solve the above technical problems in the prior art.
A tracking and positioning system based on image data, comprising:
a plurality of cameras: acquiring face images and whole-body images of all people in a monitoring area;
identity recognition server: after processing the received face image data of the person, carrying out person identification based on the face image data; marking key observers, and sending whole-body image data of the key observers to a tracking and positioning system server;
tracking positioning system server: based on the acquired whole-body image data of the key observer, decomposing the image data into an eye image, a head image, a shoulder image, a hand image and a foot image by means of an image processing algorithm; the tracking and positioning system server processes the latest three frames of each of the eye image, head image, shoulder image, hand image and foot image to obtain the key observer's eye sight line data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data; and judging the current direction of the key observer based on the eye sight line data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data.
Preferably, the eye sight line vector, the head direction vector, the shoulder direction vector, the hand waving direction vector and the foot direction vector are weighted and overlapped to obtain the current direction of the key observer.
Preferably, the tracking and positioning system server obtains head rotation data, shoulder rotation data, foot rotation and movement data through a machine learning algorithm based on the received complete head image, shoulder image and foot image of the key observer; the tracking and positioning system server calculates and obtains the predicted rotating and moving directions of key observation personnel based on the head rotating data, the shoulder rotating data and the foot rotating and moving data; the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the current direction and the predicted rotation and movement direction of the key observer.
Preferably, the tracking and positioning system server is combined with the floor tile sensing direction to correct the predicted rotation and movement direction of the key observer.
Preferably, the tracking and positioning system server judges whether the key observer is currently in a stationary state or a moving state: when the key observer is stationary, the predicted rotation and movement direction of the key observer is obtained based on the head rotation data; when the key observer is moving, the predicted rotation and movement direction is obtained based on the shoulder rotation data and the foot rotation and movement data.
A method for using a tracking and positioning system based on image data, comprising:
step S100: personnel enter a monitoring area;
step S200: a plurality of cameras acquire facial images of all people in a monitoring area;
step S300: the identity recognition server processes the received facial image data of the person and then carries out person identification based on the facial image data;
step S350: the identity recognition server marks key observation personnel and sends whole body image data of the key observation personnel to the tracking and positioning system server;
step S400: the tracking and positioning system server utilizes an image processing algorithm, such as an ROI algorithm, to decompose the image data of the person into an eye image, a head image, a shoulder image, a hand image and a foot image based on the acquired whole body image data of the key observer;
step S500: the tracking and positioning system server processes the last three frames of images of the eye image, the head image, the shoulder image, the hand image and the foot image respectively; obtaining eye sight data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data of key observers;
step S600: and judging the current direction of the key observer based on the eye sight line data, the head direction data, the shoulder direction data, the hand waving direction data and the foot direction data.
Preferably, the method of using the tracking and positioning system based on image data further comprises: carrying out weighted superposition of the eye sight line vector, the head orientation vector, the shoulder orientation vector, the hand waving direction vector and the foot orientation vector to obtain the current direction of the key observer.
Preferably, the method of using the tracking and positioning system based on image data further comprises:
step S700: the tracking and positioning system server obtains head rotation data, shoulder rotation data and foot rotation and movement data through a machine learning algorithm based on the received complete head image, shoulder image and foot image of the key observer;
step S800: the tracking and positioning system server calculates and obtains the predicted rotating and moving directions of key observation personnel based on the head rotating data, the shoulder rotating data and the foot rotating and moving data;
step S900: the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the current direction and the predicted rotation and movement direction of the key observer.
Preferably, the method of using the tracking and positioning system based on image data further comprises:
step S810: the tracking and positioning system server is combined with the floor tile sensing direction to correct the predicted rotation and movement direction of key observation personnel.
Preferably, the method of using the tracking and positioning system based on image data further comprises:
step S800: the tracking and positioning system server acquires head rotation data, shoulder rotation data, foot rotation and movement data;
step S810: the tracking and positioning system server judges whether the key observer is currently in a static state or a moving state: when the key observer is in a stationary state, jumping to step S811; when the key observer is in a moving state, the process goes to step S812;
step S811: the predicted rotation and movement direction of the key observer is obtained based on the head rotation data;
step S812: the predicted rotation and movement direction of the key observer is obtained based on the shoulder rotation data and the foot rotation and movement data.
The image-data-based tracking and positioning system and the method of using it can be applied to large public places such as stations, squares, airports, hospitals and schools. An image processing algorithm yields direction data in the five dimensions of eyes, head, shoulders, hands and feet, strengthening the accuracy and soundness of judging a person's current direction; meanwhile, the weighted superposition priorities of the eyes, head, shoulders, hands and feet are differentiated, making the judgment of the person's current direction more accurate and reasonable and improving the efficiency of direction judgment. In addition, technical means such as floor-tile-sensed direction data and the person's action route are added, and big-data techniques assist in deriving the key observer's predicted movement direction, so that the key observer's rotation and movement direction are predicted more accurately.
Drawings
The following is a further description of embodiments of the invention, taken in conjunction with the accompanying drawings:
fig. 1 is a schematic structural diagram of a tracking and positioning system based on image data.
Fig. 2 is a block diagram of a tracking and positioning system camera based on image data.
Fig. 3 is a block diagram of a tracking and positioning system server based on image data.
FIG. 4 is a flow chart of a method of using the tracking and positioning system based on image data.
Fig. 5 is a schematic diagram of a whole body image decomposition area.
Fig. 6 is a weighted overlap diagram of the current direction of the key observer.
FIG. 7 is a flowchart of the steps of person identification.
Fig. 8 is a flow chart of predicted motion directions for an accentuated observer.
FIG. 9 is a schematic diagram of the three composite dimensions: head rotation, shoulder rotation, and foot rotation and movement.
Fig. 10 is a schematic diagram of a weight sensing tile system.
FIG. 11 is a schematic diagram of a course of action of an accentuated observer.
Fig. 12 is a flow chart of another embodiment of predicting a direction of movement of an accentuated observer.
Detailed Description
The present invention is further illustrated by the accompanying drawings and the following detailed description, which are to be understood as merely illustrative of the invention and not as limiting its scope. Upon reading this disclosure, various equivalent modifications apparent to those skilled in the art will fall within the scope of the invention as defined in the appended claims.
As shown in figure 1 of the specification: a tracking and positioning system based on image data comprises a plurality of cameras, an identity recognition server and a tracking and positioning system server. The cameras are used for collecting facial images and whole-body images of people and include fixed cameras, movable cameras, unmanned aerial vehicle cameras, mobile robot cameras and the like; the identity recognition server identifies personnel identity information based on face recognition technology and transmits whole-body image data to the tracking and positioning system server; the tracking and positioning system server is used for motion capture, gesture judgment and prediction of the movement direction of key observers.
As shown in fig. 2 of the specification: the camera includes: an optical element, a control unit, a memory, a data transmission unit; the optical element is used for generating a face image and a whole-body image of a person; the control unit is used for managing the flow of the optical image; the memory is used for storing image data and whole body images; the data transmission unit is used for network information transmission.
As shown in fig. 3 of the specification: the identity recognition server comprises a control unit, a mass storage unit, a display unit and a data transmission unit. The control unit is used for identity recognition flow control and whole-body image data transmission flow control; the mass storage unit is used for storing facial images of people, key observer information and key monitoring personnel information, where key monitoring personnel include suspicious persons, criminal suspects, wanted persons, entry-forbidden blacklist personnel and the like; when a person's facial image matches the facial image of a key monitoring person, the identity recognition server judges that the person is a key observer; the display unit is used for displaying the identity recognition result and whole-body image transmission process information; the data transmission unit is used for network information transmission.
The tracking positioning system server comprises a control unit, a mass storage unit, a display unit and a data transmission unit. The control unit is used for motion capture, gesture judgment, movement direction prediction and data transmission flow control; the mass storage unit is used for storing control flow information, whole-body image information, comparison information and the like; the display unit is used for displaying information such as the key observer's action route and predicted movement direction; the data transmission unit is used for network information transmission.
As shown in fig. 4 of the specification: in one embodiment, a method for using a tracking and positioning system based on image data includes:
step S100: personnel enter a monitoring area;
step S200: a plurality of cameras acquire facial images of all people in a monitoring area;
step S300: the identity recognition server processes the received facial image data of the person and then carries out person identification based on the facial image data;
step S350: the identity recognition server marks key observation personnel and sends whole body image data of the key observation personnel to the tracking and positioning system server;
step S400: the tracking and positioning system server utilizes an image processing algorithm, such as an ROI algorithm, to decompose the image data of the person into an eye image A, a head image B, a shoulder image C, a hand image D and a foot image E based on the acquired whole body image data of the key observer;
step S500: the tracking and positioning system server processes the last three frames of images of the eye image, the head image, the shoulder image, the hand image and the foot image respectively; obtaining eye sight data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data of key observers;
step S600: the current direction F of the key observer is determined based on the eye sight line data Y, the head orientation data T, the shoulder orientation data J, the hand waving direction data S, and the foot orientation data X.
In this embodiment, the image processing algorithm yields direction data in the five dimensions of eyes, head, shoulders, hands and feet, strengthening the accuracy and soundness of judging the person's current direction.
As shown in fig. 5 of the specification: the whole-body image data of the key observer is decomposed into eye image data A, head image data B, shoulder image data C, hand image data D and foot image data E.
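The patent does not name the decomposition algorithm itself, so the following Python sketch shows only one plausible reading of this step: given body keypoints from an off-the-shelf pose estimator (an assumption, not part of the disclosure), each of the five regions A to E is cropped from the whole-body frame. All coordinates and crop sizes below are hypothetical.

```python
# Illustrative sketch only: assumes body keypoints in pixel coordinates
# are already available from some upstream pose estimator (hypothetical).
import numpy as np

def crop(img, cx, cy, half_w, half_h):
    """Crop a box centred on (cx, cy), clamped to the image bounds."""
    h, w = img.shape[:2]
    x0, x1 = max(0, int(cx - half_w)), min(w, int(cx + half_w))
    y0, y1 = max(0, int(cy - half_h)), min(h, int(cy + half_h))
    return img[y0:y1, x0:x1]

def decompose(img, kp):
    """Split a whole-body frame into the five region images A..E."""
    mid_eye = ((kp["l_eye"][0] + kp["r_eye"][0]) / 2,
               (kp["l_eye"][1] + kp["r_eye"][1]) / 2)
    return {
        "A_eyes":      crop(img, *mid_eye, 60, 25),
        "B_head":      crop(img, *kp["nose"], 90, 110),
        "C_shoulders": crop(img, *kp["mid_shoulder"], 160, 60),
        "D_hands":     crop(img, *kp["r_wrist"], 50, 50),
        "E_feet":      crop(img, *kp["r_ankle"], 80, 50),
    }

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in video frame
keypoints = {"l_eye": (940, 200), "r_eye": (980, 200), "nose": (960, 230),
             "mid_shoulder": (960, 350), "r_wrist": (1100, 600),
             "r_ankle": (1000, 1000)}               # hypothetical coordinates
parts = decompose(frame, keypoints)                 # dict of five crops
```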
In one embodiment, a method for using a tracking and positioning system based on image data includes: step S300: after processing the received facial image data of the person, the identity recognition server extracts head/eye/nose/mouth features of the person using HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns) and Gabor filtering algorithms; person identification is then performed based on a convolutional neural network (CNN) and facial landmark detection, which improves the diversity and accuracy of the identification.
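As a concrete illustration of this feature-extraction stage, the sketch below combines HOG, LBP and Gabor responses into a single feature vector using scikit-image. The parameter values, the random stand-in face crop, and the concatenation strategy are assumptions; the patent names the algorithms but not their configuration.

```python
# A minimal sketch of the S300 feature-extraction stage with scikit-image;
# all parameter values are illustrative, not taken from the patent.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

face = np.random.rand(128, 128)            # stand-in grayscale face crop

hog_vec = hog(face, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))      # gradient-orientation features
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
gabor_real, _ = gabor(face, frequency=0.3) # texture response at one scale

feature_vector = np.concatenate([hog_vec, lbp_hist, gabor_real.ravel()])
# feature_vector would then feed the CNN / landmark-based identity matcher.
```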
As shown in fig. 6 of the specification: the eye sight line vector Y, the head orientation vector T, the shoulder orientation vector J, the hand waving direction vector S and the foot orientation vector X are weighted and superimposed to obtain the current direction F of the key observer. The superposition weights fall into four tiers: the first tier comprises the foot orientation vector X and the shoulder orientation vector J; the second tier, the head orientation vector T; the third tier, the eye sight line vector Y; and the fourth tier, the hand waving direction vector S; the weights satisfy first tier > second tier > third tier > fourth tier. In this embodiment, distinguishing the weighted superposition priorities of the five dimensions of eyes, head, shoulders, hands and feet makes the judgment of the person's current direction more accurate and reasonable and improves the efficiency of direction judgment.
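A minimal numeric sketch of this weighted superposition follows. The four-tier ordering comes from the disclosure, but the specific weight values (0.30/0.30/0.20/0.15/0.05) are assumptions chosen only to respect first tier > second tier > third tier > fourth tier.

```python
# Sketch of the fig. 6 weighted superposition; weight values are assumed,
# only their tier ordering (feet/shoulders > head > eyes > hands) is stated.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

WEIGHTS = {"X_feet": 0.30, "J_shoulders": 0.30,   # first tier
           "T_head": 0.20,                        # second tier
           "Y_eyes": 0.15,                        # third tier
           "S_hands": 0.05}                       # fourth tier

def current_direction(vectors):
    """Weighted superposition of unit direction vectors -> current direction F."""
    f = sum(WEIGHTS[k] * unit(v) for k, v in vectors.items())
    return unit(f)

F = current_direction({"X_feet": (1, 0), "J_shoulders": (0.9, 0.1),
                       "T_head": (0.7, 0.3), "Y_eyes": (0.5, 0.5),
                       "S_hands": (0, 1)})
```

Any weight assignment preserving the tier ordering would fit the described scheme equally well.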
As shown in fig. 7 of the specification: in one embodiment, a method for using a tracking and positioning system based on image data includes:
step S300: after the identity recognition server processes the received facial image data of the person, the facial outline/eyes/nose/ears/mouth of the person is recognized by utilizing a region of interest (ROI) technology;
step S310: feature extraction and feature classification technology are utilized to obtain feature point information of facial contours/eyes/nose/ears/mouth of a person, and similarity and distance of feature points are calculated;
step S320: the facial image data and the feature point calculation results are converted into byte stream (IO stream) data and compared with the pre-stored facial-image byte stream data in the key monitoring personnel database; this database stores the pre-stored facial images of key monitoring personnel together with the identity file information of the corresponding persons, and the comparison algorithm comprises one or more of the following: convolutional neural network algorithms, eigenface algorithms and Haar Cascade algorithms.
In this embodiment, the design and combination of artificial intelligence algorithms enhance the efficiency and accuracy of the person identification step.
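The comparison step S320 can be pictured with the following sketch, which serializes a probe feature vector to a byte stream and matches it against a watchlist by cosine similarity. The 256-dimensional features, the 0.85 threshold and the watchlist entries are all hypothetical, and cosine similarity stands in for whichever of the listed comparison algorithms is actually deployed.

```python
# Simplified stand-in for S310-S320: byte-stream probe vs. watchlist match.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

watchlist = {"person_001": np.random.rand(256),   # pre-stored templates
             "person_002": np.random.rand(256)}   # (hypothetical entries)

def match(probe, threshold=0.85):
    """Return the best watchlist hit above threshold, else None."""
    best_id, best_sim = None, threshold
    for pid, tmpl in watchlist.items():
        s = cosine(probe, tmpl)
        if s > best_sim:
            best_id, best_sim = pid, s
    return best_id

probe_bytes = np.random.rand(256).astype(np.float32).tobytes()  # "IO stream"
probe = np.frombuffer(probe_bytes, dtype=np.float32)            # deserialize
hit = match(probe)   # a non-None result flags the person as a key observer
```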
As shown in fig. 8 of the specification: in one embodiment, a method for using a tracking and positioning system based on image data includes:
step S700: the tracking and positioning system server obtains head rotation data, shoulder rotation data and foot rotation and movement data through a machine learning algorithm based on the received complete head image, shoulder image and foot image of the key observer;
step S800: the tracking and positioning system server calculates and obtains the predicted rotating and moving directions of key observation personnel based on the head rotating data, the shoulder rotating data and the foot rotating and moving data;
step S900: the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the current direction and the predicted rotation and movement direction of the key observer.
In the embodiment, on the basis of acquiring the current direction of the key observer, the predicted movement direction of the key observer is obtained by combining the superposition of three composite dimensions of head rotation, shoulder rotation, foot rotation and movement, so that the function of the tracking and positioning system is expanded.
As shown in fig. 9 of the specification: the received complete head image of the key observer is a segment of video data, for example 3 s of video, from which the tracking and positioning system server obtains head rotation data T; the received complete shoulder image of the key observer is likewise a segment of video data, for example 3 s of video, from which the server obtains shoulder rotation data J; and the received complete foot image of the key observer is a segment of video data, for example 3 s of video, from which the server obtains foot rotation and movement data X.
In this embodiment, for the three composite dimensions of head rotation, shoulder rotation, and foot rotation and movement, video data of a certain duration is extracted, and big-data techniques are combined to make the three composite-dimension data more accurate.
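The disclosure leaves the machine learning step itself unspecified. One simple stand-in, sketched below, fits a linear trend to per-frame head yaw angles across a 3 s clip and takes the slope as the head rotation datum T; the yaw series here is simulated, and per-frame yaw estimation is assumed to come from an upstream pose model.

```python
# Sketch only: derive a head-rotation rate from a short clip by fitting a
# linear trend to per-frame yaw angles (the yaw values are simulated).
import numpy as np

fps = 30
t = np.arange(3 * fps) / fps                            # 3 s clip timeline
yaw = 5.0 + 12.0 * t + np.random.normal(0, 1, t.size)   # simulated yaw (deg)

rate, offset = np.polyfit(t, yaw, 1)                    # slope = deg/s
head_rotation_T = {"rate_deg_per_s": rate,
                   "direction": "left" if rate > 0 else "right"}
```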
In one embodiment, a method for using a tracking and positioning system based on image data includes:
step S100: personnel enter a monitoring area, and weight sensing floor tiles are paved in the monitoring area;
step S810: the tracking and positioning system server is combined with the floor tile sensing direction to correct the predicted rotation and movement direction of key observation personnel.
In this embodiment, the technical means and data dimensions are broadened: weight-sensing is incorporated to further correct the key observer's predicted rotation and movement direction.
As shown in fig. 10 of the specification: each weight-sensing floor tile is provided with a plurality of weight sensors. If a person's foot lands entirely within one tile, as shown in fig. 10A, the ten weight sensors of that tile sense the inclination direction and degree of the foot; if one foot lands across two tiles, as shown in fig. 10B, the tracking and positioning system server senses the inclination direction and degree of the foot from the six weight sensors A2 to A4 and B6 to B8; if one foot lands across four tiles, as shown in fig. 10C, the server senses the inclination direction and degree of the foot from the twelve weight sensors A3 to A5, B5 to B7, C1 to C3 and D1, D7 to D8.
The tracking and positioning system server identifies the person's two feet from the whole-body image and combines the inclination directions and degrees of both feet; based on a large volume of historical data pairing two-foot inclination directions and degrees with floor tile sensing directions, a deep learning algorithm yields the person's current floor tile sensing direction.
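As an illustration of how a tile-sensed direction might be computed, the sketch below takes the pressure-weighted centroid of one tile's sensor readings relative to the grid centre. The 3x3 sensor grid and the readings shown are hypothetical (the figure depicts ten sensors per tile), and a real system would feed such features into the deep learning model described above.

```python
# Sketch: foot inclination from one tile's weight sensors via the
# pressure-weighted centroid; layout and readings are hypothetical.
import numpy as np

readings = np.array([[0.0, 0.2, 0.9],    # 3x3 sensor grid on one tile;
                     [0.1, 0.5, 1.4],    # larger value = more weight
                     [0.0, 0.3, 1.1]])

ys, xs = np.indices(readings.shape)
total = readings.sum()
cx = (xs * readings).sum() / total - (readings.shape[1] - 1) / 2
cy = (ys * readings).sum() / total - (readings.shape[0] - 1) / 2

angle = np.degrees(np.arctan2(cy, cx))   # inclination direction in tile plane
degree = np.hypot(cx, cy)                # how far the load leans off-centre
```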
Reference is made to the accompanying figure 11 of the specification: in one embodiment, a method for using a tracking and positioning system based on image data includes:
step S350: the identity recognition server marks key observation personnel and sends whole body image data of the key observation personnel to the tracking and positioning system server;
step S351: the tracking and positioning system server starts recording the floor tile sensing direction data of the key observer and fits it into the key observer's action route; the action route, together with the key observer's current direction, rotation direction and movement direction, forms one data set;
Step S900: the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the action route, the current direction and the predicted rotation and movement direction of the key observer.
In this embodiment, the key observer's action route is combined with the current direction, rotation direction and movement direction into one data set, and big-data techniques assist in deriving the key observer's predicted movement direction.
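A sketch of how the action route could assist the prediction: fit the recent tile-sensed positions into a route, take its latest heading, and blend it with the rotation-derived direction from step S800. The 0.5/0.5 blend and the sample coordinates are assumptions; the patent states only that the data are combined into one set.

```python
# Sketch of S351/S900: blend route heading with rotation-derived direction.
import numpy as np

route = np.array([(0.0, 0.0), (0.4, 0.1), (0.9, 0.3), (1.5, 0.6)])  # tile hits
headings = np.diff(route, axis=0)
route_dir = headings[-1] / np.linalg.norm(headings[-1])   # latest heading

rot_dir = np.array([0.8, 0.6])                            # from S800 (stand-in)
rot_dir /= np.linalg.norm(rot_dir)

predicted = 0.5 * route_dir + 0.5 * rot_dir               # assumed 50/50 blend
predicted /= np.linalg.norm(predicted)                    # predicted movement
```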
As shown in fig. 12 of the specification: in one embodiment, a method for using a tracking and positioning system based on image data includes:
step S800: the tracking and positioning system server acquires head rotation data, shoulder rotation data, foot rotation and movement data;
step S810: the tracking and positioning system server judges whether the key observer is currently in a static state or a moving state: when the key observer is in a stationary state, jumping to step S811; when the key observer is in a moving state, the process goes to step S812;
step S811: the predicted rotation and movement direction of the key observer is obtained based on the head rotation data;
step S812: the predicted rotation and movement direction of the key observer is obtained based on the shoulder rotation data and the foot rotation and movement data.
In this embodiment, when the motion states of the person differ, the directions of different body parts influence the prediction of rotation and movement direction to different degrees; taking this into account, distinguishing the stationary or moving state of the person makes the prediction of the key observer's rotation and movement direction more accurate.
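The S810 branch reduces to a small piece of selection logic, sketched below. The speed threshold and the 50/50 shoulder/foot blend for the moving case are assumptions, since the disclosure specifies only which data sources dominate in each state.

```python
# Sketch of the S810 branch; threshold and blend weights are assumed.
import numpy as np

def predict_rotation(speed_m_s, head_rot, shoulder_rot, foot_rot,
                     stationary_thresh=0.2):
    """S810: choose the data driving the rotation/movement prediction."""
    head_rot, shoulder_rot, foot_rot = map(np.asarray,
                                           (head_rot, shoulder_rot, foot_rot))
    if speed_m_s < stationary_thresh:           # S811: stationary, head leads
        return head_rot
    return 0.5 * shoulder_rot + 0.5 * foot_rot  # S812: moving, body leads

direction = predict_rotation(1.1, (0.9, 0.1), (0.7, 0.7), (0.6, 0.8))
```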
It should be noted that the above is only a preferred embodiment of the invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to them and may be embodied in many other equivalent forms without departing from its concept; the scope of the invention is defined by the appended claims.

Claims (10)

1. A tracking and positioning system based on image data, comprising:
a plurality of cameras: acquiring face images and whole-body images of all people in a monitoring area;
identity recognition server: after processing the received face image data of the person, carrying out person identification based on the face image data; marking key observers, and sending whole-body image data of the key observers to a tracking and positioning system server;
tracking positioning system server: based on the acquired whole body image data of the key observer, decomposing the image data of the observer into an eye image (A), a head image (B), a shoulder image (C), a hand image (D) and a foot image (E) by using an image processing algorithm; the tracking and positioning system server processes the last three frames of images of the eye image, the head image, the shoulder image, the hand image and the foot image respectively; obtaining eye sight data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data of key observers; the current direction (F) of the key observer is determined based on the eye sight line data (Y), the head direction data (T), the shoulder direction data (J), the hand swing direction data (S), and the foot direction data (X).
2. The tracking positioning system of claim 1, in which the eye gaze vector (Y), the head orientation vector (T), the shoulder orientation vector (J), the hand swing direction vector (S), the foot orientation vector (X) are weighted and superimposed to obtain the current direction (F) of the accentuated observer.
3. The tracking and positioning system of claim 2, wherein the tracking and positioning system server obtains head rotation data, shoulder rotation data, foot rotation and movement data by a machine learning algorithm based on the received complete head image, shoulder image, foot image of the accentuated observer; the tracking and positioning system server calculates and obtains the predicted rotating and moving directions of key observation personnel based on the head rotating data, the shoulder rotating data and the foot rotating and moving data; the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the current direction and the predicted rotation and movement direction of the key observer.
4. The tracking system of claim 3, wherein the tracking system server modifies the predicted rotational and movement direction of the accentuated observer in combination with the tile sense direction.
5. The tracking positioning system of claim 4, wherein the tracking positioning system server determines whether the key observer is currently in a stationary state or a moving state: when the key observer is stationary, the predicted rotation and movement direction of the key observer is obtained based on the head rotation data; when the key observer is moving, the predicted rotation and movement direction is obtained based on the shoulder rotation data and the foot rotation and movement data.
6. A method of using a tracking positioning system as claimed in any one of claims 1 to 5, comprising:
step S100: personnel enter a monitoring area;
step S200: a plurality of cameras acquire facial images of all people in a monitoring area;
step S300: the identity recognition server processes the received facial image data of the person and then carries out person identification based on the facial image data;
step S350: the identity recognition server marks key observation personnel and sends whole body image data of the key observation personnel to the tracking and positioning system server;
step S400: the tracking and positioning system server utilizes an image processing algorithm, such as an ROI algorithm, to decompose the image data of the person into an eye image (A), a head image (B), a shoulder image (C), a hand image (D) and a foot image (E) based on the acquired whole body image data of the key observer;
step S500: the tracking and positioning system server processes the last three frames of images of the eye image, the head image, the shoulder image, the hand image and the foot image respectively; obtaining eye sight data, head orientation data, shoulder orientation data, hand waving direction data and foot orientation data of key observers;
step S600: the current direction (F) of the key observer is determined based on the eye sight line data (Y), the head direction data (T), the shoulder direction data (J), the hand swing direction data (S), and the foot direction data (X).
7. A method of using a tracking positioning system as defined in claim 6, comprising: carrying out weighted superposition of the eye sight line vector (Y), the head orientation vector (T), the shoulder orientation vector (J), the hand swing direction vector (S) and the foot orientation vector (X) to obtain the current direction (F) of the key observer.
8. A method of using a tracking positioning system as defined in claim 7, comprising:
step S700: the tracking and positioning system server obtains head rotation data, shoulder rotation data and foot rotation and movement data through a machine learning algorithm based on the received complete head image, shoulder image and foot image of the key observer;
step S800: the tracking and positioning system server calculates and obtains the predicted rotating and moving directions of key observation personnel based on the head rotating data, the shoulder rotating data and the foot rotating and moving data;
step S900: the tracking and positioning system server judges and obtains the predicted movement direction of the key observer based on the current direction and the predicted rotation and movement direction of the key observer.
9. A method of using a tracking positioning system as defined in claim 8, comprising:
step S810: the tracking and positioning system server is combined with the floor tile sensing direction to correct the predicted rotation and movement direction of key observation personnel.
10. A method of using a tracking positioning system as defined in claim 8, comprising:
step S800: the tracking and positioning system server acquires head rotation data, shoulder rotation data, foot rotation and movement data;
step S810: the tracking and positioning system server judges whether the key observer is currently in a static state or a moving state: when the key observer is in a stationary state, jumping to step S811; when the key observer is in a moving state, the process goes to step S812;
step S811: obtaining the predicted rotation and movement direction of the key observer based on the head rotation data;
step S812: obtaining the predicted rotation and movement direction of the key observer based on the shoulder rotation data and the foot rotation and movement data.
CN202311561871.6A, filed 2023-11-22 (priority date 2023-11-22): Tracking and positioning system based on image data and using method. Status: Pending (published as CN117392707A).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311561871.6A CN117392707A (en) 2023-11-22 2023-11-22 Tracking and positioning system based on image data and using method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311561871.6A CN117392707A (en) 2023-11-22 2023-11-22 Tracking and positioning system based on image data and using method

Publications (1)

Publication Number Publication Date
CN117392707A

Family

ID=89439326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311561871.6A Pending CN117392707A (en) 2023-11-22 2023-11-22 Tracking and positioning system based on image data and using method

Country Status (1)

Country Link
CN (1) CN117392707A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766059A (en) * 2015-04-01 2015-07-08 上海交通大学 Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning
CN113850145A (en) * 2021-08-30 2021-12-28 中国科学院上海微系统与信息技术研究所 Hand-eye orientation cooperative target positioning method
CN114511589A (en) * 2022-01-05 2022-05-17 北京中广上洋科技股份有限公司 Human body tracking method and system
CN116630853A (en) * 2023-05-19 2023-08-22 惠州市德赛西威智能交通技术研究院有限公司 Real-time video personnel tracking method and system for key transportation hub

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙洛; 邸慧军; 陶霖密; 徐光: "Multi-camera human body pose tracking", Journal of Tsinghua University (Science and Technology), no. 07, 15 July 2011 (2011-07-15) *
徐萧萧; 王智灵; 陈宗海: "Human body pose estimation algorithm based on head-shoulder segmentation in video sequences", Journal of Image and Graphics, no. 12, 16 December 2010 (2010-12-16) *

Similar Documents

Publication Publication Date Title
CN109271832B (en) People stream analysis method, people stream analysis device, and people stream analysis system
US7693310B2 (en) Moving object recognition apparatus for tracking a moving object based on photographed image
US8599266B2 (en) Digital processing of video images
CN110135249B (en) Human behavior identification method based on time attention mechanism and LSTM (least Square TM)
Wheeler et al. Face recognition at a distance system for surveillance applications
CN109657533A (en) Pedestrian recognition methods and Related product again
KR101839827B1 (en) Smart monitoring system applied with recognition technic of characteristic information including face on long distance-moving object
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
JP2007317062A (en) Person recognition apparatus and method
JP5598751B2 (en) Motion recognition device
Hasan et al. Robust pose-based human fall detection using recurrent neural network
Bertoni et al. Perceiving humans: from monocular 3d localization to social distancing
CN106030610A (en) Real-time 3D gesture recognition and tracking system for mobile devices
Li et al. Robust multiperson detection and tracking for mobile service and social robots
JP5718632B2 (en) Part recognition device, part recognition method, and part recognition program
Sun et al. Real-time elderly monitoring for senior safety by lightweight human action recognition
Dileep et al. Suspicious human activity recognition using 2D pose estimation and convolutional neural network
Caliwag et al. Distance estimation in thermal cameras using multi-task cascaded convolutional neural network
Gupta et al. A robust approach of facial orientation recognition from facial features
CN117392707A (en) Tracking and positioning system based on image data and using method
Rothmeier et al. Comparison of Machine Learning and Rule-based Approaches for an Optical Fall Detection System
EP3901820A2 (en) Event analysis system and event analysis method
KR102356165B1 (en) Method and device for indexing faces included in video
Lin et al. A novel fall detection framework with age estimation based on cloud-fog computing architecture
JP2006136430A (en) Evaluation function generation method, individual identification system and individual identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination