CN111488858A - Pedestrian behavior analysis method and system for big data financial security system and robot - Google Patents

Pedestrian behavior analysis method and system for big data financial security system and robot

Info

Publication number
CN111488858A
CN111488858A (application number CN202010362579.1A; granted publication CN111488858B)
Authority
CN
China
Prior art keywords
image
human body
target
point
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010362579.1A
Other languages
Chinese (zh)
Other versions
CN111488858B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qisheng Technology Co.,Ltd.
Original Assignee
Yang Jiumei
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yang Jiumei filed Critical Yang Jiumei
Priority to CN202010362579.1A priority Critical patent/CN111488858B/en
Priority to CN202110032162.3A priority patent/CN112749658A/en
Publication of CN111488858A publication Critical patent/CN111488858A/en
Application granted granted Critical
Publication of CN111488858B publication Critical patent/CN111488858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a pedestrian behavior analysis method, system and robot for a big data financial security system. A video segment of a pedestrian entering a preset area is received; a target pedestrian is selected in the first frame image of the video segment; the target pedestrian is tracked across the frames by Kalman filtering; the distance between each other target and the target pedestrian is obtained in every frame; the human body posture information of the target pedestrian and the posture information of the other targets are acquired in each frame; for each frame, the human body posture information of the target pedestrian is adjusted based on the distance and the posture information of the other targets, yielding adjusted human body posture information; and the behavior category of the target behavior is predicted from the adjusted information. Because the influence of the people or animals around the target pedestrian on the pedestrian's next action is taken into account, the accuracy of the behavior analysis is improved while the amount of computation remains small.

Description

Pedestrian behavior analysis method and system for big data financial security system and robot
Technical Field
The invention relates to the technical field of computers, in particular to a pedestrian behavior analysis method and system for a big data financial security system and a robot.
Background
In the prior art, the behavior of a pedestrian is analyzed by predicting from a large amount of the pedestrian's historical motion data. In real life, however, a person's actions are influenced by the surrounding environment, for example by nearby people or animals. Analyzing behavior from a person's history alone therefore produces inaccurate results.
Disclosure of Invention
The invention aims to provide a pedestrian behavior analysis method, system and robot for a big data financial security system, so as to solve the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a pedestrian behavior analysis method for a big data financial security system, which is applied to a cloud computing platform, and the method includes:
receiving a section of video of a pedestrian entering a preset area, wherein the section of video comprises a plurality of frames of images, the images contain image information of the pedestrian, and the section of video is shot at the time when the pedestrian enters the preset area;
selecting a target pedestrian in a first frame image in the video, wherein the first frame image is an image with the first shooting time in the video;
tracking the target pedestrian in the multi-frame image by adopting Kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian;
acquiring human body posture information of the target pedestrian and posture information of other targets in each frame of image;
for each frame of image, adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets to obtain adjusted human body posture information of the target pedestrian;
and predicting the behavior category of the target behavior according to the adjusted human body posture information.
Optionally, the human body posture information includes a human body skeleton diagram; the obtaining of the human body posture information of the target pedestrian in each frame of image includes:
detecting human body key points in each frame of image, wherein the human body key points comprise two crotch point positions, a caudal vertebra point position, two shoulder point positions, a neck point position, a head point position, two elbow point positions, two wrist point positions, two knee point positions and two ankle point positions;
connecting one ankle point position, one knee point position, one crotch point position, the caudal vertebra point position, the other crotch point position, the other knee point position and the other ankle point position in sequence through line segments, connecting the caudal vertebra point position, the neck point position and the head point position in sequence, and connecting one wrist point position, one elbow point position, the neck point position, the other elbow point position and the other wrist point position in sequence to obtain a human body skeleton diagram;
the posture information of the other targets comprises a skeleton map, and the obtaining of the posture information of the other targets in each frame of image comprises:
if the other target is a person, obtaining the posture information of the other target in each frame of image is the same as obtaining the human body posture information of the target pedestrian in each frame of image, and obtaining a human body skeleton diagram of the other target, wherein the human body skeleton diagram of the other target comprises other target key points, and the other target key points comprise two crotch point locations, one caudal vertebra point location, two shoulder point locations, one neck point location, one head point location, two elbow point locations, two wrist point locations, two knee point locations and two ankle point locations;
if the other target is an animal, the obtaining the pose information of the other target in each frame of image includes:
detecting animal key points in each frame of image, wherein the animal key points comprise two rear leg ankle joint point positions, two rear leg knee joint point positions, a caudal vertebra point position, a back middle point position, a head point position, two front leg ankle joint point positions and two front leg knee joint point positions; connecting one rear leg ankle joint point position, one rear leg knee joint point position, the caudal vertebra point position, the other rear leg knee joint point position and the other rear leg ankle joint point position in sequence; then connecting the caudal vertebra point position, the back middle point position and the head point position in sequence; and connecting one front leg ankle joint point position, one front leg knee joint point position, the back middle point position, the other front leg knee joint point position and the other front leg ankle joint point position in sequence to obtain an animal skeleton diagram.
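The chain-style skeleton construction described above can be sketched in Python. The point names and chain lists are illustrative stand-ins for the point locations named in the claims (the claim text does not label left/right, so the "l_"/"r_" prefixes are an assumption), and each key point is assumed to be an (x, y) pixel coordinate produced by the earlier detection step:

```python
# Illustrative names for the claimed human key points. The shoulder points are
# detected but, per the claim text, do not appear in any connection chain.
HUMAN_CHAINS = [
    # legs: ankle - knee - crotch - caudal vertebra - crotch - knee - ankle
    ["l_ankle", "l_knee", "l_crotch", "caudal_vertebra", "r_crotch", "r_knee", "r_ankle"],
    # spine: caudal vertebra - neck - head
    ["caudal_vertebra", "neck", "head"],
    # arms: wrist - elbow - neck - elbow - wrist
    ["l_wrist", "l_elbow", "neck", "r_elbow", "r_wrist"],
]

ANIMAL_CHAINS = [
    # rear legs: ankle - knee - caudal vertebra - knee - ankle
    ["l_rear_ankle", "l_rear_knee", "caudal_vertebra", "r_rear_knee", "r_rear_ankle"],
    # back: caudal vertebra - back middle - head
    ["caudal_vertebra", "back_middle", "head"],
    # front legs: ankle - knee - back middle - knee - ankle
    ["l_front_ankle", "l_front_knee", "back_middle", "r_front_knee", "r_front_ankle"],
]

def skeleton_edges(chains):
    """Expand each chain of point names into (point, point) line segments."""
    edges = []
    for chain in chains:
        edges.extend(zip(chain, chain[1:]))
    return edges
```

A skeleton diagram for one frame is then simply these edges drawn between the detected coordinates of the named points.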
Optionally, if the other target is a person, the other target key points include two crotch point locations, one caudal vertebra point location, two shoulder point locations, one neck point location, one head point location, two elbow point locations, two wrist point locations, two knee point locations, two ankle point locations, and the two crotch point locations, the one caudal vertebra point location, the two shoulder point locations, the one neck point location, the one head point location, the two elbow point locations, the two wrist point locations, the two knee point locations, and the two ankle point locations included in the human body key points are respectively in one-to-one correspondence;
the adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment comprises the following steps:
scoring key points of other targets in the human skeleton diagrams of the other targets;
obtaining the ratio of the score to the distance from the other target to the target pedestrian;
adding the ratio to the coordinates of the human body key points corresponding to the other target key points to obtain adjusted human body key points;
and connecting the adjusted key points of the human body to obtain an adjusted human body skeleton diagram.
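A minimal sketch of this adjustment, under the assumption (not spelled out in the text) that the scalar ratio of score to distance is added to both coordinates of the corresponding human key point:

```python
def adjust_human_keypoints(human_kps, scores, distance):
    """human_kps: {name: (x, y)} key points of the target pedestrian;
    scores: {name: q} score of the corresponding other-target key point;
    distance: distance from the other target to the target pedestrian.
    Adds score/distance to each human key point, per the claimed adjustment."""
    adjusted = {}
    for name, (x, y) in human_kps.items():
        ratio = scores[name] / distance
        adjusted[name] = (x + ratio, y + ratio)
    return adjusted
```

The adjusted skeleton diagram is then obtained by reconnecting the adjusted key points with the same chains as before.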
Optionally, scoring key points of other targets in the human skeleton diagram of the other targets includes:
obtaining the distance from the other target key point to the corresponding human body key point in the X-axis direction and in the Y-axis direction, wherein the X-axis and the Y-axis respectively represent the horizontal axis and the vertical axis of the image coordinate system;
if the distance in the Y-axis direction is greater than a preset value, determining the scoring calculation mode of the other target key point as: q = y/(x + y), wherein y represents the distance between the other target key point and the human body key point in the Y-axis direction, x represents the distance between them in the X-axis direction, and q represents the score;
if the distance in the Y-axis direction is smaller than or equal to the preset value, determining the scoring calculation mode of the other target key point as: q = x/(x + y).
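The two-branch score can be written directly from the claim; the preset value is a free parameter:

```python
def score_other_keypoint(x_dist, y_dist, preset):
    """x_dist, y_dist: distances between an other-target key point and the
    corresponding human key point along the image X and Y axes.
    Returns q = y/(x+y) when the Y-axis distance exceeds the preset value,
    otherwise q = x/(x+y), as claimed."""
    if y_dist > preset:
        return y_dist / (x_dist + y_dist)
    return x_dist / (x_dist + y_dist)
```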
Optionally, if the other target is an animal, the other target key points are animal key points, and the animal key points include two rear leg ankle joint points, two rear leg knee joint points, one caudal vertebra point, one dorsal middle point, one head point, two front leg ankle joint points, two front leg knee points, and the two ankle points, the two knee points, one caudal vertebra point, the two shoulder points, one neck point, one head point, the two wrist points, and the two elbow points included in the human body key points, which are respectively in one-to-one correspondence;
adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment, and the method comprises the following steps:
scoring animal key points in the animal skeleton map;
obtaining the ratio of the score to the distance from the other target to the target pedestrian;
adding the ratio to the coordinates of the human body key points corresponding to the animal key points to obtain accumulated human body key points;
multiplying the coordinates of the accumulated human body key points by a fear coefficient to obtain adjusted human body key points; wherein the fear coefficient characterizes a degree of fear of the target pedestrian for the animal;
and connecting the adjusted key points of the human body to obtain an adjusted human body skeleton diagram.
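The animal-case adjustment differs from the person case only in the final multiplication by the fear coefficient. A sketch under the same assumption that the scalar ratio is applied to both coordinates:

```python
def adjust_for_animal(human_kps, scores, distance, fear):
    """human_kps: {name: (x, y)} key points of the target pedestrian;
    scores: {name: q} score of the corresponding animal key point;
    distance: distance from the animal to the target pedestrian;
    fear: fear coefficient characterizing the pedestrian's fear of the animal.
    Accumulates score/distance onto each key point, then scales by fear."""
    adjusted = {}
    for name, (x, y) in human_kps.items():
        ratio = scores[name] / distance
        adjusted[name] = ((x + ratio) * fear, (y + ratio) * fear)
    return adjusted
```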
Optionally, scoring the animal key points in the animal skeleton map includes:
obtaining the distance from the animal key point to the human body key point in the X-axis direction and the Y-axis direction, wherein the X-axis and the Y-axis respectively represent a horizontal axis and a vertical axis in an image coordinate;
if the distance in the X-axis direction is greater than a preset value, determining the scoring calculation mode of the animal key point as: q = g × y/(x + y), wherein g is a weighting coefficient, y represents the distance between the animal key point and the human body key point in the Y-axis direction, x represents the distance between them in the X-axis direction, and q represents the score;
if the distance in the X-axis direction is smaller than or equal to the preset value, determining the scoring calculation mode of the animal key point as: q = x/(x + y).
Optionally, the adjusted human body posture information includes a step length, a step frequency and human body posture data; the predicting the behavior category of the target behavior according to the adjusted human body posture information comprises the following steps:
obtaining, from the adjusted human body posture information of the images in a video segment, the step length of the target pedestrian in each image, the step frequency of the target pedestrian between every two adjacent images, and the human body posture data of the target pedestrian in each image; the step length refers to the distance between the two feet of the target pedestrian, the step frequency refers to the number of steps the target pedestrian takes between two adjacent images, and the human body posture data represents the posture of the target pedestrian in the image;
forming the step lengths of the target pedestrian in a video segment into a step length stream in the shooting-time order of the images in which they occur; forming the human body posture data of the target pedestrian into a body state stream in the shooting-time order of the corresponding images; and forming the step frequencies of the target pedestrian into a step frequency stream in the order in which they occur;
inputting the step length stream, the body state stream and the step frequency stream into a behavior analysis network, wherein the behavior analysis network comprises a plurality of analysis layers, and the number of analysis layers is one more than the number of images in the video segment;
each layer of analysis layer in the behavior analysis network comprises 3 analysis nodes, and the 3 analysis nodes comprise a first analysis node, a second analysis node and a third analysis node;
the first analysis node of the nth analysis layer obtains the adjusted step length of the nth frame image according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n + 1)th frame image, and the adjusted step length output by the second analysis node of the (n-1)th analysis layer; wherein n is a positive integer less than N + 1, N + 1 represents the number of analysis layers in the behavior analysis network, the nth frame image is the image whose shooting time is nth in the video segment, and N represents the number of images in the video segment;
the second analysis node of the nth analysis layer adjusts the human body posture data according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n + 1)th frame image, and the output of the third analysis node of the (n-1)th analysis layer, so as to obtain the adjusted human body posture data of the nth frame image;
the third analysis node of the nth analysis layer adjusts the step frequency according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n + 1)th frame image, and the output of the first analysis node of the (n-1)th analysis layer, so as to obtain the adjusted step frequency between the nth frame image and the (n + 1)th frame image;
in the behavior analysis network, the (N + 1)th analysis layer performs a fusion calculation on the adjusted step length, the adjusted human body posture data and the adjusted step frequency output by the Nth analysis layer to obtain a target pedestrian behavior value, which represents the behavior category of the target behavior.
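The layer-by-layer recurrence above can be sketched as follows. The three node computations and the final fusion are not specified in the text, so they are passed in as functions; each layer consumes one frame's features plus the previous layer's three outputs, and the final layer fuses the last outputs into a single behavior value:

```python
def analyze_behavior(frames, update_step_length, update_posture,
                     update_step_frequency, fuse):
    """frames: list of N dicts with keys "step_length", "posture",
    "expression", "step_frequency" (the per-frame inputs named in the text).
    The update_* callables stand in for the unspecified node computations;
    fuse stands in for the fusion calculation of the (N + 1)th layer."""
    prev = (0.0, 0.0, 0.0)  # (step length, posture, step frequency) carried between layers
    for f in frames:
        inputs = (f["step_length"], f["posture"],
                  f["expression"], f["step_frequency"])
        prev = (update_step_length(inputs, prev),
                update_posture(inputs, prev),
                update_step_frequency(inputs, prev))
    # final layer: fuse the adjusted step length, posture data and step frequency
    return fuse(*prev)
```

With, say, additive update functions and a summing fusion, the value grows with how consistently the per-frame features accumulate across the segment; any concrete choice of node computations fits the same skeleton.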
Optionally, after the predicting the behavior category of the target behavior according to the adjusted human body posture information, the method further includes:
acquiring, from a big data database, the items to be handled by the target pedestrian that correspond to the behavior category; the items to be handled by the target pedestrian are stored in the big data database in advance;
and/or sending out an alarm and/or an action instruction corresponding to the behavior category, wherein the action instruction is used for instructing a manager to take a normative action aiming at the target pedestrian.
In a second aspect, an embodiment of the present invention provides a pedestrian behavior analysis system for a big data financial security system, which is applied to a cloud computing platform, and the system includes:
the receiving module is used for receiving a section of video when a pedestrian enters a preset area, wherein the section of video comprises a plurality of frames of images, the images contain image information of the pedestrian, and the section of video is shot at the time when the pedestrian enters the preset area;
the selecting module is used for selecting a target pedestrian in a first frame image in the video segment, wherein the first frame image is an image with the first shooting time in the video segment;
the tracking module is used for tracking the target pedestrian in the multi-frame images by adopting Kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian;
the acquisition module is used for acquiring the human body posture information of the target pedestrian and the posture information of the other targets in each frame of image;
the adjusting module is used for adjusting the human body posture information of the target pedestrian according to each frame of image and based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment;
and the analysis module is used for predicting the behavior category of the target behavior according to the adjusted human body posture information.
In a third aspect, an embodiment of the present invention provides a robot, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any one of the methods described above when executing the program.
Compared with the prior art, the invention achieves the following beneficial effects:
the embodiment of the invention provides a pedestrian behavior analysis method, a pedestrian behavior analysis system and a robot for a big data financial security system, wherein the method comprises the following steps: receiving a section of video of a pedestrian entering a preset area, wherein the section of video comprises a plurality of frames of images, the images contain image information of the pedestrian, and the section of video is shot at the time when the pedestrian enters the preset area; selecting a target pedestrian in a first frame image in the video, wherein the first frame image is an image with the first shooting time in the video; tracking the target pedestrian in the multi-frame image by adopting Kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian; acquiring human body posture information of the target pedestrian and posture information of other targets in each frame of image; aiming at each frame of image, adjusting the human body posture information of the target pedestrian based on the distance and the posture information of other targets to obtain the human body posture information of the target pedestrian after adjustment; and predicting the behavior category of the target behavior according to the adjusted human body posture information. 
By adopting this scheme, the influence of the people or animals around the target pedestrian on the pedestrian's next action is taken into account, which improves the accuracy of the behavior analysis. Moreover, because the behavior category is predicted from the adjusted human body posture information of all the images in a single video segment, no large amount of training is required: the behavior of the target pedestrian is predicted directly from the pedestrian's behavior over a short period, so the amount of computation is small and the accuracy is high.
Drawings
Fig. 1 is a flowchart of a pedestrian behavior analysis method of a big data financial security system according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a behavior analysis network according to an embodiment of the present invention.
Fig. 3 is a schematic block structural diagram of a pedestrian behavior analysis system 200 of a big data financial security system according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of a robot according to an embodiment of the present invention.
The labels in the figure are: a pedestrian behavior analysis system 200 of a big data financial security system; a receiving module 210; a selection module 220; a tracking module 230; an obtaining module 240; an adjustment module 250; an analysis module 260; a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Within a group, a person's feelings are influenced by the state of the other people or animals in the group, and this in turn influences the person's behavior; in other words, a person's behavior is affected by the behavior of the people and animals around them. Analyzing and predicting a target pedestrian's behavior on the basis of the surrounding people and animals therefore achieves higher accuracy.
Examples
The embodiment of the invention provides a pedestrian behavior analysis method of a big data financial security system, which is applied to a cloud computing platform and comprises the following steps of:
s101: receiving a video segment of a pedestrian entering a preset area, wherein the video segment comprises a plurality of frames of images.
The images contain image information of the pedestrian, and the video is shot from the time the pedestrian enters the preset area. An image pickup device such as a camera may be installed in the preset area; once a pedestrian enters the preset area, the device begins shooting, and the video obtained after shooting for a period of time (for example, one minute) is sent to the cloud computing platform.
S102: selecting a target pedestrian in the first frame image of the video segment. The target pedestrian may be a pedestrian whose behavior is abnormal, selected either manually by a user (manager) or automatically by the system.
The first frame image is an image with the first shooting time in the video.
S103: tracking the target pedestrian in the multi-frame image by adopting Kalman filtering; and obtaining the distance between other targets and the target pedestrian in each frame of image.
Wherein an other target is a person or an animal contained in the image and distinct from the target pedestrian.
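The text names Kalman filtering for the tracking step but does not specify a motion model; a common concrete choice is a constant-velocity filter per image axis. A minimal pure-Python sketch for one coordinate (run one filter each for x and y to follow the target pedestrian across frames; the noise variances are assumed, not from the patent):

```python
def kalman_track(measurements, q=1e-3, r=1.0):
    """1-D constant-velocity Kalman filter (dt = 1 frame).
    measurements: observed positions of the target along one image axis;
    q: process-noise variance; r: measurement-noise variance.
    Returns the filtered position estimate for each frame."""
    x, v = measurements[0], 0.0        # state: position, velocity
    p00, p01, p11 = 1.0, 0.0, 1.0      # state covariance entries
    estimates = [x]
    for z in measurements[1:]:
        # Predict step: x' = x + v, covariance P' = F P F^T + qI
        x += v
        p00, p01, p11 = p00 + 2 * p01 + p11 + q, p01 + p11, p11 + q
        # Update step with measurement z
        s = p00 + r                    # innovation variance
        k0, k1 = p00 / s, p01 / s      # Kalman gains
        innovation = z - x
        x += k0 * innovation
        v += k1 * innovation
        p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
        estimates.append(x)
    return estimates
```

Because the constant-velocity model matches steady walking exactly, the filter locks onto a pedestrian moving at a fixed speed and smooths out detection jitter for one moving erratically.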
S104: and acquiring the human body posture information of the target pedestrian and the posture information of the other targets in each frame of image.
S105: and adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets aiming at each frame of image to obtain the human body posture information of the target pedestrian after adjustment.
S106: and predicting the behavior category of the target behavior according to the adjusted human body posture information of all images in a video.
By adopting this scheme, the influence of the people or animals around the target pedestrian on the pedestrian's next action is taken into account, which improves the accuracy of the behavior analysis. Moreover, because the behavior category is predicted from the adjusted human body posture information of all the images in a single video segment, no large amount of training is required: the behavior of the target pedestrian is predicted directly from the pedestrian's behavior over a short period, so the amount of computation is small and the accuracy is high.
The preset area can be a bank hall, a government service hall, a subway, a railway station or a similar place. Accurately predicting the behavior (the items to be handled) of the target pedestrian in advance improves, on the one hand, the effect and efficiency of services such as banking and municipal services, and hence the effectiveness of bank operations and municipal work; on the other hand, safety problems can be prevented according to the predicted behavior, improving the security of places such as bank halls, government halls, subways and railway stations and ensuring the safety of the target pedestrian.
Optionally, the human body posture information includes a human body skeleton diagram, and obtaining the human body posture information of the target pedestrian in each frame image includes: detecting human body key points in each frame image, wherein the human body key points comprise two crotch point positions, a caudal vertebra point position, two shoulder point positions, a neck point position, a head point position, two elbow point positions, two wrist point positions, two knee point positions and two ankle point positions; connecting one ankle point position, one knee point position, one crotch point position, the caudal vertebra point position, the other crotch point position, the other knee point position and the other ankle point position in sequence through line segments, connecting the caudal vertebra point position, the neck point position and the head point position in sequence, and connecting one wrist point position, one elbow point position, the neck point position, the other elbow point position and the other wrist point position in sequence to obtain the human body skeleton diagram. The posture information of the other targets likewise includes a skeleton map. If an other target is a person, its posture information in each frame image is obtained in the same way as the human body posture information of the target pedestrian, yielding a human body skeleton diagram of the other target that includes the other target key points: two crotch point positions, one caudal vertebra point position, two shoulder point positions, one neck point position, one head point position, two elbow point positions, two wrist point positions, two knee point positions and two ankle point positions.
The human body key points in each frame image may be detected, for example, with a pictorial structure key point detection method (pictorial structures).
If the other target is an animal, the obtaining the pose information of the other target in each frame of image includes:
detecting animal key points in each frame image, wherein the animal key points comprise two rear leg ankle joint point positions, two rear leg knee joint point positions, a caudal vertebra point position, a back middle point position, a head point position, two front leg ankle joint point positions and two front leg knee joint point positions; connecting one rear leg ankle joint point position, one rear leg knee joint point position, the caudal vertebra point position, the other rear leg knee joint point position and the other rear leg ankle joint point position in sequence; then connecting the caudal vertebra point position, the back middle point position and the head point position in sequence; and connecting one front leg ankle joint point position, one front leg knee joint point position, the back middle point position, the other front leg knee joint point position and the other front leg ankle joint point position in sequence to obtain an animal skeleton diagram. The animal key points in each frame image may likewise be detected with a pictorial structure key point detection method.
Optionally, if the other targets are people, the other-target key points (two crotch points, one caudal vertebra point, two shoulder points, one neck point, one head point, two elbow points, two wrist points, two knee points and two ankle points) correspond one-to-one with the human body key points of the same names. For example, the shoulder point on the other target's left shoulder corresponds to the target pedestrian's left shoulder point, the right shoulder point corresponds to the right shoulder point, and so on.
Optionally, adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets, so as to obtain the adjusted human body posture information of the target pedestrian, includes: scoring the other-target key points in the skeleton diagram of the other targets; obtaining the ratio of the score to the distance from the other target to the target pedestrian; adding the ratio to the coordinates of the human body key points corresponding to the other-target key points to obtain adjusted human body key points; and connecting the adjusted human body key points to obtain the adjusted human body skeleton diagram.
In this way, the human body skeleton diagram of the target pedestrian is adjusted, which compensates for inaccurate detection of the skeleton diagram caused by the camera device, improves the accuracy of the human body skeleton diagram of the target pedestrian, and thereby improves the accuracy of analyzing the behavior of the target pedestrian based on that skeleton diagram.
Optionally, scoring the other-target key points in the skeleton diagram of the other targets may specifically be: obtaining the distances from each other-target key point to the corresponding human body key point in the X-axis and Y-axis directions, where the X axis and Y axis are the horizontal and vertical axes of the image coordinate system; if the distance in the Y-axis direction is greater than a preset value, the score of the other-target key point is calculated as q = y/(x + y), where y is the distance in the Y-axis direction, x is the distance in the X-axis direction, and q is the score; if the distance in the Y-axis direction is less than or equal to the preset value, the score is calculated as q = x/(x + y). The preset value may be, for example, 2 metres, 3 metres or 4 metres.
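A minimal sketch of this scoring rule follows, assuming the axis distances are already expressed in the same unit as the preset value; the function and parameter names, and the convention of scoring coincident points as 0, are assumptions for illustration.

```python
# Sketch of the other-target key-point scoring rule described above.

def score_key_point(x, y, preset=3.0):
    """x, y: distances between an other-target key point and the matching
    human body key point along the image X and Y axes; preset: the preset
    value (e.g. 3 metres)."""
    if x + y == 0:          # coincident points: define the score as 0 (assumption)
        return 0.0
    if y > preset:
        return y / (x + y)  # q = y/(x+y) when the Y-axis distance exceeds the preset value
    return x / (x + y)      # q = x/(x+y) otherwise

print(score_key_point(1.0, 4.0))  # 4/(1+4) = 0.8
print(score_key_point(1.0, 2.0))  # 1/(1+2) ≈ 0.333
```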
Optionally, if the other targets are animals, the other-target key points are animal key points. The animal key points (two rear-leg ankle joint points, two rear-leg knee joint points, one caudal vertebra point, one back middle point, one head point, two front-leg ankle joint points and two front-leg knee joint points) correspond one-to-one with the human body key points (two ankle points, two knee points, one caudal vertebra point, two shoulder points, one neck point, one head point, two wrist points and two elbow points, respectively).
Optionally, adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets, so as to obtain the adjusted human body posture information, includes: scoring the animal key points in the animal skeleton diagram; obtaining the ratio of the score to the distance from the other target to the target pedestrian; adding the ratio to the coordinates of the human body key points corresponding to the animal key points to obtain accumulated human body key points; multiplying the coordinates of the accumulated human body key points by a fear coefficient to obtain the adjusted human body key points, where the fear coefficient characterizes the degree of the target pedestrian's fear of the animal; and connecting the adjusted human body key points to obtain the adjusted human body skeleton diagram.
The animal key points in the animal skeleton diagram are scored as follows: obtain the distances from each animal key point to the corresponding human body key point in the X-axis and Y-axis directions, where the X axis and Y axis are the horizontal and vertical axes of the image coordinate system; if the distance in the X-axis direction is greater than a threshold, the score of the animal key point is calculated as q = g × y/(x + y), where y is the distance in the Y-axis direction, x is the distance in the X-axis direction, and g is the fear coefficient; if the distance in the X-axis direction is less than or equal to the threshold, the score is calculated as q = x/(x + y). The threshold may be, for example, 5, 6 or 7 metres; in the embodiment of the invention, the threshold is larger than the preset value. The fear coefficient may be set to 0.1, 1, 2 or 3, where g = 0.1 indicates no fear, g = 1 slight fear, g = 2 moderate fear, and g = 3 extreme fear.
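The animal scoring rule with the fear coefficient g can be sketched in the same way; the names and the coincident-point convention are assumptions for illustration.

```python
# Sketch of the animal key-point scoring rule with the fear coefficient g.

def score_animal_key_point(x, y, g, threshold=6.0):
    """x, y: distances from an animal key point to the matching human body
    key point along the image X and Y axes; g: fear coefficient
    (0.1 none, 1 slight, 2 moderate, 3 extreme); threshold: e.g. 6 metres."""
    if x + y == 0:               # coincident points: define the score as 0 (assumption)
        return 0.0
    if x > threshold:
        return g * y / (x + y)   # q = g·y/(x+y) when the X-axis distance exceeds the threshold
    return x / (x + y)           # q = x/(x+y) otherwise

print(score_animal_key_point(8.0, 2.0, g=2))  # 2*2/(8+2) = 0.4
print(score_animal_key_point(4.0, 2.0, g=2))  # 4/(4+2) ≈ 0.667
```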
In this way, position coordinates are used directly as the basis of the score calculation, which improves the accuracy with which the scores of the other targets characterize their influence on the target pedestrian.
Optionally, the adjusted human body posture information includes a step size, a step frequency and human body posture data, and predicting the behavior category of the target behavior according to the adjusted human body posture information includes:
obtaining, according to the adjusted human body posture information of the images in the video, the step length of the target pedestrian in each image, the step frequency of the target pedestrian between every two adjacent images, and the human body posture data of the target pedestrian in each image; the step length refers to the distance between the target pedestrian's two feet, the step frequency refers to the number of steps the target pedestrian takes between two adjacent images, and the human body posture data represents the posture of the target pedestrian in the image;
The step lengths of the target pedestrian in a section of video are formed into a step-length stream in the order of the shooting times of the images in which they occur; the human body posture data of the target pedestrian are formed into a body-state stream in the same order; and the step frequencies of the target pedestrian are formed into a step-frequency stream in the order in which they occur. For example, suppose a shot video segment includes 3 images, sorted from earliest to latest shooting time as the 1st, 2nd and 3rd frame images. The step-length stream is then: the step length of the 1st frame image, the step length of the 2nd frame image, the step length of the 3rd frame image; the body-state stream is: the human body posture data of the 1st, 2nd and 3rd frame images; and the step-frequency stream is: the step frequency of the 1st frame image, the step frequency of the 2nd frame image, the step frequency of the 3rd frame image.
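The construction of the three streams can be sketched as follows; the per-frame record layout, field names and numeric values are assumptions for illustration.

```python
# Sketch: build the step-length, body-state and step-frequency streams from
# per-frame records ordered by shooting time. Field names are hypothetical.

frames = [  # one record per frame of the video segment
    {"t": 0.0, "step_len": 0.6, "posture": 12, "cadence": 1.8},
    {"t": 0.5, "step_len": 0.7, "posture": 15, "cadence": 2.0},
    {"t": 1.0, "step_len": 0.5, "posture": 11, "cadence": 1.7},
]
frames.sort(key=lambda r: r["t"])          # order by shooting time
step_stream    = [r["step_len"] for r in frames]
posture_stream = [r["posture"]  for r in frames]
cadence_stream = [r["cadence"]  for r in frames]
print(step_stream)     # [0.6, 0.7, 0.5]
print(cadence_stream)  # [1.8, 2.0, 1.7]
```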
Then, the step-length stream, the body-state stream and the step-frequency stream are input into a behavior analysis network, which comprises a plurality of analysis layers; the number of analysis layers in the behavior analysis network is one more than the number of images in the video segment;
each analysis layer in the behavior analysis network comprises 3 analysis nodes: a first analysis node, a second analysis node and a third analysis node;
the first analysis node of the nth analysis layer obtains the adjusted step length of the nth frame image according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the adjusted step length output by the first analysis node of the (n-1)th analysis layer; where n is a positive integer less than N+1, N+1 is the number of analysis layers in the behavior analysis network, the nth frame image is the image whose shooting time is nth in the video segment, and N is the number of images in the video segment;
the second analysis node of the nth analysis layer adjusts the human body posture data according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the output of the second analysis node of the (n-1)th analysis layer, so as to obtain the adjusted human body posture data of the nth frame image;
the third analysis node of the nth analysis layer adjusts the step frequency according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the output of the third analysis node of the (n-1)th analysis layer, so as to obtain the adjusted step frequency between the nth frame image and the (n+1)th frame image;
in the behavior analysis network, the (N+1)th analysis layer performs a fusion calculation on the adjusted step length, the adjusted human body posture data and the adjusted step frequency output by the Nth analysis layer to obtain a target pedestrian behavior value, which characterizes the behavior category of the target behavior.
Optionally, obtaining, by the first analysis node of the nth analysis layer, the adjusted step length of the nth frame image according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the adjusted step length output by the first analysis node of the (n-1)th analysis layer, includes:
the first analysis node of the nth analysis layer adjusts the step length of the nth frame image according to the formula:
a1 = (0.5a0 + 0.05b + 0.1c) × a/a0, obtaining the adjusted step length of the nth frame image; where a1 is the adjusted step length of the nth frame image; a0 is the adjusted step length of the (n-1)th frame image, i.e. the output of the first analysis node of the (n-1)th analysis layer; for n = 1, the adjusted step length of the (n-1)th frame image is taken as 0; a is the step length of the nth frame image; b is the index of the human body posture data of the nth frame image; and c is the step frequency between the nth frame image and the (n+1)th frame image. The index of the human body posture data of the nth frame image is the decimal number obtained by converting the binary coding of the human body posture data (the human body skeleton diagram) of the nth frame image.
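A sketch of this first-node update follows. Note that the formula divides by a0, yet the text sets a0 = 0 for n = 1; this sketch assumes the first layer passes the raw step length through unchanged in that case (that handling is an assumption, not stated in the embodiment).

```python
# Sketch of the first-analysis-node update a1 = (0.5*a0 + 0.05*b + 0.1*c) * a/a0.

def adjust_step_length(a, b, c, a0):
    """a: step length of frame n; b: index of the human body posture data of
    frame n; c: step frequency between frames n and n+1; a0: adjusted step
    length output by the first node of layer n-1 (0 for the first layer)."""
    if a0 == 0:
        # n = 1: a/a0 is undefined, so pass the raw step length through (assumption)
        return a
    return (0.5 * a0 + 0.05 * b + 0.1 * c) * a / a0

print(round(adjust_step_length(a=0.6, b=10, c=2.0, a0=0.5), 6))
# (0.25 + 0.5 + 0.2) * 0.6/0.5 = 0.95 * 1.2 = 1.14
```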
Optionally, the index of the human body posture data may be obtained by integrating the scores of the individual human body key points in the human body skeleton diagram; alternatively, the human body skeleton diagram itself may be used as the human body posture information, and the decimal number obtained by converting its binary coding used as the index of the human body posture data. The index of the human body posture data obtained in this way can accurately represent the posture of the target pedestrian.
Obtaining the index of the human body posture data by integrating the scores of the human body key points may specifically be: calculating a score for each human body key point and using these scores as the human body posture data. The score of each human body key point is calculated as follows: for a given human body key point, obtain the distance from each of the remaining human body key points to that key point, and the cosine of the angle between the line connecting each remaining key point to that key point and the normal of that key point, where the normal of a human body key point is the ray from the key point towards the caudal vertebra point (for the caudal vertebra point itself, the normal is the ray from the caudal vertebra point towards the neck point). Using the cosines as weights, a weighted sum of the distances from the remaining human body key points to the key point gives the score of the key point. For example, if the human body skeleton diagram includes 3 human body key points, the distances from the remaining two key points to the key point in question are 2 and 3 respectively, and the cosines of the angles between their connecting lines and the key point's normal are 0.5 and 0.1 respectively, then the score of that key point is h = 2 × 0.5 + 3 × 0.1 = 1.3.
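The cosine-weighted score can be sketched as follows, reproducing the worked example above (h = 2 × 0.5 + 3 × 0.1 = 1.3); the function and parameter names are illustrative.

```python
# Sketch of the cosine-weighted score for one human body key point.

def key_point_score(dists, cosines):
    """dists[i]: distance from the i-th remaining key point to this key point;
    cosines[i]: cosine of the angle between that connecting line and the
    key point's normal (the ray towards the caudal vertebra point)."""
    return sum(d * c for d, c in zip(dists, cosines))

h = key_point_score([2, 3], [0.5, 0.1])
print(round(h, 6))  # 2*0.5 + 3*0.1 = 1.3
```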
Optionally, adjusting, by the second analysis node of the nth analysis layer, the human body posture data according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the output of the second analysis node of the (n-1)th analysis layer, so as to obtain the adjusted human body posture data of the nth frame image, includes:
the second analysis node of the nth analysis layer adjusts the human body posture data according to the formula:
b1 = (0.06a + 0.5b0 + 0.05c) × b/b0, obtaining the index of the adjusted human body posture data, where b1 is the index of the adjusted human body posture data of the nth frame image; b0 is the index of the adjusted human body posture data of the (n-1)th frame image, i.e. the output of the second analysis node of the (n-1)th analysis layer; for n = 1, the index of the adjusted human body posture data of the (n-1)th frame image is taken as 0.
Optionally, adjusting, by the third analysis node of the nth analysis layer, the step frequency according to the step length of the nth frame image, the human body posture data of the nth frame image, the expression information of the nth frame image, the step frequency between the nth frame image and the (n+1)th frame image, and the output of the third analysis node of the (n-1)th analysis layer, so as to obtain the adjusted step frequency between the nth frame image and the (n+1)th frame image, includes:
the third analysis node of the nth analysis layer adjusts the step frequency according to the formula:
c1 = (0.1a + 0.01b + 0.5c0) × c/c0, obtaining the adjusted step frequency, where c1 is the adjusted step frequency of the nth frame image and c0 is the adjusted step frequency of the (n-1)th frame image, i.e. the output of the third analysis node of the (n-1)th analysis layer; for n = 1, the adjusted step frequency of the (n-1)th frame image is taken as 0.
Optionally, performing, by the (N+1)th analysis layer, a fusion calculation on the adjusted step length, the adjusted human body posture data and the adjusted step frequency output by the Nth analysis layer, so as to obtain the target pedestrian behavior value, which characterizes the matters the target pedestrian needs to deal with, includes:
the (N+1)th analysis layer calculates the target pedestrian behavior value according to the formula f = 0.2aN + 0.5bN + 0.3cN, where f is the target pedestrian behavior value, and aN, bN and cN are the outputs of the first, second and third analysis nodes of the Nth analysis layer of the behavior analysis network, respectively.
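The fusion step of the (N+1)th analysis layer reduces to a single weighted sum; a minimal sketch follows, in which the input values are illustrative.

```python
# Sketch of the fusion calculation f = 0.2*aN + 0.5*bN + 0.3*cN performed by
# the (N+1)-th analysis layer on the last layer's three node outputs.

def fuse(aN, bN, cN):
    """aN, bN, cN: outputs of the first, second and third analysis nodes
    of the N-th analysis layer."""
    return 0.2 * aN + 0.5 * bN + 0.3 * cN

print(round(fuse(1.14, 12.5, 1.9), 6))  # 0.228 + 6.25 + 0.57 = 7.048
```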
Please refer to fig. 2, a schematic structural diagram of a behavior analysis network with four analysis layers. The step lengths, human body posture data and step frequencies in the step-length stream, body-state stream and step-frequency stream are input to the first, second and third analysis nodes of the first, second and third analysis layers, respectively. The fourth analysis layer calculates the user behavior value according to the formula f = 0.2a3 + 0.5b3 + 0.3c3, where f is the user behavior value and a3, b3 and c3 are the outputs of the first, second and third analysis nodes of the third analysis layer of the behavior analysis network, respectively. The user behavior values characterize the user behavior classifications and have a one-to-one correspondence with them. In fig. 2, F1 denotes the step-length stream, F2 the body-state stream, and F3 the step-frequency stream.
In this way, the behavior analysis network takes into account the influence of earlier step lengths, step frequencies, human body posture data and expression information on the user's later step length, step frequency and human body posture data, while also considering the mutual influence among the step length, step frequency and human body posture data at the same moment. Analyzing the user's behavior through this network on the basis of the user's step length, step frequency and human body posture data improves the accuracy of the behavior analysis, so that the final user behavior value accurately characterizes the matters the user needs to deal with, and the user's next intention, i.e. the matters to be dealt with, can be accurately predicted. Preparations or services can therefore be made in advance for the user's behavior. On the one hand, this improves the effectiveness and efficiency of services such as bank services and municipal services, and of bank and municipal operations; on the other hand, security problems can be prevented in time according to the predicted behavior (the matters to be dealt with), improving the security effect in places such as bank halls, government department halls, subways and railway stations, and safeguarding users' safety.
Optionally, after predicting the behavior category of the target behavior according to the adjusted human body posture information, the method further includes: acquiring, from a big database, the matters the target pedestrian needs to deal with corresponding to the behavior category, where these matters are stored in the big database in advance; and/or issuing an alarm and/or an action instruction corresponding to the behavior category, the action instruction being used to instruct a manager to take a normative action with respect to the target pedestrian. Management, control or service can thus be provided to the target user in time, improving management, control and service efficiency.
It should be noted that the pedestrian behavior analysis method of the big data financial security system provided by the embodiment of the invention can be used to detect the movement of pedestrians, and can also be used to identify the movements of patients and of a fetus in the abdomen. It can be widely applied in fields such as security, medical treatment, driving, makeup, live broadcast, education, agriculture, military, railway, highway and public transport, and can play a significant role in the field of artificial intelligence; that is, the pedestrian behavior analysis method of the big data financial security system provided by this application can be widely applied in the artificial-intelligence field. The user behavior analysis method of the financial institution security system is also a data processing method.
The embodiment of the application also correspondingly provides an execution body for executing the above steps, which may be the pedestrian behavior analysis system 200 of the big data financial security system in fig. 3. The pedestrian behavior analysis system 200 of the big data financial security system is configured on a cloud computing platform; please refer to fig. 3, the system includes:
the receiving module 210, configured to receive a section of video shot when a pedestrian enters a preset area, where the section of video includes multiple frames of images and the images contain image information of the pedestrian;
the selecting module 220 is configured to select a target pedestrian in a first frame image in the video segment, where the first frame image is an image with the first shooting time in the video segment;
a tracking module 230, configured to track the target pedestrian in the multi-frame image by using kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian;
an obtaining module 240, configured to obtain, in each frame of image, human body posture information of the target pedestrian and posture information of the other targets;
an adjusting module 250, configured to adjust, for each frame of image, human body posture information of the target pedestrian based on the distance and the posture information of the other targets, so as to obtain the human body posture information after the adjustment of the target pedestrian;
and the analysis module 260 is used for predicting the behavior category of the target behavior according to the adjusted human body posture information.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention further provides a robot, as shown in fig. 4, including a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502 implements the steps of any one of the methods for analyzing pedestrian behaviors in the big data financial security system when executing the program.
Where in fig. 4 a bus architecture (represented by bus 500) is shown, bus 500 may include any number of interconnected buses and bridges, and bus 500 links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
Optionally, the robot further comprises a communication module and a camera module;
the camera module is connected with the memory, the processor and the communication module, and the processor is connected with the communication module. The camera module is used to collect a section of video of a target pedestrian entering a preset area and send the section of video to the memory and/or the processor and/or the communication module. The communication module is used to send the face image to a cloud computing terminal, and is further used to obtain a section of video or an action instruction stored in the big database from the cloud computing terminal and send it to the processor.
The big database may be a database in the memory or a database arranged on the cloud computing platform.
The pedestrian behavior analysis method of the big data financial security system is applied to the field of artificial intelligence, namely the robot is used for executing the pedestrian behavior analysis method of the big data financial security system, and therefore corresponding service is provided for target pedestrians.
In the embodiment of the invention, the pedestrian behavior analysis system of the big data financial security system is installed in the robot; specifically, the pedestrian behavior analysis system may be stored in the memory in the form of a software functional module and run by the processor. As an embodiment, when a target pedestrian walks into a hall or an area of a financial institution or a public place, the robot starts the camera in its camera device to shoot and collect a section of video of the target pedestrian, and then sends the section of video to the memory and/or the processor and/or the communication module. The communication module is used to send the face image to the cloud computing platform, and is further used to obtain the target pedestrian flow guidance stored in the big database from the cloud computing platform and send it to the processor. The robot then starts the target pedestrian behavior analysis system of the financial institution security system to execute the pedestrian behavior analysis method of the big data financial security system, thereby analyzing the behavior of the target pedestrian.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A pedestrian behavior analysis method of a big data financial security system is applied to a cloud computing platform and is characterized by comprising the following steps:
receiving a section of video of a pedestrian entering a preset area, wherein the section of video comprises a plurality of frames of images, the images contain image information of the pedestrian, and the section of video is shot at the time when the pedestrian enters the preset area;
selecting a target pedestrian in a first frame image in the video, wherein the first frame image is an image with the first shooting time in the video;
tracking the target pedestrian in the multi-frame image by adopting Kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian;
acquiring human body posture information of the target pedestrian and posture information of other targets in each frame of image;
for each frame of image, adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets, to obtain adjusted human body posture information of the target pedestrian;
and predicting the behavior category of the target pedestrian's behavior according to the adjusted human body posture information.
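Claim 1 tracks the target pedestrian across frames with Kalman filtering. A minimal constant-velocity sketch in NumPy follows; the state layout, noise levels and the synthetic detections are illustrative assumptions and are not specified in the patent.

```python
import numpy as np

def make_kalman(dt=1.0, q=1e-2, r=1.0):
    # Constant-velocity model: state is [x, y, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # only position is observed
    Q = q * np.eye(4)                     # process noise covariance
    R = r * np.eye(2)                     # measurement noise covariance
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detected pedestrian position z.
    innov = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ innov
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a pedestrian walking at constant velocity through noisy detections.
rng = np.random.default_rng(0)
x = np.zeros(4)
P = np.eye(4)
F, H, Q, R = make_kalman()
for t in range(1, 30):
    true_pos = np.array([2.0 * t, 1.0 * t])
    z = true_pos + rng.normal(0, 1.0, 2)  # noisy per-frame detection
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(np.round(x[:2], 1))  # filtered position, near the true (58, 29)
```

In a real pipeline the measurement `z` would come from a per-frame pedestrian detector rather than simulated noise.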
2. The method of claim 1, wherein the body pose information comprises a body skeleton map; the obtaining of the human body posture information of the target pedestrian in each frame of image includes:
detecting human body key points in each frame of image, wherein the human body key points comprise two crotch point positions, a caudal vertebra point position, two shoulder point positions, a neck point position, a head point position, two elbow point positions, two wrist point positions, two knee point positions and two ankle point positions;
connecting an ankle point position, a knee point position, a crotch point position, the caudal vertebra point position, another crotch point position, another knee point position and another ankle point position in sequence through line segments; connecting the caudal vertebra point position, the neck point position and the head point position in sequence; and connecting a wrist point position, an elbow point position, the neck point position, another elbow point position and another wrist point position in sequence, to obtain a human body skeleton diagram;
the posture information of the other targets comprises a skeleton map, and the obtaining of the posture information of the other targets in each frame of image comprises:
if the other target is a person, obtaining the posture information of the other target in each frame of image is the same as obtaining the human body posture information of the target pedestrian in each frame of image, and obtaining a human body skeleton diagram of the other target, wherein the human body skeleton diagram of the other target comprises other target key points, and the other target key points comprise two crotch point locations, one caudal vertebra point location, two shoulder point locations, one neck point location, one head point location, two elbow point locations, two wrist point locations, two knee point locations and two ankle point locations;
if the other target is an animal, the obtaining the pose information of the other target in each frame of image includes:
detecting animal key points in each frame of image, wherein the animal key points comprise two hind leg ankle joint point positions, two hind leg knee joint point positions, a caudal vertebra point position, a back middle point position, a head point position, two front leg ankle joint point positions and two front leg knee joint point positions; connecting lines according to the sequence of one hind leg ankle joint point position, one hind leg knee joint point position, the caudal vertebra point position, the other hind leg knee joint point position and the other hind leg ankle joint point position; then connecting lines according to the sequence of the caudal vertebra point position, the back middle point position and the head point position; and connecting one front leg ankle joint point position, one front leg knee joint point position, the back middle point position, the other front leg knee joint point position and the other front leg ankle joint point position in sequence, to obtain an animal skeleton diagram.
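The connection sequences of claim 2 can be sketched as edge lists built from ordered chains of key points. The key-point names and the helper function below are illustrative; a real system would take coordinates from a pose detector.

```python
# Each chain lists key points in the connection order given in claim 2;
# consecutive pairs become line segments of the skeleton diagram.
HUMAN_CHAINS = [
    ["ankle1", "knee1", "crotch1", "tailbone", "crotch2", "knee2", "ankle2"],
    ["tailbone", "neck", "head"],
    ["wrist1", "elbow1", "neck", "elbow2", "wrist2"],
]

ANIMAL_CHAINS = [
    ["hind_ankle1", "hind_knee1", "tailbone", "hind_knee2", "hind_ankle2"],
    ["tailbone", "back_mid", "head"],
    ["fore_ankle1", "fore_knee1", "back_mid", "fore_knee2", "fore_ankle2"],
]

def skeleton_edges(chains):
    """Turn each ordered chain of key points into line segments (edges)."""
    edges = []
    for chain in chains:
        edges.extend(zip(chain[:-1], chain[1:]))
    return edges

human = skeleton_edges(HUMAN_CHAINS)
animal = skeleton_edges(ANIMAL_CHAINS)
print(len(human), len(animal))  # 12 human segments, 10 animal segments
```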
3. The method of claim 2, wherein if the other target is a person, the other target key points comprise two crotch points, one caudal vertebra point, two shoulder points, one neck point, one head point, two elbow points, two wrist points, two knee points, two ankle points, and the two crotch points, one caudal vertebra point, two shoulder points, one neck point, one head point, two elbow points, two wrist points, two knee points, and two ankle points comprised by the body key points correspond one-to-one with respect to each other;
the adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment comprises the following steps:
scoring key points of other targets in the human skeleton diagrams of the other targets;
obtaining the ratio of the score to the distance from the other target to the target pedestrian;
adding the ratio to the coordinates of the human body key points corresponding to the other target key points to obtain adjusted human body key points;
and connecting the adjusted key points of the human body to obtain an adjusted human body skeleton diagram.
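The adjustment step of claim 3 can be sketched as follows: for each human key point, the ratio of the corresponding other-target key point's score to the pedestrian-to-target distance is added onto the key point's coordinates. The coordinate values are illustrative, and since the claim does not fix which axis the ratio is added to, it is applied to both here.

```python
def adjust_keypoints(human_kps, scores, distance):
    """Shift each human key point by score/distance (claim 3 adjustment)."""
    adjusted = {}
    for name, (x, y) in human_kps.items():
        ratio = scores[name] / distance   # score over pedestrian-target distance
        adjusted[name] = (x + ratio, y + ratio)
    return adjusted

kps = {"neck": (100.0, 50.0), "head": (100.0, 30.0)}
scores = {"neck": 0.8, "head": 0.25}
adj = adjust_keypoints(kps, scores, distance=4.0)
print(adj["neck"])  # shifted by 0.8 / 4.0 = 0.2 on each axis
```

The adjusted key points would then be reconnected into the skeleton diagram exactly as in claim 2.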
4. The method of claim 3, wherein scoring other target keypoints in the human skeleton map of the other target comprises:
obtaining the distance from the other target key points to the key points of the human body in the X-axis direction and the distance from the other target key points to the key points of the human body in the Y-axis direction, wherein the X-axis and the Y-axis respectively represent a horizontal axis and a vertical axis in an image coordinate;
if the distance in the Y-axis direction is greater than the preset value, determining the scoring calculation mode of the other target key points as follows: q = Y/(X + Y), wherein Y represents the distance between other target key points and the human body key point in the Y-axis direction, and X represents the distance between other target key points and the human body key point in the X-axis direction; q represents the score;
if the distance in the Y-axis direction is smaller than or equal to the preset value, determining the scoring calculation mode of the other target key points as follows: q = x/(x + y).
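The scoring rule of claim 4 weights whichever axis dominates: when the vertical separation exceeds the preset value the score is y/(x+y), otherwise x/(x+y). The preset value below is an illustrative assumption; the claim leaves it unspecified.

```python
def score_keypoint(x_dist, y_dist, preset=10.0):
    """Score an other-target key point against the corresponding human
    key point, per claim 4. x_dist and y_dist are the separations along
    the image X and Y axes; preset is the claim's unspecified threshold."""
    if y_dist > preset:
        return y_dist / (x_dist + y_dist)   # q = y/(x+y)
    return x_dist / (x_dist + y_dist)       # q = x/(x+y)

print(score_keypoint(3.0, 12.0))  # y_dist > preset: 12/15 = 0.8
print(score_keypoint(3.0, 9.0))   # y_dist <= preset: 3/12 = 0.25
```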
5. The method of claim 4, wherein if the other target is an animal, the other target key points are animal key points, the animal key points comprise two hind leg ankle joint points, two hind leg knee joint points, one caudal vertebra point, one dorsal middle point, one head point, two foreleg ankle joint points, two foreleg knee joint points, and the two ankle points, two knee points, one caudal vertebra point, two shoulder points, one neck point, one head point, two wrist points, two elbow points of the human key points respectively correspond one-to-one;
adjusting the human body posture information of the target pedestrian based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment, and the method comprises the following steps:
scoring animal key points in the animal skeleton map;
obtaining the ratio of the score to the distance from the other target to the target pedestrian;
adding the ratio to the coordinates of the human body key points corresponding to the animal key points to obtain accumulated human body key points;
multiplying the coordinates of the accumulated human body key points by a fear coefficient to obtain adjusted human body key points; wherein the fear coefficient characterizes a degree of fear of the target pedestrian for the animal;
and connecting the adjusted key points of the human body to obtain an adjusted human body skeleton diagram.
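The animal case of claim 5 adds one step to the claim 3 adjustment: after the score/distance ratio is accumulated onto the corresponding human key point, the coordinates are multiplied by a "fear coefficient" expressing the pedestrian's fear of the animal. The coefficient value below is an illustrative assumption; the patent does not say how it is obtained.

```python
def adjust_for_animal(kp, score, distance, fear=1.05):
    """Claim 5 adjustment: accumulate score/distance, then scale the
    accumulated coordinates by the fear coefficient."""
    x, y = kp
    ratio = score / distance              # score over pedestrian-animal distance
    return ((x + ratio) * fear, (y + ratio) * fear)

ax, ay = adjust_for_animal((100.0, 50.0), score=0.8, distance=4.0)
print(round(ax, 2), round(ay, 2))  # (100.2, 50.2) scaled by 1.05
```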
6. The method of claim 5, wherein scoring animal keypoints in the animal skeleton map comprises:
obtaining the distance from the animal key point to the human body key point in the X-axis direction and the Y-axis direction, wherein the X-axis and the Y-axis respectively represent a horizontal axis and a vertical axis in an image coordinate;
if the distance in the X-axis direction is greater than the preset value, determining the scoring calculation mode of the animal key points as follows: q = g·y/(x + y), wherein y represents the distance between the animal key point and the human body key point in the Y-axis direction, and x represents the distance between the animal key point and the human body key point in the X-axis direction;
if the distance in the X-axis direction is smaller than or equal to a preset value, determining that the scoring calculation mode of the animal key points is as follows: q = x/(x + y).
7. The method of claim 6, wherein the adjusted body pose information comprises step size, step frequency, and body pose data; the predicting the behavior category of the target behavior according to the adjusted human body posture information comprises the following steps:
obtaining, according to the adjusted human body posture information of the images in the segment of video, the step length of the target pedestrian in each image, the step frequency of the target pedestrian between every two adjacent images, and the human body posture data of the target pedestrian in each image; wherein the step length refers to the distance between the two feet of the target pedestrian, the step frequency refers to the rate of stepping of the target pedestrian between two adjacent images, and the human body posture data represents the posture of the target pedestrian in the image;
forming a step length stream from the step lengths of the target pedestrian in the segment of video, ordered by the shooting time of the image in which each step length occurs; forming a body posture stream from the human body posture data of the target pedestrian, ordered by the shooting time of the corresponding image; and forming a step frequency stream from the step frequencies of the target pedestrian, ordered by the time at which each step frequency occurs;
inputting the step length stream, the body posture stream and the step frequency stream into a behavior analysis network, wherein the behavior analysis network comprises a plurality of analysis layers, and the number of analysis layers is 1 more than the number of images in the segment of video;
each layer of analysis layer in the behavior analysis network comprises 3 analysis nodes, and the 3 analysis nodes comprise a first analysis node, a second analysis node and a third analysis node;
the first analysis node of the nth analysis layer obtains the adjusted step length of the nth frame of image according to the step length of the nth frame of image, the human body posture data of the nth frame of image, the expression information of the nth frame of image, the step frequency between the nth frame of image and the (n + 1)th frame of image, and the adjusted step length output by the second analysis node of the (n - 1)th analysis layer; wherein n is a positive integer less than N + 1, N + 1 represents the number of analysis layers in the behavior analysis network, the nth frame of image is the image whose shooting time is the nth earliest in the segment of video, and N represents the number of images in the segment of video;
the second analysis node of the nth analysis layer adjusts the human body posture data according to the step length of the nth frame of image, the human body posture data of the nth frame of image, the expression information of the nth frame of image, the step frequency between the nth frame of image and the (n + 1)th frame of image, and the output of the third analysis node of the (n - 1)th analysis layer, to obtain the adjusted human body posture data of the nth frame of image;
the third analysis node of the nth analysis layer adjusts the step frequency according to the step length of the nth frame of image, the human body posture data of the nth frame of image, the expression information of the nth frame of image, the step frequency between the nth frame of image and the (n + 1)th frame of image, and the output of the first analysis node of the (n - 1)th analysis layer, to obtain the adjusted step frequency between the nth frame of image and the (n + 1)th frame of image;
in the behavior analysis network, the (N + 1)th analysis layer performs a fusion calculation on the adjusted step length, the adjusted human body posture data and the adjusted step frequency output by the Nth analysis layer, to obtain a target pedestrian behavior value, which represents the behavior category of the target pedestrian's behavior.
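The layered structure of claim 7 can be sketched as a recurrence: one analysis layer per frame, each holding three nodes that refine step length, posture and step frequency using the previous layer's outputs, followed by a final fusion layer. The weighted-sum node updates below are placeholders (the patent does not specify the node functions), expression information is omitted for brevity, and all input values are illustrative.

```python
def analysis_layer(step, pose, freq, prev):
    """One analysis layer of claim 7: three nodes, each mixing the current
    frame's measurement with one of the previous layer's outputs.
    The 0.5/0.5 weights are placeholder assumptions."""
    prev_step, prev_pose, prev_freq = prev
    adj_step = 0.5 * step + 0.5 * prev_pose   # first node
    adj_pose = 0.5 * pose + 0.5 * prev_freq   # second node
    adj_freq = 0.5 * freq + 0.5 * prev_step   # third node
    return adj_step, adj_pose, adj_freq

def behavior_value(steps, poses, freqs):
    """Run N analysis layers (one per frame), then the (N+1)th fusion layer."""
    state = (0.0, 0.0, 0.0)
    for s, p, f in zip(steps, poses, freqs):
        state = analysis_layer(s, p, f, state)
    return sum(state) / 3.0                   # placeholder fusion: mean

v = behavior_value([0.6, 0.7, 0.65],   # step length stream
                   [0.2, 0.3, 0.25],   # body posture stream
                   [1.8, 1.9, 1.85])   # step frequency stream
print(round(v, 3))  # ≈ 0.808 with these placeholder rules
```

The resulting scalar would be mapped to a behavior category, e.g. by thresholding or a lookup table; the patent leaves that mapping unspecified.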
8. The method of claim 1, wherein after the predicting the behavior class of the target behavior according to the adjusted body posture information, the method further comprises:
acquiring, from a big data database, the items to be dealt with by the target pedestrian that correspond to the behavior category; wherein the items to be dealt with by the target pedestrian are stored in the big data database in advance;
and/or sending out an alarm and/or an action instruction corresponding to the behavior category, wherein the action instruction is used for instructing a manager to take a normative action aiming at the target pedestrian.
9. A pedestrian behavior analysis system of a big data financial security system, applied to a cloud computing platform, characterized in that the system comprises:
the receiving module is used for receiving a section of video when a pedestrian enters a preset area, wherein the section of video comprises a plurality of frames of images, the images contain image information of the pedestrian, and the section of video is shot at the time when the pedestrian enters the preset area;
the selecting module is used for selecting a target pedestrian in a first frame image in the video segment, wherein the first frame image is an image with the first shooting time in the video segment;
the tracking module is used for tracking the target pedestrian in the multi-frame images by adopting Kalman filtering; obtaining the distance between other targets and the target pedestrian in each image, wherein the other targets are people or animals contained in the image and different from the target pedestrian;
the acquisition module is used for acquiring the human body posture information of the target pedestrian and the posture information of the other targets in each frame of image;
the adjusting module is used for adjusting the human body posture information of the target pedestrian according to each frame of image and based on the distance and the posture information of the other targets to obtain the human body posture information of the target pedestrian after adjustment;
and the analysis module is used for predicting the behavior category of the target behavior according to the adjusted human body posture information.
10. A robot comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 1 to 8 when executing the program.
CN202010362579.1A 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for cloud computing big data financial security system Active CN111488858B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010362579.1A CN111488858B (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for cloud computing big data financial security system
CN202110032162.3A CN112749658A (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for big data financial security system and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010362579.1A CN111488858B (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for cloud computing big data financial security system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110032162.3A Division CN112749658A (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for big data financial security system and robot

Publications (2)

Publication Number Publication Date
CN111488858A true CN111488858A (en) 2020-08-04
CN111488858B CN111488858B (en) 2021-07-06

Family

ID=71793073

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110032162.3A Pending CN112749658A (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for big data financial security system and robot
CN202010362579.1A Active CN111488858B (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for cloud computing big data financial security system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110032162.3A Pending CN112749658A (en) 2020-04-30 2020-04-30 Pedestrian behavior analysis method and system for big data financial security system and robot

Country Status (1)

Country Link
CN (2) CN112749658A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112133051A (en) * 2020-11-24 2020-12-25 兰和科技(深圳)有限公司 Behavior pre-judgment monitoring system based on big data

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136066A (en) * 2011-04-29 2011-07-27 电子科技大学 Method for recognizing human motion in video sequence
CN104156819A (en) * 2014-08-08 2014-11-19 中国矿业大学(北京) Method and device used for automatically observing and correcting unsafe behaviors at important posts
WO2016042357A1 (en) * 2014-09-16 2016-03-24 Singapore Telecommunications Limited Predicting human movement behaviors using location services model
CN108549835A (en) * 2018-03-08 2018-09-18 深圳市深网视界科技有限公司 Crowd counts and its method, terminal device and the storage medium of model construction
CN108875712A (en) * 2018-08-01 2018-11-23 四川电科维云信息技术有限公司 A kind of act of violence detection system and method based on ViF descriptor
CN109522793A (en) * 2018-10-10 2019-03-26 华南理工大学 More people's unusual checkings and recognition methods based on machine vision
CN109670474A (en) * 2018-12-28 2019-04-23 广东工业大学 A kind of estimation method of human posture based on video, device and equipment
CN110008867A (en) * 2019-03-25 2019-07-12 五邑大学 A kind of method for early warning based on personage's abnormal behaviour, device and storage medium
CN110110613A (en) * 2019-04-19 2019-08-09 北京航空航天大学 A kind of rail traffic exception personnel's detection method based on action recognition
CN110188599A (en) * 2019-04-12 2019-08-30 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intellectual analysis recognition methods
CN110532948A (en) * 2019-08-29 2019-12-03 南京泛在地理信息产业研究院有限公司 A kind of high-precision pedestrian track extracting method based on video
CN110796472A (en) * 2019-09-02 2020-02-14 腾讯科技(深圳)有限公司 Information pushing method and device, computer readable storage medium and computer equipment
CN110929638A (en) * 2019-11-20 2020-03-27 北京奇艺世纪科技有限公司 Human body key point identification method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810414B2 (en) * 2017-07-06 2020-10-20 Wisconsin Alumni Research Foundation Movement monitoring system
CN110675426B (en) * 2018-07-02 2022-11-22 百度在线网络技术(北京)有限公司 Human body tracking method, device, equipment and storage medium
CN109977856B (en) * 2019-03-25 2023-04-07 中国科学技术大学 Method for identifying complex behaviors in multi-source video
CN110287923B (en) * 2019-06-29 2023-09-15 腾讯科技(深圳)有限公司 Human body posture acquisition method, device, computer equipment and storage medium
CN110674785A (en) * 2019-10-08 2020-01-10 中兴飞流信息科技有限公司 Multi-person posture analysis method based on human body key point tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AMIR FARID AMINIAN MODARRES et al.: "Body posture graph: a new graph-based posture descriptor for human behaviour recognition", IET Computer Vision *
WANG Na: "Research on Methods for Predicting Human Behavior in Video", China Master's Theses Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN111488858B (en) 2021-07-06
CN112749658A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN111261301B (en) Big data infectious disease prevention and control method and system
US11478169B2 (en) Action recognition and pose estimation method and apparatus
Ward et al. Simulation of foot-and-mouth disease spread within an integrated livestock system in Texas, USA
CN111914636B (en) Method and device for detecting whether pedestrian wears safety helmet
CN109741309A (en) A kind of stone age prediction technique and device based on depth Recurrent networks
CN106225681B (en) A kind of Longspan Bridge health status monitoring device
CN111488853B (en) Big data face recognition method and system for financial institution security system and robot
CN112465855B (en) Passenger flow statistical method, device, storage medium and equipment
Suju et al. FLANN: Fast approximate nearest neighbour search algorithm for elucidating human-wildlife conflicts in forest areas
CN111611895B (en) OpenPose-based multi-view human skeleton automatic labeling method
CN110659391A (en) Video detection method and device
CN111488858B (en) Pedestrian behavior analysis method and system for cloud computing big data financial security system
CN106600652A (en) Panorama camera positioning method based on artificial neural network
IL257092A (en) A method and system for tracking objects between cameras
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
CN111476202B (en) User behavior analysis method and system of security system
CN114120436A (en) Motion recognition model training method, motion recognition method and related device
CN114913550A (en) Wounded person identification method and system based on deep learning under wound point gathering scene
CN114973425A (en) Traffic police gesture recognition method and device
CN114511922A (en) Physical training posture recognition method, device, equipment and storage medium
CN113674306A (en) Pedestrian trajectory acquisition method, system, device and medium based on fisheye lens
CN113114994A (en) Behavior sensing method, device and equipment
Lee et al. Design of integrated control system for preventing the spread of livestock diseases
CN111123734A (en) Complex scene testing method and device for unmanned vehicle and storage medium
CN113449674B (en) Pig face identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210621

Address after: 310000 room 201-1, building 4, Yangfan business center, Liangzhu street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Qisheng Technology Co.,Ltd.

Address before: 15 / F, block a, building 1, Shangding international, hi tech Zone, Chengdu, Sichuan 610000

Applicant before: Yang Jiumei

GR01 Patent grant