CN117392743A - Human running recognition method and device, electronic equipment and storage medium - Google Patents
Human running recognition method and device, electronic equipment and storage medium
- Publication number
- CN117392743A (application number CN202311182981.1A)
- Authority
- CN
- China
- Prior art keywords
- human body
- head
- frame
- human
- target image
- Prior art date
- Legal status
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
Abstract
The application is applicable to the technical field of image processing and provides a human running recognition method and apparatus, an electronic device, and a storage medium. The method includes: acquiring at least two frames of target images including a target pedestrian; determining, based on each target image, a human body center point and a human head detection frame corresponding to each target image; determining the head pixel moving speed of the target pedestrian based on each target image and the corresponding human body center point and human head detection frame; and, if the head pixel moving speed is greater than or equal to a head pixel moving speed threshold, determining that the target pedestrian is in a running state. Compared with prior-art human running recognition methods, which suffer from low classification accuracy or long recognition time, this improves the accuracy of human running recognition and also improves the efficiency of human running recognition in multiple scenes.
Description
Technical Field
The application belongs to the technical field of image processing, and in particular relates to a human running recognition method and apparatus, an electronic device, and a storage medium.
Background
In many scenes such as urban management, it is necessary to recognize whether or not pedestrians are running through image processing.
Existing human running recognition methods suffer from low classification accuracy or long recognition time, resulting in slow response and low recognition efficiency.
The prior-art human running recognition methods therefore have low recognition efficiency in many scenes.
Disclosure of Invention
The embodiment of the application provides a human running recognition method, a device, electronic equipment and a storage medium, which can solve the problem that the human running recognition method in the prior art is low in recognition efficiency in a plurality of scenes.
In a first aspect, an embodiment of the present application provides a method for identifying running of a human body, including:
acquiring target images of at least two frames including a target pedestrian;
based on each target image, determining a human body center point and a human head detection frame corresponding to each target image;
determining the moving speed of the head pixels of the target pedestrian based on each target image and the human body center point and the head detection frame corresponding to each target image;
and if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, determining that the target pedestrian is in a running state.
In one embodiment, each of the target images includes a first frame target image and a second frame target image;
based on each target image, determining a human body center point and a human head detection frame corresponding to each target image, including:
determining a first human body detection frame of the first frame target image and a second human body detection frame of the second frame target image based on the first frame target image and the second frame target image;
determining a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame based on the first human body detection frame and the second human body detection frame;
based on the first human body detection frame and the second human body detection frame, a first human head detection frame corresponding to the first human body center point and a second human head detection frame corresponding to the second human body center point are determined, the first human head detection frame is located in the first human body detection frame, and the second human head detection frame is located in the second human body detection frame.
In one embodiment, each of the target images includes a first frame target image and a second frame target image;
Based on each target image, determining a human body center point and a human head detection frame corresponding to each target image, and further comprising:
based on the first frame target image, synchronously determining a first human body detection frame and a first human head detection frame of the first frame target image;
synchronously determining a second human body detection frame and a second human head detection frame of the second frame target image based on the second frame target image;
and based on the first human body detection frame and the second human body detection frame, respectively determining a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame.
In one embodiment, each target image includes a first frame target image and a second frame target image, where the first frame target image corresponds to a first human body center point and a first human head detection frame, and the second frame target image corresponds to a second human body center point and a second human head detection frame;
based on each of the target images and the human body center point and the human head detection frame corresponding to each of the target images, determining a human head pixel movement speed of the target pedestrian includes:
and determining the moving speed of the head pixels of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image.
In one embodiment, the determining the moving speed of the head pixel of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image includes:
determining a human body pixel moving distance of the target pedestrian based on the first human body center coordinates of the first human body center point and the second human body center coordinates of the second human body center point;
determining the head pixel width of the target pedestrian based on a first corner coordinate and a second corner coordinate of the first head detection frame, or based on a third corner coordinate and a fourth corner coordinate of the second head detection frame, wherein the first corner corresponding to the first corner coordinate and the second corner corresponding to the second corner coordinate are diagonal points of the first head detection frame, and the third corner corresponding to the third corner coordinate and the fourth corner corresponding to the fourth corner coordinate are diagonal points of the second head detection frame;
and determining the head pixel moving speed of the target pedestrian based on the human body pixel moving distance, the head pixel width and the interval time.
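The three steps above can be sketched as follows. This is a minimal sketch: the patent does not give the exact formula, so expressing the speed as head-widths per second (pixel distance divided by head pixel width divided by interval time) is an assumption, as are the helper name and the corner-pair input format:

```python
import math

def head_pixel_speed(center0, center1, head_box, interval_s):
    """Hypothetical sketch of the head pixel moving speed computation.

    center0, center1: (x, y) human body center points in the two frames.
    head_box: ((x_a, y_a), (x_b, y_b)) diagonal corners of a head detection frame.
    interval_s: interval time between the two target images, in seconds.
    """
    (x0, y0), (x1, y1) = center0, center1
    pixel_distance = math.hypot(x1 - x0, y1 - y0)   # human body pixel moving distance
    (xa, ya), (xb, yb) = head_box
    head_width = abs(xb - xa)                       # head pixel width from diagonal corners
    # Assumed normalization: pixels travelled per head-width, per second
    return pixel_distance / head_width / interval_s

# Example: centers move 30 px between frames, head is 20 px wide, frames 0.5 s apart
speed = head_pixel_speed((100, 200), (118, 224), ((95, 180), (115, 200)), 0.5)
```

Normalizing by the head pixel width makes the speed scale-independent, so the same threshold can be applied whether the pedestrian is near or far from the camera.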
In one embodiment, the head pixel movement speed comprises a first head pixel movement speed in a first direction and a second head pixel movement speed in a second direction, wherein the first direction is perpendicular to the second direction;
the determining the moving speed of the head pixel of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image further includes:
determining the first human head pixel moving speed based on the first human body center point, the second human body center point, the first human head detecting frame, the second human head detecting frame and the interval time between the first frame target image and the second frame target image;
determining the second human head pixel moving speed based on the first human body center point, the second human body center point, the first human head detecting frame, the second human head detecting frame and the interval time between the first frame target image and the second frame target image;
and determining the head pixel moving speed of the target pedestrian based on the first head pixel moving speed and the second head pixel moving speed.
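Since the first and second directions are perpendicular, one natural way to combine the two component speeds is their Euclidean magnitude. The patent only says the combined speed is determined from the two components, so this combination rule is an assumption:

```python
import math

def combined_head_pixel_speed(v_first, v_second):
    """Combine perpendicular component speeds (assumed: Euclidean magnitude)."""
    return math.hypot(v_first, v_second)

v = combined_head_pixel_speed(3.0, 4.0)  # components along the two perpendicular axes
```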
In one embodiment, determining the head pixel movement speed threshold comprises:
acquiring a first sample data set, a second sample data set and a preset head pixel moving speed, wherein the first sample data set comprises a plurality of first pixel moving speeds of sample pedestrians in a human running state, and the second sample data set comprises a plurality of second pixel moving speeds of the sample pedestrians in a walking state;
if the preset head pixel moving speed is less than or equal to a preset proportion of the first pixel moving speeds in the first sample data set, and greater than or equal to a preset proportion of the second pixel moving speeds in the second sample data set, determining the preset head pixel moving speed to be the head pixel moving speed threshold.
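The threshold check above can be sketched as follows. The 0.8 proportion and the sample values are hypothetical; the patent only calls them a "preset proportion" and sample data sets:

```python
def is_valid_threshold(running_speeds, walking_speeds, candidate, proportion):
    """Check whether a candidate preset speed separates the two sample sets:
    it must lie at or below at least `proportion` of the running-sample speeds
    and at or above at least `proportion` of the walking-sample speeds."""
    below_running = sum(candidate <= v for v in running_speeds) / len(running_speeds)
    above_walking = sum(candidate >= v for v in walking_speeds) / len(walking_speeds)
    return below_running >= proportion and above_walking >= proportion

running = [2.8, 3.1, 3.5, 2.9, 3.3]   # hypothetical first sample set (running state)
walking = [0.9, 1.1, 1.0, 1.2, 0.8]   # hypothetical second sample set (walking state)
ok = is_valid_threshold(running, walking, candidate=2.0, proportion=0.8)
```

A candidate that sits inside the running cluster, such as 3.4 here, would fail the first condition and be rejected.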
In a second aspect, an embodiment of the present application provides a human running identification device, including:
the acquisition module is used for acquiring target images of at least two frames including a target pedestrian;
the first determining module is used for determining a human body center point and a human head detection frame corresponding to each target image based on each target image;
the second determining module is used for determining the moving speed of the head pixels of the target pedestrian based on the target images, the human body center point corresponding to the target images and the head detection frame;
and the third determining module is used for determining that the target pedestrian is in a running state if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, which when executed by a processor implements a method as in any of the first aspects.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
according to the identification method for human running provided by the first aspect of the embodiment of the application, target images of at least two frames including a target pedestrian are acquired; based on each target image, determining a human body center point and a human head detection frame corresponding to each target image; determining the moving speed of the head pixels of the target pedestrians based on each target image and the human body center point and the head detection frame corresponding to each target image; if the moving speed of the human head pixels is greater than or equal to the moving speed threshold of the human head pixels, the human running state of the target pedestrian is determined, and as the moving speed of the human head pixels of the target pedestrian can be determined through the human body center point corresponding to the target image of the target pedestrian and the human head detection frame and compared with the moving speed threshold of the human head pixels, whether the target pedestrian is in the running state or not can be judged.
It will be appreciated that the advantages of the second, third and fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for identifying human running according to an embodiment of the present application;
fig. 2 is a schematic flow chart of determining a human body center point and a human head detection frame corresponding to each target image based on each target image provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of a first frame of target image and a second frame of target image combined in an image coordinate system according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of determining a human body center point and a human head detection frame corresponding to each target image based on each target image according to another embodiment of the present application;
Fig. 5 is a schematic flow chart of determining a moving speed of a head pixel of a target pedestrian based on a first human body center point, a second human body center point, a first head detection frame, a second head detection frame, and an interval time between a first frame target image and a second frame target image according to an embodiment of the present application;
fig. 6 is a schematic flow chart of determining a moving speed of a head pixel of a target pedestrian based on a first human body center point, a second human body center point, a first head detection frame, a second head detection frame, and an interval time between a first frame target image and a second frame target image according to another embodiment of the present application;
fig. 7 is a schematic flow chart of determining a moving speed of a head pixel of a target pedestrian based on a first human body center point, a second human body center point, a first head detection frame, a second head detection frame, and an interval time between a first frame target image and a second frame target image according to another embodiment of the present application;
FIG. 8 is a flowchart illustrating a process of determining a running state of a target pedestrian if a head pixel moving speed is greater than or equal to a head pixel moving speed threshold according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a running recognition device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. "plurality" means "two or more".
In many scenes such as urban management, it is necessary to recognize whether or not pedestrians are running through image processing.
Existing human running recognition methods suffer from low classification accuracy or long recognition time, resulting in slow response and low recognition efficiency.
Therefore, the prior-art human running recognition methods have low recognition efficiency in many scenes.
In view of the above problems, the human running recognition method provided in the embodiments of the present application acquires at least two frames of target images including a target pedestrian; determines, based on each target image, a human body center point and a human head detection frame corresponding to each target image; determines the head pixel moving speed of the target pedestrian based on each target image and the corresponding human body center point and human head detection frame; and, if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, determines that the target pedestrian is in a running state. Since the head pixel moving speed of the target pedestrian can be determined from the human body center point and human head detection frame corresponding to the target images and compared with the head pixel moving speed threshold, whether the target pedestrian is in a running state can be judged.
The running recognition method provided in the present application is exemplarily described below with reference to specific embodiments.
In a first aspect, as shown in fig. 1, the present embodiment provides a method for identifying running of a human body, including:
s100, acquiring target images of at least two frames including a target pedestrian.
In one embodiment, a device with an image or video acquisition function acquires at least two frames of target images including a target pedestrian. For example, a mobile terminal such as a mobile electronic eye shoots at least two frames of images including the target pedestrian, or at least two frames of images including the target pedestrian are acquired from the video stream of a monitoring camera. The target pedestrian is one or more pedestrians to be subjected to human running recognition, so that the running state of the target pedestrian can be recognized from the at least two frames of images. The specific number of frames is determined by the scene requirements: the more target image frames, the more accurate the recognition of the running state, but the longer the recognition takes and the lower the recognition efficiency.
S200, based on each target image, determining a human body center point and a human head detection frame corresponding to each target image.
In one embodiment, the human body center point and the human head detection frame corresponding to each target image are determined based on each target image, so that the interval time between images can be obtained from the target images, the pedestrian pixel moving distance can be obtained from the human body center points, and the head pixel width can be obtained from the human head detection frames, from which the head pixel moving speed is then obtained.
In one embodiment, each target image includes a first frame target image and a second frame target image, wherein the first frame target image and the second frame target image each include a target pedestrian, so as to obtain a human body detection frame and a human head detection frame of the target pedestrian.
In one embodiment, as shown in fig. 2, based on each target image, determining a human body center point and a human head detection frame corresponding to each target image includes:
s211, determining a first human body detection frame of the first frame target image and a second human body detection frame of the second frame target image based on the first frame target image and the second frame target image.
In one embodiment, as shown in fig. 3, the target pedestrian is tracked by a trained human body detection model based on the first frame target image (left image in the figure) and the second frame target image (right image in the figure), and the first human body detection frame of the target pedestrian in the first frame target image and the second human body detection frame of the target pedestrian in the second frame target image are determined, which facilitates determining the pixel moving distance of the target pedestrian from the human body detection frames.
In one embodiment, the human body detection model includes a YOLO (You Only Look Once) target detection model or a CenterNet network model. The first frame target image and the second frame target image are detected by the constructed YOLO target detection model or CenterNet network model to determine whether they include the target pedestrian, and the first human body detection frame of the target pedestrian in the first frame target image and the second human body detection frame in the second frame target image are acquired. The CenterNet network model includes a backbone network model and a target detection model, and the target images of the plurality of frames are images obtained by normalizing the pixel values of each frame of image.
S212, determining a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame based on the first human body detection frame and the second human body detection frame.
In one embodiment, as shown in fig. 3, based on the first human body detection frame and the second human body detection frame, the geometric center of the first human body detection frame is adopted as a corresponding first human body center point, and the geometric center of the second human body detection frame is adopted as a corresponding second human body center point, so that the coordinates of the first human body center point and the second human body center point can be conveniently and quickly confirmed, and the human body running recognition efficiency is improved.
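Taking the geometric center of a detection frame, as described above, is a one-line computation. A minimal sketch, assuming the frame is given as (x_min, y_min, x_max, y_max) corner coordinates:

```python
def body_center(box):
    """Geometric center of a human body detection frame (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

# Example: a 40x120-pixel detection frame
center = body_center((40, 60, 80, 180))
```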
S213, determining a first human head detection frame corresponding to the first human body center point and a second human head detection frame corresponding to the second human body center point based on the first human body detection frame and the second human body detection frame.
In one embodiment, as shown in fig. 3, a first human head detection frame corresponding to the first human body center point and a second human head detection frame corresponding to the second human body center point are determined by a trained human head detection model based on the first human body detection frame and the second human body detection frame; the first human head detection frame is located in the first human body detection frame, and the second human head detection frame is located in the second human body detection frame. Determining the human head detection frame of the target pedestrian from within the human body detection frame, after the human body detection frame has been acquired, narrows the image range in which the head is searched, which increases the operation speed and hence the efficiency of human running recognition. At the same time, because the first human head detection frame corresponds to the first human body center point and the second human head detection frame corresponds to the second human body center point, each human head detection frame is associated with its corresponding human body center point, false recognition of the human head detection frame is avoided, and the accuracy of human running recognition is improved.
In one embodiment, training the human head detection model includes: acquiring a first training set and a first verification set, wherein the first training set includes labeled objects similar to a human head and labeled target images containing target pedestrians, and the first verification set includes unlabeled target images containing target pedestrians; and training the human head detection model with the first training set until the accuracy of the human head detection frames predicted by the model on the first verification set is greater than or equal to a preset percentage, and the fluctuation range of the accuracy over a preset number of predictions is less than or equal to a preset percentage range. The preset percentage is greater than or equal to 99%, the preset percentage range is 0.1%, and the human head detection model includes at least one of a YOLO target detection model, a CenterNet network model, or Faster R-CNN.
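The stopping criterion above can be sketched as a simple check over recent validation accuracies. The window length and the use of max-minus-min as the "fluctuation range" are assumptions; the patent only names a preset number of predictions and a preset percentage range:

```python
def should_stop_training(recent_accuracies, min_accuracy=0.99, max_fluctuation=0.001):
    """Stop when every recent validation accuracy reaches the preset percentage
    (>= 99%) and the spread over the preset number of predictions stays within
    the preset percentage range (0.1%)."""
    return (min(recent_accuracies) >= min_accuracy
            and max(recent_accuracies) - min(recent_accuracies) <= max_fluctuation)

done = should_stop_training([0.9910, 0.9915, 0.9912])        # accurate and stable
not_done = should_stop_training([0.9800, 0.9900, 0.9910])    # not yet at 99% throughout
```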
In another embodiment, as shown in fig. 4, based on each target image, a human body center point and a human head detection frame corresponding to each target image are determined, and further including:
s221, based on the first frame target image, a first human body detection frame and a first human head detection frame of the first frame target image are synchronously determined.
S222, based on the second frame target image, synchronously determining a second human body detection frame and a second human head detection frame of the second frame target image.
S223, based on the first human body detection frame and the second human body detection frame, a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame are respectively determined.
In another embodiment, the first human body detection frame and the first human head detection frame of the first frame target image are determined directly and synchronously in the first frame target image through the human body detection model and the human head detection model, and the second human body detection frame and the second human head detection frame of the second frame target image are likewise determined synchronously in the second frame target image. It is therefore not necessary to first determine the human body detection frame in each target image and then determine the human head detection frame within that human body detection frame, which simplifies the processing.
S300, determining the moving speed of the head pixels of the target pedestrians based on the target images, the human body center points corresponding to the target images and the head detection frames.
In one embodiment, the head pixel moving speed of the target pedestrian is determined based on each target image and the human body center point and head detection frame corresponding to each target image, without mapping to the physical speed of the target pedestrian in the actual scene, so the head pixel moving speed is determined more quickly and the efficiency of human running recognition is improved.
In one embodiment, each target image includes a first frame target image and a second frame target image, the first frame target image corresponding to a first human body center point and a first human head detection frame, the second frame target image corresponding to a second human body center point and a second human head detection frame.
In one embodiment, determining the moving speed of the head pixel of the target pedestrian based on each target image and the human body center point and the head detection frame corresponding to each target image includes:
and determining the moving speed of the head pixels of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image.
In one embodiment, as shown in fig. 5, determining the head pixel moving speed of the target pedestrian based on the first human body center point, the second human body center point, the first human head detection frame, the second human head detection frame, and the interval time between the first frame target image and the second frame target image includes:
S311, determining the human body pixel moving distance of the target pedestrian based on the first human body center coordinates of the first human body center point and the second human body center coordinates of the second human body center point.
In one embodiment, since the human body detection model can form a motion track of the target pedestrian at different moments in an image coordinate system, as shown in fig. 3, the first frame target image and the second frame target image including the target pedestrian are synthesized in the same image coordinate system, that is, the two frames are overlapped in the same image coordinate system to obtain a synthesized target image. The image coordinate system of the motion track takes the upper left corner of the synthesized image as the origin, the vertically downward direction as the X axis, and the horizontally rightward direction as the Y axis. The first human body center coordinate of the target pedestrian in the first frame target image on the left is (x0, y0), and the second human body center coordinate of the target pedestrian in the second frame target image on the right is (x1, y1). Based on the first human body center coordinate of the first human body center point A0 and the second human body center coordinate of the second human body center point A1, the human body pixel moving distance L of the target pedestrian is determined through the Euclidean distance calculation formula, where L is the pixel distance between the first human body center point A0 and the second human body center point A1.
In one embodiment, the Euclidean distance calculation formula for obtaining the human body pixel moving distance L is as follows:

L = √((x1 − x0)² + (y1 − y0)²)
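As a minimal illustration of this step, the Euclidean pixel distance between the two body center points can be sketched in Python (the function name and the sample coordinates are assumptions for illustration, not from the patent):

```python
import math

def body_pixel_distance(a0, a1):
    """Euclidean pixel distance L between body center points A0 and A1,
    each given as an (x, y) tuple in the image coordinate system."""
    return math.hypot(a1[0] - a0[0], a1[1] - a0[1])

# Illustrative centers: a 3-4-5 pixel triangle gives a distance of 5.0.
print(body_pixel_distance((100.0, 240.0), (103.0, 244.0)))  # 5.0
```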
S312, the human head pixel width of the target pedestrian is determined based on the first corner coordinate and the second corner coordinate of the first human head detection frame or based on the third corner coordinate and the fourth corner coordinate of the second human head detection frame.
In one embodiment, as shown in fig. 3, the first corner point corresponding to the first corner point coordinate and the second corner point corresponding to the second corner point coordinate are diagonal points of the first human head detection frame, that is, the corner point H01 and the diagonal point H03 of the first human head detection frame; the third corner point corresponding to the third corner point coordinate and the fourth corner point corresponding to the fourth corner point coordinate are diagonal points of the second human head detection frame, namely the corner point H11 and the diagonal point H13 of the second human head detection frame. Therefore, the head pixel width of the target pedestrian in the horizontal direction or the vertical direction can be determined from the two diagonal corner coordinates of the rectangular head detection frame.
In one embodiment, as shown in FIG. 3, based on the first corner coordinate (xH01, yH01) of the corner point H01 of the first human head detection frame and the second corner coordinate (xH03, yH03) of its diagonal point H03, or based on the third corner coordinate (xH11, yH11) of the corner point H11 of the second human head detection frame and the fourth corner coordinate (xH13, yH13) of its diagonal point H13, the human head pixel width of the target pedestrian is determined: the head pixel width in the horizontal direction is W = |yH03 − yH01| (or W = |yH13 − yH11|), and in the vertical direction is W = |xH03 − xH01| (or W = |xH13 − xH11|).
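This width step can be sketched as follows. The axis convention follows the coordinate system described above (X points down, Y points right, so the horizontal width lies along the Y components); the function name and corner values are illustrative assumptions:

```python
def head_pixel_width(corner, diagonal, horizontal=True):
    """Head pixel width from two diagonal corners of a rectangular head
    detection frame, each given as (x, y) in an image coordinate system
    whose X axis points down and Y axis points right."""
    axis = 1 if horizontal else 0  # horizontal width lies along the Y axis
    return abs(diagonal[axis] - corner[axis])

# Illustrative corners H01 and H03 of a head frame 24 px wide, 30 px tall.
print(head_pixel_width((50.0, 10.0), (80.0, 34.0)))  # 24.0
```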
S313, determining the head pixel moving speed of the target pedestrian based on the human body pixel moving distance, the head pixel width and the interval time.
In one embodiment, the head pixel moving speed v of the target pedestrian is determined through the head pixel moving speed calculation formula based on the human body pixel moving distance L, the head pixel width W and the interval time t. Since the head pixel moving speed does not need to be mapped to the physical speed of the target pedestrian in the actual scene, and the head pixel width is determined from only one head detection frame, the head pixel moving speed of the target pedestrian is determined more quickly, and the efficiency of human running recognition is further improved.
In one embodiment, the head pixel movement speed is calculated as:
v=L/W/t
wherein v is the moving speed of the human head pixel, L is the moving distance of the human body pixel, W is the width of the human head pixel, and t is the interval time, namely the interval time is the time period between the time corresponding to the first frame of target image and the time corresponding to the second frame of target image.
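The formula v = L/W/t above can be sketched directly (names and sample values are illustrative; normalizing by the head pixel width makes v a scale-invariant quantity in head-widths per second):

```python
def head_pixel_speed(distance_l, width_w, interval_t):
    """v = L / W / t: body pixel moving distance normalized by the head
    pixel width per unit time, i.e. head-widths per second."""
    return distance_l / width_w / interval_t

# Illustrative: 60 px moved, 20 px head width, 0.5 s interval.
print(head_pixel_speed(60.0, 20.0, 0.5))  # 6.0
```

Because the speed is expressed in head-widths per second, the same threshold can apply to pedestrians near or far from the camera without calibrating physical distances.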
In one embodiment, the value range of the interval time is less than or equal to 1 second. In this embodiment, the specific value of the interval time is set according to the requirements of human running recognition in different scenes; for example, the interval time may also be 0.5 second.
In another embodiment, the head pixel movement speed includes a first head pixel movement speed in a first direction and a second head pixel movement speed in a second direction, wherein the first direction is perpendicular to the second direction, e.g., the first direction is a horizontal direction (i.e., the image coordinate system Y-axis direction) and the second direction is a vertical direction (i.e., the image coordinate system X-axis direction).
In another embodiment, as shown in fig. 6, determining the moving speed of the head pixel of the target pedestrian based on the first human body center point, the second human body center point, the first human head detection frame, the second human head detection frame, and the interval time between the first frame target image and the second frame target image further includes:
S321, determining the first head pixel moving speed based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image.
In another embodiment, the first head pixel moving speed is determined based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image. Since the first head pixel moving speed in the horizontal direction is calculated, the accuracy of the head pixel moving speed is further improved, and the accuracy of human running recognition is also further improved.
In another embodiment, as shown in fig. 7, determining the first head pixel moving speed based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image includes:
S3211, determining a first human body pixel moving distance of the target pedestrian in the first direction based on the first human body center coordinates of the first human body center point and the second human body center coordinates of the second human body center point.
In another embodiment, the first human body center coordinate of the target pedestrian in the first frame target image is (x0, y0), and the second human body center coordinate of the target pedestrian in the second frame target image is (x1, y1); the first human body pixel moving distance of the target pedestrian in the first direction is determined as L1 = |y1 − y0|.
S3212, determining a first head pixel width of the target pedestrian in the first direction based on the first corner coordinates of the first head detection frame and the second corner coordinates of its diagonal point, and the third corner coordinates of the second head detection frame and the fourth corner coordinates of its diagonal point.
In another embodiment, the position of the first corner point corresponding to the first corner point coordinate in the first human head detection frame is the same as the position of the third corner point corresponding to the third corner point coordinate in the second human head detection frame; for example, the position of the corner point H01 of the first human head detection frame is the same as the position of the corner point H11 of the second human head detection frame.
In another embodiment, based on the first corner coordinate (xH01, yH01) of the corner point H01 of the first human head detection frame and the second corner coordinate (xH03, yH03) of the diagonal point H03, and the third corner coordinate (xH11, yH11) of the corner point H11 of the second human head detection frame and the fourth corner coordinate (xH13, yH13) of the diagonal point H13, the first head pixel width of the target pedestrian along the first direction is determined through the first head pixel width calculation formula. Since the average value of the head pixel widths of the first head detection frame and the second head detection frame in the first direction is taken as the first head pixel width W1, the accuracy of the first head pixel width is improved, and the accuracy of human running recognition is also improved.
In another embodiment, the first head pixel width calculation formula is:

W1 = (|yH03 − yH01| + |yH13 − yH11|) / 2

wherein W1 is the first head pixel width.
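The two-frame averaged head width can be sketched as follows (a hypothetical helper, assuming (x, y) corner coordinates with the horizontal first direction along the Y axis):

```python
def mean_head_pixel_width(h01, h03, h11, h13, axis=1):
    """Average head pixel width over two frames along one image axis
    (axis=1 is the horizontal/Y direction, axis=0 the vertical/X one).
    h01/h03 are diagonal corners of the first head frame, h11/h13 of
    the second."""
    w_first = abs(h03[axis] - h01[axis])   # first frame's head width
    w_second = abs(h13[axis] - h11[axis])  # second frame's head width
    return (w_first + w_second) / 2.0

# Illustrative frames 24 px and 22 px wide average to 23 px.
print(mean_head_pixel_width((50, 10), (74, 34), (52, 20), (74, 42)))  # 23.0
```

Averaging over both frames damps per-frame detector jitter in the width estimate, which is why the patent credits it with improving accuracy.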
S3213, determining a first head pixel movement speed of the target pedestrian in the first direction based on the first head pixel movement distance and the first head pixel width.
In another embodiment, a first head pixel movement speed of the target pedestrian in the first direction is determined by a first head pixel movement speed calculation based on the first head pixel movement distance and the first head pixel width.
In another embodiment, the first head pixel movement velocity is calculated as:
v1=L1/W1/t
wherein v1 is the first head pixel moving speed, L1 is the first human body pixel moving distance, W1 is the first head pixel width, and t is the interval time, i.e. the time period between the time corresponding to the first frame target image and the time corresponding to the second frame target image.
S322, determining a second human head pixel moving speed based on the first human body center point, the second human body center point, the first human head detecting frame, the second human head detecting frame and the interval time between the first frame target image and the second frame target image.
In another embodiment, determining the second human head pixel movement speed based on the first human body center point, the second human body center point, the first human head detection frame, the second human head detection frame, and the interval time between the first frame target image and the second frame target image includes:
S3221, determining a second human body pixel moving distance of the target pedestrian in the second direction based on the first human body center coordinates of the first human body center point and the second human body center coordinates of the second human body center point.
In another embodiment, the first human body center coordinate of the target pedestrian in the first frame target image is (x0, y0), and the second human body center coordinate of the target pedestrian in the second frame target image is (x1, y1); the second human body pixel moving distance of the target pedestrian in the second direction is determined as L2 = |x1 − x0|.
S3222, determining a second head pixel width of the target pedestrian along the second direction based on the first corner coordinates of the first head detection frame and the second corner coordinates of its diagonal point, and the third corner coordinates of the second head detection frame and the fourth corner coordinates of its diagonal point.
In another embodiment, the position of the first corner point corresponding to the first corner point coordinate in the first human head detection frame is the same as the position of the third corner point corresponding to the third corner point coordinate in the second human head detection frame; for example, the position of the corner point H01 of the first human head detection frame is the same as the position of the corner point H11 of the second human head detection frame.
In another embodiment, based on the first corner coordinate (xH01, yH01) of the corner point H01 of the first human head detection frame and the second corner coordinate (xH03, yH03) of the diagonal point H03, and the third corner coordinate (xH11, yH11) of the corner point H11 of the second human head detection frame and the fourth corner coordinate (xH13, yH13) of the diagonal point H13, the second head pixel width of the target pedestrian along the second direction is determined through the second head pixel width calculation formula. Since the average value of the head pixel widths of the first head detection frame and the second head detection frame in the second direction is taken as the second head pixel width W2, the accuracy of the second head pixel width is improved, and the accuracy of human running recognition is also improved.
In another embodiment, the second head pixel width calculation formula is:

W2 = (|xH03 − xH01| + |xH13 − xH11|) / 2

wherein W2 is the second head pixel width.
S3223, determining a second head pixel moving speed of the target pedestrian in the second direction based on the second human body pixel moving distance and the second head pixel width.
In another embodiment, a second head pixel moving speed of the target pedestrian in the second direction is determined through the second head pixel moving speed calculation formula based on the second human body pixel moving distance and the second head pixel width.
In another embodiment, the second head pixel movement speed is calculated as:
v2=L2/W2/t
wherein v2 is the second head pixel moving speed, L2 is the second human body pixel moving distance, W2 is the second head pixel width, and t is the interval time, i.e. the interval time is the time period between the time corresponding to the first frame target image and the time corresponding to the second frame target image.
S323, determining the head pixel moving speed of the target pedestrian based on the first head pixel moving speed and the second head pixel moving speed.
In another embodiment, based on the first head pixel moving speed v1 and the second head pixel moving speed v2, the head pixel moving speed of the target pedestrian is determined through the composite head pixel moving speed calculation formula. Since the first head pixel moving speed of the target pedestrian along the first direction is determined first, then the second head pixel moving speed along the second direction, and the two are then combined into the head pixel moving speed, the accuracy of the head pixel moving speed is further improved, and the accuracy of human running recognition is also further improved.
In another embodiment, the composite head pixel moving speed calculation formula is:

v = √(v1² + v2²)

wherein v is the head pixel moving speed, v1 is the first head pixel moving speed, and v2 is the second head pixel moving speed.
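The combination of the two directional speeds can be sketched as follows. Since the first and second directions are perpendicular, composing the magnitudes as a Euclidean norm is the natural reading of the composite formula; the function name is an illustrative assumption:

```python
import math

def composite_head_pixel_speed(v1, v2):
    """Combine the horizontal (v1) and vertical (v2) head pixel moving
    speeds; for perpendicular directions the magnitudes compose as
    v = sqrt(v1**2 + v2**2)."""
    return math.hypot(v1, v2)

# Illustrative: 3.0 horizontal + 4.0 vertical head-widths/s combine to 5.0.
print(composite_head_pixel_speed(3.0, 4.0))  # 5.0
```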
S400, if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, determining that the target pedestrian is in a running state.
In one embodiment, if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, the target pedestrian is determined to be in a running state. Since the head pixel moving speed does not need to be mapped to the physical moving speed of the target pedestrian in the scene, the operation time of human running recognition is reduced, and the efficiency of human running recognition is improved.
In one embodiment, as shown in FIG. 8, determining the head pixel movement speed threshold includes:
S410, acquiring a first sample data set, a second sample data set and a preset head pixel moving speed, wherein the first sample data set includes first pixel moving speeds of a plurality of sample pedestrians in a running state, and the second sample data set includes second pixel moving speeds of the plurality of sample pedestrians in a non-running (for example, walking) state.
S420, if the preset head pixel moving speed is less than or equal to the first pixel moving speeds of a preset proportion of the first sample data set, and the preset head pixel moving speed is greater than or equal to the second pixel moving speeds of a preset proportion of the second sample data set, determining the preset head pixel moving speed as the head pixel moving speed threshold.
In one embodiment, the value range of the preset proportion is greater than or equal to 95%, so that a more accurate head pixel moving speed threshold can be obtained in multiple scenes, which improves the accuracy of human running recognition in those scenes.
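The threshold check of S420 can be sketched as follows (a hedged illustration: the sample speeds, the function name, and the labeling of the second data set as walking-state are assumptions based on the description above):

```python
def is_valid_threshold(candidate, running_speeds, walking_speeds, ratio=0.95):
    """Check a candidate head pixel speed threshold: it should sit at or
    below at least `ratio` of the running-state sample speeds, and at or
    above at least `ratio` of the walking-state sample speeds."""
    frac_running_above = sum(s >= candidate for s in running_speeds) / len(running_speeds)
    frac_walking_below = sum(s <= candidate for s in walking_speeds) / len(walking_speeds)
    return frac_running_above >= ratio and frac_walking_below >= ratio

# Illustrative sample speeds (head-widths per second).
running = [5.1, 5.8, 6.4, 7.0, 7.7]
walking = [0.9, 1.3, 1.6, 2.0, 2.4]
print(is_valid_threshold(4.0, running, walking))  # True
print(is_valid_threshold(6.0, running, walking))  # False (misses slower runners)
```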
Compared with the prior art, the embodiment of the application has the beneficial effects that:
According to the human running recognition method provided in the first aspect of the embodiments of the present application, target images of at least two frames including a target pedestrian are acquired; based on each target image, a human body center point and a human head detection frame corresponding to each target image are determined; the head pixel moving speed of the target pedestrian is determined based on each target image and the corresponding human body center point and head detection frame; and if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, the target pedestrian is determined to be in a running state. Since the head pixel moving speed of the target pedestrian can be determined from the human body center points and head detection frames corresponding to the target images and compared with the head pixel moving speed threshold, whether the target pedestrian is in a running state can be judged.
The running recognition apparatus provided in the present application is exemplarily described below with reference to the accompanying drawings.
Corresponding to the running recognition method described in the above embodiments, in a second aspect, as shown in fig. 9, an embodiment of the present application provides a human running recognition device 100, including:
the acquiring module 110 is configured to acquire a target image including at least two frames of a target pedestrian.
The first determining module 120 is configured to determine, based on each of the target images, a human body center point and a human head detection frame corresponding to each of the target images.
And a second determining module 130, configured to determine a moving speed of a head pixel of the target pedestrian based on each of the target images and the human body center point and the head detection frame corresponding to each of the target images.
And a third determining module 140, configured to determine that the target pedestrian is in a running state if the moving speed of the head pixel is greater than or equal to the moving speed threshold of the head pixel.
It should be noted that, because the content of information interaction and execution process between the modules/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and details thereof are not repeated herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In a third aspect, embodiments of the present application further provide an electronic device including a memory, a processor 902, and a computer program stored in the memory and executable on the processor, where the processor 902 implements the steps of the running recognition method described above when executing the computer program.
In applications, an electronic device may include, but is not limited to, a processor and memory, and an electronic device may also include more or fewer components than shown, or may combine certain components, or different components, e.g., input-output devices, network access devices, etc. The input output devices may include cameras, audio acquisition/playback devices, display screens, and the like. The network access device may include a network module for wireless networking with an external device.
In application, the processor may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In applications, the memory may in some embodiments be an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device. The memory may also be an external storage device of the electronic device in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the electronic device. The memory may also include both internal storage units and external storage devices of the electronic device. The memory is used to store an operating system, application programs, boot Loader (Boot Loader), data, and other programs, etc., such as program code for a computer program, etc. The memory may also be used to temporarily store data that has been output or is to be output.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor may implement the steps of the method embodiments described above.
All or part of the process in the method of the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to an electronic device, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative apparatus and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (10)
1. A method for identifying human running, comprising:
acquiring target images of at least two frames including a target pedestrian;
based on each target image, determining a human body center point and a human head detection frame corresponding to each target image;
Determining the moving speed of the head pixels of the target pedestrian based on each target image and the human body center point and the head detection frame corresponding to each target image;
and if the head pixel moving speed is greater than or equal to the head pixel moving speed threshold, determining that the target pedestrian is in a running state.
2. The method of claim 1, wherein each of the target images comprises a first frame target image and a second frame target image;
based on each target image, determining a human body center point and a human head detection frame corresponding to each target image, including:
determining a first human body detection frame of the first frame target image and a second human body detection frame of the second frame target image based on the first frame target image and the second frame target image;
determining a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame based on the first human body detection frame and the second human body detection frame;
based on the first human body detection frame and the second human body detection frame, determining a first human head detection frame corresponding to the first human body center point and a second human head detection frame corresponding to the second human body center point, wherein the first human head detection frame is positioned in the first human body detection frame, and the second human head detection frame is positioned in the second human body detection frame.
3. The method of claim 1, wherein each of the target images comprises a first frame target image and a second frame target image;
based on each target image, determining a human body center point and a human head detection frame corresponding to each target image, and further comprising:
based on the first frame target image, synchronously determining a first human body detection frame and a first human head detection frame of the first frame target image;
synchronously determining a second human body detection frame and a second human head detection frame of the second frame target image based on the second frame target image;
and based on the first human body detection frame and the second human body detection frame, respectively determining a first human body center point corresponding to the first human body detection frame and a second human body center point corresponding to the second human body detection frame.
4. The method of claim 1, wherein each of the target images comprises a first frame of target image and a second frame of target image, the first frame of target image corresponding to a first human body center point and a first human head detection frame, the second frame of target image corresponding to a second human body center point and a second human head detection frame;
based on each of the target images and the human body center point and the human head detection frame corresponding to each of the target images, determining a human head pixel movement speed of the target pedestrian includes:
and determining the head pixel moving speed of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image.
5. The method of claim 4, wherein the determining the head pixel moving speed of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image comprises:
determining a human body pixel moving distance of the target pedestrian based on the first human body center coordinates of the first human body center point and the second human body center coordinates of the second human body center point;
determining a head pixel width of the target pedestrian based on a first corner coordinate and a second corner coordinate of the first head detection frame, or based on a third corner coordinate and a fourth corner coordinate of the second head detection frame, wherein a first corner corresponding to the first corner coordinate and a second corner corresponding to the second corner coordinate are diagonal corners of the first head detection frame, and a third corner corresponding to the third corner coordinate and a fourth corner corresponding to the fourth corner coordinate are diagonal corners of the second head detection frame;
and determining the head pixel moving speed of the target pedestrian based on the human body pixel moving distance, the head pixel width and the interval time.
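Claim 5 combines a body pixel moving distance, a head pixel width and the frame interval into a single speed, but does not spell out the formula. One plausible reading, sketched here under the assumption that the distance is normalized by the head pixel width (making the speed roughly scale-invariant to how near the pedestrian is to the camera) and then divided by the interval time:

```python
import math

def head_pixel_speed(center1, center2, head_box, dt):
    # center1, center2: (x, y) human body center points in two frames
    # head_box: ((x1, y1), (x2, y2)) diagonal corners of a head detection frame
    # dt: interval time between the two frames, in seconds
    dx = center2[0] - center1[0]
    dy = center2[1] - center1[1]
    distance = math.hypot(dx, dy)                      # body pixel moving distance
    head_width = abs(head_box[1][0] - head_box[0][0])  # head pixel width
    return distance / head_width / dt                  # head widths per second
```

The head-width normalization is an assumption; the claim only states that distance, width and interval time all enter the computation.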
6. The method of claim 4, wherein the head pixel moving speed comprises a first head pixel moving speed in a first direction and a second head pixel moving speed in a second direction, the first direction being perpendicular to the second direction;
the determining the head pixel moving speed of the target pedestrian based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame, and the interval time between the first frame target image and the second frame target image further comprises:
determining the first head pixel moving speed based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image;
determining the second head pixel moving speed based on the first human body center point, the second human body center point, the first head detection frame, the second head detection frame and the interval time between the first frame target image and the second frame target image;
and determining the head pixel moving speed of the target pedestrian based on the first head pixel moving speed and the second head pixel moving speed.
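Claim 6 splits the speed into two perpendicular components. A sketch of that decomposition, assuming the two directions are the image x and y axes and the same head-width normalization as in claim 5 (both are assumptions; the claim only requires perpendicularity):

```python
import math

def directional_head_speeds(center1, center2, head_width, dt):
    # per-axis head pixel moving speeds in two perpendicular directions
    vx = abs(center2[0] - center1[0]) / head_width / dt  # first direction (x)
    vy = abs(center2[1] - center1[1]) / head_width / dt  # second direction (y)
    return vx, vy, math.hypot(vx, vy)  # two components plus the combined speed
```

Combining the components with `math.hypot` recovers the same overall magnitude as computing the displacement distance directly.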
7. The method of claim 1, wherein determining the head pixel movement speed threshold comprises:
acquiring a first sample data set, a second sample data set and a preset head pixel moving speed, wherein the first sample data set comprises a plurality of first pixel moving speeds of sample pedestrians in a running state, and the second sample data set comprises a plurality of second pixel moving speeds of sample pedestrians in a walking state;
if the preset head pixel moving speed is less than or equal to the first pixel moving speeds of at least a preset proportion of the samples in the first sample data set, and is greater than or equal to the second pixel moving speeds of at least a preset proportion of the samples in the second sample data set, determining the preset head pixel moving speed as the head pixel moving speed threshold.
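The threshold check in claim 7 can be sketched as follows. The proportion value and how candidate speeds are generated are not given in the claim; the `ratio=0.95` default and the single-candidate check are illustrative assumptions:

```python
def accept_threshold(running_speeds, walking_speeds, candidate, ratio=0.95):
    # candidate passes if it sits at or below at least `ratio` of the
    # running-state speeds and at or above at least `ratio` of the
    # walking-state speeds, separating the two sample distributions
    run_ok = sum(v >= candidate for v in running_speeds) >= ratio * len(running_speeds)
    walk_ok = sum(v <= candidate for v in walking_speeds) >= ratio * len(walking_speeds)
    return candidate if (run_ok and walk_ok) else None
```

In practice one would sweep a range of candidate speeds and keep the first (or middle) value that passes both conditions.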
8. A human running recognition device, comprising:
the acquisition module is used for acquiring target images of at least two frames including a target pedestrian;
the first determining module is used for determining, based on each target image, a human body center point and a human head detection frame corresponding to each target image;
the second determining module is used for determining a head pixel moving speed of the target pedestrian based on each target image and the human body center point and human head detection frame corresponding to each target image;
and the third determining module is used for determining that the target pedestrian is in a running state if the head pixel moving speed is greater than or equal to a head pixel moving speed threshold.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311182981.1A CN117392743A (en) | 2023-09-13 | 2023-09-13 | Human running recognition method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117392743A true CN117392743A (en) | 2024-01-12 |
Family
ID=89436301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311182981.1A Pending CN117392743A (en) | 2023-09-13 | 2023-09-13 | Human running recognition method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117392743A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443210B (en) | Pedestrian tracking method and device and terminal | |
US20190122059A1 (en) | Signal light detection | |
CN111507327B (en) | Target detection method and device | |
CN111259868B (en) | Reverse vehicle detection method, system and medium based on convolutional neural network | |
CN107945523B (en) | Road vehicle detection method, traffic parameter detection method and device | |
CN111047908B (en) | Detection device and method for cross-line vehicle and video monitoring equipment | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
CN111179302B (en) | Moving target detection method and device, storage medium and terminal equipment | |
CN111428644A (en) | Zebra crossing region monitoring method, system and medium based on deep neural network | |
CN116249015A (en) | Camera shielding detection method and device, camera equipment and storage medium | |
CN111382606A (en) | Tumble detection method, tumble detection device and electronic equipment | |
CN112101139B (en) | Human shape detection method, device, equipment and storage medium | |
CN114724119B (en) | Lane line extraction method, lane line detection device, and storage medium | |
CN116051373A (en) | Image stitching method, image stitching device and terminal equipment | |
CN117392743A (en) | Human running recognition method and device, electronic equipment and storage medium | |
Van Beeck et al. | A Warping Window Approach to Real-time Vision-based Pedestrian Detection in a Truck's Blind Spot Zone. | |
CN113674316A (en) | Video noise reduction method, device and equipment | |
CN107977644B (en) | Image data processing method and device based on image acquisition equipment and computing equipment | |
CN114373001B (en) | Combined calibration method and device for radar and image | |
CN112836631B (en) | Vehicle axle number determining method, device, electronic equipment and storage medium | |
CN118537819B (en) | Low-calculation-force frame difference method road vehicle visual identification method, medium and system | |
CN118015567B (en) | Lane dividing method and related device suitable for highway roadside monitoring | |
CN114550289B (en) | Behavior recognition method, system and electronic equipment | |
CN116883656A (en) | Semantic segmentation method and device, computer readable storage medium and robot | |
CN115761616A (en) | Control method and system based on storage space self-adaption |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||