CN111860261B - Passenger flow value statistical method, device, equipment and medium - Google Patents


Info

Publication number
CN111860261B
Authority
CN
China
Prior art keywords
head frame, target, identification information, determining, tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010664483.0A
Other languages
Chinese (zh)
Other versions
CN111860261A
Inventor
王彦斌
朱宏吉
张彦刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN202010664483.0A
Publication of CN111860261A
Application granted
Publication of CN111860261B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion

Abstract

The invention discloses a passenger flow value statistical method, device, equipment and medium, which are used for solving the problem that existing passenger flow value statistical methods are not highly accurate. In the passenger flow statistics process, for each head frame contained in the image to be identified, the first similarity between the head frame and each verification head frame in the currently stored verification queue is compared with a preset first threshold value. When every first similarity corresponding to the head frame is smaller than the preset first threshold value, indicating that the head frame is dissimilar to every verification head frame, the head frame is determined to be a target head frame and the subsequent steps of updating the currently counted passenger flow value are carried out. Head frames that may belong to a "dummy" in a poster or billboard are thereby prevented from being counted into the passenger flow, which improves the accuracy of passenger flow value statistics.

Description

Passenger flow value statistical method, device, equipment and medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a passenger flow value statistics method, apparatus, device, and medium.
Background
Counting customer flow and personnel distribution in open scenes such as malls and scenic spots, and analyzing the attributes of customers in such scenes, provides merchants with quantitative services and with technical support for making better business decisions. How to count passenger flow has therefore become a problem of increasing concern in recent years.
In the prior art, the head frames contained in an image to be identified are generally obtained through a conventional visual algorithm such as the Local Binary Pattern (LBP). For each head frame, the similarity with each tracking head frame corresponding to each piece of identification information in the current tracking queue is calculated, the target identification information of the head frame is determined from these similarities through the Hungarian algorithm, and the head frame is stored in the tracking queue under the target identification information. Finally, the counted passenger flow value is determined according to the number of different pieces of identification information in the updated tracking queue.
With this method, when many posters or billboards containing people exist in the application scene, it cannot be identified whether a head frame in the acquired image belongs to a "dummy" in a poster or billboard, so such "dummies" are counted into the passenger flow value and the accuracy of the counted passenger flow value is affected.
Disclosure of Invention
The embodiment of the invention provides a passenger flow value statistical method, device, equipment and medium, which are used for solving the problem that the existing passenger flow value statistical method is low in accuracy.
The embodiment of the invention provides a passenger flow value statistical method, which comprises the following steps:
acquiring a head frame of a person contained in an image to be identified;
for each head frame, if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, determining that the head frame is a target head frame;
for each target head frame, determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
The embodiment of the invention provides a passenger flow value statistics device, which comprises:
the acquisition unit is used for acquiring a head frame contained in the image to be identified;
the judging unit is used for determining that each head frame is a target head frame if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold;
the processing unit is used for determining target identification information of each target head frame according to the second similarity between the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
The embodiment of the invention provides electronic equipment, which at least comprises a processor and a memory, wherein the processor is used for realizing the steps of the passenger flow value statistical method when executing a computer program stored in the memory.
An embodiment of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of a method of statistics of passenger flow values as described in any of the above.
In the passenger flow statistics process, for each head frame contained in the image to be identified, the first similarity between the head frame and each verification head frame in the currently stored verification queue is compared with a preset first threshold value. When every first similarity corresponding to the head frame is smaller than the preset first threshold value, indicating that the head frame is dissimilar to every verification head frame, the head frame is determined to be a target head frame and the subsequent steps of updating the currently counted passenger flow value are carried out. Head frames that may belong to a "dummy" in a poster or billboard are thereby prevented from being counted into the passenger flow, which improves the accuracy of passenger flow value statistics.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a statistical process of a passenger flow value according to an embodiment of the present invention;
FIG. 2 is a flow chart of a specific passenger flow value statistics process according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a statistical flow of specific passenger flow values according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a passenger flow value statistics device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: fig. 1 is a schematic diagram of a statistical process of a passenger flow value according to an embodiment of the present invention, where the process includes:
s101: and acquiring a head frame of a person contained in the image to be identified.
The passenger flow value statistical method provided by the embodiment of the invention is applied to electronic equipment; the electronic equipment may be intelligent equipment, such as an intelligent robot or monitoring equipment, or may be a server or the like.
After the image to be identified is obtained, the image to be identified is correspondingly processed based on the statistical method of the passenger flow value provided by the embodiment of the invention, so that the current statistical passenger flow value is updated.
The image to be identified may be acquired by the electronic device itself, or may be transmitted by other image acquisition devices, which is not limited herein.
In the implementation process, the head frames contained in the received image to be identified can be acquired through a conventional visual algorithm such as LBP. However, a conventional visual algorithm cannot reliably acquire head frames from an image to be identified that covers a larger shooting range; when head frames are to be acquired from such an image, they can instead be acquired through a pre-trained head detector or human body detector.
The head frames included in the image to be identified are obtained, that is, the head areas in the image to be identified are obtained, and each head frame included in the image to be identified can be determined according to the position information of each head frame in the image to be identified, for example, the coordinate value of the pixel point at the upper left corner of the head frame in the image to be identified and the coordinate value of the pixel point at the lower right corner of the head frame in the image to be identified.
It should be noted that, the human head detector or the human body detector can be trained by the target recognition method of deep learning, and a specific training process belongs to the prior art, and is not described herein.
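By way of illustration, the sketch below shows one minimal way of representing a head frame by the coordinates of its upper-left and lower-right pixels, as described above, and of cutting the corresponding head area out of the image. The class and function names are hypothetical, and the image is assumed to be a NumPy array.

```python
from dataclasses import dataclass

@dataclass
class HeadFrame:
    """A head frame located by the coordinates of its upper-left and
    lower-right pixels in the image to be identified (illustrative names)."""
    x1: int  # column of the upper-left corner pixel
    y1: int  # row of the upper-left corner pixel
    x2: int  # column of the lower-right corner pixel
    y2: int  # row of the lower-right corner pixel

    @property
    def size(self):
        return (self.x2 - self.x1, self.y2 - self.y1)

def crop_head_frame(image, frame: HeadFrame):
    """Cut the head area out of the image (an H x W x C NumPy array)."""
    return image[frame.y1:frame.y2, frame.x1:frame.x2]
```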
S102: and for each head frame, if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, determining that the head frame is a target head frame.
Since a poster or billboard may appear in the image to be identified, and the poster or billboard may contain a character image (here called a "dummy"), the head frame of a "dummy" may be extracted when head frames are obtained through the above embodiment, which affects the subsequent statistics of the passenger flow value. Therefore, in order to improve the accuracy of the counted passenger flow value, in the embodiment of the present invention a verification queue is preset, and the verification queue stores verification head frames. A verification head frame is a head frame displayed through a transmission medium in the acquired surrounding environment, such as the head frame of a "dummy" in a poster or billboard, or the head frame of a "dummy" shown on video playing equipment. The initial verification queue may be empty, or may already contain head frames of "dummies" collected in the surrounding environment in advance; in the subsequent passenger flow statistics process, the verification head frames in the verification queue can be updated in real time. For each head frame contained in the acquired image to be identified, the similarity between the head frame and each verification head frame in the current verification queue (denoted the first similarity for convenience of description) is determined, and whether the object corresponding to the head frame is a "dummy" is determined according to each first similarity and a preset threshold (denoted the first threshold).
When the first threshold is set, different values can be set according to different scenes, and if the head frame of the person possibly being the dummy in the image to be identified is hoped to be screened out as much as possible, the first threshold can be set smaller; if the head box of a person who is not a "dummy" in the image to be recognized is not recognized by mistake, the first threshold value may be set larger. The implementation can be flexibly set according to the requirements, and the implementation is not particularly limited.
Specifically, for each head frame included in the acquired image to be identified, if each first similarity of the head frame is smaller than a preset first threshold value, which indicates that an object corresponding to the head frame may not be a "dummy", the head frame is determined to be a target head frame, and a subsequent process of updating the current statistical passenger flow value is performed.
For example, the first threshold is 85, the calculated three first similarities are 24, 17 and 8, and it is determined that the three first similarities 24, 17 and 8 are smaller than the first threshold, which indicates that the similarity between the head frame and each verification head frame is not high, and then the object corresponding to the head frame is considered to be not a "dummy", and then the head frame is determined as a target head frame, and the subsequent process of updating the current statistical passenger flow value is performed.
The first similarity between the head frame and the verification head frame is determined in the prior art, and may be determined through a network model or an image similarity algorithm. In the specific implementation, the flexible setting can be performed according to actual requirements, and the specific limitation is not limited herein.
In another possible embodiment, the method further comprises:
and if the first similarity between the head frame and any verification head frame in the current verification queue is not smaller than the first threshold value, not executing the process of updating the passenger flow value of the current statistics.
In the implementation process, for each head frame, the first similarity between the head frame and each verification head frame in the current verification queue is calculated. As long as the first similarity between the head frame and any verification head frame in the current verification queue is not smaller than the preset first threshold, the object corresponding to the head frame may be a "dummy"; counting the passenger flow value according to this head frame would affect the accuracy of the statistics, so the processing of updating the currently counted passenger flow value is not performed based on this head frame.
When determining whether any head frame contained in the image to be identified is a target head frame, sequentially calculating first similarity between the head frame and the verification head frame in the verification queue according to a preset sequence, for example, a storage sequence of each verification head frame and a collection time of each verification head frame, and ending the step of continuously calculating the first similarity when determining that the first similarity calculated currently is not less than a preset first threshold; and after the first similarity of the head frame and each verification head frame in the current verification queue is calculated, determining whether any first similarity not smaller than a preset first threshold exists according to each calculated first similarity. Of course, the first similarity between the head frame and the verified head frame in the current verification queue may be calculated randomly, and when it is determined that the first similarity is not smaller than the preset first threshold, the step of continuing to calculate the first similarity is ended. The specific implementation may be set according to actual requirements, and is not specifically limited herein.
For example, the first threshold is 85, the calculated first similarity is 24, 31 and 90, and it is determined that the first similarity 90 is greater than the first threshold 85, which indicates that the object corresponding to the head frame may be a "dummy", and the statistics of the passenger flow value according to the head frame will affect the accuracy of the statistics, and the process of updating the passenger flow value of the current statistics is not performed based on the head frame.
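The screening step S102 can be summarized in the sketch below. It assumes a callable `first_similarity` (for example the hash-based measure of Example 2 below) and uses an early exit once any first similarity reaches the preset first threshold; the names and signatures are illustrative, not the patent's.

```python
def is_target_head_frame(head_frame, verification_queue, first_threshold, first_similarity):
    """Return True if the head frame is dissimilar to every verification head
    frame, i.e. every first similarity is below the preset first threshold.

    first_similarity is a callable (head_frame, verification_head_frame) -> float;
    the loop stops as soon as one similarity reaches the threshold.
    """
    for verification_frame in verification_queue:
        if first_similarity(head_frame, verification_frame) >= first_threshold:
            return False  # possibly a "dummy" in a poster or billboard
    return True

def select_target_head_frames(head_frames, verification_queue, first_threshold, first_similarity):
    """Keep only the head frames that qualify as target head frames."""
    return [f for f in head_frames
            if is_target_head_frame(f, verification_queue, first_threshold, first_similarity)]
```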
S103: for each target head frame, determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
After each target head frame is obtained based on the above embodiment, for each target head frame, the similarity between the target head frame and each tracking head frame corresponding to each piece of identification information in the current tracking queue (denoted the second similarity for convenience of description) is determined, and corresponding processing is performed based on each second similarity corresponding to the target head frame to determine the target identification information of the target head frame. A tracking head frame is a head frame of a pedestrian tracked in the acquired surrounding environment. The initial tracking queue may be empty, and in the subsequent passenger flow statistics process the tracking head frames in the tracking queue are updated in real time according to each target head frame to which target identification information has been allocated.
When determining the second similarity between the target head frame and each tracking head frame in the current tracking queue, the second similarity determination method may be the same as or different from the first similarity determination method. When the target head frame and each tracking head frame in the current tracking queue are different, the second similarity may be calculated by calculating the overlapping proportion of the target head frame and the area of each tracking head frame in the current tracking queue, that is, the overlapping rate of the area corresponding to any tracking head frame and the area of the target head frame in the image to be identified may be calculated, or the spatial distance between the target head frame and each tracking head frame in the current tracking queue may be calculated, for example, the euclidean distance, chebyshev distance, and the like. Specifically, the manner in which the second similarity is determined is not particularly limited herein.
In a possible implementation manner, when determining the target identification information of each target head frame, determining the second similarity between the target head frame and each tracking head frame in the current tracking queue, determining the identification information of the tracking head frame with higher similarity to the target head frame through a hungarian algorithm and each second similarity, and if determining that the second similarity between any tracking head frame corresponding to the identification information and the target head frame is greater than a set threshold, determining that the identification information is the identification information of the target head frame (for convenience of description, the identification information is denoted as target identification information); otherwise, new identification information is allocated to the target head frame, and the newly allocated identification information is determined to be the target identification information of the target head frame.
The identification information of the head frame (including verifying head frame, tracking head frame, and target head frame) is used for uniquely identifying the identity information of the object to which the head frame belongs, and the identification information may be numbers, letters, special symbols, character strings, etc., or may be in other forms, so long as the identity information capable of uniquely identifying the object to which the head frame belongs can be used as the identification information in the embodiment of the present invention.
In the practical application process, after the target identification information of a target head frame is determined, if the target identification information corresponds to a tracking head frame that already exists in the tracking queue, the process of updating the counted passenger flow value was already executed, before this target head frame was acquired, according to the tracking head frame corresponding to the target identification information, so the process of updating the currently counted passenger flow value does not need to be executed again based on this target head frame. Therefore, in order to count the passenger flow value more accurately, in the embodiment of the present invention an update condition is configured; the update condition may be whether the target identification information is the same as any piece of identification information in the current tracking queue. After the target identification information of any target head frame is determined based on the above embodiment, whether the target identification information meets the preset update condition, that is, whether the target identification information is the same as the identification information of a head frame in the current tracking queue, is determined, so as to decide whether to update the currently counted passenger flow value.
Specifically, if it is determined that the target identification information is the same as the identification information of any tracking head frame in the current tracking queue, which indicates that the process of updating the counted passenger flow value is performed according to the tracking head frame corresponding to the target identification information, it is determined that the target identification information does not meet a preset updating condition, and the currently counted passenger flow value is not updated according to the target identification information. If the target identification information is determined to be different from any identification information in the current tracking queue, the object corresponding to the head frame of the target identification information is not counted in the passenger flow value, and the target identification information is determined to meet the preset updating condition, and the passenger flow value counted at present is updated.
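The basic update condition of this embodiment, namely that only target identification information not yet present in the current tracking queue updates the count, can be sketched as follows (Example 3 adds stricter conditions); the function is a hypothetical illustration.

```python
def update_passenger_flow(target_id, tracking_queue_ids, passenger_flow_value):
    """Increase the currently counted passenger flow value only when the target
    identification information is not already in the current tracking queue."""
    if target_id not in tracking_queue_ids:
        passenger_flow_value += 1
        tracking_queue_ids.add(target_id)  # the new identification is now tracked
    return passenger_flow_value
```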
Example 2: in order to further improve the accuracy of the statistical passenger flow value, in the embodiment of the present invention, determining the first similarity between the head box and each verification head box in the current verification queue includes:
determining a hash value of the head frame according to the image information in the head frame;
for each verifying head frame, determining a first distance between the hash value of the head frame and the hash value of the verifying head frame, and determining the first distance as a first similarity between the head frame and the verifying head frame.
In the practical application process, the hash value can be used for representing the content information in each image, which is equivalent to a fingerprint character string of each image, and then the similarity between different images can be determined according to the hash values of different images, and the higher the similarity is, the more similar the two images are. Based on this, in the embodiment of the present invention, the first similarity may be determined according to the hash value of the image in the head frame and the hash value corresponding to each verified head frame, so as to determine whether the head frame may be a head frame of a "dummy".
In the implementation process, after any head frame contained in an image to be identified is acquired, the hash value of the image in the head frame is determined according to the image information in the head frame. Then, for each verification head frame in the current verification queue, a first distance, such as a Hamming distance, between the hash value of the image in the head frame and the hash value corresponding to that verification head frame is determined, and the first similarity between the head frame and the verification head frame is determined according to the first distance.
When determining the first similarity between the head frame and the verification head frame according to the first distance, the first distance may be directly used as the first similarity between the head frame and the verification head frame, or the first distance may be subjected to certain processing, such as weighting, function operation, quantization, and the like, and the result obtained after the processing is used as the first similarity between the head frame and the verification head frame. The specific method for determining the first similarity according to the first distance can be flexibly set according to actual requirements, and is not specifically limited herein.
The hash value corresponding to each verifying head frame in the current verifying queue may be determined when the verifying head frame is acquired, or may be determined in real time according to each verifying head frame in the current verifying queue when the first similarity is determined. In the specific implementation, the flexible setting can be performed according to actual requirements, and the specific limitation is not limited herein.
It should be noted that, the process of determining the hash value in the head frame image and determining the first distance belongs to the prior art, and is not described herein.
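Although the patent leaves the hash and distance computations to the prior art, the sketch below shows one common choice: an average hash over a grayscale patch, with the Hamming distance as the first distance. It assumes NumPy and is only one of several possible implementations.

```python
import numpy as np

def average_hash(gray_patch, hash_size=8):
    """Compute a 64-bit average hash: shrink the patch, compare each pixel to
    the mean, and keep the result as a boolean fingerprint of the image."""
    h, w = gray_patch.shape
    # Nearest-neighbour shrink to hash_size x hash_size (keeps the sketch dependency-free).
    rows = np.arange(hash_size) * h // hash_size
    cols = np.arange(hash_size) * w // hash_size
    small = gray_patch[np.ix_(rows, cols)].astype(np.float32)
    return (small > small.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    """First distance between two hashes: the number of differing bits."""
    return int(np.count_nonzero(hash_a != hash_b))
```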
In one possible implementation manner, to accurately determine the hash value of each head frame, determining the hash value of the head frame according to the image information in the head frame includes:
determining a target size according to the size of the head frame and the set multiple;
determining an image area which contains the head frame and has the size of the target size in the image to be identified; and
and determining the hash value of the image area according to the image information in the image area, and determining the hash value of the image area as the hash value of the head frame.
In order to fully combine the background information in the head frame, the hash value in the head frame image is determined, the area where the head frame is located can be enlarged by a set multiple, and then the hash value of the head frame is calculated based on the enlarged image area. Specifically, in the embodiment of the invention, for each head frame included in an image to be identified, a target size is determined according to the size and the set multiple of the head frame; in the image to be identified, determining an image area which contains the head frame and has a target size, for example, in the identification image, the image area which has the target size is obtained by outwards expanding the frame based on the head frame; and determining a hash value of the head frame based on the image information in the image area. For example, the size of the acquired head frame is 10×10, the position information in the image to be identified is (10, 20), (20, 30), and the target size is determined to be 20×20 if the set multiple is 4, and the position information of the image area including the head frame and having the size of 20×20 in the image to be identified is determined to be (5, 15), (25, 35).
When setting the value of the set multiple, different values can be set according to different scenes. If as much background information around the head frame as possible is to be taken into account, the set multiple can be set larger; if the image area should avoid containing information from the head frames of other objects, the set multiple can be set smaller. Preferably, the set multiple may be 2 times.
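A possible reading of the target-size step, consistent with the worked example above (a 10×10 head frame with a set multiple of 4 yielding a 20×20 area at (5, 15), (25, 35)), is sketched below: the multiple is treated as an area multiple, the region is centred on the head frame, and it is clamped to the image bounds. The box attributes follow the earlier HeadFrame sketch and are illustrative.

```python
import math

def expanded_region(frame, image_width, image_height, set_multiple=4):
    """Determine the image area that contains the head frame and has the target
    size; each side is scaled by sqrt(set_multiple) so the area grows by the
    set multiple, matching the 10x10 -> 20x20 example."""
    scale = math.sqrt(set_multiple)
    w, h = frame.x2 - frame.x1, frame.y2 - frame.y1
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    cx, cy = (frame.x1 + frame.x2) / 2, (frame.y1 + frame.y2) / 2
    x1 = max(0, int(round(cx - new_w / 2)))
    y1 = max(0, int(round(cy - new_h / 2)))
    x2 = min(image_width, x1 + new_w)
    y2 = min(image_height, y1 + new_h)
    return x1, y1, x2, y2
```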
Example 3: the present embodiment provides a further way of determining whether the target identification information meets a preset update condition, which specifically includes the following steps:
acquiring a tracking head frame corresponding to the target identification information from a current tracking queue;
determining the corresponding moving distance of each two tracking head frames according to the image information of each two tracking head frames adjacent to each other in acquisition time, and determining the sum of the distances according to each moving distance; and
if the number of the head frames of the tracking person corresponding to the target identification information is larger than a set number threshold value and the sum of the distances is larger than a set distance threshold value, determining that the target identification information meets a preset updating condition.
In practical applications, the acquired image to be identified may be acquired at the gate of an application scene such as a hall, a mall or a bus. Although an image to be identified acquired at the gate of the application scene contains the head frames of objects entering the application scene, it is also very likely to contain the head frames of objects merely passing by the gate. For example, in the scenario of counting the passenger flow value of a bus, the image to be identified is generally collected at the bus door; the collected image to be identified then contains not only the head frames of objects boarding the bus but possibly also the head frames of objects passing by the bus door. If whether the target identification information meets the preset updating condition is determined solely according to whether the target identification information is stored in the current tracking queue, the counted passenger flow value will be inaccurate. In addition, video playing equipment such as a projection television or a projector may be playing video in the application scene; if the acquired image to be identified contains a picture with a person being played by the video playing equipment, the counted passenger flow value may also be inaccurate.
Therefore, when determining whether the target identification information meets the preset updating condition, if the target identification information is newly allocated identification information, that is, when it is determined that no identification information identical to the target identification information exists in the current tracking queue, the currently counted passenger flow value may be updated directly in the manner of the above embodiment.
However, in order to further improve the accuracy of the counted current passenger flow value, when it is determined that the identification information identical to the target identification information does not exist in the current tracking queue, the current counted passenger flow value is not updated, and the target head frame and the corresponding target identification information thereof are stored in the tracking queue, so that whether the target identification information meets the preset updating condition can be determined based on the number of the tracking head frames corresponding to the target identification information in the current tracking queue and the moving distance of the object to which the tracking head frame corresponding to the target identification information belongs in the shooting range.
In general, the images to be identified are acquired at preset time intervals, or may be acquired in real time, and the object entering the application scene needs a certain time from entering the shooting range to leaving the shooting range, so that a plurality of images to be identified including the head frame of the object can be acquired. And the distance that an object will travel from entering the shooting range to exiting the shooting range will typically be greater than the shortest distance through the shooting range, i.e. the distance that the object will travel from entering the shooting range to exiting the shooting range will typically be greater than a certain threshold. Therefore, in the embodiment of the present invention, in order to further improve the accuracy of the counted current value, the number threshold may be set according to a preset time interval for image acquisition, a preset shooting range, and an average speed of the object, and the distance threshold may be set according to the preset shooting range.
In the implementation process, if the number of the tracking head frames corresponding to the target identification information in the current tracking queue is greater than a number threshold and the distance travelled by the object corresponding to the tracking head frame corresponding to the target identification information in the shooting range is greater than a distance threshold, determining that the target identification information meets a preset updating condition.
After the target identification information of the target head frame is obtained based on the embodiment, the tracking head frame corresponding to the target identification information is obtained in the current tracking queue, and then the moving distance corresponding to each two tracking head frames is determined according to the image information of each two tracking head frames adjacent to each other in acquisition time. And then determining the sum of the distances according to each acquired moving distance. And judging whether the number of the head frames of the tracked person corresponding to the target identification information is larger than a number threshold value, and whether the sum of the acquired distances is larger than a distance threshold value.
If the number of the tracking head frames corresponding to the target identification information is larger than a number threshold value and the sum of the acquired distances is larger than a distance threshold value, indicating that the object corresponding to the head frames of the target identification information is not a dummy, determining that the target identification information meets a preset updating condition;
If the number of the tracking head frames corresponding to the target identification information is not greater than a number threshold, or the sum of the acquired distances is not greater than a distance threshold, the object corresponding to the head frames indicating the target identification information is likely to be a passerby passing through a gate of an application scene, or a 'dummy' contained in a video played on video playing equipment, it is determined that the target identification information does not meet a preset updating condition, and the process of updating the passenger flow value of the current statistics is not executed.
When the moving distance corresponding to the two tracking head frames is determined, the moving distance can be determined according to the position information of the pixel points of the set positions of the two tracking head frames in the image to be identified. For example, according to the distance of the coordinate values of the pixel points at the upper left corner of the two tracking head frames at the image to be identified, the distance of the coordinate values of the pixel points at the lower right corner of the tracking head frames at the image to be identified, the distance of the coordinate values of the pixel points at the midpoint of the diagonal line of the tracking head frames at the image to be identified, and the like.
The number threshold is generally not greater than the quotient of the maximum shooting distance in the preset shooting range divided by the product of the time interval and the average speed. In order to improve the accuracy of the counted passenger flow value, the number threshold should not be too small; in order to avoid an object going uncounted because it walks too fast, the number threshold should not be too large. In specific implementations it can be set flexibly according to actual requirements and is not specifically limited herein.
The distance threshold is generally not greater than the maximum shooting distance in the preset shooting range. In order to improve the accuracy of the statistical passenger flow value, the distance threshold value is not too small; in order to avoid the situation that the object is not counted because the walking speed of the object is too high, the distance threshold value is not too large. In the specific implementation, the flexible setting can be performed according to actual requirements, and the specific limitation is not limited herein.
In the embodiment of the invention, if the number of the head frames of the tracked person corresponding to the target identification information in the current tracking queue is greater than the set number threshold and the sum of the acquired distances is greater than the set distance threshold, the target identification information is determined to meet the preset updating condition, so that passersby passing through a gate of an application scene in the acquired image to be identified is eliminated, or the head frames of the dummy person contained in the video played on the video playing equipment are eliminated, and the accuracy of the counted passenger flow value is further improved.
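A sketch of the Example 3 update condition follows. It assumes each tracking head frame carries an acquisition timestamp and box coordinates (hypothetical attribute names) and measures each moving distance between the centres of boxes adjacent in acquisition time; the patent also allows other reference points such as corner pixels.

```python
def meets_update_condition(tracked_frames, number_threshold, distance_threshold):
    """Example 3 condition: the target identification information is counted
    only if enough tracking head frames have been collected AND the sum of the
    frame-to-frame moving distances exceeds the distance threshold."""
    if len(tracked_frames) <= number_threshold:
        return False
    ordered = sorted(tracked_frames, key=lambda f: f.timestamp)
    total = 0.0
    for prev, curr in zip(ordered, ordered[1:]):
        px, py = (prev.x1 + prev.x2) / 2, (prev.y1 + prev.y2) / 2
        cx, cy = (curr.x1 + curr.x2) / 2, (curr.y1 + curr.y2) / 2
        total += ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5  # Euclidean move between centres
    return total > distance_threshold
```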
Example 4: because the 'dummy' is possibly stored in the tracking queue to influence the update of the passenger flow value of the current statistics, in the practical application process, the object which is not the 'dummy' generally moves in the application scene, and each acquired image to be identified is acquired according to the set time interval, therefore, the images of the object are contained in the plurality of images to be identified acquired in the process that the same object enters the application scene. In addition, since many pedestrian objects are generally included in the application scene, since each pedestrian object is moving, the background of each pedestrian object and the position thereof are also generally changed. For the "dummies" in the posters and billboards in the application scene, the background and the position of each "dummies" are not changed generally because the "dummies" are usually motionless. Based on this, in order to further improve the accuracy of the counted passenger flow value, whether the object corresponding to each tracking head frame corresponding to the identification information is a "dummy" can be determined according to the similarity (for convenience of description, namely, the third similarity) between the areas corresponding to any two tracking head frames in the current tracking queue and a preset threshold (for convenience of description, the second threshold is recorded), so as to update the tracking head frames in the current tracking queue.
The method for determining the third similarity between any two tracking head frames may be similar to the method for determining the first similarity between the head frames and the verification head frames described above, for example, the hash values corresponding to the two tracking head frames are determined according to the image information in the two tracking head frames, then the distance between the hash values of the two tracking head frames is determined, and the third similarity between the two tracking head frames is determined according to the distance.
Specifically, the head frame of the tracking person in the current tracking queue may be updated in the following two ways:
in the first mode, in order to timely update the tracking head frames in the current tracking queue, after the target identification information of any target head frame is acquired, the tracking head frame corresponding to the target identification information in the current tracking queue can be updated in real time. Specifically, after the target identification information of the target head frame is obtained based on the above embodiment, third similarity between any two tracking head frames corresponding to the target identification information is obtained in the current tracking queue, and whether each third similarity is larger than a preset second threshold value is determined.
If the third similarity of any two tracking head frames corresponding to the target identification information is greater than a preset second threshold value, the fact that the similarity between any two tracking head frames corresponding to the target identification information is extremely high is indicated, namely, image information such as the background in each tracking head frame of the target identification information is unchanged, it is determined that an object corresponding to the tracking head frame corresponding to the target identification information is likely to be a dummy, relevant information of each tracking head frame corresponding to the target identification information is deleted from a current tracking queue, and the process of updating the passenger flow value of current statistics is not executed.
Wherein, the relevant information of each tracking person head frame comprises at least one of position information, hash value and acquisition time.
For example, the preset second threshold is 89, 3 tracking head frames corresponding to the target identification information 2 in the tracking queue are obtained, wherein the third similarity of any two tracking head frames is 92, 90 and 95 respectively, each third similarity corresponding to the target identification information 2 is determined to be greater than the second threshold 89, which indicates that the image information such as the background in each tracking head frame corresponding to the target identification information 2 does not change greatly, the object corresponding to the tracking head frame corresponding to the target identification information 2 is determined to be a "dummy", and the relevant information of each tracking head frame corresponding to the target identification information 2 is deleted in the current tracking queue, and the process of updating the current statistical passenger flow value is not executed.
And in a second mode, an updating period is preset in order to reduce the resources consumed when the head frame of the person to be tracked in the current tracking queue is updated. Updating the tracking head frames corresponding to each piece of identification information in the current tracking queue according to a preset updating period, namely acquiring third similarity between any two tracking head frames corresponding to the identification information according to each piece of identification information in the current tracking queue, and judging whether each third similarity is larger than a preset second threshold value or not, so as to determine whether to delete the tracking head frames corresponding to the identification information.
It should be noted that, the method for determining whether to delete the tracking human head frame corresponding to the identification information is similar to the method for determining in the first mode, and the repetition is not repeated.
To facilitate subsequent filtering out of "dummies" contained in images to be identified, the method further comprises: storing the relevant information of each tracking head frame corresponding to the target identification information in the current verification queue.
In order to facilitate the subsequent screening of the head frames of the "dummy" from the image to be identified, in the embodiment of the present invention, after determining that the object corresponding to the head frame of the tracked person corresponding to the target identification information in the tracking queue is the "dummy" based on the above embodiment, the relevant information of each head frame of the tracked person corresponding to the target identification information may be stored in the current verification queue, so that the head frames of the "dummy" included in the image to be identified may be further screened according to the updated verification queue, thereby improving the accuracy of the statistical passenger flow value.
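The pruning of probable "dummies" from the tracking queue (whether triggered per target head frame or periodically) together with their transfer to the verification queue could look like the following sketch, where `tracking_queue` maps identification information to its list of tracking head frames and `third_similarity` is any pairwise measure such as the hash-based one above; all names are illustrative.

```python
from itertools import combinations

def prune_dummy_tracks(tracking_queue, verification_queue, second_threshold, third_similarity):
    """If every pairwise third similarity of an identification's tracking head
    frames exceeds the preset second threshold (background essentially
    unchanged), treat the object as a "dummy": delete its frames from the
    tracking queue and store them in the verification queue."""
    for ident in list(tracking_queue.keys()):
        frames = tracking_queue[ident]
        if len(frames) < 2:
            continue
        pairs = combinations(frames, 2)
        if all(third_similarity(a, b) > second_threshold for a, b in pairs):
            verification_queue.extend(frames)  # keep them for later screening
            del tracking_queue[ident]          # do not count this object
```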
Example 5: the embodiment provides a further method for determining target identification information of a target head frame according to the second similarity between the target head frame and each tracking head frame in the current tracking queue, including:
respectively determining the second similarity of the target head frame and each tracking head frame in the current tracking queue;
determining first identification information according to a Hungary algorithm and each second similarity;
and if the second similarity between the tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is smaller than a set matching threshold, determining the target identification information of the target head frame according to the feature similarity between the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame.
In general, the acquired images to be identified are acquired at preset time intervals, such as 2s, 3s, etc., and the preset time intervals are not set to be large in order to ensure that all the images of the objects entering the shooting range can be acquired in real time. However, in the practical application process, the time spent by the same object from entering the shooting range to leaving the shooting range is generally longer than a preset time interval, so that the acquired multiple images to be identified may all contain the head frames of the same object. If the passenger flow value is counted directly according to the number of each target head frame obtained from the image to be identified, the same object is counted for a plurality of times, so that the counted passenger flow value has low accuracy.
In order to count the passenger flow value accurately, after the target head frames are acquired based on the above embodiment, for each target head frame the second similarity between the target head frame and each tracking head frame in the current tracking queue is determined. Then, for each piece of identification information in the current tracking queue, the maximum of the second similarities between the target head frame and the tracking head frames corresponding to that identification information is determined. The first identification information is determined according to the Hungarian algorithm and these maximum second similarities.
In a specific implementation process, since the input of the Hungarian algorithm must be a matrix with equal numbers of rows and columns, when determining the input matrix, the dimension of the input matrix is determined as the maximum of the number of currently determined target head frames and the number of pieces of identification information stored in the current tracking queue (for convenience of description, the dimension of the matrix is N×N).
The N×N input matrix is determined based on the maximum second similarity between each target head frame and the tracking head frames corresponding to each piece of identification information in the current tracking queue. The element in the i-th row and j-th column of the N×N input matrix is the maximum of the second similarities between the i-th target head frame and the tracking head frames corresponding to the j-th piece of identification information, where i = 1, 2, …, A, with A being the number of target head frames; j = 1, 2, …, B, with B being the number of pieces of identification information in the current tracking queue; and N is the maximum of A and B.
If the number of target head frames is equal to the number of pieces of identification information in the current tracking queue, N is that common number. An N×1 vector is obtained according to the Hungarian algorithm and the N×N input matrix, and the first identification information corresponding to each target head frame is determined based on this output vector, that is, each element of the output vector is the first identification information corresponding to one of the N target head frames.
In another possible implementation manner, when the number of the obtained target head frames of the person is not equal to the number of the identification information in the current tracking queue, for convenience in determining the first identification information, the missing rows or columns of the currently determined input matrix may be supplemented to generate an n×n input matrix. In particular, the supplementing method may be to randomly select a value not smaller than the maximum value in the currently determined input matrix, and supplement the missing rows or columns of the input matrix.
It should be noted that each dimension element in the appended row or column is generally the same, but it is not excluded that each dimension element in the appended row or column may also be a different value.
For example, when 2 target head frames are currently determined and 4 pieces of identification information are stored in the tracking queue, N is determined to be 4. According to the maximum second similarity between each target head frame and the tracking head frames corresponding to the 4 pieces of identification information in the current tracking queue, only a 2×4 input matrix can be determined. For convenience of determining the first identification information, the missing rows of the 2×4 input matrix can be supplemented: the maximum value in the 2×4 input matrix is determined to be 0.8, any value not less than 0.8, for example 1, is selected, and the missing rows are supplemented with a 2×4 filling matrix whose elements are all 1, thereby generating a 4×4 input matrix.
For another example, when 4 target head frames are currently determined and 2 pieces of identification information are stored in the tracking queue, N is determined to be 4. According to the maximum second similarity between each target head frame and the tracking head frames corresponding to the 2 pieces of identification information in the current tracking queue, only a 4×2 input matrix can be determined. For convenience of determining the first identification information, the missing columns of the 4×2 input matrix can be supplemented: the maximum value in the 4×2 input matrix is determined to be 0.95, any value not less than 0.95, for example 0.95 itself, is selected, and the missing columns are supplemented with a 4×2 filling matrix whose elements are all 0.95, thereby generating a 4×4 input matrix.
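A sketch of this matching step, padding the similarity matrix to N×N with a value not smaller than its maximum as in the examples above and solving the assignment with SciPy's Hungarian-algorithm implementation, is given below; assignments that fall in the padded rows or columns are discarded.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_target_frames(second_similarity_matrix):
    """second_similarity_matrix is A x B: row i holds, for target head frame i,
    the maximum second similarity to the tracking head frames of each of the B
    identifications in the current tracking queue.  Returns a mapping
    {target frame index: identification index} over the real entries only."""
    sim = np.asarray(second_similarity_matrix, dtype=float)
    a, b = sim.shape
    n = max(a, b)
    pad_value = sim.max() if sim.size else 1.0     # a value not smaller than the maximum
    padded = np.full((n, n), pad_value)
    padded[:a, :b] = sim
    rows, cols = linear_sum_assignment(padded, maximize=True)
    return {int(r): int(c) for r, c in zip(rows, cols) if r < a and c < b}
```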
In order to accurately determine the target identification information of the target head frame, after the first identification information is acquired based on the embodiment, the tracking head frame corresponding to the first identification information is determined, the second similarity between the target head frame and the tracking head frame corresponding to the first identification information determined in the embodiment is acquired, and each second similarity is compared with a set matching threshold value, so that the target identification information of the target head frame is determined.
Specifically, if the second similarity between any tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is greater than a set matching threshold, that is, the second similarity between any tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is determined to be greater than the set matching threshold, which indicates that the similarity between the target head frame and the tracking head frame corresponding to the first identification information is higher, then the target identification information of the target head frame is determined to be the first identification information.
The face features contained in head frames of the same object acquired in different images to be identified may be occluded, which reduces the second similarity between head frames of the same object, so that head frames of the same object are subsequently misidentified as head frames of different objects. For example, in two consecutive images to be identified containing person A, the head frame of person A acquired in one image is occluded by object B while the head frame of person A acquired in the other image is not occluded, so the second similarity between the two head frames of person A is small. Therefore, in order to determine the target identification information of the target head frame more accurately, in the embodiment of the present invention, when it is determined based on the above embodiment that the second similarity between the tracking head frames corresponding to the first identification information in the current tracking queue and the target head frame is smaller than the set matching threshold, the feature vector of the target head frame and the pre-stored feature vector corresponding to each tracking head frame in the current tracking queue are obtained. Then, for each tracking head frame in the current tracking queue, the feature similarity between the target head frame and the tracking head frame is determined according to the feature vector of the target head frame and the pre-stored feature vector of the tracking head frame. Further analysis is then performed based on the acquired feature similarities and the target threshold corresponding to the target head frame, so as to determine the target identification information of the target head frame.
The feature vector of the target head frame may be obtained through an image recognition model, or may be determined through other algorithms, which is not specifically limited herein. Preferably, in the embodiment of the present invention, the feature vector is obtained through an image recognition model.
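The feature-similarity measure is not fixed by the patent; the sketch below uses cosine similarity between feature vectors and assigns the first identification only if the best feature similarity reaches the target threshold, otherwise allocating new identification information. This is one plausible reading, with hypothetical names.

```python
import numpy as np

def feature_similarity(feature_a, feature_b):
    """Cosine similarity between two feature vectors (one plausible choice)."""
    a, b = np.asarray(feature_a, float), np.asarray(feature_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def resolve_target_id(target_feature, tracking_features, target_threshold, new_id):
    """Assign the identification whose stored feature vector is most similar to
    the target head frame if that similarity reaches the target threshold;
    otherwise allocate the newly created identification information."""
    best_id, best_sim = None, -1.0
    for ident, feat in tracking_features.items():
        sim = feature_similarity(target_feature, feat)
        if sim > best_sim:
            best_id, best_sim = ident, sim
    return best_id if best_sim >= target_threshold else new_id
```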
In one possible embodiment, if the feature vector of the target head frame is acquired through the image recognition model, the image recognition model is trained in advance in order to accurately acquire the feature vector of the target head frame. Specifically, the image recognition model is obtained by:
acquiring any sample head frame in a sample set and corresponding sample identification information thereof;
obtaining third identification information corresponding to the sample human head frame through a deep learning network model;
and training the deep learning network model according to the sample identification information and the third identification information to obtain an image recognition model.
The feature vector of the target head frame can be obtained through the feature extraction layer in the image recognition model.
In order to accurately acquire the feature vector of the target head frame, the deep learning network model can be trained according to each sample head frame in the sample set acquired in advance and the corresponding sample identification information thereof. The sample identification information is an identity characteristic for identifying an object corresponding to the sample head frame, and can be represented by numbers, letters, character strings and the like, or can be represented by other forms, and only the sample identification information of the sample head frames of different objects is required to be ensured to be different.
In addition, in order to increase the diversity of the sample head frames, the sample head frames of the same sample identification information include sample head frames of different angles, such as sample head frames including a front face, sample head frames including a side face turned 45 degrees to the right, sample head frames including a side face turned 45 degrees to the left, and the like.
The device for training the image recognition model may be the same as or different from the electronic device that performs the passenger flow value statistics.
Through the deep learning network model, the identification information corresponding to the sample head frame (denoted as third identification information for convenience of explanation) can be identified, and the deep learning network model is trained according to the third identification information and the sample identification information corresponding to the sample head frame, so as to adjust the parameter values of the parameters of the deep learning network and obtain the image recognition model.
For example, if the sample identification information corresponding to a sample head frame is A and the third identification information identified for that sample head frame through the deep learning network model is B, the third identification information B is inconsistent with the corresponding sample identification information A, and it is determined that the deep learning network model has misidentified the identification information of the sample head frame.
The sample set contains a large number of sample head frames; the above operation is performed for each sample head frame, and model training is completed when the preset convergence condition is met.
Meeting the preset convergence condition may mean, for example, that the number of sample head frames in the sample set correctly identified through the image recognition model is greater than a set number, or that the number of iterations of training the image recognition model reaches a set maximum number of iterations, and so on. This may be flexibly set and is not specifically limited here.
In one possible implementation manner, when the image recognition model is trained, the sample head frames in the sample set can be divided into training samples and test samples, the image recognition model is trained based on the training sample head frames, and then the trained image recognition model is verified based on the test sample head frames.
After the trained image recognition model is obtained, the feature vector of the target head frame can be obtained through a feature extraction layer in the image recognition model.
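As a rough illustration of the training procedure described above, the following PyTorch-style sketch trains a small network to predict the sample identification information (a classification over identities) and exposes a feature extraction layer whose output serves as the head-frame feature vector. The architecture, loss and optimizer are illustrative assumptions and are not specified by the patent:

```python
import torch
import torch.nn as nn

class HeadRecognitionModel(nn.Module):
    """Toy image recognition model: a feature extraction layer followed by an
    identification (classification) head over the sample identification labels."""

    def __init__(self, num_identities, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(                        # feature extraction layer
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_identities)

    def forward(self, x):
        feat = self.features(x)
        return self.classifier(feat), feat

def train_step(model, optimizer, images, id_labels):
    """One step: the predicted (third) identification is compared with the sample
    identification through a cross-entropy loss, and the parameters are adjusted."""
    logits, _ = model(images)
    loss = nn.functional.cross_entropy(logits, id_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, only the feature extraction layer is used at inference time:
#   _, feature_vector = model(head_frame_batch)
```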
In order to further accurately determine the target identification information of the target head frame, in the embodiment of the present invention, the determining the target identification information of the target head frame according to the feature similarity of the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame includes:
Determining second identification information according to the Hungarian algorithm and the feature similarity;
if the feature similarity between any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is larger than a target threshold corresponding to the target head frame, determining that the target identification information of the target head frame is the second identification information; otherwise, new identification information is allocated to the target head frame as the target identification information of the target head frame.
After the feature similarity between the target head frame and each tracking head frame in the current tracking queue is obtained based on the embodiment, determining second identification information corresponding to the target head frame according to each feature similarity and a hungarian algorithm.
In the specific implementation process, the input of the Hungarian algorithm is a matrix with an equal number of rows and columns, so the dimension of the input matrix required by the Hungarian algorithm is determined according to the maximum of the number of currently determined target head frames and the number of tracking head frames stored in the current tracking queue.
Wherein the input matrix is determined based on the feature similarity of each target head frame to each tracking head frame in the current tracking queue. The element in the i-th row and j-th column of the N×N input matrix is the feature similarity between the i-th target head frame and the j-th tracking head frame, where i=1, 2, …, A, A is the number of target head frames, j=1, 2, …, C, C is the number of tracking head frames in the current tracking queue, and N is the maximum of A and C.
It should be noted that the process of determining the second identification information through the Hungarian algorithm and each feature similarity is similar to the above method of determining the first identification information, and repeated parts are not described again.
In order to accurately determine the target identification information of the target head frame, after the second identification information is obtained based on the above embodiment, determining the tracking head frame corresponding to the second identification information, obtaining the feature similarity of the target head frame determined in the above embodiment and the tracking head frame corresponding to the second identification information, and comparing the feature similarity with the target threshold corresponding to the target head frame, thereby determining whether the second identification information is the target identification information of the target head frame.
Specifically, if the feature similarity of any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is greater than the target threshold corresponding to the target head frame, which indicates that the similarity of the target head frame and the tracking head frame corresponding to the second identification information is higher, determining that the target identification information of the target head frame is the second identification information; if the feature similarity of each tracking head frame corresponding to the second identification information and the target head frame in the current tracking queue is not greater than the target threshold value corresponding to the target head frame, the fact that the similarity of the target head frame and the tracking head frame corresponding to the second identification information is not high is indicated, the target head frame is likely to be the head frame of the object which just enters the application scene, new identification information is allocated to the target head frame, and the target identification information of the target head frame is determined to be the new identification information.
The target threshold corresponding to the target head frame may be a preset value, or may be determined according to the size of the target head frame. It may be flexibly set and is not specifically limited here.
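Putting the two stages together, the decision for a single target head frame can be summarized by the following sketch. It assumes the Hungarian matching described above has already produced a candidate first identification (with its second similarity) and, where needed, a candidate second identification (with its feature similarity); the helper names and the way new identification information is generated are assumptions made for illustration:

```python
import itertools

_new_ids = itertools.count(1000)      # hypothetical source of fresh identification values

def decide_target_identification(first_id, second_sim_to_first,
                                 second_id, feat_sim_to_second,
                                 matching_threshold, target_threshold):
    """Two-stage decision: second similarity first, feature similarity as a fallback."""
    if first_id is not None and second_sim_to_first > matching_threshold:
        return first_id                # stage 1: matched through the second similarity
    if second_id is not None and feat_sim_to_second > target_threshold:
        return second_id               # stage 2: rescued through the feature similarity
    return next(_new_ids)              # otherwise: allocate new identification information
```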
Example 6: the following describes a method for calculating a guest flow value according to an embodiment of the present invention by a specific embodiment, and fig. 2 is a schematic flow chart of calculating a specific guest flow value according to an embodiment of the present invention, where the flow chart includes:
s201: and acquiring a head frame of a person contained in the image to be identified.
Since a plurality of head frames contained in the image to be identified may be obtained through the above steps, for convenience of explanation, the following steps are performed for any head frame in the obtained image to be identified:
s202: and determining a hash value of the head frame according to the image information contained in the head frame, judging whether the Hamming distance between the hash value of the head frame and the hash value of each verification head frame in the current verification queue is smaller than a preset first threshold value, if so, executing S204, otherwise, executing S203.
S203: it is determined that S205 to S213 are not performed.
S204: and determining the head frame as a target head frame.
After each target head frame included in the image to be recognized is determined based on the steps S202 to S204, for convenience of explanation, the following steps are performed for any target head frame acquired:
S205: and determining the area overlapping proportion of the target head frame and the tracking head frame of each piece of identification information in the current tracking queue.
S206: and determining first identification information according to the Hungary algorithm and the overlapping proportion of each region, judging whether the overlapping proportion of the region of any tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is larger than a set matching threshold, if so, executing S207, otherwise, executing S208.
S207: the target identification information of the target person' S head frame is determined as the first identification information, and then S212 is performed.
S208: and obtaining the feature similarity of the target head frame and each tracking head frame according to the feature vector of the target head frame obtained through the feature extraction layer in the image recognition model and the feature vector corresponding to each tracking head frame in the current tracking queue.
S209: and determining second identification information according to the Hungary algorithm and each feature similarity, judging whether the feature similarity of any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is larger than a target threshold corresponding to the target head frame, if so, executing S210, otherwise, executing S211.
S210: the target identification information of the target person' S head frame is determined as the second identification information, and then S212 is performed.
S211: a new identification information is allocated to the target person 'S head frame, and the target identification information of the target person' S head frame is determined to be the new identification information, and then S212 is performed.
S212: and judging whether the third similarity of any two tracking head frames corresponding to the target identification information is larger than a preset second threshold value, if so, executing S213, otherwise, executing S215.
S213: and storing the relevant information of each tracking head frame corresponding to the target identification information in a verification queue.
S214: and deleting the relevant information of each tracking person head frame corresponding to the target identification information in the current tracking queue, and not executing S215-S219.
S215: and acquiring a tracking head frame corresponding to the target identification information from the current tracking queue.
S216: and determining the corresponding moving distances of the two tracking head frames according to the image information of every two tracking head frames adjacent to each other in the acquisition time, and determining the sum of the distances according to each moving distance.
S217: and judging whether the number of the head frames of the tracking person corresponding to the target identification information is larger than a set number threshold value, and whether the sum of the acquired distances is larger than a set distance threshold value, if so, executing S218, otherwise, executing S219.
S218: and updating the passenger flow value of the current statistics.
S219: and storing the relevant information of the head frame of the target person into a current tracking queue.
Example 7: in order to further improve accuracy of the statistical passenger flow value, based on the above embodiment, in the embodiment of the present invention, determining the target threshold corresponding to the target head frame includes: and determining a target threshold corresponding to the size of the target human head frame according to the corresponding relation between the image size and the threshold.
In practical applications, the facial features contained in the head frames of the same person at different positions are obtained from a plurality of images to be identified, and the sizes of the head frames at different positions in those images also differ. For example, when a person C is located at the upper left corner of one image to be identified, the head frame of person C is small and contains blurred facial features, whereas when person C is located at the center of another image to be identified, the head frame of person C is large and contains clear facial features. When the feature similarity is calculated from the feature vector of the head frame with clear facial features and the feature vector of the head frame with blurred facial features of the same object, the feature similarity is very low and generally cannot exceed a fixed threshold, so person C is easily misidentified as two different persons, which reduces the accuracy of the statistical passenger flow value.
Therefore, in order to further improve the accuracy of the statistical passenger flow value, in the embodiment of the invention, the correspondence between the image size and the threshold value is pre-stored, and when the target identification information of the target head frame needs to be determined according to each feature similarity corresponding to the target head frame and the target threshold value corresponding to the target head frame, the target threshold value corresponding to the size of the target head frame is determined according to the pre-stored correspondence between the image size and the threshold value.
As a possible implementation manner, the correspondence between the image size and the threshold value is determined according to the following manner:
obtaining a sample feature vector of each sample head frame in the sample set, wherein each sample head frame corresponds to sample identification information;
for each sample head frame corresponding to the sample identification information, obtaining a sample feature vector of the sample head frame and a sample feature similarity of a sample feature vector of a target sample head frame corresponding to the sample identification information; and
and determining a threshold value corresponding to each image size according to the size of each sample human head frame and the corresponding sample feature similarity thereof.
In order to accurately determine the corresponding threshold for target head frames at different positions, in the embodiment of the invention, a sample set is collected in advance, and the sample set includes each sample head frame and the sample identification information corresponding to each sample head frame. After the sample feature vector of each sample head frame is obtained through the image recognition model, for the sample head frames corresponding to each piece of sample identification information in the sample set, the sample feature similarity between the sample feature vector of each such sample head frame and the sample feature vector of the target sample head frame corresponding to that sample identification information is obtained.
In a practical application scenario, an object usually approaches the image acquisition device gradually from a distance. The head frame of the object in the earliest acquired image to be identified containing that object is therefore generally the smallest and contains the fewest features, while in the images to be identified acquired continuously afterwards, the head frame of the object grows larger as the object approaches and the image features inside the head frame become clearer. Consequently, for two larger head frames of the same object with clearer image features, the feature similarity determined from their features is larger, and for two smaller head frames of the same object with blurred image features, the determined feature similarity is smaller. Based on this, the image size of a head frame has a roughly linear relationship with the feature similarity. In practice, the sample head frame with the earliest acquisition time can generally be determined as the target sample head frame corresponding to the sample identification information; other ways of selecting the target sample head frame are not excluded, and the above embodiment does not limit this.
After the sample feature similarity corresponding to each sample head frame is obtained based on the above embodiment, the threshold corresponding to each image size is determined according to the size of each sample head frame and its corresponding sample feature similarity. Specifically, the size of the sample head frame is taken as the horizontal axis and the sample feature similarity as the vertical axis; fitting is performed according to the size of each sample head frame and its corresponding sample feature similarity, and a fitting curve is determined, for example D=f(x), where x is the size of the sample head frame and D is the threshold. According to the fitting curve, the correspondence between image size and threshold is determined. The specific fitting process belongs to the prior art and is not described here.
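The fitting itself can be carried out with any standard curve-fitting routine; the sketch below uses a low-order polynomial fit as an illustrative assumption:

```python
import numpy as np

def fit_size_threshold_curve(sizes, similarities, degree=2):
    """Fit D = f(x) from sample head-frame sizes and their sample feature
    similarities; the polynomial degree is an illustrative assumption."""
    coeffs = np.polyfit(np.asarray(sizes, dtype=float),
                        np.asarray(similarities, dtype=float), degree)
    return np.poly1d(coeffs)

# Usage: look up the target threshold for a given target head frame size.
# f = fit_size_threshold_curve(sample_sizes, sample_similarities)
# target_threshold = float(f(target_head_frame_size))
```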
Example 8: the following describes a statistical method of the current value according to the present application by a specific embodiment, and fig. 3 is a schematic diagram of a statistical flow of the specific current value according to the embodiment of the present application, where the flow includes three parts, namely training of an image recognition model, determining of a correspondence between an area of a head frame and a dynamic threshold, and statistics of the current value, and details of each part are described below:
a first part: training of image recognition models.
S301: training the deep learning network model to obtain an image recognition model.
The first electronic equipment acquires any sample human head frame in the sample set and corresponding sample identification information thereof; obtaining third identification information corresponding to the sample human head frame through a deep learning network model; training the deep learning network model according to the sample identification information and the third identification information to obtain an image recognition model so that the feature vector of the target head frame can be obtained through a feature extraction layer in the image recognition model.
In the process of training the image recognition model, an off-line mode is generally adopted, and the first electronic device trains the deep learning network model in advance according to the sample head frames in the sample set and corresponding sample identification information so as to obtain the image recognition model.
Note that the first electronic device and the second electronic device that performs the subsequent passenger flow value statistics may be the same or different, and they are not specifically limited here.
A second part: and determining the corresponding relation between the area of the head frame and the dynamic threshold value.
S302: and determining the corresponding relation between the image size and the threshold value.
The first electronic equipment respectively acquires sample feature vectors of each sample head frame in the sample set through a deep learning network model, and each sample head frame corresponds to sample identification information; for each sample head frame corresponding to the sample identification information, obtaining a sample feature vector of the sample head frame and a sample feature similarity of a sample feature vector of a target sample head frame corresponding to the sample identification information; and determining a threshold value corresponding to each image size according to the size of each sample human head frame and the corresponding sample feature similarity thereof.
This part may also be completed in the second electronic device used for the passenger flow value statistics; in this embodiment, the execution subject of this part is not limited.
Third part: the passenger flow value statistics is carried out by the second electronic device based on the image recognition model obtained through training by the first electronic device, and the specific implementation method includes the following steps:
s303: and acquiring a head frame of a person contained in the image to be identified.
Since a plurality of head frames contained in the image to be identified may be obtained through the above steps, for convenience of explanation, the following steps are performed for any head frame in the obtained image to be identified:
s304: and judging that the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, if yes, executing S305, otherwise, executing S306.
S305: and determining the head frame as a target head frame.
S306: it is determined that S307 to S310 are not executed.
After each target head frame included in the image to be recognized is determined based on the steps S304 to S306, for convenience of explanation, the following steps are performed for any target head frame acquired:
s307: and determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue.
S308: and judging whether the target identification information meets the preset updating condition, if so, executing S309, otherwise, executing S310.
S309: and updating the passenger flow value of the current statistics.
S310: and storing the relevant information of the head frame of the target person into a current tracking queue.
Example 9: Fig. 4 is a schematic structural diagram of a passenger flow value statistics device provided by an embodiment of the present invention, where the embodiment of the present invention provides a passenger flow value statistics device, and the device includes:
an acquisition unit 41 for acquiring a head frame of a person contained in an image to be recognized;
the judging unit 42 is configured to determine, for each head frame, that the head frame is a target head frame if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold;
a processing unit 43, configured to determine, for each target head frame, target identification information of the target head frame according to a second similarity between the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
In a possible implementation manner, the judging unit 42 is specifically configured to:
Determining a hash value of the head frame according to the image information in the head frame; for each verifying head frame, determining a first distance between the hash value of the head frame and the hash value of the verifying head frame, and determining the first distance as a first similarity between the head frame and the verifying head frame.
In a possible implementation manner, the judging unit 42 is specifically configured to:
determining a target size according to the size of the head frame and the set multiple; determining an image area which contains the head frame and has the size of the target size in the image to be identified; and determining a hash value of the image area according to the image information contained in the image area, and determining the hash value of the image area as the hash value of the head frame.
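For illustration, the hash computation and Hamming-distance comparison described above can be sketched as follows. The patent does not fix the concrete hash; an average (perceptual) hash is one common choice, and the use of OpenCV, the 1.5 multiple, and the helper names are assumptions:

```python
import numpy as np
import cv2   # assumption: OpenCV is available for cropping and resizing

def expanded_region(image, box, factor=1.5):
    """Crop an image area that contains the head frame and whose size is the
    head-frame size multiplied by a set factor (clipped to the image borders)."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    x0, y0 = max(int(cx - nw / 2), 0), max(int(cy - nh / 2), 0)
    x1 = min(int(cx + nw / 2), image.shape[1])
    y1 = min(int(cy + nh / 2), image.shape[0])
    return image[y0:y1, x0:x1]

def average_hash(region, hash_size=8):
    """64-bit average hash of the image region (one possible hash, not mandated)."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).astype(np.uint8).flatten()

def hamming_distance(bits_a, bits_b):
    """Number of differing bits; the patent uses this first distance as the first similarity."""
    return int(np.count_nonzero(bits_a != bits_b))
```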
In a possible implementation manner, the determining unit 42 is further configured to not execute the process of updating the current counted passenger flow value if the first similarity between the head frame and any one of the current verification queues is not less than the first threshold.
In a possible embodiment, the processing unit 43 is specifically configured to:
acquiring a tracking head frame corresponding to the target identification information from a current tracking queue; determining the corresponding moving distance of each two tracking head frames according to the image information of each two tracking head frames adjacent to each other in acquisition time, and determining the sum of the distances according to each moving distance; and if the number of the head frames of the tracked person corresponding to the target identification information is larger than a set number threshold value and the sum of the distances is larger than a set distance threshold value, determining that the target identification information meets a preset updating condition.
In a possible implementation manner, the processing unit 43 is further configured to delete, in the current tracking queue, relevant information of each tracking head frame corresponding to the target identification information if the third similarity of any two tracking head frames corresponding to the target identification information is greater than a preset second threshold, and not execute the process of updating the current statistical passenger flow value.
In a possible implementation manner, the processing unit 43 is further configured to store, in the current verification queue, relevant information of each tracking head frame corresponding to the target identification information.
In a possible embodiment, the processing unit 43 is specifically configured to:
respectively determining the second similarity of the target head frame and each tracking head frame in the current tracking queue; determining first identification information according to a Hungarian algorithm and each second similarity; and if the second similarity between the tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is smaller than a set matching threshold, determining the target identification information of the target head frame according to the feature similarity between the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame.
In a possible embodiment, the processing unit 43 is specifically configured to:
determining second identification information according to the Hungarian algorithm and the feature similarity; if the feature similarity between any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is larger than the target threshold corresponding to the target head frame, determining that the target identification information of the target head frame is the second identification information; otherwise, new identification information is allocated to the target head frame as the target identification information of the target head frame.
In a possible embodiment, the processing unit 43 is specifically configured to: and determining a target threshold corresponding to the size of the target human head frame according to the corresponding relation between the image size and the threshold.
In a possible implementation manner, the obtaining unit 41 is further configured to obtain a sample feature vector of each sample head frame in the sample set, where each sample head frame corresponds to sample identification information;
the processing unit 43 is further configured to obtain, for each sample header frame corresponding to the sample identification information, a sample feature vector of the sample header frame, and a sample feature similarity of a sample feature vector of a target sample header frame corresponding to the sample identification information; and determining a threshold value corresponding to each image size according to the size of each sample human head frame and the corresponding sample feature similarity thereof.
Example 9: Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. On the basis of the above embodiments, the embodiment of the present invention further provides an electronic device, as shown in Fig. 5, including: a processor 51, a communication interface 52, a memory 53 and a communication bus 54, wherein the processor 51, the communication interface 52 and the memory 53 communicate with one another through the communication bus 54;
the memory 53 has stored therein a computer program which, when executed by the processor 51, causes the processor 51 to perform the steps of:
acquiring a head frame of a person contained in an image to be identified; for each head frame, if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, determining that the head frame is a target head frame; for each target head frame, determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
Because the principle by which the electronic device solves the problem is similar to that of the above passenger flow value statistics method, the implementation of the electronic device can refer to the implementation of the method, and repeated description is omitted.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface 52 is used for communication between the above-described electronic device and other devices.
The memory may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
Example 10: on the basis of the above embodiments, the embodiments of the present application further provide a computer readable storage medium having stored therein a computer program executable by a processor, which when run on the processor, causes the processor to perform the steps of:
acquiring a head frame of a person contained in an image to be identified; for each head frame, if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, determining that the head frame is a target head frame; for each target head frame, determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
Since the principle by which the computer-readable storage medium solves the problem is similar to that of the above passenger flow value statistics method, its implementation can refer to the implementation of the method, and repeated description is omitted.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (24)

1. A method of statistics of passenger flow values, the method comprising:
acquiring a head frame of a person contained in an image to be identified;
for each head frame, if the first similarity between the head frame and each verifying head frame in the current verification queue is smaller than a preset first threshold value, determining that the head frame is a target head frame, wherein the verifying head frame is a head frame displayed through a transmission medium in the acquired surrounding environment;
for each target head frame, determining target identification information of the target head frame according to the second similarity of the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
2. The method of claim 1, wherein determining a first similarity of the head frame to each verifying head frame in a current verification queue comprises:
Determining a hash value of the head frame according to the image information in the head frame;
for each verifying head frame, determining a first distance between the hash value of the head frame and the hash value of the verifying head frame, and determining the first distance as a first similarity between the head frame and the verifying head frame.
3. The method of claim 2, wherein the determining a hash value of the head frame based on the image information within the head frame comprises:
determining a target size according to the size of the head frame and the set multiple;
determining an image area which contains the head frame and has the size of the target size in the image to be identified; and
and determining the hash value of the image area according to the image information in the image area, and determining the hash value of the image area as the hash value of the head frame.
4. The method according to claim 1, wherein the method further comprises:
and if the first similarity between the head frame and any verification head frame in the current verification queue is not smaller than the first threshold value, not executing the process of updating the passenger flow value of the current statistics.
5. The method of claim 1, wherein determining that the target identification information satisfies a preset update condition comprises:
acquiring a tracking head frame corresponding to the target identification information from a current tracking queue;
determining the corresponding moving distance of each two tracking head frames according to the image information of each two tracking head frames adjacent to each other in acquisition time, and determining the sum of the distances according to each moving distance; and
if the number of the head frames of the tracking person corresponding to the target identification information is larger than a set number threshold value and the sum of the distances is larger than a set distance threshold value, determining that the target identification information meets a preset updating condition.
6. The method according to claim 1, wherein the method further comprises:
if the third similarity of any two tracking head frames corresponding to the target identification information is larger than a preset second threshold value, deleting the relevant information of each tracking head frame corresponding to the target identification information in the current tracking queue, and not executing the process of updating the passenger flow value of the current statistics.
7. The method of claim 6, wherein the method further comprises:
And storing the relevant information of each tracking head frame corresponding to the target identification information in a current verification queue.
8. The method of claim 1, wherein the determining the target identification information of the target head box based on the second similarity of the target head box to each tracking head box in the current tracking queue comprises:
respectively determining the second similarity of the target head frame and each tracking head frame in the current tracking queue;
determining first identification information according to a Hungarian algorithm and each second similarity;
and if the second similarity between the tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is smaller than a set matching threshold, determining the target identification information of the target head frame according to the feature similarity between the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame.
9. The method of claim 8, wherein the determining the target identification information of the target head frame according to the feature similarity of the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame comprises:
Determining second identification information according to the Hungarian algorithm and the feature similarity;
if the feature similarity between any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is larger than a target threshold corresponding to the target head frame, determining that the target identification information of the target head frame is the second identification information; otherwise, new identification information is allocated to the target head frame as target identification information of the target head frame.
10. The method of claim 9, wherein determining a target threshold corresponding to the target human head frame comprises:
and determining a target threshold corresponding to the size of the target human head frame according to the corresponding relation between the image size and the threshold.
11. The method of claim 10, wherein the correspondence of image size to threshold is determined according to the following:
obtaining a sample feature vector of each sample head frame in the sample set, wherein each sample head frame corresponds to sample identification information;
for each sample head frame corresponding to the sample identification information, obtaining a sample feature vector of the sample head frame and a sample feature similarity of a sample feature vector of a target sample head frame corresponding to the sample identification information;
And determining a threshold value corresponding to each image size according to the size of each sample human head frame and the corresponding sample feature similarity thereof.
12. A statistical device for passenger flow values, the device comprising:
the acquisition unit is used for acquiring a head frame contained in the image to be identified;
the judging unit is used for judging whether each head frame is a target head frame or not, if the first similarity between the head frame and each verification head frame in the current verification queue is smaller than a preset first threshold value, the verification head frames are head frames displayed through a transmission medium in the acquired surrounding environment;
the processing unit is used for determining target identification information of each target head frame according to the second similarity between the target head frame and each tracking head frame in the current tracking queue; and if the target identification information meets a preset updating condition, updating the passenger flow value of the current statistics.
13. The apparatus according to claim 12, wherein the judging unit is specifically configured to:
determining a hash value of the head frame according to the image information in the head frame; for each verifying head frame, determining a first distance between the hash value of the head frame and the hash value of the verifying head frame, and determining the first distance as a first similarity between the head frame and the verifying head frame.
14. The apparatus according to claim 13, wherein the judging unit is specifically configured to:
determining a target size according to the size of the head frame and the set multiple; determining an image area which contains the head frame and has the size of the target size in the image to be identified; and determining the hash value of the image area according to the image information in the image area, and determining the hash value of the image area as the hash value of the head frame.
15. The apparatus of claim 12, wherein the determining unit is further configured to:
and if the first similarity between the head frame and any verification head frame in the current verification queue is not smaller than the first threshold value, not executing the process of updating the passenger flow value of the current statistics.
16. The apparatus according to claim 15, wherein the processing unit is specifically configured to:
acquiring a tracking head frame corresponding to the target identification information from a current tracking queue; determining the corresponding moving distance of each two tracking head frames according to the image information of each two tracking head frames adjacent to each other in acquisition time, and determining the sum of the distances according to each moving distance; and if the number of the head frames of the tracked person corresponding to the target identification information is larger than a set number threshold value and the sum of the distances is larger than a set distance threshold value, determining that the target identification information meets a preset updating condition.
17. The apparatus of claim 12, wherein the processing unit is further configured to:
if the third similarity of any two tracking head frames corresponding to the target identification information is larger than a preset second threshold value, deleting the relevant information of each tracking head frame corresponding to the target identification information in the current tracking queue, and not executing the process of updating the passenger flow value of the current statistics.
18. The apparatus of claim 17, wherein the processing unit is further configured to:
and storing the relevant information of each tracking head frame corresponding to the target identification information in a current verification queue.
19. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
respectively determining the second similarity of the target head frame and each tracking head frame in the current tracking queue; determining first identification information according to a Hungarian algorithm and each second similarity; and if the second similarity between the tracking head frame corresponding to the first identification information in the current tracking queue and the target head frame is smaller than a set matching threshold, determining the target identification information of the target head frame according to the feature similarity between the target head frame and each tracking head frame in the current tracking queue and the target threshold corresponding to the target head frame.
20. The apparatus according to claim 19, wherein the processing unit is specifically configured to:
determining second identification information according to the Hungarian algorithm and the feature similarity; if the feature similarity between any tracking head frame corresponding to the second identification information in the current tracking queue and the target head frame is larger than a target threshold corresponding to the target head frame, determining that the target identification information of the target head frame is the second identification information; otherwise, new identification information is allocated to the target head frame as target identification information of the target head frame.
21. The apparatus according to claim 20, wherein the processing unit is specifically configured to:
and determining a target threshold corresponding to the size of the target human head frame according to the corresponding relation between the image size and the threshold.
22. The apparatus of claim 21, wherein
the acquisition unit is further used for acquiring sample feature vectors of each sample head frame in the sample set, and each sample head frame corresponds to sample identification information;
the processing unit is further configured to obtain, for each sample header frame corresponding to the sample identification information, a sample feature vector of the sample header frame, and a sample feature similarity of a sample feature vector of a target sample header frame corresponding to the sample identification information; and determining a threshold value corresponding to each image size according to the size of each sample human head frame and the corresponding sample feature similarity thereof.
23. An electronic device comprising at least a processor and a memory, the processor being adapted to implement the steps of the statistical method of the passenger flow values according to any one of claims 1-11 when executing a computer program stored in the memory.
24. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the statistical method of passenger flow values according to any one of claims 1-11.
CN202010664483.0A 2020-07-10 2020-07-10 Passenger flow value statistical method, device, equipment and medium Active CN111860261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010664483.0A CN111860261B (en) 2020-07-10 2020-07-10 Passenger flow value statistical method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111860261A CN111860261A (en) 2020-10-30
CN111860261B true CN111860261B (en) 2023-11-03

Family

ID=72984018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010664483.0A Active CN111860261B (en) 2020-07-10 2020-07-10 Passenger flow value statistical method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111860261B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113869159B (en) * 2021-09-16 2022-06-10 深圳市创宜隆科技有限公司 Cloud server data management system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872414A (en) * 2010-02-10 2010-10-27 杭州海康威视软件有限公司 People flow rate statistical method and system capable of removing false targets
CN103577832A (en) * 2012-07-30 2014-02-12 华中科技大学 People flow statistical method based on spatio-temporal context
CN105512720A (en) * 2015-12-15 2016-04-20 广州通达汽车电气股份有限公司 Public transport vehicle passenger flow statistical method and system
CN105631418A (en) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 People counting method and device
CN106548451A (en) * 2016-10-14 2017-03-29 青岛海信网络科技股份有限公司 A kind of car passenger flow crowding computational methods and device
WO2018121127A1 (en) * 2016-12-30 2018-07-05 苏州万店掌网络科技有限公司 System for collecting statistics on pedestrian traffic by means of tracking based on video analysis technique
CN108416250A (en) * 2017-02-10 2018-08-17 浙江宇视科技有限公司 Demographic method and device
CN108764017A (en) * 2018-04-03 2018-11-06 广州通达汽车电气股份有限公司 Public traffice passenger flow statistical method, apparatus and system
CN108932464A (en) * 2017-06-09 2018-12-04 北京猎户星空科技有限公司 Passenger flow volume statistical method and device
CN110260810A (en) * 2018-03-12 2019-09-20 深圳鼎然信息科技有限公司 The vehicles, which multiply, carries demographic method, device, equipment and medium
CN110263703A (en) * 2019-06-18 2019-09-20 腾讯科技(深圳)有限公司 Personnel's flow statistical method, device and computer equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7965866B2 (en) * 2007-07-03 2011-06-21 Shoppertrak Rct Corporation System and process for detecting, tracking and counting human objects of interest
CN109711299A (en) * 2018-12-17 2019-05-03 北京百度网讯科技有限公司 Vehicle passenger flow statistical method, device, equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A novel passenger flow prediction model using deep learning methods;Lijuan Liu等;《Transportation Research Part C》;第84卷;第74-91页 *
基于机器学习的嵌入式公交客流统计系统;朱攀;《中国优秀硕士学位论文全文数据库 信息科技辑》(第(2019)07期);I138-812 *
基于视频分析的公交客流统计技术研究与实现;赵倩;《中国优秀硕士学位论文全文数据库 工程科技II辑》(第(2017)03期);C034-1153 *
复杂场景下人流量统计方法研究;张雅俊;《中国优秀硕士学位论文全文数据库 信息科技辑》(第(2018)04期);I138-3106 *

Also Published As

Publication number Publication date
CN111860261A (en) 2020-10-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant