CN113591904A - Sojourn time statistical method, goods adjusting method and related device - Google Patents
Sojourn time statistical method, goods adjusting method and related device
- Publication number
- CN113591904A (application number CN202110672393.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- time
- monitoring area
- tracking algorithm
- library
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The application discloses a sojourn time statistics method, a goods adjustment method and a related device, wherein the sojourn time statistics method comprises the following steps: in response to a camera collecting a first target, tracking the first target with a tracking algorithm and acquiring the first time at which the first target was collected; in response to the tracking algorithm tracking the first target as it leaves the monitoring area, acquiring the second time at which the first target left the monitoring area; in response to the tracking algorithm losing the first target, acquiring the recognition library corresponding to the monitoring area and judging whether the recognition library includes a second target whose similarity to the first target exceeds a first threshold; if not, adding the first target to the recognition library; otherwise, taking the time at which the tracking algorithm lost the first target as the second time; and taking the difference between the second time and the first time as the linger time of the first target. In this way, the linger time of the first target in the monitoring area can be counted, enriching the information about the first target in the monitoring area.
Description
Technical Field
The present disclosure relates to the field of data statistics, and more particularly, to a sojourn time statistics method, a goods adjustment method, and a related device.
Background
With the advent of the intelligent era, target acquisition in a monitoring area is no longer limited to capturing monitoring pictures; more valuable information is extracted from the monitoring area, and more intelligent decisions can be made based on the acquired information.
In the prior art, for monitoring areas such as shopping malls, shops and shelves, targets entering the monitoring area are generally identified and counted to obtain the passenger flow. However, passenger flow alone carries too little information for decision making, so intelligent operation is difficult to realize. In view of this, how to count the linger time of a first target in the monitored area, and thereby enrich the information about the first target in the monitored area, has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a linger time statistics method, a goods adjustment method and a related device, which can count the linger time of a first target in a monitoring area and enrich the information about the first target in the monitoring area.
In order to solve the above technical problem, a first aspect of the present application provides a linger time statistics method, including: in response to a camera collecting a first target, tracking the first target with a tracking algorithm and acquiring the first time at which the first target was collected; in response to the tracking algorithm tracking the first target as it leaves the monitoring area, acquiring the second time at which the first target left the monitoring area; in response to the tracking algorithm losing the first target, acquiring the recognition library corresponding to the monitoring area and judging whether the recognition library includes a second target whose similarity to the first target exceeds a first threshold; if not, adding the first target to the recognition library; otherwise, taking the time at which the tracking algorithm lost the first target as the second time; and taking the difference between the second time and the first time as the linger time of the first target.
In order to solve the above technical problem, a second aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, and the processor calls the program data to execute the method of the first aspect.
To solve the above technical problem, a third aspect of the present application provides a computer-readable storage medium having stored thereon program data, which when executed by a processor, implements the method of the first aspect.
The beneficial effects of this application are as follows. After the camera collects the first target, the time at which the first target was collected is recorded and the first target is tracked with a tracking algorithm. When the tracking algorithm tracks the first target leaving the monitoring area, the second time at which the first target left the monitored area is recorded. When the tracking algorithm fails and loses the first target, the recognition library is searched for a second target whose similarity to the first target exceeds the first threshold. If no such target is included, the first target is added to the recognition library; if one is included, the second target is considered to be the same target as the first target, previously lost by a camera. Since the tracking algorithm has a low probability of failing repeatedly, it can be considered that when the first target is lost again it has left the monitored area, so the time at which the first target was lost is taken as the second time, and the difference between the second time and the first time is taken as the linger time of the first target in the monitoring area. In this way, after the first target is collected, different linger-time acquisition strategies are set for the scenario in which the tracking algorithm completes tracking and the scenario in which tracking fails, so that the linger time of the first target in the monitoring area is counted and the information about the first target in the monitoring area is enriched.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort. Wherein:
FIG. 1 is a schematic flowchart of an embodiment of the linger time statistics method of the present application;
FIG. 2 is a schematic flowchart of another embodiment of the linger time statistics method of the present application;
FIG. 3 is a schematic flowchart of an embodiment of the goods adjustment method of the present application;
FIG. 4 is a schematic structural diagram of an embodiment of the electronic device of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of the computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a method for counting linger time according to the present application, the method comprising:
S101: In response to the camera collecting a first target, track the first target with a tracking algorithm and acquire the first time at which the first target was collected.
Specifically, the monitoring area is provided with at least one camera that shoots the monitoring area. When any camera in the monitoring area collects a first target, the first target is tracked with a tracking algorithm and the first time at which the first target was first collected is recorded.
In one application mode, one camera is arranged in the monitoring area. When a pedestrian enters the monitoring area and is collected by the camera, the pedestrian is taken as the first target, the time at which the first target was first collected is recorded, the face information of the first target is acquired, and the tracking algorithm configures a corresponding first identifier for the face information of the first target, the first identifier corresponding to the first time.
In another application mode, a plurality of cameras are arranged in the monitoring area. After one or more pedestrians enter the monitoring area, in response to a pedestrian being collected by any camera, that pedestrian is taken as the first target, the time at which the first target was first collected is recorded, the face information of the first target is acquired, and the tracking algorithm configures a corresponding first identifier for the face information of the first target, the first identifier corresponding to the first time. Each camera runs its own tracking process after collecting any first target, and the cameras do not interfere with one another while tracking the first target.
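For illustration only, the recording of the first time can be sketched as follows in Python; the class and method names are hypothetical, and the patent does not prescribe any particular implementation:

```python
import time


class FirstSeenRecorder:
    """Hypothetical sketch: record the first time each tracked target is collected.

    Later sightings of the same target identifier never overwrite the first time.
    """

    def __init__(self):
        self.first_seen = {}  # target identifier -> first collection time

    def on_collected(self, target_id, timestamp=None):
        # setdefault keeps the earliest sighting and returns it unchanged
        t = time.time() if timestamp is None else timestamp
        return self.first_seen.setdefault(target_id, t)
```

A tracker would call `on_collected` with the first identifier assigned by the tracking algorithm; repeated calls for the same identifier simply return the stored first time.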
S102: Judge whether the tracking algorithm has lost the first target.
Specifically, a boundary is set for the monitoring area and the first target is continuously tracked with the tracking algorithm. If the tracking algorithm tracks the first target leaving across the boundary of the monitoring area, proceed to step S103; if the tracking algorithm loses the first target within the boundary of the monitoring area, proceed to step S104.
In one application mode, the tracking algorithm configures a corresponding first identifier for the face information of the first target and continuously tracks the first target based on that face information. In response to the tracking algorithm losing the first target, timing is started; if the elapsed time exceeds a time threshold, the tracking algorithm is determined to have failed, and if it does not exceed the time threshold, the face information corresponding to the first target is reacquired and tracking continues. When the first target crosses the boundary of the monitoring area, it is determined that the first target has left the monitoring area.
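The timeout decision above is a simple comparison; a minimal sketch, assuming a `time_threshold` in the same units as the measured loss duration (the function name and the state labels are illustrative, not from the patent):

```python
def loss_state(lost_duration, time_threshold):
    """Decide what to do once the tracker stops seeing the target.

    Within the time threshold the target's features are reacquired and
    tracking continues; beyond it, the tracking is declared failed.
    """
    return "tracking_failed" if lost_duration > time_threshold else "reacquire"
```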
S103: In response to the tracking algorithm tracking the first target as it leaves the monitoring area, acquire the second time at which the first target left the monitoring area.
Specifically, when the tracking algorithm tracks the first target leaving across the boundary of the monitoring area, the time at which the first target left the monitoring area is acquired and recorded as the second time, and the method proceeds to step S107.
In one application mode, when tracking the first target, the tracking algorithm tracks the target carrying the same first identifier. If the face information corresponding to the first target is never lost for longer than the time threshold during tracking and the first target leaves across the boundary of the monitoring area, the time at which the first target left the monitoring area is recorded as the second time.
S104: In response to the tracking algorithm losing the first target, acquire the recognition library corresponding to the monitoring area and judge whether the recognition library includes a second target whose similarity to the first target exceeds a first threshold.
Specifically, while the tracking algorithm is tracking the first target, if the first target has not completely left the boundary of the monitoring area but cannot be found in the monitoring area after the time threshold is exceeded, it is determined that the first target is lost.
Further, the recognition library currently corresponding to the monitoring area is acquired and searched for a second target whose similarity to the first target exceeds the first threshold; such a second target is determined to be the same target as the first target. If the recognition library includes a second target, proceed to step S106; if it does not, proceed to step S105.
In one application mode, the recognition library stores feature information corresponding to targets. When the tracking algorithm loses the first target, the feature information corresponding to the first target is compared with the feature information in the recognition library to obtain the similarity between the first target and the other targets in the library; a second target whose similarity exceeds the first threshold is extracted and judged to be the same target as the first target.
In a specific application scenario, the feature information stored in the recognition library is face information. When the tracking algorithm loses the first target, the face information corresponding to the first target is compared with the face information in the recognition library to obtain the similarity between the first target and the other targets in the library, where the similarity ranges from 0 to 1 and a higher value indicates a higher similarity. A second target whose similarity exceeds 0.89 is extracted and determined to be the same target as the first target.
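The patent does not name the comparison model; as one common choice, cosine similarity between feature vectors yields a value in [0, 1] for non-negative embeddings, which can then be tested against the first threshold of 0.89 mentioned above (the function names are illustrative):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors; higher means more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def is_same_target(feat_a, feat_b, first_threshold=0.89):
    # A pair exceeding the first threshold is judged to be the same target.
    return cosine_similarity(feat_a, feat_b) > first_threshold
```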
S105: the first target is added to the recognition library.
Specifically, when the recognition library does not include a second target whose similarity to the first target exceeds the first threshold, the feature information corresponding to the first target lost by the tracking algorithm is stored in the recognition library.
S106: Take the time at which the tracking algorithm lost the first target as the second time.
Specifically, when the recognition library includes a second target, it indicates that the second target was previously lost by a camera: the second target was lost by the tracking algorithm before the first target was lost, and since its similarity to the first target exceeds the first threshold, the two can be determined to be the same target. In other words, the same target has been lost by the cameras in the monitoring area more than once.
Further, whether one camera or multiple cameras are used, the probability that the tracking algorithm loses the same target multiple times is low. Therefore, when the recognition library includes the second target, it is determined that the target has left the monitoring area upon being lost again, and the time at which the tracking algorithm lost the first target is taken as the second time.
In a specific application scenario, at least two cameras are arranged in the monitoring area, and each camera tracks the first target after it enters. When any camera loses the first target for the first time while the other cameras are still tracking it, no corresponding second target is found in the recognition library, so the first target is stored in the recognition library. When another camera later loses the first target, it finds the target stored in the recognition library earlier, and it is determined that the first target left the monitoring area with its identifying features partially occluded; that camera then takes the time at which the first target was lost as the second time. This completes the scheme for obtaining the linger time in occlusion scenarios.
S107: the difference between the second time and the first time is taken as the lingering time of the first target.
Specifically, the difference between the second time and the first time is used as the staying time of the first target in the monitoring area.
Furthermore, the number of the first targets is counted, and the lingering time corresponding to the first targets is stored so as to facilitate intelligent decision making in the monitoring area and provide richer information for subsequent application.
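The statistics in S107 reduce to a subtraction plus a per-target store; a hypothetical sketch (the function names and the dictionary layout are assumptions for illustration):

```python
def linger_time(first_time, second_time):
    """Linger time is the second time (departure) minus the first time (arrival)."""
    return second_time - first_time


def record_target(stats, target_id, first_time, second_time):
    """Store each first target's linger time so that both the target count
    and the per-target durations are available for later decision making."""
    stats[target_id] = linger_time(first_time, second_time)
    return len(stats)  # current number of counted first targets
```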
In this scheme, after the camera collects the first target, the time at which the first target was collected is recorded and the first target is tracked with the tracking algorithm. When the tracking algorithm tracks the first target leaving the monitoring area, the second time at which the first target left the monitored area is recorded. When the tracking algorithm fails and loses the first target, the recognition library is searched for a second target whose similarity to the first target exceeds the first threshold. If no such target is included, the first target is added to the recognition library; if one is included, the second target is considered to be the same target as the first target, previously lost by a camera. Since the tracking algorithm has a low probability of failing repeatedly, it can be considered that when the first target is lost again it has left the monitored area, so the time at which the first target was lost is taken as the second time, and the difference between the second time and the first time is taken as the linger time of the first target in the monitoring area. In this way, after the first target is collected, different linger-time acquisition strategies are set for the scenario in which the tracking algorithm completes tracking and the scenario in which tracking fails, so that the linger time of the first target in the monitoring area is counted and the information about the first target in the monitoring area is enriched.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a method for counting linger time according to the present application, the method comprising:
S201: In response to the camera collecting a first target, track the first target with a tracking algorithm and acquire the first time at which the first target was collected.
Specifically, after a camera in a monitoring area acquires a first target, the first target is tracked by a tracking algorithm, and the first time when the first target is acquired for the first time is recorded.
In one application mode, after a pedestrian enters the monitoring area and is collected by a camera, the pedestrian is taken as the first target, the time at which the first target was first collected is recorded, the face information of the first target is acquired with a pre-trained model, and the tracking algorithm configures a corresponding first identifier for the face information of the first target, the first identifier corresponding to the first time.
In another application mode, after a pedestrian enters the monitoring area and is collected by a camera, the pedestrian is taken as the first target, the time at which the first target was first collected is recorded, and a pre-trained model is used to obtain the pedestrian re-identification feature of the first target, where pedestrian re-identification is a computer vision technique for judging whether a specific pedestrian is present in an image or video sequence. The tracking algorithm configures a corresponding first identifier for the pedestrian re-identification feature of the first target, the first identifier corresponding to the first time.
S202: Judge whether the tracking algorithm has lost the first target.
Specifically, a boundary is set for the monitoring area; when a plurality of cameras are provided, the overlapping region of the monitoring ranges of the cameras is taken as the monitoring area and its boundary is set accordingly. The first target is continuously tracked with the tracking algorithm. If the tracking algorithm tracks the first target leaving across the boundary of the monitoring area, proceed to step S203; if the tracking algorithm loses the first target within the boundary of the monitoring area, proceed to step S204.
S203: In response to the tracking algorithm tracking the first target as it leaves the monitoring area, acquire the second time at which the first target left the monitoring area.
Specifically, when the tracking algorithm tracks the first target leaving across the boundary of the monitoring area, the time at which the first target left the monitoring area is acquired and recorded as the second time, and the method proceeds to step S211.
In one application mode, the tracking algorithm sets a corresponding first identifier for the pedestrian re-identification feature of the first target and then tracks that feature. If, during tracking, the pedestrian re-identification feature corresponding to the first target is never lost for longer than the time threshold and the first target leaves across the boundary of the monitoring area, the time at which the first target left the monitoring area is recorded as the second time.
S204: In response to the tracking algorithm losing the first target, acquire the recognition library corresponding to the monitoring area and judge whether the recognition library includes a second target whose similarity to the first target exceeds a first threshold.
Specifically, before the first target has completely left the boundary of the monitoring area, if the tracking algorithm loses the first target within the monitoring area for longer than the time threshold, it is determined that the tracking algorithm has lost the first target. The recognition library corresponding to the current monitoring area is then acquired and searched for a second target whose similarity to the first target exceeds the first threshold; such a second target is determined to be the same target as the first target. If one is found, proceed to step S210; otherwise, proceed to step S205.
In one application mode, each camera corresponds to its own recognition library, and the step of acquiring the recognition library corresponding to the monitoring area includes: acquiring the recognition libraries corresponding to the cameras other than the camera that shot the first target.
Specifically, each camera corresponds to one recognition library. When a camera loses a target and no target with similarity above the first threshold is found in the recognition libraries of the other cameras, the lost target is stored in the recognition library of the camera that lost it, so that it can later be searched for and retrieved.
Further, assuming the camera that lost the first target is the first camera, when the first camera loses the first target, the recognition libraries corresponding to the other cameras in the monitored area are acquired and searched for a corresponding second target. This reduces repeated comparisons and excludes the first camera's own recognition library from the comparison stage.
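The cross-library lookup above can be sketched as follows; the function name, the dictionary-of-libraries layout, and the injected `similarity` callable are assumptions made for illustration, not details prescribed by the patent:

```python
def find_in_other_libraries(feature, lost_by_camera, libraries,
                            similarity, first_threshold):
    """Search every camera's recognition library except that of the camera
    which lost the target; return (camera_id, stored_feature) on a match.

    libraries: dict mapping camera id -> list of stored features.
    """
    for camera_id, entries in libraries.items():
        if camera_id == lost_by_camera:
            continue  # skip the loser's own library, avoiding redundant comparison
        for stored in entries:
            if similarity(feature, stored) > first_threshold:
                return camera_id, stored
    return None
```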
S205: In response to any camera corresponding to the monitoring area losing the first target without a second target being acquired, acquire feature information based on the face information corresponding to the first target.
Specifically, when a plurality of cameras are arranged in the monitoring area and any camera loses the first target it is tracking without a corresponding second target being found in the recognition library, the first target is sent to a face recognition model for feature extraction to obtain the face information corresponding to the first target, and the feature information corresponding to that face information is obtained from it.
In one application mode, when a plurality of cameras are arranged in the monitoring area, each camera independently tracks a first target appearing for the first time. When any camera is the earliest to lose the first target it is tracking, no corresponding second target can be found, so the first target is sent to a face recognition model for feature extraction to obtain the corresponding face information, and the corresponding feature information is extracted based on that face information. In this way, when a target later needs to be compared against the targets in a recognition library, a more accurate comparison result can be obtained.
S206: Add the feature information to the recognition library.
Specifically, the feature information is added to the recognition library and stored.
In one application mode, the monitoring area corresponds to one recognition library. When a camera corresponding to the monitoring area loses a target and no target with similarity above the first threshold is found in the recognition library, the feature information corresponding to the lost target is added to the recognition library for retrieval and matching, so that after the first target is lost it is matched comprehensively against all targets in the recognition library.
In another application mode, each camera corresponds to its own recognition library. When any camera corresponding to the monitored area loses a target and no target with similarity above the first threshold is found, the feature information is added to the recognition library corresponding to that camera for retrieval and matching, so that after the first camera loses the first target, matching is performed in the recognition libraries of the cameras other than the first camera, reducing the work of matching against the first camera's own recognition library.
It should be noted that the step of judging whether the recognition library includes a second target whose similarity to the first target exceeds the first threshold includes: comparing the first target with the targets in the recognition library based on the feature information corresponding to the first target, to obtain the similarity between each target in the recognition library and the first target; in response to the similarity being higher than the first threshold, determining that the recognition library includes a second target; and in response to the similarity being less than or equal to the first threshold, determining that the recognition library does not include a second target.
Specifically, with the feature information corresponding to each target stored in the recognition library, when judging whether the similarity between the first target and a target in the recognition library exceeds the first threshold, the first target and the library target are sent to a feature comparison model to obtain a value corresponding to their similarity. When this value is greater than the first threshold, it is judged that the recognition library includes the second target; when it is less than or equal to the first threshold, it is judged that the recognition library does not include the second target.
In one application mode, when the first target is compared against the targets in the recognition library, the comparison follows the order in which targets were added to the library, with the most recently added targets compared first, so that targets recently lost by other cameras are searched preferentially. The search stops once a second target whose similarity to the first target exceeds the first threshold is obtained, and that second target is deleted from the recognition library to avoid continuous accumulation of data. This improves both the efficiency and the accuracy of acquiring the second target corresponding to the first target.
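The newest-first search with removal on a hit can be sketched as below; the list-of-tuples layout and the injected `similarity` callable are illustrative assumptions:

```python
def match_newest_first(feature, library, similarity, first_threshold):
    """library: list of (added_time, stored_feature) tuples.

    Entries are compared newest-first; a matched entry is removed so the
    recognition library does not keep accumulating stale data.
    """
    for index in sorted(range(len(library)),
                        key=lambda i: library[i][0], reverse=True):
        if similarity(feature, library[index][1]) > first_threshold:
            return library.pop(index)  # remove and return the matched entry
    return None
```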
S207: and judging whether the time for adding the first target into the recognition library exceeds a second threshold value.
Specifically, the difference between the time when the first target was added to the recognition library and the current time is calculated; if the difference exceeds the second threshold, step S208 is performed, otherwise step S209 is performed.
S208: and taking the time of the first target lost as second time by using the tracking algorithm, and deleting the second target corresponding to the first target from the recognition library.
Specifically, when the time since the tracking algorithm lost the first target exceeds the second threshold, it indicates that no other camera has reacquired the same first target; that is, the first target was not merely occluded but has actually left the monitored area. Therefore, the time when the tracking algorithm lost the first target is taken as the second time, so that the loss time approximately serves as the time when the first target left the monitoring area, and the previously stored first target is deleted from the recognition library to avoid continuous accumulation of data in the library.
S209: the first target is stored in a recognition repository.
Specifically, when the time for which the first target has been stored in the recognition library does not exceed the second threshold, the first target remains stored in the recognition library, and the method returns to the step of, in response to the camera acquiring the first target, tracking the first target with the tracking algorithm and acquiring the first time. In this way, the previously stored feature information corresponding to the first target can be retrieved when the camera acquires the first target again.
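Steps S207 to S209 amount to a timeout check on each library entry. A minimal sketch, assuming entries are dictionaries with the hypothetical fields `added_at` and `lost_at` (times in seconds); the patent does not prescribe this representation:

```python
def check_entry_timeout(library, entry, current_time, second_threshold):
    """S207: has the entry been in the recognition library too long?
    S208: if yes, delete it and use its loss time as the second time.
    S209: if no, keep it stored so its features can be matched later."""
    if current_time - entry["added_at"] > second_threshold:
        library.remove(entry)
        return entry["lost_at"]   # approximates when the target left the area
    return None                   # still within the window; entry is kept
```

A `None` return means the entry stays in the library and tracking continues; a time value is used as the second time for the stay-time computation.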
S210: and taking the time when the first target is lost as a second time by the tracking algorithm.
Specifically, when the recognition library includes the second target, it indicates that the second target was previously lost by a camera, and the probability that the tracking algorithm loses the same target multiple times is low. Therefore, when the recognition library includes the second target, the time when the tracking algorithm lost the first target is taken as the second time.
S211: the difference between the second time and the first time is taken as the lingering time of the first target.
Specifically, the difference between the second time and the first time is taken as the stay time of the first target in the monitoring area. By repeating the above steps, the stay times of a plurality of first targets can be counted to form a data set of stay times, on which the user can perform decision analysis.
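Step S211 is a simple subtraction, repeated per target to build the decision data set. A sketch, where the dictionary layout of the data set is illustrative rather than taken from the patent:

```python
def record_stay_time(dataset, target_id, first_time, second_time):
    """S211: the stay time is the difference between the second time
    (leave/loss time) and the first time (first acquisition time).
    Each result is appended to a data set for later decision analysis."""
    stay_time = second_time - first_time
    dataset.append({"target": target_id, "stay_time": stay_time})
    return stay_time
```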
In this embodiment, when the tracking algorithm loses the first target and the corresponding second target cannot be obtained, the face information corresponding to the first target is acquired, and the feature information extracted from the face information is stored in the recognition library for subsequent matching and retrieval. The first target is continuously tracked and the recognition library is continuously updated. When the time for which the first target has been in the recognition library exceeds the second threshold, it is determined that the first target has left the monitoring area, and the time when the tracking algorithm lost the first target is taken as the second time. The stay time of the first target in the monitoring area is thus finally obtained, enriching the information about the first target in the monitoring area.
Referring to fig. 3, fig. 3 is a schematic flow chart of an embodiment of an article adjustment method according to the present application, the method including:
s301: and responding to the first target collected by the camera, tracking the first target by utilizing a tracking algorithm, and acquiring the first time for collecting the first target.
S302: and judging whether the first target is lost by the tracking algorithm.
S303: and responding to the tracking algorithm to track that the first target leaves the monitoring area, and acquiring a second time when the first target leaves the monitoring area.
S304: and in response to the loss of the first target by the tracking algorithm, acquiring an identification library corresponding to the monitoring area, and judging whether a second target with the similarity exceeding a first threshold value with the first target is included in the identification library.
S305: the first target is added to the recognition library.
S306: and taking the time when the first target is lost as a second time by the tracking algorithm.
S307: the difference between the second time and the first time is taken as the lingering time of the first target.
Specifically, steps S301 to S307 are similar to the corresponding steps in the foregoing embodiments; for details, refer to any of the above embodiments, which are not repeated here.
S308: the items within the monitored area are adjusted based on the first target residence time.
Specifically, after the stay time of the first target in the monitored area is obtained, the goods in the monitored area are adjusted according to the length of the stay time.
In one application, when the stay time exceeds a third threshold, the total quantity of items in the monitored area is increased, giving customers more items to choose from.
In another application, when the stay time does not exceed the third threshold, the goods in the monitored area are re-displayed, and signs and eye-catching decorations are added to attract customers to stay longer.
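The two application modes above reduce to a single threshold comparison. A hedged sketch; the action labels are illustrative, not prescribed by the text:

```python
def adjust_for_stay_time(stay_time, third_threshold):
    """S308 for a single target: long stays get more stock to choose from;
    short stays trigger re-display and signage to attract customers."""
    if stay_time > third_threshold:
        return "increase_item_total"
    return "redisplay_and_add_signage"
```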
It can be understood that, by monitoring the first target in the monitoring area and obtaining its stay time, decision making for the monitoring area becomes more convenient, and the quantity and arrangement of goods in the monitoring area become more reasonable.
Further, before the step of adjusting the items within the monitored area based on the stay time of the first target, the method may further include: acquiring a first number of first targets, and obtaining the average stay time corresponding to the first targets based on the first number and the stay times corresponding to the first targets.
Each camera counts the number of first targets: every time the stay time of one first target is obtained, the count of first targets is incremented by one. After the first number and the stay times are obtained, the average stay time corresponding to the first targets is calculated, and the goods in the monitoring area are adjusted according to this average stay time. Because the average stay time is more representative, the adjustment of the goods is more reasonable.
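The per-camera counting described above can be sketched as a small accumulator. The class and field names are assumptions introduced for illustration:

```python
class StayTimeStats:
    """Counts first targets and accumulates their stay times, so the
    average stay time can be computed for goods adjustment."""

    def __init__(self):
        self.first_number = 0     # how many first targets were observed
        self.total_stay = 0.0     # sum of their stay times

    def record(self, stay_time):
        # Each obtained stay time increments the first-target count by one.
        self.first_number += 1
        self.total_stay += stay_time

    def average_stay_time(self):
        if self.first_number == 0:
            return 0.0
        return self.total_stay / self.first_number
```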
In an application scenario, the step of adjusting the items in the monitored area based on the sojourn time of the first target includes: the items within the monitored area are adjusted based on the first number, the average residence time, and the shipment volume of the items within the monitored area.
Specifically, the shipment volume of the goods in the monitoring area can be counted by a camera or input by a user. Whether the average stay time exceeds a fourth threshold is then judged, and decisions about the goods are made for the two scenarios of long and short average stay time, thereby improving the rationality of goods adjustment.
When the average stay time of the first targets is long, it is judged whether the first number exceeds a fifth threshold. If the first number exceeds the fifth threshold, the first targets stay in the monitoring area for a long time and the total number of people is large; in this case, if the shipment volume is large, the input volume of the goods is increased to reduce the probability of running out of stock, and if the shipment volume is small, the goods on the shelf are rearranged to improve the probability of the goods being purchased. If the first number is less than or equal to the fifth threshold, the first targets stay for a long time but the total number of people is small; in this case, if the shipment volume is large, the input volume of the goods is increased and signs are added in the monitoring area so that more users can conveniently find the goods they are interested in, and if the shipment volume is small, the input volume is reduced to relieve inventory pressure, with signs still added to help users find the goods.
When the average stay time of the first targets is short, it is likewise judged whether the first number exceeds the fifth threshold. If the first number exceeds the fifth threshold, the first targets stay in the monitoring area for a short time and the total number of people is large; in this case, if the shipment volume is large, the input volume of the goods is increased to reduce the probability of running out of stock, and if the shipment volume is small, the price of the goods is adjusted to lengthen the stay time of the first targets and improve the probability of purchase. If the first number is less than or equal to the fifth threshold, the first targets stay for a short time and the total number of people is small; in this case, if the shipment volume is large, the input volume of the goods is increased and signs are added in the monitoring area so that more users can conveniently find the goods they are interested in, and if the shipment volume is small, the input volume is reduced to cope with the goods overstock that may otherwise occur when both the number of people and the shipment volume are small.
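The eight cases in the two paragraphs above form a 2×2×2 decision table over average stay time, crowd size, and shipment volume. A sketch under the assumption that each quantity is compared against a single threshold; the action labels are illustrative:

```python
def decide_goods_adjustment(first_number, average_stay_time, shipment_volume,
                            fourth_threshold, fifth_threshold, shipment_threshold):
    """Maps (long/short stay, many/few people, large/small shipment)
    to the adjustment described in the text."""
    long_stay = average_stay_time > fourth_threshold
    many_people = first_number > fifth_threshold
    large_shipment = shipment_volume > shipment_threshold

    if many_people:
        if large_shipment:
            return "increase_input_volume"           # avoid running out of stock
        # small shipment: long stays get shelf rearrangement, short stays a price change
        return "rearrange_shelf" if long_stay else "adjust_price"
    # few people: signs help users find goods; input volume tracks shipments
    if large_shipment:
        return "increase_input_volume_and_add_signs"
    return "reduce_input_volume_and_add_signs" if long_stay else "reduce_input_volume"
```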
According to the above scheme, the stay time of the first targets is acquired, and the goods in the monitoring area are adjusted appropriately based on the first number of first targets, the average stay time, and the shipment volume. This improves the rationality of the goods input volume, arrangement, and price, reduces the probability of potential risks, and makes decision making more intelligent.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of an electronic device 40 of the present application, where the electronic device includes a memory 401 and a processor 402 coupled to each other, where the memory 401 stores program data (not shown), and the processor 402 calls the program data to implement the method in any of the embodiments described above, and the description of the related contents refers to the detailed description of the embodiments of the method described above, which is not repeated herein.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium 50 of the present application, the computer-readable storage medium 50 stores program data 500, and the program data 500 is executed by a processor to implement the method in any of the above embodiments, and the related contents are described in detail with reference to the above method embodiments and will not be described in detail herein.
It should be noted that, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (10)
1. A method for counting linger time, the method comprising:
responding to a first target collected by a camera, tracking the first target by utilizing a tracking algorithm, and acquiring first time for collecting the first target;
responding to the tracking algorithm to track that the first target leaves the monitoring area, and acquiring a second time when the first target leaves the monitoring area;
responding to the tracking algorithm to lose the first target, acquiring an identification library corresponding to the monitoring area, and judging whether a second target with similarity exceeding a first threshold value with the first target is included in the identification library or not;
if not, adding the first target to the recognition library;
otherwise, taking the time when the first target is lost as the second time by the tracking algorithm;
and taking the difference value of the second time and the first time as the lingering time of the first target.
2. The method of claim 1, wherein the monitored area corresponds to at least one of the cameras, and the step of adding the first object to the recognition library comprises:
responding to that any one of the cameras corresponding to the monitoring area loses a first target and does not acquire the second target, and acquiring feature information corresponding to the face information based on the face information corresponding to the first target;
adding the characteristic information to the recognition library.
3. The sojourn time statistic method according to claim 2,
each camera corresponds to a respective recognition library, and the step of adding the feature information to the recognition libraries includes:
adding the characteristic information to the identification library corresponding to the camera;
the step of obtaining the identification library corresponding to the monitoring area comprises the following steps:
and acquiring the identification libraries corresponding to other cameras except the camera which shoots the first target.
4. The method of claim 2, wherein the step of determining whether the recognition library includes a second target with similarity to the first target exceeding a first threshold comprises:
comparing the first target with the targets in the identification library based on the characteristic information corresponding to the first target to obtain the similarity between the targets in the identification library and the first target;
in response to the similarity being higher than a first threshold, determining that the second target is included in the recognition library;
in response to the similarity being less than or equal to the first threshold, determining that the second target is not included in the recognition library.
5. The sojourn time statistic method of claim 1, wherein said step of adding said first target to said recognition library is followed by further comprising:
judging whether the time for adding the first target into the recognition library exceeds a second threshold value;
if yes, taking the time when the tracking algorithm lost the first target as the second time, and deleting the second target corresponding to the first target from the recognition library;
otherwise, storing the first target in the recognition library, returning to the step of responding to the first target collected by the camera, tracking the first target by using a tracking algorithm, and acquiring the first time of collecting the first target.
6. A method of adjusting an article, the method comprising:
responding to a first target collected by a camera, tracking the first target by utilizing a tracking algorithm, and acquiring first time for collecting the first target;
responding to the tracking algorithm to track that the first target leaves the monitoring area, and acquiring a second time when the first target leaves the monitoring area;
responding to the tracking algorithm to lose the first target, acquiring an identification library corresponding to the monitoring area, and judging whether a second target with similarity exceeding a first threshold value with the first target is included in the identification library or not;
if not, adding the first target to the recognition library;
otherwise, taking the time when the first target is lost as the second time by the tracking algorithm;
taking the difference between the second time and the first time as the lingering time of the first target;
adjusting items within the monitored area based on the first target residence time.
7. The item adjustment method of claim 6, wherein the step of adjusting the item within the monitored area based on the residence time of the first target is preceded by:
and acquiring a first number of the first targets, and acquiring the average stay time corresponding to the first targets based on the first number and the stay time corresponding to the first targets.
8. The item adjustment method according to claim 7, wherein the step of adjusting the item within the monitored area based on the residence time of the first target comprises:
adjusting the items within the monitoring area based on the first number, the average residence time, and the shipment volume of the items within the monitoring area.
9. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor calls to perform the method of any of claims 1-5 or 6-8.
10. A computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the method of any one of claims 1-5 or 6-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110672393.0A CN113591904B (en) | 2021-06-17 | 2021-06-17 | Residence time statistics method, goods adjustment method and related devices |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113591904A true CN113591904A (en) | 2021-11-02 |
CN113591904B CN113591904B (en) | 2024-06-21 |
Family
ID=78243822
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110672393.0A Active CN113591904B (en) | 2021-06-17 | 2021-06-17 | Residence time statistics method, goods adjustment method and related devices |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113591904B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663743A (en) * | 2012-03-23 | 2012-09-12 | 西安电子科技大学 | Multi-camera cooperative character tracking method in complex scene |
CN106128053A (en) * | 2016-07-18 | 2016-11-16 | 四川君逸数码科技股份有限公司 | A kind of wisdom gold eyeball identification personnel stay hover alarm method and device |
CN106303424A (en) * | 2016-08-15 | 2017-01-04 | 深圳市校联宝科技有限公司 | A kind of monitoring method and monitoring system |
WO2018133666A1 (en) * | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
CN108985162A (en) * | 2018-06-11 | 2018-12-11 | 平安科技(深圳)有限公司 | Object real-time tracking method, apparatus, computer equipment and storage medium |
CN109117721A (en) * | 2018-07-06 | 2019-01-01 | 江西洪都航空工业集团有限责任公司 | A kind of pedestrian hovers detection method |
CN110969097A (en) * | 2019-11-18 | 2020-04-07 | 浙江大华技术股份有限公司 | Linkage tracking control method, equipment and storage device for monitored target |
CN112216062A (en) * | 2020-09-11 | 2021-01-12 | 深圳市朗尼科智能股份有限公司 | Community security early warning method, device, computer equipment and system |
CN112528812A (en) * | 2020-12-04 | 2021-03-19 | 京东方科技集团股份有限公司 | Pedestrian tracking method, pedestrian tracking device and pedestrian tracking system |
CN112801018A (en) * | 2021-02-07 | 2021-05-14 | 广州大学 | Cross-scene target automatic identification and tracking method and application |
Also Published As
Publication number | Publication date |
---|---|
CN113591904B (en) | 2024-06-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |