CN113297888A - Method and device for checking image content detection result - Google Patents


Info

Publication number: CN113297888A
Authority: CN (China)
Prior art keywords: added, result, detection result, results, count value
Legal status: Pending
Application number: CN202010986033.3A
Other languages: Chinese (zh)
Inventors: 李松, 张文杰
Current assignee: Alibaba Group Holding Ltd
Original assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN202010986033.3A
Publication of CN113297888A

Classifications

    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects (scenes; scene-specific elements; context or environment of the image)
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/045: Neural networks; architectures; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The specification discloses a method and a device for checking image content detection results. The method is based on an image content detection result set: the repeat count values of the added results in the set are monitored, and if the repeat count value of any added result exceeds a preset threshold, that added result is determined to be an error detection result. When the set is updated, the added results in the current set are traversed and matched against the result to be added, and it is judged whether the current set contains an added result that matches the result to be added. If not, the result to be added is added to the current set and its repeat count value is initialized; if so, the repeat count values of all successfully matched added results are incremented. The repeat count value is used at least for determining an added result to be an error detection result when its repeat count value exceeds the preset threshold.

Description

Method and device for checking image content detection result
Technical Field
The embodiment of the specification relates to the field of image processing, in particular to a method and a device for checking an image content detection result.
Background
Image content detection is a technique for detecting whether specified content exists in an image and marking that content in the image. It is now widely applied in monitoring scenarios: a monitoring device continuously captures images of a designated monitoring area, and specified content in that area, such as people and vehicles, is identified through content detection to serve further applications.
Because content detection is based on the image features of the detection target, an unexpected detection result can occur when part of an image exhibits some of those features without actually being the specified content. For example, when pedestrians are detected in the monitoring images of an area and a poster showing a person happens to appear in a captured image, the poster's position is wrongly marked as a pedestrian. Such erroneous detection results inevitably degrade the accuracy of subsequent applications.
Disclosure of Invention
To screen out erroneous detection results and improve the accuracy of image content detection, the present specification provides a method and an apparatus for checking image content detection results. The specific technical scheme is as follows.
A method for updating an image content detection result set: for the monitoring images of any region, a content detection result set used for checking is created, and each added result in the set has a repeat count value. The method comprises the following steps:
after any monitoring image of the region is acquired, content detection is performed on the image, and the obtained content detection result is taken as the result to be added;
the added results in the current content detection result set are traversed and matched against the result to be added, and it is judged whether the current set contains an added result that matches the result to be added; if not, the result to be added is added to the current set and its repeat count value is initialized; if so, the repeat count values of all successfully matched added results are incremented according to a preset increment rule;
the repeat count value is used at least for determining an added result to be an error detection result when its repeat count value exceeds a preset threshold.
A method for checking the image content detection results of the set maintained by the above set updating method comprises the following step:
monitoring the repeat count values of the added results in the set, and if the repeat count value of any added result exceeds a preset threshold, determining that added result to be an error detection result.
An image content detection result set updating apparatus: for the monitoring images of any region, a content detection result set used for checking is created, and each added result in the set has a repeat count value. The apparatus comprises:
a detection unit, configured to perform content detection on any acquired monitoring image of the region and take the obtained content detection result as the result to be added;
a matching unit, configured to traverse the added results in the current content detection result set and match them against the result to be added, and to judge whether the current set contains an added result that matches the result to be added; if not, the result to be added is added to the current set and its repeat count value is initialized; if so, the repeat count values of all successfully matched added results are incremented according to a preset increment rule;
the repeat count value is used at least for determining an added result to be an error detection result when its repeat count value exceeds a preset threshold.
An image content detection result checking apparatus, based on the set maintained by the above set updating apparatus, comprises:
a monitoring unit, configured to monitor the repeat count values of the added results in the set;
a judging unit, configured to determine an added result to be an error detection result if its repeat count value is found to exceed a preset threshold.
According to the above technical scheme, an image content detection result set is constructed and updated, and the repeat count value attribute is used to identify specified content that appears repeatedly across multiple images. When a repeat count value exceeds the threshold, the corresponding detection is determined to be an error detection result, based on the dynamic characteristic of the specified content: genuine specified content does not keep reappearing unchanged in more monitoring images than the threshold allows. The accuracy of image content detection is thereby improved.
Drawings
To more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description are only some of the embodiments described in this specification, and those skilled in the art can derive other drawings from them.
FIG. 1 is a schematic diagram illustrating a pedestrian detection provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a principle of pedestrian detection in a monitoring scenario according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of an image content detection result set updating method provided in an embodiment of the present specification;
fig. 4 is a schematic flowchart of a method for checking an image content detection result according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an application example of a method for checking an image content detection result according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image content detection result set updating apparatus provided in an embodiment of the present specification;
fig. 7 is a schematic structural diagram of an image content detection result checking apparatus provided in an embodiment of the present specification;
fig. 8 is a schematic structural diagram of an apparatus for configuring a method according to an embodiment of the present disclosure.
Detailed Description
To help those skilled in the art better understand the technical solutions in the embodiments of the present specification, these solutions are described in detail below with reference to the drawings. Evidently, the described embodiments are only a part of the embodiments of the present specification, not all of them. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein fall within the scope of protection.
In this specification, the image content detection may be a technique of detecting whether or not a specified content exists in an image and marking the specified content in the image.
The image content detection technology may be implemented by first extracting content features in the image, and if the extracted content features include features of the specified content, it may be determined that the specified content exists in the image, and the position of the specified content in the image may be further marked.
The present specification does not limit the specific technique used to extract image content features. In an alternative embodiment, a trained deep neural network model may be used to extract implicit content features from the image.
The image content detection technology has various application scenes, and objects to be detected may be different, for example, when the specified content is a pedestrian, the specific image content detection technology may be regarded as a pedestrian detection technology; when the specified content is a vehicle, the specific image content detection technology may be regarded as a vehicle detection technology.
Taking pedestrian detection as an example, fig. 1 is a schematic diagram of the pedestrian detection principle provided in this specification.
As shown, pedestrian detection is performed on an image, 3 pedestrians are detected, their positions in the image are obtained, and the 3 positions are marked with rectangular frames.
It should be noted that, for any image, the detection result may contain one or more pieces of specified content, or none at all. Through image content detection, when N pieces of specified content exist in the image, their N positions in the image can be obtained, where N ≥ 0.
As noted above, image content detection technology is widely applied in monitoring scenarios: a monitoring device continuously captures images of a designated monitoring area, and specified content such as people and vehicles in that area is identified through content detection to serve further applications.
When the monitoring device captures images continuously, the specification does not limit whether the interval between any two adjacent captures is fixed, nor the capture frequency.
The designated monitoring area continuously photographed by the same monitoring device may be the same geographical area.
Therefore, the multiple images continuously captured by the same monitoring device are all acquired for the same geographical area.
When image content detection is performed on a single image acquired by the monitoring device, the position of the specified content in that image represents its position in the monitored geographical area at the moment the image was captured.
When image content detection is performed on multiple images acquired by the monitoring device, the positions of the specified content in each image represent its positions in the monitored geographical area at the respective capture moments.
In other words, when image content detection is applied in a monitoring scenario, detecting multiple images continuously acquired by the same monitoring device yields the positions of the specified content in the monitored geographical area at multiple moments, and these positions can then serve further application requirements.
Fig. 2 is a schematic diagram of a principle of pedestrian detection in a monitoring scene provided in the present specification.
In the figure, the designated monitoring area captured by the monitoring device consists of 3 shelves in a store: a commodity shelf, a food shelf and a refrigerator shelf.
The monitoring device continuously acquired 3 images, taken at 6:00 am, 12:00 pm and 3:00 pm respectively. Performing pedestrian detection on these 3 images yields the pedestrian positions in each image, and thus the positions of pedestrians in the designated monitoring area at those 3 moments.
In the monitoring scenario, the positions of the specified content at multiple moments, obtained through image content detection, can be used for various application requirements.
Such application requirements include, for example, estimating the motion trajectory of the specified content, counting the number of vehicles at different moments, and analysing the distribution of people flow at different moments.
For ease of understanding, a practical application example is given below for illustrative purposes.
In a new retail scenario, the person-goods relationship often needs to be digitized so that technologies such as big data analysis and data mining can be used to optimize commodity distribution, restocking quantities and the like, thereby improving resource utilization, transaction rates and so on.
Data on the person-goods relationship include, for example, the time pedestrians linger in front of shelves of different categories, fitting time, the types and quantities of products purchased by pedestrians of different ages, and the probability that a pedestrian who has bought one product buys another kind of product nearby.
From the digitized person-goods relationship many conclusions can be drawn, for example that pedestrians who buy beer often also buy snacks and other consumables, and such conclusions can be used to optimize commodity distribution and so on.
Commodity pick-up behaviour recognition is an important part of digitizing the person-goods relationship; it identifies pedestrians picking up or trying on commodities, and one of its key steps is pedestrian detection.
An intelligent terminal deployed in a physical store can capture shelf images through a monitoring camera and complete pedestrian detection via image recognition, detecting whether pedestrians are present in the images and determining their positions, so that action detection can then be performed.
If the detected action is picking up a commodity, the pedestrian can be considered to have exhibited a commodity pick-up behaviour.
However, since content detection may be implemented based on the image features of the specified content, an unexpected, i.e. erroneous, detection result may occur when the image contains something that has some of those image features but is not the specified content.
For example, when pedestrian detection is performed on the monitoring images of an area and a poster showing a person happens to appear in a captured image, the person on the poster exhibits the content features of a pedestrian, so the poster person is erroneously detected as a pedestrian and its position is marked. Such erroneous detection results inevitably degrade the accuracy of subsequent applications.
For another example, when vehicle detection is performed on the monitoring images of an area and a car poster happens to appear in a captured image, the poster position is erroneously marked, likewise degrading the accuracy of subsequent applications.
For another example, in the new retail scenario described above, person posters, human-shaped display boards, mannequins, moving figures on screens and the like often appear in stores and are easily mis-detected as pedestrians. If the accuracy of pedestrian detection is low, the subsequent analysis conclusions will be inaccurate.
In order to improve the accuracy of the image content detection technology, the present specification provides an image content detection result checking method.
Analysis of the error detection results produced by image content detection in different application scenarios shows that most of them carry the content features of the specified content, yet their positions or features remain unchanged, as with posters, models and the like.
In a practical scenario, posters put up in a store are usually fixed in place, so across the multiple images continuously acquired by the monitoring device, the poster positions in the images should stay fixed. Similarly, human-shaped display boards, mannequins and moving figures on screens, which are easily mis-detected as pedestrians, are usually placed at fixed positions, so their positions in the images should also stay fixed.
Even if a poster in a store is moved to a different place, the figure on the poster does not change, so the corresponding content features may stay unchanged.
The specified content, by contrast, is generally dynamic: it has dynamic characteristics and can hardly keep its position or features unchanged for long. Concretely, the dynamic characteristic means that, in a normal state, the specified content moves, changes position, does not remain static for a long time, and does not keep reappearing in the same area.
For example, the posture and position of a pedestrian generally change and do not remain still for long, so a pedestrian has dynamic characteristics.
From the analysis of the error detection results and the dynamic characteristic of the specified content, the commonality of error detection results can be determined as: they appear repeatedly in the same area, that is, in multiple monitoring images. Repetition can be judged by position or by content features.
Accordingly, the image content detection result checking method provided in this specification checks, for specified content with dynamic characteristics, whether a detection result keeps a fixed position or identical content features across multiple monitoring images captured by the monitoring device, so as to determine whether it really is the specified content, exclude false detection results, and improve the accuracy of image content detection.
It should be noted that, in the checking method provided in this specification, achieving the technical effect of improving detection accuracy relies on the specified content having dynamic characteristics.
For further understanding, the method for checking the image content detection result provided in the present specification is explained in detail below with reference to the drawings.
Since the method needs to determine whether a detection result remains static, in position or content features, across multiple images, while each detection is performed on a single image captured by the monitoring device, the method needs to accumulate historical image content detection results and use them to check for error detection results.
Therefore, in this specification, for the monitoring images of any one area, a content detection result set for checking error detection results may be created to store historical detection results. The set is continuously updated with newly obtained detection results, and detection results are checked against the currently updated set.
This specification provides an image content detection result set updating method for continuously updating the set with new detection results, so that the checking method can be carried out against the currently updated set and detection results can be checked in real time.
Fig. 3 is a schematic flow chart of an image content detection result set updating method provided in this specification. For a monitored image of any region, a content detection result set for verification may be created, and each added result in the set may have a repetition count value. Of course, the content detection result set just created may be an empty set.
The method may include the following steps.
S101: and after any monitoring image of the area is obtained, content detection is carried out on the image, and the obtained content detection result is determined as a result to be added.
S102: and traversing the added result in the current content detection result set to match with the result to be added.
S103: and judging whether an added result successfully matched with the result to be added exists in the current set.
If not, S104 is performed. If so, S105 is performed.
S104: and adding a result to be added to the current set, and initializing a repeat count value of the result.
S105: and performing increment processing on all the successfully matched repeated count values of the added results according to a preset increment rule.
Wherein the repetition count value is at least used for: and when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
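To make the update flow concrete, the following is a minimal Python sketch of steps S101 to S105, written under stated assumptions rather than as the patent's reference implementation: the class and function names (DetectionSet, matches, INIT_COUNT, REPEAT_THRESHOLD) and the threshold values are hypothetical, and the matching function is left pluggable.

```python
REPEAT_THRESHOLD = 10   # preset threshold for the repeat count value (example value)
INIT_COUNT = 0          # initial repeat count value (the specification leaves this open)

class DetectionSet:
    """Content detection result set maintained for one monitored region (illustrative)."""

    def __init__(self, matches):
        # `matches(a, b)` decides whether two detection results match,
        # e.g. by comparing positions or content features (see S102).
        self.matches = matches
        self.entries = []   # each entry: {"result": ..., "repeat_count": int}

    def update(self, result_to_add):
        """S102-S105: traverse the current set and match against the result to be added."""
        matched = [e for e in self.entries if self.matches(e["result"], result_to_add)]
        if not matched:
            # S104: no successful match -> add the result and initialise its repeat count
            self.entries.append({"result": result_to_add, "repeat_count": INIT_COUNT})
        else:
            # S105: increment the repeat count of every successfully matched added result
            for e in matched:
                e["repeat_count"] += 1

    def error_results(self):
        """Added results whose repeat count exceeds the threshold are error detection results."""
        return [e["result"] for e in self.entries if e["repeat_count"] > REPEAT_THRESHOLD]
```

One set would be created per monitored region, and update() called once per detection result obtained from each newly acquired monitoring image.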
The following is an explanation of the overall embodiment.
It should be noted that, since a content detection result set can be created for the monitoring images of any region, sets can likewise be created and updated for the monitoring images of other regions using the above method.
In addition, since the created and updated content detection result set is used for checking rather than simply storing detection results, a repeated detection result does not need to be added to the set; instead, the repeat count value of the matching added result in the set is increased directly.
The repeat count value may be an attribute value carried by the added result inside the set, or an attribute value associated with the added result outside the set; this embodiment does not limit it. It can be understood as representing the number of monitoring images in which the detection result has repeatedly appeared.
To aid understanding, the repeat count value can be seen as a mechanism for spotting detection results in the set that remain static: when an added result in the set matches the result to be added, their positions in the monitoring image, or their content features, are the same, so the result to be added need not be added to the set and the repeat count value of the added result is increased instead.
Of course, a correct detection result may also appear repeatedly in several images and remain still, for example a pedestrian who stops walking or a vehicle that pulls over temporarily; but a pedestrian does not stand still all day, and a vehicle on an expressway does not stop all day. The repeat count mechanism therefore needs a preset threshold to distinguish correct detection results from erroneous ones.
When the number of repetitions of a detection result exceeds the preset threshold, the detection result has kept an unchanged position, or unchanged features, in more images than the threshold allows; it therefore does not conform to the dynamic characteristics of the specified content, is not the specified content, and can be determined to be an error detection result.
That is, as mentioned above, the repeat count value is used at least for determining an added result to be an error detection result when its repeat count value exceeds the preset threshold.
Clearly, the repeat count mechanism allows detection results that remain static to be identified in the set and determined to be error detection results.
The specification does not limit whether the repeat count value is accumulated up to the preset threshold or decremented down to it; either works, as long as detection results that remain static can be identified in the set.
It should be additionally noted that more preset thresholds may appear in the present specification, and the preset thresholds in different mechanisms may be different from each other.
After the overall embodiment is explained, explanation is made below for each step.
Regarding S101, it should first be noted that the monitoring images of the area may be acquired in the order in which the monitoring device captured them, i.e. in chronological order, or out of order, for example acquiring the most recently captured image first and an earlier one later. This specification does not limit the order, because the image content detection result set can be updated, and detection results checked, regardless of the order in which the monitoring images are acquired.
When content detection is performed on an image, this embodiment does not limit the algorithm that implements the detection; a deep neural network may be used, for example.
The obtained content detection result may be a value characterizing the position of the specified content in the image, or a value characterizing the features of the specified content in the image; this embodiment is not limited in this respect.
As a specific example, after the specified content is marked with a rectangular frame, the content detection result may take the form of the coordinates of the four vertices of that frame in the image. These four coordinates characterize the position of the specified content in the image.
It can be understood that when the detection result characterizes the position of the specified content in the image, the characterization value itself occupies little storage space, which saves storage and makes subsequent matching easier, improving the efficiency of the traversal matching in S102.
When the detection result characterizes the features of the specified content in the image, the characterization value carries more information, allowing detection results to be checked more directly and accurately.
Of course, as an alternative embodiment, the position characterization value and the feature characterization value may be combined as the content detection result, combining the advantages of both: the position value is matched first, speeding up matching, and the feature value is matched only when the position match succeeds, improving matching accuracy.
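As an illustration of such a combined representation, the sketch below defines a detection result carrying a rectangular marking frame plus an optional feature vector. The type and field names are hypothetical; the patent describes four vertex coordinates, and for an axis-aligned frame two opposite corners carry the same information.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class DetectionResult:
    """One content detection result (illustrative representation only).

    box: (x1, y1, x2, y2), two opposite corners of the rectangular marking frame,
         characterising the position of the specified content in the image.
    feature: optional feature vector of the specified content, which can be used to
             refine matching after a positional match succeeds.
    """
    box: tuple
    feature: Optional[Sequence[float]] = None
```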
In addition, multiple pieces of specified content may be detected in a single image. When multiple pieces of specified content are detected, multiple detection results are obtained, and the steps S101 to S105 above are performed for each detection result.
Note that although a content detection result is taken as the result to be added, it is not necessarily added to the set: in S105, when the set already contains an added result that matches the result to be added, the result to be added need not be added.
Regarding S102, after the result to be added has been compared with one added result, it continues to be compared with the other added results regardless of the outcome; the traversal is not stopped by a single match. Only after the result to be added has been matched against all added results in the current set are the following steps performed.
It should be noted that matching must be performed against the current set, so that error detection results can be checked in real time.
Specifically, matching compares the result to be added with an added result: when the difference characterization value of the comparison is smaller than a preset threshold, the match succeeds; when it is larger than the preset threshold, the match fails.
Of course, the preset threshold value here may not be the same as the preset threshold value in the repetition count value mechanism. The specific values of the preset threshold values in different meanings in the specification may be different.
The difference characterization value may characterize the distance between the positions of the result to be added and of the added result in the image, or the difference between their features in the image.
An example of the difference characterization value is given below.
Suppose the difference characterization value characterizes the distance between the positions of the result to be added and of an added result in the image, and each detection result takes the form of the four vertex coordinates of a rectangular frame marking the position of the specified content.
The difference characterization value may then be computed, from rectangle 1 enclosed by the four vertex coordinates of the result to be added and rectangle 2 enclosed by the four vertex coordinates of the added result, as the ratio between the area of the overlap of rectangle 1 and rectangle 2 and the combined area of rectangle 1 and rectangle 2.
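A minimal sketch of this ratio is given below. It follows the overlap-to-combined-area ratio described above (intersection-over-union would be a closely related alternative) and, to stay consistent with the rule that a smaller difference characterization value means a successful match, treats one minus that ratio as the difference value; the function names and the threshold value are illustrative assumptions, not taken from the patent.

```python
def overlap_ratio(box_a, box_b):
    """Ratio of the overlapping area of two rectangles to the sum of their areas.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2. Identical boxes give 0.5,
    disjoint boxes give 0.0.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b)

def position_matches(box_a, box_b, gap_threshold=0.75):
    """Treats 1 - overlap_ratio as the difference characterization value, so a smaller
    value means a closer position; the match succeeds when it is below the threshold.
    gap_threshold is an example value only."""
    return (1.0 - overlap_ratio(box_a, box_b)) < gap_threshold
```

A function of this kind could be passed as the `matches` callable in the earlier DetectionSet sketch.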
Obviously, there may be several added results in the current set that match the result to be added. In S105, the repeat count values of all successfully matched added results are incremented.
For S104, after adding the result to be added to the set, the repeat count value of the result to be added may be initialized to a fixed value. The present embodiment does not limit the specific initial value of the repetition count value.
For S105, the present embodiment also does not limit the specific preset increment rule and the preset threshold corresponding to the repeated count value.
Two examples of preset increment rules are given below for illustrative purposes.
1) The repetition count value may indicate the number of repetitions, and then the increment rule may be that the repetition count value is incremented by 1 every time the added result is successfully matched with one result to be added.
2) The repeat count value may represent a static invariant duration, and the increment rule may be based on how often the monitoring device takes images. For example, when the monitoring device takes 1 image every 2 seconds, the increment rule may be that the repeated count value is incremented by 2 every time the added result is successfully matched with one to-be-added result.
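The two increment rules can be written out as follows; this is an illustrative sketch only, and the function names, the 2-second capture interval and the way a rule would plug into the earlier update step are assumptions.

```python
CAPTURE_INTERVAL_S = 2   # assumed: the monitoring device captures one image every 2 seconds

def increment_by_occurrence(repeat_count):
    """Rule 1: the count records in how many images the result has repeated."""
    return repeat_count + 1

def increment_by_duration(repeat_count):
    """Rule 2: the count approximates how long the result has stayed unchanged, in seconds."""
    return repeat_count + CAPTURE_INTERVAL_S
```

With rule 2 a threshold of 20 would roughly mean "unchanged for 20 seconds", whereas with rule 1 a threshold of 10 means "repeated in more than 10 images".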
With the above method, the image content detection result set corresponding to one monitoring area can be kept up to date, and error detection results in the set can be determined from the repeat count values, so that error detection results can be checked against the currently updated set and the accuracy of image content detection is improved.
In other alternative embodiments, historical image content detection results may be stored in other forms, such as a table containing at least the detection results and their corresponding repeat count values. The other steps may follow the method embodiments above.
Furthermore, to speed up the traversal in S102 and save the storage resources of the set, detection results that are rarely matched can be deleted, reducing the number of detection results held in the set.
Similar to the repeat count mechanism above, in an alternative embodiment a matching-failure count value is used: whenever an added result fails to match the result to be added, its matching-failure count value is increased; once the value exceeds a preset threshold, the added result is considered not to have matched successfully for a long time, i.e. it has not been refreshed in the set for a long time, and it can be deleted from the current set.
Of course, as with the repeat count value, this embodiment does not limit whether the matching-failure count value is accumulated up to the preset threshold or decremented down to it, as long as it can indicate that the added result has not matched successfully for a long time.
The matching failure count value is explained in more detail below.
Based on the method embodiment shown in fig. 3, each added result in the set may also have a matching-failure count value.
If an added result fails to match the result to be added, its matching-failure count value is incremented according to a preset increment rule, which may differ from the increment rule of the repeat count value.
The matching-failure count value is used at least for deleting an added result from the current set when its matching-failure count value exceeds a preset threshold.
Through the above steps, the matching-failure count value serves as a mechanism for deleting rarely matched detection results from the set.
Further, when an added result matches the result to be added, the image content corresponding to that added result has reappeared in the monitoring image, i.e. it has been refreshed, so its deletion should be slowed down.
Therefore, to slow down deletion, when the set contains added results that match the result to be added, the matching-failure count values of all successfully matched added results may be decremented according to a preset decrement rule.
The embodiment does not limit the specific increment rule and decrement rule in the matching failure count value mechanism, and the example of the increment rule may refer to the above explanation of the repeat count value. Two examples are given below for the decrement rule for illustrative purposes.
1) The matching failure count value may indicate the number of times of matching failure, and the decrement rule may be that the matching of each added result with one to-be-added result is successful, and the matching failure count value is decremented by 1.
2) The match failure count value may indicate a duration of the match failure, and the decrement rule may be determined according to a frequency with which the monitoring device captures the image. For example, when the monitoring device takes 1 image every 2 seconds, the decrement rule may be that the matching failure count value is decremented by 2 every time the added result is successfully matched with one to-be-added result.
Through the matching-failure count mechanism, the number of detection results in the set can be reduced to some extent, which improves the efficiency of traversing the set in S102 of the above method embodiment and saves the storage resources of the set.
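The following sketch adds such a matching-failure counter to the earlier update loop; the threshold, the +1/-1 rules and the function name are illustrative assumptions rather than values fixed by the patent.

```python
FAIL_THRESHOLD = 20   # example value for the matching-failure threshold

def update_with_fail_count(entries, result_to_add, matches):
    """Update one region's entries, maintaining repeat and matching-failure counts."""
    matched_any = False
    for e in entries:
        if matches(e["result"], result_to_add):
            matched_any = True
            e["repeat_count"] += 1
            # a successful match slows down deletion (decrement rule)
            e["fail_count"] = max(0, e["fail_count"] - 1)
        else:
            # every failed match pushes the entry closer to deletion (increment rule)
            e["fail_count"] += 1
    if not matched_any:
        entries.append({"result": result_to_add, "repeat_count": 0, "fail_count": 0})
    # entries that have not matched for a long time are dropped from the set
    entries[:] = [e for e in entries if e["fail_count"] <= FAIL_THRESHOLD]
    return entries
```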
It should be added that although some detection results in the set may be deleted via the matching-failure count value, a detection result that was not determined to be an error detection result before deletion may still be used as a correct detection result in subsequent applications, and one that was determined to be an error detection result before deletion may still be used as such.
Considering separately the added results determined to be error detection results and those not so determined: since the set is used for checking detection results, the error detection results are the ones mainly relied on during checking, so for the convenience of subsequent checks the added results determined to be error detection results can be kept in the set for a longer time.
In order to allow error detection results to be stored in the collection for a longer time, two improved examples are provided below for illustrative purposes based on the above-described mechanism of matching failure count values.
1) An added result determined to be an error detection result may be deleted from the current set only after its matching-failure count value exceeds a first preset threshold, while an added result not determined to be an error detection result may be deleted after its matching-failure count value exceeds a second preset threshold.
The first preset threshold may be larger than the second, so that added results determined to be error detection results remain in the set longer.
2) The added result, when determined to be an error detection result, may reset the match failure count value to 0 in order to allow the added result determined to be an error detection result to remain in the set for a longer time.
Of course, the present specification does not limit the specific manner of adjusting the matching failure count value mechanism of the error detection result, and it is within the scope disclosed in the present specification as long as the added result determined as the error detection result can be retained in the set for a longer time.
Besides improving the matching-failure count mechanism, another alternative embodiment keeps error detection results in the set longer by applying two different deletion mechanisms: one for error detection results and one for added results not determined to be error detection results.
For an added result not determined to be an error detection result, a valid-time mechanism may be used: its valid time is reduced according to a preset decrement rule whenever matching fails, and the added result is deleted once the valid time drops to a preset threshold.
For an added result determined to be an error detection result, a vanishing-time mechanism may be used: its vanishing time is increased according to a preset increment rule whenever matching fails, and the added result is deleted once the vanishing time rises to a preset threshold.
As with the matching-failure count value, this specification does not limit whether the valid time or vanishing time is accumulated up to or decremented down to its preset threshold, as long as the corresponding added result is deleted after matching has failed a certain number of times. Nor does it limit the specific increment or decrement rules; see the explanations of the repeat count value and the matching-failure count value above for examples.
Note that once an added result that was not determined to be an error detection result is so determined, its valid time may stop taking effect, and its vanishing time may be initialized for the subsequent deletion step.
The effective time is explained in detail below.
Based on the method embodiment shown in FIG. 3 above, each added result in the set may also have a validity time.
If an added result fails to match the result to be added, its valid time may be decremented according to a preset decrement rule.
Of course, the valid time may also simply decrease over time, regardless of whether matching failed.
The valid time is used at least for deleting an added result from the current set when its valid time reaches 0.
Furthermore, if there are added results that match the result to be added, the valid times of all successfully matched added results may be incremented according to a preset increment rule, slowing down their deletion.
The disappearance time is explained in detail below.
Based on the above-described method embodiment shown in fig. 3, each added result in the set that is determined to be an erroneous detection result may also have a vanishing time.
If such an added result fails to match the result to be added, its vanishing time may be incremented according to a preset increment rule.
The vanishing time is used at least for deleting an added result from the current set when its vanishing time exceeds a preset threshold.
Further, if there are added results that match the result to be added, the vanishing times of all successfully matched added results that were determined to be error detection results may be decremented according to a preset decrement rule.
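The two timers can be combined into one ageing step per entry, as in the sketch below. The initial valid time, the vanishing threshold, the step size and the function name are assumed example values; entries flagged as error detection results age by vanishing time, all others by valid time.

```python
VALID_TIME_INIT = 30    # assumed initial valid time
VANISH_THRESHOLD = 60   # assumed vanishing-time threshold
STEP = 2                # assumed step, e.g. one image captured every 2 seconds

def age_entry(entry, matched):
    """Update one entry's timer after matching; return True if the entry should be kept."""
    if entry.get("is_error"):
        # vanishing time: grows on failure, shrinks on success, delete once too large
        delta = -STEP if matched else STEP
        entry["vanish_time"] = max(0, entry.get("vanish_time", 0) + delta)
        return entry["vanish_time"] <= VANISH_THRESHOLD
    # valid time: shrinks on failure, grows on success, delete once it reaches 0
    delta = STEP if matched else -STEP
    entry["valid_time"] = entry.get("valid_time", VALID_TIME_INIT) + delta
    return entry["valid_time"] > 0
```

Because error entries decay towards deletion more slowly than ordinary entries, the error detection results naturally stay in the set longer, as intended above.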
It should be noted that the matching failure count value, the valid time, and the vanishing time mentioned in the above embodiments may be attribute values included in the added result in the set, or may be attribute values corresponding to the added result outside the set.
When the form of storing the detection result is a table, the matching failure count value, the valid time, and the vanishing time may be column names in the table.
Based on the set maintained in the above method embodiments, historical detection results can be stored and error detection results determined from the repeat count values. In addition, the number of detection results in the set can be reduced according to the matching-failure count value, or the valid time and vanishing time, which improves the efficiency of traversing the set and saves its storage resources.
By using the content detection result set obtained by current update in the method embodiment, the present specification also provides an image content detection result checking method.
Fig. 4 is a schematic flow chart of a method for checking an image content detection result according to this specification. Based on the mechanism of the currently updated content detection result set corresponding to the repetition count value in any of the above method embodiments, the checking method may include at least the following steps.
S201: the repeat count value of the added results in the set is monitored.
S202: and if the repeated counting value of any added result is monitored to be larger than a preset threshold value, determining that the added result is an error detection result.
S203: and for any result to be added, if the added result which is successfully matched in the set is an error detection result, determining the result to be added as the error detection result.
In S201, the monitoring of the added result repetition count value in the set is maintained. Once it is detected that the repetition count value of any added result is greater than the preset threshold, S202 is executed to determine that the added result is an error detection result.
And S203 does not need to depend on S201 or S202 to be executed, and can be directly executed according to the set, so that S203 can be executed in parallel with S201.
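A minimal sketch of S201 to S203, using the entry layout of the earlier sketches, is given below; the helper names and the threshold value are assumptions.

```python
def check_set(entries, repeat_threshold=10):
    """S201/S202: flag any added result whose repeat count exceeds the threshold."""
    for e in entries:
        if e["repeat_count"] > repeat_threshold:
            e["is_error"] = True

def is_error_result(entries, result_to_add, matches):
    """S203: a result to be added that matches a flagged entry is itself an error detection result."""
    return any(e.get("is_error") and matches(e["result"], result_to_add) for e in entries)
```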
Based on this method embodiment, error detection results can be checked against the content detection result set, which makes them easy to identify, improves the accuracy of the image content detection technique, and thus also improves the accuracy of subsequent applications.
In order to show that the method embodiments can improve the accuracy of subsequent application, the present specification further provides two method embodiments in specific application scenarios, which are a method embodiment in a passenger flow statistics scenario and a method embodiment in a pedestrian attention behavior recognition scenario.
The passenger flow statistics scenario may be counting the pedestrians entering and leaving, using monitoring devices installed in places such as malls, supermarkets or shops. The statistics can be used to analyse passenger flow distribution, popular goods, popular stores and other specific business questions.
Clearly, with traditional pedestrian detection, poster people, store mannequins and the like are often counted as pedestrians, which distorts the passenger flow statistics. With the method embodiments above, false detection results are screened out, normal pedestrians are identified more accurately, and the accuracy of the statistics improves.
The method embodiments in the passenger flow statistics scenario therefore include a detection method for pedestrian detection results and a method for performing passenger flow statistics using the error detection results among them.
Based on the above method embodiment, the detection method for the pedestrian detection result may specifically be a pedestrian detection result set updating method, where a pedestrian detection result set for verification is created for a monitored image of any region, and each added result in the set has a repeat count value. The method may include at least the following steps.
S301: and after any monitoring image of the area is obtained, carrying out pedestrian detection on the image, and determining the obtained pedestrian detection result as a result to be added.
S302: traversing the added results in the current pedestrian detection result set to match with the results to be added, and judging whether the added results which are successfully matched with the results to be added exist in the current set; if not, adding the result to be added into the current set, and initializing the repeat count value of the result; and if so, performing increment processing on all the successfully matched repeated count values of the added results according to a preset increment rule.
The repetition count value may be used at least for: and when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
Using the detected error detection results, a passenger flow statistics method may comprise at least the following steps.
S401: the pedestrian detection result set is updated and error detection results are determined.
The updating step may specifically be S301 to S302; the set may be updated several times before the error detection results are determined.
S402: after any monitoring image of the area is acquired, pedestrian detection is performed on the image to obtain pedestrian detection results.
S403: the error detection results among the pedestrian detection results are determined according to the error detection results in the set, and the number of non-error detection results is counted for the passenger flow statistics.
In the embodiment of the method in the passenger flow statistics scenario, the monitoring image for any area may be a monitoring image for an area such as a store, a mall, a transportation station, and the like.
After the erroneous results among the pedestrian detection results are screened out, the counted pedestrians are essentially genuine pedestrians, which improves the accuracy of passenger flow statistics based on pedestrian detection.
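As a sketch of S401 to S403, the snippet below counts only the non-error detections per image, reusing the is_error_result helper from the earlier sketch; detect_pedestrians is a hypothetical placeholder for whatever pedestrian detector is used.

```python
def count_passenger_flow(image, entries, matches, detect_pedestrians):
    """Illustrative S402/S403: drop false detections, count the rest for passenger flow."""
    results = detect_pedestrians(image)                         # S402
    genuine = [r for r in results
               if not is_error_result(entries, r, matches)]     # S403: filter out false detections
    return len(genuine)                                         # contribution to the statistics
```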
The pedestrian attention behavior recognition scene may specifically be that monitoring equipment installed in a place such as a mall, a supermarket or a shop is used to recognize the pedestrian attention behavior.
Pedestrian attention behaviours include at least picking up a commodity, trying out a commodity and trying on a commodity; from them, which commodities a pedestrian pays attention to can be further determined.
Once pedestrians' attention behaviours and the commodities they pay attention to have been identified, specific business analysis can be performed, for example analysing which commodities receive the most attention, or, combined with actual purchase behaviour, whether attention leads to purchase, so as to adjust sales or product strategies.
In a more specific scenario, the taking behavior and the fitting behavior of the commodity, namely the shoe, can be identified.
Pedestrian attention behaviour recognition may specifically mean first detecting pedestrians in the monitoring images with a conventional pedestrian detection technique and then, on that basis, using image recognition to determine whether a pedestrian exhibits an attention behaviour.
However, pedestrian detection often produces false detection results such as poster people and store mannequins, and the figures corresponding to these false results often appear to exhibit attention behaviours, so the accuracy of the recognition result is low.
Using the detected error detection results, a pedestrian attention behaviour recognition method may comprise at least the following steps.
S501: the pedestrian detection result set is updated and error detection results are determined.
The updating step may specifically be S301 to S302; the set may be updated several times before the error detection results are determined.
S502: after any monitoring image of the area is acquired, pedestrian detection is performed on the image to obtain pedestrian detection results.
S503: the error detection results contained in the pedestrian detection results are determined according to the error detection results in the set.
S504: for each non-error detection result among the pedestrian detection results, image recognition is used to determine whether the behaviour of the corresponding pedestrian belongs to any pedestrian attention behaviour.
Clearly, after the erroneous results among the pedestrian detection results are screened out, the identified pedestrians are essentially genuine pedestrians, which improves the accuracy of pedestrian-detection-based attention behaviour recognition.
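The same filtering pattern applies to S501 to S504, as sketched below; detect_pedestrians and recognise_attention_behaviour are hypothetical placeholders, and is_error_result is the helper from the earlier checking sketch.

```python
def recognise_attention(image, entries, matches,
                        detect_pedestrians, recognise_attention_behaviour):
    """Illustrative S502-S504: run attention recognition only on non-error detections."""
    for r in detect_pedestrians(image):                    # S502
        if is_error_result(entries, r, matches):           # S503
            continue                                       # skip poster people, mannequins, etc.
        recognise_attention_behaviour(image, r)            # S504: e.g. pick-up or try-on behaviour
```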
In order to facilitate further understanding, the present specification also provides an application example of the image content detection result checking method.
The specific application scenario may be a new retail scenario in which the distribution of pedestrians at different times and in front of different types of commodities is determined through pedestrian detection, and the most popular commodity type is then determined according to the distribution.
The designated monitoring area shot by the monitoring equipment is 3 shelves in the store, namely commodity shelves, food shelves and freezer shelves. The monitoring device takes one image every 2 seconds for pedestrian detection.
An image content detection result set is constructed in advance according to the appointed monitoring area, pedestrian detection is carried out immediately according to the latest shot image of the monitoring equipment, and the set is updated according to the detection result.
For convenience, each detection result in the set is shown in the form of a table; in addition to the detection result itself, the table includes a repeat count value and a matching failure count value. Each detection result is represented by a value characterizing a position in the image, and for ease of presentation it is represented here as the coordinate of a single position in the image. Both count values are accumulated and compared against preset thresholds: the preset threshold of the repeat count value is 10, and the preset threshold of the matching failure count value is 20.
The specific content detection result set can be shown in Table 1 below.
Serial number | Detection result | Repeat count value | Matching failure count value
1             | (5,5)            | 10                 | 0
2             | (1,4)            | 0                  | 3
3             | (0,0)            | 0                  | 5
4             | (1,3)            | 0                  | 20
Table 1: Content detection result set
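One possible in-memory representation of a row of Table 1 is sketched below. The dataclass, its field names and the exact-equality matching rule are illustrative assumptions; the embodiment only requires that each added result carries the two count values.

```python
# Illustrative representation of one row of the content detection result set.
from dataclasses import dataclass

REPEAT_THRESHOLD = 10        # preset threshold for the repeat count value
MATCH_FAIL_THRESHOLD = 20    # preset threshold for the matching failure count value

@dataclass
class SetEntry:
    result: tuple              # detection result, e.g. an image coordinate (x, y)
    repeat_count: int = 0      # incremented when a new detection matches this entry
    match_fail_count: int = 0  # incremented when a new detection does not match

    def matches(self, other_result):
        # Assumption: two detection results match if their coordinates are equal.
        return self.result == other_result

    def is_error_detection(self):
        return self.repeat_count > REPEAT_THRESHOLD

    def should_be_deleted(self):
        return self.match_fail_count > MATCH_FAIL_THRESHOLD

# Row 1 of Table 1: one more successful match will push it over the threshold.
row1 = SetEntry(result=(5, 5), repeat_count=10)
print(row1.is_error_detection())  # -> False (10 is not greater than 10)
```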
Fig. 5 is a schematic diagram illustrating an application example of the image content detection result checking method provided in the present specification.
Pedestrian detection is performed on the image in fig. 5, resulting in 2 detection results, namely (5,5) and (1,2).
For the detection result (5,5), detection results 1-4 in Table 1 above are traversed for matching; obviously, the matching with detection result 1 succeeds and the matching with detection results 2-4 fails. Therefore, the repeat count value of detection result 1 is incremented by 1, and the matching failure count values of detection results 2-4 are each incremented by 1.
Examining the updated set: since the repeat count value 11 of detection result 1 is greater than the preset threshold 10, detection result 1 is determined to be an error detection result; since the matching failure count value 21 of detection result 4 is greater than the preset threshold 20, detection result 4 is deleted from the set.
For the detection result (1,2), the remaining detection results 1-3 are traversed for matching, and obviously the matching with all of detection results 1-3 fails. Therefore, the matching failure count values of detection results 1-3 are each incremented by 1, and the detection result (1,2) is added to the set as detection result 5.
The updated content detection result set according to the 2 detection results can be shown in table 2 below.
Serial number | Detection result | Repeat count value | Matching failure count value
1             | (5,5)            | 11                 | 1
2             | (1,4)            | 0                  | 5
3             | (0,0)            | 0                  | 7
5             | (1,2)            | 0                  | 0
Table 2: Updated current content detection result set
Since detection result 1 has been determined to be an error detection result, detection results 2-5 can be used as correct detection results for subsequent data analysis.
From a human viewer's perspective, detection result 1 obviously corresponds to a poster of a person. After this error detection result is eliminated, the remaining correct detection results help improve the accuracy of the data analysis and yield a more accurate pedestrian distribution, so that the most popular commodity type can be determined accurately.
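The update walked through above can be reproduced with a short sketch. It uses exact coordinate equality as the matching rule and the thresholds 10 and 20 from the example; the data layout and variable names are illustrative only.

```python
# Reproduces the Table 1 -> Table 2 update for the detections (5,5) and (1,2).
REPEAT_THRESHOLD, MATCH_FAIL_THRESHOLD = 10, 20

# serial number -> [detection result, repeat count value, matching failure count value]
table = {1: [(5, 5), 10, 0], 2: [(1, 4), 0, 3], 3: [(0, 0), 0, 5], 4: [(1, 3), 0, 20]}
error_detections = set()
next_serial = 5

for new_result in [(5, 5), (1, 2)]:
    matched = False
    for serial, row in list(table.items()):
        if row[0] == new_result:
            row[1] += 1                      # increment repeat count value
            matched = True
        else:
            row[2] += 1                      # increment matching failure count value
        if row[1] > REPEAT_THRESHOLD:
            error_detections.add(serial)     # flag as error detection result
        if row[2] > MATCH_FAIL_THRESHOLD:
            del table[serial]                # delete stale added results
    if not matched:
        table[next_serial] = [new_result, 0, 0]
        next_serial += 1

print(table)             # {1: [(5, 5), 11, 1], 2: [(1, 4), 0, 5], 3: [(0, 0), 0, 7], 5: [(1, 2), 0, 0]}
print(error_detections)  # {1}
```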
In addition to the above method embodiments, the present specification also provides two apparatus embodiments, which are an image content detection result set updating apparatus and an image content detection result checking apparatus.
Fig. 6 is a schematic structural diagram of an image content detection result set updating apparatus provided in this specification. For the monitoring image of any region, a content detection result set for checking is created, and each added result in the set has a repeat count value.
The image content detection result set updating means may specifically include the following units.
The detection unit 601: after any monitoring image of the area is obtained, content detection can be performed on the image, and the obtained content detection result is determined as a result to be added.
The matching unit 602: traversing the added results in the current content detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if not, adding the result to be added into the current set and initializing its repeat count value; if so, performing increment processing on the repeat count values of all the successfully matched added results according to a preset increment rule.
The repetition count value may be used at least for: and when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
Wherein each added result in the set may also have a match failure count value.
The matching unit 602 may further be configured to: and if any one added result fails to be matched with the result to be added, performing incremental processing on the matching failure count value of the added result according to a preset incremental rule. The match failure count value may be used at least for: and when the matching failure count value of any added result is greater than a preset threshold value, deleting the added result from the current set.
Further, the matching unit 602 may be specifically configured to: if such an added result exists, perform increment processing on the repeat count values of all the successfully matched added results according to a preset increment rule, and perform decrement processing on the matching failure count values of all the successfully matched added results according to a preset decrement rule.
The matching unit 602 may also be specifically configured to: when the matching failure count value of any added result that has been determined as an error detection result is greater than a first preset threshold, delete the added result from the current set; and when the matching failure count value of any added result that has not been determined as an error detection result is greater than a second preset threshold, delete the added result from the current set.
Further, each added result in the set may also have a validity time.
The matching unit 602 may further be configured to: if any added result fails to be matched with the result to be added, the validity time of the added result can be subjected to decrement processing according to a preset decrement rule; the validity time may be at least used for: when the validity time of any added result reaches 0, the added result can be deleted from the current set.
Further, the matching unit 602 may specifically be configured to: if such an added result exists, perform increment processing on the repeat count values and the validity times of all the successfully matched added results according to a preset increment rule.
Furthermore, each added result in the set that is determined to be a false detection result may also have a vanishing time.
The matching unit 602 may further be configured to: if any added result fails to be matched with the result to be added, the disappearance time of the added result can be subjected to incremental processing according to a preset incremental rule; the disappearance time is at least used for: when the disappearance time of any added result is greater than a preset threshold, the added result can be deleted from the current set.
Further, the matching unit 602 may be specifically configured to: if such an added result exists, perform increment processing on the repeat count values of all the successfully matched added results according to a preset increment rule, and perform decrement processing on the disappearance times of those successfully matched added results that have been determined as error detection results according to a preset decrement rule.
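As an illustrative sketch of the updating apparatus, the following class combines the repeat count value, the matching failure count value with the two-threshold deletion rule, and the error detection flag. The class name, the default values chosen for the first and second preset thresholds, the clamping of the decremented count at zero and the exact-equality matching are all assumptions made for illustration.

```python
# Sketch of the detection/matching logic of the set updating apparatus.

class ContentResultSet:
    def __init__(self, repeat_threshold=10,
                 fail_threshold_flagged=40, fail_threshold_normal=20):
        self.entries = []                                     # one dict per added result
        self.repeat_threshold = repeat_threshold
        self.fail_threshold_flagged = fail_threshold_flagged  # "first preset threshold"
        self.fail_threshold_normal = fail_threshold_normal    # "second preset threshold"

    def _matches(self, entry, result):
        return entry["result"] == result   # assumption: exact coordinate equality

    def update(self, to_be_added):
        """Traverse the current set and match it against one result to be added."""
        matched = False
        kept = []
        for entry in self.entries:
            if self._matches(entry, to_be_added):
                matched = True
                entry["repeat"] += 1                       # preset increment rule
                entry["fail"] = max(0, entry["fail"] - 1)  # preset decrement rule
                if entry["repeat"] > self.repeat_threshold:
                    entry["is_error"] = True               # flag as error detection
            else:
                entry["fail"] += 1
            limit = (self.fail_threshold_flagged if entry["is_error"]
                     else self.fail_threshold_normal)
            if entry["fail"] <= limit:
                kept.append(entry)                         # otherwise delete from set
        self.entries = kept
        if not matched:
            self.entries.append(
                {"result": to_be_added, "repeat": 0, "fail": 0, "is_error": False})

    def error_detections(self):
        return [e["result"] for e in self.entries if e["is_error"]]

# Usage: feed every detection of every new monitoring image into update().
s = ContentResultSet()
for det in [(5, 5)] * 12:
    s.update(det)
print(s.error_detections())   # -> [(5, 5)] once the repeat threshold is exceeded
```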
Fig. 7 is a schematic structural diagram of an image content detection result checking apparatus provided in this specification. The apparatus may specifically comprise the following means, and the specific steps may be performed based on a set in the apparatus shown in fig. 6.
A monitoring unit 701: the repeat count value of the added results in the set is monitored.
The judgment unit 702: and if the repeated counting value of any added result is monitored to be larger than a preset threshold value, determining that the added result is an error detection result.
The apparatus may further comprise: the determination unit 703: and for any result to be added, if the added result which is successfully matched in the set is an error detection result, determining the result to be added as the error detection result.
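The checking apparatus itself can be sketched in a few lines; the set is assumed here to be available as (result, repeat count value) pairs, and all names are illustrative.

```python
# Sketch of units 701-703: monitor repeat count values and flag error detections.

def check_set(entries, repeat_threshold=10):
    """Units 701/702: return the added results judged to be error detection results."""
    return [result for result, repeat_count in entries
            if repeat_count > repeat_threshold]

def is_error_detection(to_be_added, entries, repeat_threshold=10):
    """Unit 703: a result to be added that matches a flagged added result is
    itself treated as an error detection result."""
    return to_be_added in check_set(entries, repeat_threshold)

entries = [((5, 5), 11), ((1, 4), 0), ((0, 0), 0)]
print(check_set(entries))                     # -> [(5, 5)]
print(is_error_detection((5, 5), entries))    # -> True
```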
The specification also provides a passenger flow statistics apparatus. For the monitoring image of any region, a pedestrian detection result set for checking can be created, and each added result in the set has a repeat count value; the apparatus may comprise the following 3 units.
First update unit 801: and updating the pedestrian detection result set and determining an error detection result.
The updating may include: after any monitoring image of the area is obtained, performing pedestrian detection on the image, and determining the obtained pedestrian detection result as a result to be added; traversing the added results in the current pedestrian detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if not, adding the result to be added into the current set and initializing its repeat count value; if so, performing increment processing on the repeat count values of all the successfully matched added results according to a preset increment rule; the repeat count value is used at least for: when the repeat count value of any added result is greater than a preset threshold, determining that the added result is an error detection result.
The first pedestrian detection unit 802: and after any monitoring image of the area is obtained, carrying out pedestrian detection on the image to obtain a current pedestrian detection result.
The statistic unit 803: and determining the error detection result in the current pedestrian detection result according to the error detection result in the set, and counting the number of non-error detection results in the current pedestrian detection result for passenger flow statistics.
The specification also provides a pedestrian attention behavior recognition apparatus, wherein the pedestrian attention behaviors at least comprise a commodity taking behavior, a commodity trial behavior and a try-on behavior; for the monitoring image of any region, a pedestrian detection result set for checking can be created, and each added result in the set has a repeat count value; the apparatus may comprise the following units.
Second updating unit 901: and updating the pedestrian detection result set and determining an error detection result.
The updating may include: after any monitoring image of the area is obtained, performing pedestrian detection on the image, and determining the obtained pedestrian detection result as a result to be added; traversing the added results in the current pedestrian detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if not, adding the result to be added into the current set and initializing its repeat count value; if so, performing increment processing on the repeat count values of all the successfully matched added results according to a preset increment rule; the repeat count value is used at least for: when the repeat count value of any added result is greater than a preset threshold, determining that the added result is an error detection result.
The second pedestrian detection unit 902: and after any monitoring image of the area is obtained, carrying out pedestrian detection on the image to obtain a current pedestrian detection result.
The recognition unit 903: determining an error detection result contained in the current pedestrian detection result according to the error detection result in the set; and aiming at any non-error detection result in the current pedestrian detection result, identifying whether the behavior of the pedestrian corresponding to the non-error detection result belongs to any pedestrian attention behavior by utilizing an image identification technology.
For the detailed explanation of the four device embodiments, reference may be made to the method embodiments described above, which are not described herein again.
Embodiments of the present specification further provide a computer device, which at least includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements an image content detection result set updating method as shown in fig. 3 or an image content detection result checking method as shown in fig. 4 when executing the program.
Fig. 8 is a schematic diagram illustrating a more specific hardware structure of a computer device according to an embodiment of the present disclosure, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
Embodiments of the present specification also provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements an image content detection result set updating method as shown in fig. 3 or an image content detection result checking method as shown in fig. 4.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
From the above description of the embodiments, it is clear to those skilled in the art that the embodiments of the present disclosure can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, and the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more software and/or hardware when implementing the embodiments of the present disclosure. And part or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing is only a detailed description of the embodiments of the present disclosure, and it should be noted that, for those skilled in the art, many modifications and decorations can be made without departing from the principle of the embodiments of the present disclosure, and these modifications and decorations should also be regarded as protection for the embodiments of the present disclosure.

Claims (15)

1. A method for updating image content detection result set includes creating content detection result set for checking aiming at monitoring image of any region, each added result in set has a repeated count value; the method comprises the following steps:
after any monitoring image of the area is obtained, content detection is carried out on the image, and the obtained content detection result is determined as a result to be added;
traversing the added results in the current content detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if no such added result exists, adding the result to be added into the current set, and initializing a repeat count value of the result; if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule;
the repetition count value is at least for: and when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
2. The method of claim 1, each added result in the set further having a match failure count value; if any added result fails to be matched with the result to be added, performing incremental processing on a matching failure count value of the added result according to a preset incremental rule;
the match failure count value is used at least for: and when the matching failure count value of any added result is greater than a preset threshold value, deleting the added result from the current set.
3. The method according to claim 2, wherein if the added result exists, the incremental processing is performed on the repeated count values of all the added results successfully matched according to a preset incremental rule, and the incremental processing comprises:
and if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule, and performing decrement processing on the matching failure count values of all the successfully matched added results according to a preset decrement rule.
4. The method of claim 2, wherein deleting any added result from the current set when the matching failure count value of the added result is greater than a preset threshold comprises:
deleting the added result from the current set when the matching failure count value of any added result determined as the error detection result is greater than a first preset threshold value;
and when the matching failure count value of any added result which is not determined as the error detection result is larger than a second preset threshold value, deleting the added result from the current set.
5. The method of claim 1, each added result in the set further having a validity time; if any added result fails to be matched with the result to be added, carrying out decrement treatment on the effective time of the added result according to a preset decrement rule;
the effective time is at least used for: when the effective time of any added result is 0, deleting the added result from the current set.
6. The method according to claim 5, wherein if the added result exists, the incremental processing is performed on the repeated count values of all the added results successfully matched according to a preset incremental rule, and the incremental processing comprises:
and if so, performing increment processing on the repeated count values and the effective time of all the added results which are successfully matched according to a preset increment rule.
7. The method of claim 1, each added result in the set determined to be a false detection result further having a disappearance time; if any added result fails to be matched with the result to be added, performing incremental processing on the disappearance time of the added result according to a preset incremental rule;
the disappearance time is at least for: and when the disappearance time of any added result is greater than a preset threshold value, deleting the added result from the current set.
8. The method according to claim 7, wherein if the added result exists, the incremental processing is performed on the repeated count values of all the added results successfully matched according to a preset incremental rule, and the incremental processing comprises:
and if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule, and performing decrement processing on the disappearance times of those successfully matched added results which are determined as error detection results according to a preset decrement rule.
9. A method for checking the detection result of image contents based on the set according to any one of claims 1 to 8, comprising:
and monitoring the repeated count value of the added results in the set, and if the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
10. The method of claim 9, further comprising:
and for any result to be added, if the added result which is successfully matched in the set is an error detection result, determining the result to be added as the error detection result.
11. A passenger flow statistical method is characterized in that a pedestrian detection result set for checking is created according to a monitoring image of any region, and each added result in the set has a repeated count value; the method comprises the following steps:
updating the pedestrian detection result set and determining an error detection result;
the updating comprises the following steps: after any monitoring image of the area is obtained, carrying out pedestrian detection on the image, and determining the obtained pedestrian detection result as a result to be added; traversing the added results in the current pedestrian detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if no such added result exists, adding the result to be added into the current set, and initializing a repeat count value of the result; if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule; the repeated count value is at least used for: when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result;
after any monitoring image of the area is obtained, carrying out pedestrian detection on the image to obtain a current pedestrian detection result;
and determining the error detection result in the current pedestrian detection result according to the error detection result in the set, and counting the number of non-error detection results in the current pedestrian detection result for passenger flow statistics.
12. A pedestrian attention behavior recognition method, wherein the pedestrian attention behaviors at least comprise a commodity taking behavior, a commodity trial behavior and a try-on behavior; for the monitoring image of any region, a pedestrian detection result set for checking is created, and each added result in the set has a repeated count value; the method comprises the following steps:
updating the pedestrian detection result set and determining an error detection result;
the updating comprises the following steps: after any monitoring image of the area is obtained, carrying out pedestrian detection on the image, and determining the obtained pedestrian detection result as a result to be added; traversing the added results in the current pedestrian detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if no such added result exists, adding the result to be added into the current set, and initializing a repeat count value of the result; if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule; the repeated count value is at least used for: when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result;
after any monitoring image of the area is obtained, carrying out pedestrian detection on the image to obtain a current pedestrian detection result;
determining an error detection result contained in the current pedestrian detection result according to the error detection result in the set; and aiming at any non-error detection result in the current pedestrian detection result, identifying whether the behavior of the pedestrian corresponding to the non-error detection result belongs to any pedestrian attention behavior by utilizing an image identification technology.
13. An image content detection result set updating device is used for creating a content detection result set for checking aiming at a monitoring image of any region, wherein each added result in the set has a repeated counting value; the device comprises:
a detection unit: after any monitoring image of the area is obtained, content detection is carried out on the image, and the obtained content detection result is determined as a result to be added;
a matching unit: traversing the added results in the current content detection result set to match with the result to be added, and judging whether an added result which is successfully matched with the result to be added exists in the current set; if no such added result exists, adding the result to be added into the current set, and initializing a repeat count value of the result; if such an added result exists, performing increment processing on the repeated count values of all the successfully matched added results according to a preset increment rule;
the repetition count value is at least for: and when the repeated count value of any added result is greater than a preset threshold value, determining that the added result is an error detection result.
14. An image content detection result checking apparatus based on the set of claim 13, comprising:
a monitoring unit: monitoring the repeated count value of the added results in the set;
a judging unit: and if the repeated counting value of any added result is monitored to be larger than a preset threshold value, determining that the added result is an error detection result.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 12 when executing the program.
CN202010986033.3A 2020-09-18 2020-09-18 Method and device for checking image content detection result Pending CN113297888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010986033.3A CN113297888A (en) 2020-09-18 2020-09-18 Method and device for checking image content detection result

Publications (1)

Publication Number Publication Date
CN113297888A true CN113297888A (en) 2021-08-24

Family

ID=77318291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010986033.3A Pending CN113297888A (en) 2020-09-18 2020-09-18 Method and device for checking image content detection result

Country Status (1)

Country Link
CN (1) CN113297888A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110081043A1 (en) * 2009-10-07 2011-04-07 Sabol Bruce M Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background
US20120093421A1 (en) * 2010-10-19 2012-04-19 Palo Alto Research Center Incorporated Detection of duplicate document content using two-dimensional visual fingerprinting
CN105988863A (en) * 2015-02-11 2016-10-05 华为技术有限公司 Event processing method and device
CN108241844A (en) * 2016-12-27 2018-07-03 北京文安智能技术股份有限公司 A kind of public traffice passenger flow statistical method, device and electronic equipment
CN108108733A (en) * 2017-12-19 2018-06-01 北京奇艺世纪科技有限公司 A kind of news caption detection method and device
CN110020647A (en) * 2018-01-09 2019-07-16 杭州海康威视数字技术股份有限公司 A kind of contraband object detection method, device and computer equipment
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
WO2020001302A1 (en) * 2018-06-25 2020-01-02 苏州欧普照明有限公司 People traffic statistical method, apparatus, and system based on vision sensor
CN109344746A (en) * 2018-09-17 2019-02-15 曜科智能科技(上海)有限公司 Pedestrian counting method, system, computer equipment and storage medium
WO2020093829A1 (en) * 2018-11-09 2020-05-14 阿里巴巴集团控股有限公司 Method and device for real-time statistical analysis of pedestrian flow in open space
CN109214806A (en) * 2018-11-20 2019-01-15 北京京东尚科信息技术有限公司 Self-help settlement method, apparatus and storage medium
WO2020164270A1 (en) * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
CN110795998A (en) * 2019-09-19 2020-02-14 深圳云天励飞技术有限公司 People flow detection method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张静; 杨大伟; 毛琳: "Recursive pedestrian false-detection verification algorithm based on image transformation" (基于图像变换的递归式行人错检校验算法), Journal of Dalian Minzu University, no. 03 *
蒙芳; 翟建丽: "Simulation of an incremental identification method for approximately duplicated records from multiple data sources" (多数据源近似重复记录增量式识别方法仿真), Computer Simulation (计算机仿真), no. 08 *

Similar Documents

Publication Publication Date Title
CN107808122B (en) Target tracking method and device
CN109101989B (en) Merchant classification model construction and merchant classification method, device and equipment
CN108416902B (en) Real-time object identification method and device based on difference identification
US20180260963A1 (en) Apparatus, method and image processing device for smoke detection in image
CN111145214A (en) Target tracking method, device, terminal equipment and medium
CN109168052B (en) Method and device for determining service satisfaction degree and computing equipment
JP2016143334A (en) Purchase analysis device and purchase analysis method
JP6366529B2 (en) Flow line processing system and flow line processing method
CN109102324B (en) Model training method, and red packet material laying prediction method and device based on model
CN112200631A (en) Industry classification model training method and device
CN111178116A (en) Unmanned vending method, monitoring camera and system
JPWO2019123714A1 (en) Information processing equipment, product recommendation methods, and programs
CN108734366B (en) User identification method and system, nonvolatile storage medium and computer system
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN111402027B (en) Identity recognition method, commodity loan auditing method, device and terminal equipment
CN113297888A (en) Method and device for checking image content detection result
CN112200711B (en) Training method and system of watermark classification model
CN113111734B (en) Watermark classification model training method and device
CN111222377B (en) Commodity information determining method and device and electronic equipment
CN111860261B (en) Passenger flow value statistical method, device, equipment and medium
CN112950329A (en) Commodity dynamic information generation method, device, equipment and computer readable medium
CN111160314A (en) Violence sorting identification method and device
CN111091413A (en) Passenger flow data statistical method and device and computer readable storage medium
CN111368201A (en) Hot event detection method and device, electronic equipment and storage medium
CN113360356B (en) Method for identifying reading cheating behaviors, computing device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination