CN113393265A - Method for establishing database of feature library of passing object, electronic device and storage medium - Google Patents


Info

Publication number
CN113393265A
Authority
CN
China
Prior art keywords
passing object
feature
region
interest
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110569170.1A
Other languages
Chinese (zh)
Other versions
CN113393265B (en)
Inventor
葛赵泳 (Ge Zhaoyong)
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110569170.1A
Publication of CN113393265A
Application granted
Publication of CN113393265B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201: Market modelling; Market analysis; Collecting market data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The present application relates to a method for building a feature library of a passing object, an electronic device, and a storage medium. The method comprises: acquiring a first image containing a first passing object; extracting a first region of interest and a second region of interest of the first passing object from the first image; in the case where both the first region of interest and the second region of interest of the first passing object are extracted from the first image, determining a first feature based on the first region of interest and a second feature based on the second region of interest; matching, in the feature library, a first target passing object of the first passing object based on the first feature, and a second target passing object of the first passing object based on the second feature; and, in the case where neither the first target passing object nor the second target passing object is matched, allocating a unique identifier to the first passing object and storing the first feature and the second feature into the feature library with the unique identifier as an index.

Description

Method for establishing database of feature library of passing object, electronic device and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a method for building a feature library of a passing object, an electronic device, and a storage medium, and a method for counting passing objects, an electronic device, and a storage medium.
Background
With the improvement of face-recognition comparison performance, face-related technology has become increasingly mature in intelligent security applications, and is currently used mainly for personnel control, real-time alarms, and identity verification. In face-recognition comparison applications, a captured image is generally compared against a preset library to obtain information about the target person in the library. Face comparison also allows deduplication strategies to be configured, such as deduplicating within business hours or at fixed intervals, so that a user who appears multiple times in front of a camera at a given location is counted only once; this is more accurate than the prior art based on ordinary passenger-flow cameras.
However, when a user actually walks past, the face may not be captured, for example when the head is lowered or turned away, resulting in missed captures that degrade the analysis of precision-marketing data.
No effective solution has yet been proposed for the problem of low accuracy of precision-marketing data analysis in the related art.
Disclosure of Invention
In the present embodiments, a method, an electronic device, and a storage medium for building a feature library of a passing object, and a method, an electronic device, and a storage medium for counting passing objects, are provided to solve the problem of low accuracy of precision-marketing data analysis in the related art.
In a first aspect, in this embodiment, a method for building a feature library of a passing object is provided, including:
acquiring a first image with a first passing object;
extracting a first region of interest and a second region of interest of the first passing object from the first image;
in the case where a first region of interest and a second region of interest of the first passing object are extracted from the first image, determining a first feature based on the first region of interest of the first passing object, and determining a second feature based on the second region of interest of the first passing object;
matching a first target passing object of the first passing object based on the first feature in a feature library, and matching a second target passing object of the first passing object based on the second feature in the feature library;
and, in the case where neither the first target passing object nor the second target passing object is matched, allocating a unique identifier to the first passing object, and storing the first feature and the second feature into the feature library with the unique identifier as an index.
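As an illustrative sketch only (the application does not disclose an implementation), the first-aspect flow above can be expressed in Python. The `FeatureLibrary` class, the use of cosine similarity, and the 0.8 matching threshold are all assumptions introduced for illustration:

```python
import math
import uuid

SIM_THRESHOLD = 0.8  # assumed matching threshold; the application does not fix a value

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class FeatureLibrary:
    """Toy in-memory feature library indexed by unique identifier."""
    def __init__(self):
        self.records = {}  # unique id -> {"first": [features], "second": [features]}

    def match(self, feature, kind):
        """Return the id of the most similar stored feature above the threshold."""
        best_id, best_sim = None, SIM_THRESHOLD
        for uid, feats in self.records.items():
            for stored in feats[kind]:
                sim = cosine_sim(feature, stored)
                if sim >= best_sim:
                    best_id, best_sim = uid, sim
        return best_id

    def store(self, uid, first=None, second=None):
        rec = self.records.setdefault(uid, {"first": [], "second": []})
        if first is not None:
            rec["first"].append(first)
        if second is not None:
            rec["second"].append(second)

def enroll(library, first_feature, second_feature):
    """Match both features; if neither target passing object is found,
    allocate a new unique identifier and store both features under it."""
    first_target = library.match(first_feature, "first")
    second_target = library.match(second_feature, "second")
    if first_target is None and second_target is None:
        uid = str(uuid.uuid4())  # allocate a unique identifier
        library.store(uid, first=first_feature, second=second_feature)
        return uid
    return first_target or second_target
```

A second enrollment of the same object matches the stored features and reuses the existing identifier instead of creating a new record.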
In some of these embodiments, the method further comprises:
determining a second feature based on a second region of interest of the first passing object in the case where the first region of interest of the first passing object is not extracted from the first image and the second region of interest of the first passing object is extracted from the first image;
matching, in the feature library, a second target passing object of the first passing object based on the second feature;
and under the condition that the second target passing object is not matched, allocating a unique identifier for the first passing object, and storing the second characteristics into the characteristic library by taking the unique identifier as an index.
In some of these embodiments, the method further comprises:
in the case where a first region of interest of the first passing object is extracted from the first image and a second region of interest of the first passing object is not extracted from the first image, determining a first feature based on the first region of interest of the first passing object;
matching, in the feature library, a first target passing object of the first passing object based on the first feature;
and under the condition that the first target passing object is not matched, allocating a unique identifier for the first passing object, and storing the first characteristics into the characteristic library by taking the unique identifier as an index.
In some of these embodiments, the method further comprises:
under the condition that the first target passing object is matched and the second target passing object is not matched, the first characteristic and/or the second characteristic are/is stored into the characteristic library by taking the unique identification of the first target passing object as an index;
and under the condition that the first target passing object is not matched and the second target passing object is matched, storing the first characteristic and/or the second characteristic into the characteristic library by taking the unique identification of the second target passing object as an index.
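A minimal sketch of this partial-match rule, assuming the feature library behaves like a dict from unique identifier to stored features (the schema is illustrative, not from the application):

```python
def store_features(library, first_target, second_target, first_feature, second_feature):
    """If exactly one target passing object was matched, store the new
    features into the library indexed by that target's unique identifier."""
    if first_target is not None and second_target is None:
        matched = first_target
    elif first_target is None and second_target is not None:
        matched = second_target
    else:
        return None  # both or neither matched: handled by other embodiments
    library.setdefault(matched, []).extend([first_feature, second_feature])
    return matched
```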
In some of these embodiments, the method further comprises:
under the condition that the first target passing object and the second target passing object are matched, judging whether the unique identifier of the first target passing object is the same as the unique identifier of the second target passing object or not;
and under the condition that the unique identification of the first target passing object is judged to be the same as the unique identification of the second target passing object, the first characteristic and/or the second characteristic are/is stored into the characteristic library by taking the same unique identification as an index.
In some of these embodiments, the method further comprises:
and under the condition that the unique identification of the first target passing object is judged to be different from the unique identification of the second target passing object, determining the target passing object with higher similarity to the first passing object in the first target passing object and the second target passing object, and storing the first feature and/or the second feature into the feature library by taking the unique identification of the target passing object with higher similarity as an index.
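This tie-breaking rule can be sketched as follows; the similarity scores are assumed to be available from the matching step, and the dict-based library is an illustrative stand-in for the feature library:

```python
def resolve_and_store(library, first_id, sim_first, second_id, sim_second, features):
    """When the two matched targets carry different unique identifiers,
    index the new features by the identifier of whichever target is more
    similar to the current passing object."""
    chosen = first_id if sim_first >= sim_second else second_id
    library.setdefault(chosen, []).extend(features)
    return chosen
```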
In some of these embodiments, the first image further comprises a second passing object; after a unique identifier is allocated to the first passing object and the first feature and the second feature are stored into the feature library with the unique identifier as an index, in the case where neither the first target passing object nor the second target passing object is matched, the method further comprises:
generating associated information based on the unique identification of the first passing object and the unique identification of the second passing object;
and storing the associated information to the feature library by taking the unique identifier of the first passing object and/or the unique identifier of the second passing object as an index.
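A minimal sketch of generating and indexing the association information, assuming a dict-backed library with a hypothetical "assoc" table (the record layout is an assumption for illustration):

```python
def associate(library, uid_a, uid_b):
    """Generate association information from the unique identifiers of two
    passing objects appearing in the same image, and store it indexed by
    either identifier so it can be retrieved from both sides."""
    info = {"pair": (uid_a, uid_b)}
    table = library.setdefault("assoc", {})
    table[uid_a] = info
    table[uid_b] = info
    return info
```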
In some of these embodiments, the first passing object includes at least one of: a pedestrian and a vehicle.
In the case where the first passing object is a pedestrian, the first region of interest is a face region and the second region of interest is a human body region; the first feature is a face feature and the second feature is a human body feature.
In the case where the first passing object is a vehicle, the first region of interest is a license plate region and the second region of interest is a vehicle body region; the first feature is a license plate feature and the second feature is a vehicle body feature.
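The mapping described above can be captured in a small configuration table; the key and field names below are illustrative only, not from the application:

```python
# Hypothetical mapping from passing-object type to its two regions of
# interest and the features derived from them.
ROI_CONFIG = {
    "pedestrian": {
        "first_roi": "face", "second_roi": "body",
        "first_feature": "face_feature", "second_feature": "body_feature",
    },
    "vehicle": {
        "first_roi": "license_plate", "second_roi": "vehicle_body",
        "first_feature": "plate_feature", "second_feature": "vehicle_body_feature",
    },
}

def regions_for(object_type):
    """Return the (first, second) regions of interest for an object type."""
    cfg = ROI_CONFIG[object_type]
    return cfg["first_roi"], cfg["second_roi"]
```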
In a second aspect, in this embodiment, a method for counting passing objects is provided, including:
acquiring a second image with a third passing object;
extracting a first region of interest of the third passing object from the second image;
under the condition that the first region of interest of the third passing object is not extracted, extracting a second region of interest of the third passing object from the second image, and extracting a third feature based on the second region of interest of the third passing object;
matching a third target passing object of the third passing object based on the third feature in a feature library, wherein the feature library is the feature library in the first aspect;
and updating a count value corresponding to the third target passing object when the third target passing object is matched from the feature library.
In some of these embodiments, the method further comprises:
extracting a fourth feature based on the first region of interest of the third passing object in the case that the first region of interest of the third passing object is extracted;
matching, in the feature library, a third target passing object of the third passing object based on the fourth feature;
and updating a count value corresponding to the third target passing object when the third target passing object is matched from the feature library.
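The two counting paths above can be sketched as a single dispatch; `match_fn` stands in for the feature-library lookup of the first aspect and is an assumption introduced for illustration:

```python
def count_passing(match_fn, counts, first_roi_feature, second_roi_feature):
    """Prefer the feature from the first region of interest when it was
    extracted; otherwise fall back to the second-region feature. On a
    successful match, update the count value of the matched target."""
    feature = first_roi_feature if first_roi_feature is not None else second_roi_feature
    if feature is None:
        return None
    target = match_fn(feature)  # assumed lookup into the feature library
    if target is not None:
        counts[target] = counts.get(target, 0) + 1
    return target
```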
In some of these embodiments, the second image further includes a fourth passing object, the method further comprising:
in the case where the first region of interest of the fourth passing object is not extracted, extracting a second region of interest of the fourth passing object from the second image, and extracting a fourth feature based on the second region of interest of the fourth passing object;
acquiring association information corresponding to the unique identifier of the third passing object from the feature library based on the unique identifier of the third passing object;
determining a fifth passing object associated with the third passing object according to the associated information;
extracting a second region of interest of the fifth passing object, and extracting a fifth feature based on the second region of interest of the fifth passing object;
judging whether the similarity between the fourth feature and the fifth feature is greater than a preset similarity or not;
and updating a count value corresponding to the fourth passing object when the similarity between the fourth feature and the fifth feature is judged to be greater than a preset similarity.
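The association-based counting path can be sketched as follows; the cosine-similarity metric, the 0.8 default threshold, and the flat dict schemas are all assumptions for illustration:

```python
import math

def count_by_association(assoc, features, counts, third_id, fourth_feature, threshold=0.8):
    """Look up the passing object associated with the third passing object,
    compare its second-region feature with the fourth feature, and update
    the count when the similarity exceeds the preset threshold."""
    fifth_id = assoc.get(third_id)  # association info stored in the library
    if fifth_id is None:
        return False
    fifth_feature = features[fifth_id]
    dot = sum(a * b for a, b in zip(fourth_feature, fifth_feature))
    norm = (math.sqrt(sum(a * a for a in fourth_feature))
            * math.sqrt(sum(b * b for b in fifth_feature)))
    if norm and dot / norm > threshold:
        counts[fifth_id] = counts.get(fifth_id, 0) + 1
        return True
    return False
```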
In a third aspect, in this embodiment, there is provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for building a feature library of a passing object according to the first aspect or implements the method for counting a passing object according to the second aspect when executing the computer program.
In a fourth aspect, in the present embodiment, there is provided a storage medium, on which a computer program is stored, the program, when executed by a processor, implementing the method for building a feature library of a passing object according to the first aspect or implementing the method for counting a passing object according to the second aspect.
Compared with the related art, the feature library construction method, electronic device, and storage medium for a passing object, and the counting method, electronic device, and storage medium for a passing object provided in this embodiment acquire a first image containing a first passing object; extract a first region of interest and a second region of interest of the first passing object from the first image; in the case where both regions of interest are extracted, determine a first feature based on the first region of interest and a second feature based on the second region of interest; match, in the feature library, a first target passing object based on the first feature and a second target passing object based on the second feature; and, in the case where neither target passing object is matched, allocate a unique identifier to the first passing object and store the first feature and the second feature into the feature library with the unique identifier as an index. This solves the problem of low accuracy of precision-marketing data analysis in the related art and improves the accuracy of such analysis.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a terminal for the feature library construction method of a passing object according to the present embodiment;
fig. 2 is a flowchart of the feature library construction method for a passing object according to the present embodiment;
fig. 3 is a flowchart of a method of counting the passing objects of the present embodiment;
fig. 4 is a flowchart of a method of counting a passing object according to the preferred embodiment.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided herein may be executed in a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of a terminal for the method for building a feature library of a passing object according to the embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microcontroller (MCU) or a field-programmable gate array (FPGA). The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the method for building a feature library of a passing object in this embodiment; the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
It should be noted that, in some embodiments, the computer program in the above embodiments may also be used to execute the method for counting the passing objects in this embodiment.
In this embodiment, a method for building a feature library of a passing object is provided, and fig. 2 is a flowchart of the method for building a feature library of a passing object in this embodiment, as shown in fig. 2, the process includes the following steps:
in step S201, a first image having a first passing object is acquired.
In this step, the first image may be acquired in real time or may be acquired from a database in which the first image is stored.
Step S202, a first region of interest and a second region of interest of the first passing object are extracted from the first image.
In this step, the first region of interest and the second region of interest may be extracted by inputting the first image into a pre-trained neural network.
In step S203, in the case where the first region of interest and the second region of interest of the first passing object are extracted from the first image, the first feature is determined based on the first region of interest of the first passing object, and the second feature is determined based on the second region of interest of the first passing object.
In this step, the first feature is obtained by performing feature extraction on the first region of interest, and the second feature is obtained by performing feature extraction on the second region of interest, which facilitates subsequent matching based on the first feature and the second feature.
Step S204, a first target passing object of the first passing object is matched in the feature library based on the first feature, and a second target passing object of the first passing object is matched in the feature library based on the second feature.
In this step, the feature library may be a database in which some features are stored in advance according to actual needs by the user.
And step S205, under the condition that the first target passing object and the second target passing object are not matched, allocating a unique identifier for the first passing object, and storing the first characteristic and the second characteristic into a characteristic library by taking the unique identifier as an index.
Based on the above steps S201 to S205, the first region of interest and the second region of interest of the passing object are extracted from the first image; the first feature is obtained from the first region of interest and the second feature from the second region of interest; the first target passing object is matched in the feature library based on the first feature and the second target passing object based on the second feature; and finally, in the case where neither the first target passing object nor the second target passing object is matched, a unique identifier is allocated to the passing object and the first feature and the second feature are stored into the feature library with the unique identifier as an index. Storing multiple features of the passing object in this way solves the problem in the related art that the accuracy of precision-marketing data analysis is low because a certain feature of a passing object cannot be captured, and thereby improves the accuracy of precision-marketing data analysis.
In some of the embodiments, the second feature may also be determined based on the second region of interest of the first passing object in case that the first region of interest of the first passing object is not extracted from the first image and in case that the second region of interest of the first passing object is extracted from the first image; matching a second target passing object of the passing object in the feature library based on the second feature; and under the condition that the second target passing object is not matched, allocating a unique identifier for the first passing object, and storing the second characteristics into the characteristic library by taking the unique identifier as an index.
In this embodiment, on the premise of the second interest region of the passing object extracted from the first image, the single index of the passing object according to the second feature is realized in a manner of matching the second target passing object based on the second feature of the second interest region.
In some of the embodiments, the first feature may also be determined based on the first region of interest of the first passing object in a case where the first region of interest of the first passing object is extracted from the first image and the second region of interest of the first passing object is not extracted from the first image; matching a first target passing object of the passing object in the feature library based on the first feature; and under the condition that the first target passing object is not matched, allocating a unique identifier for the first passing object, and storing the first characteristics into the characteristic library by taking the unique identifier as an index.
In this embodiment, on the premise of the first region of interest of the passing object extracted from the first image, the single indexing of the passing object according to the first feature is realized in a manner of matching the first target passing object based on the first feature of the first region of interest.
In some embodiments, the first feature and/or the second feature may be further stored in the feature library by using the unique identifier of the first target passing object as an index in the case of matching to the first target passing object and not matching to the second target passing object; and under the condition that the first target passing object is not matched and the second target passing object is matched, storing the first characteristic and/or the second characteristic into the characteristic library by taking the unique identifier of the second target passing object as an index.
In this embodiment, by storing the first feature and/or the second feature in the feature library according to the unique identifier of the first passing object as an index, or by storing the first feature and/or the second feature in the feature library according to the unique identifier of the second passing object as an index, multi-way indexing of the first feature and the second feature is realized, which is convenient for counting subsequent passing objects.
In some embodiments, in the case that the first target passing object and the second target passing object are matched, whether the unique identifier of the first target passing object is the same as the unique identifier of the second target passing object is judged; and under the condition that the unique identification of the first target passing object is judged to be the same as the unique identification of the second target passing object, the first characteristic and/or the second characteristic are/is stored into the characteristic library by taking the same unique identification as an index.
In this embodiment, the first feature and the second feature of the passing object are stored in the feature library by using the same unique identifier as an index when it is determined that the unique identifier of the first target passing object is the same as the unique identifier of the second target passing object, and meanwhile, the uniqueness of the unique identifiers of the indexes of the first feature and the second feature of the passing object can be further determined by determining whether the unique identifiers are the same.
In some embodiments, when it is determined that the unique identifier of the first target passing object is not the same as the unique identifier of the second target passing object, the target passing object with higher similarity to the first passing object in the first target passing object and the second target passing object may be determined, and the first feature and/or the second feature may be stored in the feature library by using the unique identifier of the target passing object with higher similarity as an index.
In this embodiment, by determining, under the condition that it is determined that the unique identifier of the first target passing object is not the same as the unique identifier of the second target passing object, a target passing object with a higher similarity to the first passing object in the first target passing object and the second target passing object, and storing the first feature and/or the second feature in the feature library by using the unique identifier of the target passing object with the higher similarity as an index, the first feature and the second feature of the passing object are stored, and meanwhile, by selecting the unique identifier through the similarity, the accuracy of the unique identifiers of the indexes of the first feature and the second feature of the passing object can be further determined.
In some of these embodiments, the first image further comprises a second pass object; under the condition that the first target passing object and the second target passing object are not matched, a unique identifier is distributed for the passing object, the first feature and the second feature are stored in a feature library by taking the unique identifier as an index, and then association information can be generated based on the unique identifier of the first passing object and the unique identifier of the second passing object; and storing the associated information to a feature library by taking the unique identifier of the first passing object and/or the unique identifier of the second passing object as an index.
In this embodiment, the first passing object and the second passing object are associated, so that the second passing object can later be determined and counted from the association information of the first passing object, or the first passing object determined and counted from the association information of the second passing object.
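The association record described above might look like the following minimal sketch; the dict schema is an assumption, not taken from the patent:

```python
def associate(library, first_id, second_id):
    """Store one association record under both objects' unique identifiers,
    so the partner object can later be looked up from either side."""
    record = {"pair": (first_id, second_id)}
    for uid in (first_id, second_id):
        library.setdefault(uid, {}).setdefault("assoc", []).append(record)
    return record
```

Either identifier then serves as the index: following `record["pair"]` from the first passing object's entry yields the second passing object, and vice versa.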
In some of these embodiments, the first passing object includes at least one of: a pedestrian and a vehicle. When the first passing object is a pedestrian, the first region of interest is a face region and the second region of interest is a human body region; the first feature is a face feature and the second feature is a human body feature. When the first passing object is a vehicle, the first region of interest is a license plate region and the second region of interest is a vehicle body region; the first feature is a license plate feature and the second feature is a vehicle body feature.
In this embodiment, different regions may be selected as the first region of interest and the second region of interest according to the type of the first passing object; in this way, regions of interest can be extracted for different kinds of passing objects.
It should be noted that the first passing object may also be an animal, or any passing object defined according to the actual needs of the user; the embodiments of the present application are not limited in this respect.
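The mapping from object type to region-of-interest pair described above can be written as a simple lookup table (the string names are illustrative):

```python
# Region-of-interest pairs per object type, per the embodiments above;
# further types (e.g. animals) can be added as needed.
ROI_BY_OBJECT = {
    "pedestrian": ("face", "body"),                 # first ROI, second ROI
    "vehicle": ("license_plate", "vehicle_body"),
}

def regions_for(object_type):
    # Returns None for object types with no configured regions.
    return ROI_BY_OBJECT.get(object_type)
```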
In the present embodiment, a method for counting passing objects is provided. Fig. 3 is a flowchart of the method of counting passing objects of the present embodiment; as shown in fig. 3, the flow includes the following steps:
in step S301, a second image having a third passing object is acquired.
In this step, the second image may be acquired in real time, or retrieved from a database in which second images are stored.
Step S302, a first region of interest of the third passing object is extracted from the second image.
In this step, the first region of interest may be extracted by a pre-trained neural network with sufficient region-extraction accuracy, so as to improve the accuracy of the first region of interest.
Note that, in the case where the third passing object is a human, the first region of interest may be a human face region.
In step S303, in a case where the first region of interest of the third passing object is not extracted, a second region of interest of the third passing object is extracted from the second image, and the third feature is extracted based on the second region of interest of the third passing object.
In this step, extracting the second region of interest of the third passing object from the second image and extracting the third feature from it makes it possible to count the passing object from the third feature later.
Step S304, a third target passing object of the third passing object is matched based on the third feature in the feature library, where the feature library is the feature library in the foregoing embodiment.
In this step, when the first region of interest cannot be extracted, the third feature is matched in the feature library against a third target passing object of the third passing object, so that the passing object can be counted from the third feature once it is matched.
In step S305, when the third target passing object is matched from the feature library, the count value corresponding to the third target passing object is updated.
Based on the above steps S301 to S305, the first region of interest of the third passing object is looked for in the second image; if it cannot be extracted, the second region of interest of the third passing object is extracted instead, the third feature is extracted from that second region of interest, the third target passing object corresponding to the third feature is matched from the feature library, and the count value corresponding to the matched third target passing object is updated. This solves the problem in the related art that passing objects cannot be counted accurately for precise-marketing data when the face cannot be captured, and thereby improves the accuracy of precise-marketing data.
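A minimal sketch of the fallback path in steps S303–S305, assuming a dict-based feature library and cosine similarity (both assumptions; the patent does not fix the matching metric or storage layout):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def count_via_second_roi(library, third_feature, threshold=0.8):
    """When no first ROI (e.g. face) was extracted, match the third feature
    (from the second ROI, e.g. body) against stored second features and
    update the matched object's count value."""
    best_id, best_sim = None, threshold
    for uid, record in library.items():
        sim = cosine_sim(third_feature, record["second"])
        if sim >= best_sim:
            best_id, best_sim = uid, sim
    if best_id is not None:
        library[best_id]["count"] = library[best_id].get("count", 0) + 1
    return best_id
```

Returning `None` when no stored feature clears the threshold leaves the count untouched, matching the condition in step S305 that the count is updated only when a third target passing object is matched.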
In some embodiments, in a case where the first region of interest of the third passing object is extracted, the fourth feature may be further extracted based on the first region of interest of the third passing object; matching a third target passing object of the third passing object based on a fourth feature in the feature library; when the third target passing object is matched from the feature library, the count value corresponding to the third target passing object is updated.
In this embodiment, when the first region of interest of the third passing object is extracted, the fourth feature is extracted directly from it and matched; when a match is found, the count value corresponding to the third target passing object is updated, so the third passing object is counted accurately.
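The preferred path, when the first region of interest is available, differs only in which stored feature is compared; a sketch under the same assumptions as before (dict-based library, cosine similarity):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def count_via_first_roi(library, fourth_feature, threshold=0.8):
    """When the first ROI (e.g. face) was extracted, match the fourth
    feature against stored first features and update the count value."""
    best_id, best_sim = None, threshold
    for uid, record in library.items():
        if "first" in record:
            sim = cosine_sim(fourth_feature, record["first"])
            if sim >= best_sim:
                best_id, best_sim = uid, sim
    if best_id is not None:
        library[best_id]["count"] = library[best_id].get("count", 0) + 1
    return best_id
```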
In some embodiments, when the second image further includes a fourth passing object and the first region of interest of the fourth passing object is not extracted, a second region of interest of the fourth passing object may be extracted from the second image and a fourth feature extracted from it; association information corresponding to the unique identifier of the third passing object is then acquired from the feature library based on that identifier; a fifth passing object associated with the third passing object is determined from the association information; a second region of interest of the fifth passing object is extracted and a fifth feature extracted from it; whether the similarity between the fourth feature and the fifth feature exceeds a preset similarity is judged; and when it does, the count value corresponding to the fourth passing object is updated.
In this embodiment, when the second image further includes a fourth passing object and its first region of interest is not extracted, the second region of interest of the fourth passing object is extracted from the second image; the association information is then determined from the unique identifier of the third passing object, the similarity between the fifth feature of the fifth passing object and the fourth feature is computed, and when that similarity exceeds the preset similarity, the count value corresponding to the fourth passing object is updated. The fourth passing object is thus counted via the association information, further improving the accuracy of the marketing data.
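The association path above can be sketched as follows. The schema matches the earlier sketches (an `assoc` list of identifier pairs per record); using the stored second feature as a stand-in for re-extracting the fifth feature is an assumption made for brevity.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def count_fourth_via_association(library, third_id, fourth_feature,
                                 preset_similarity=0.8):
    """Look up objects associated with the third passing object, compare
    each candidate's second-ROI feature (the fifth feature) with the
    fourth feature, and count the first sufficiently close match."""
    for record in library.get(third_id, {}).get("assoc", []):
        a, b = record["pair"]
        fifth_id = b if a == third_id else a
        fifth_feature = library[fifth_id]["second"]  # stand-in for re-extraction
        if cosine_sim(fourth_feature, fifth_feature) > preset_similarity:
            library[fifth_id]["count"] = library[fifth_id].get("count", 0) + 1
            return fifth_id
    return None
```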
The present embodiment is described and illustrated below by means of preferred embodiments.
Fig. 4 is a flowchart of a method of counting passing objects according to the preferred embodiment. As shown in fig. 4, the flow includes the following steps:
In this preferred embodiment, the passing object is a pedestrian; the face region serves as the first region of interest and the human body region as the second region of interest.
In step S401, an image is captured by a Network Video Recorder (NVR).
In step S402, whether one person or a plurality of persons are present in the image is detected, and step S403 is executed if one person is present, and step S408 is executed if a plurality of persons are present.
Step S403, detecting whether a face region was captured; if so, performing step S404, and if not, performing step S405.
And step S404, updating the face counting times corresponding to the face features in the feature library according to the face region.
Step S405, capturing the human body region, and calculating the similarity between the captured human body region and the human body regions in the feature library.
Step S406, determining whether the similarity between the captured human body region and the human body region with the highest similarity in the feature library reaches a preset similarity, if so, performing step S407, and if not, performing step S413.
It should be noted that comparison by human body region does not mean the present application is limited thereto; the comparison in this embodiment may also use an image of an article or other images.
Step S407, updating, in the feature library, the face count of the person corresponding to the matched human body region.
Step S408, detecting whether a face region was captured for each pedestrian; if so, executing step S409, otherwise executing step S410.
And step S409, updating the face counting times corresponding to the face features in the feature library according to the face region of each pedestrian.
Step S410, for the pedestrians whose face region was not detected, comparing the human body and face feature similarities for each such pedestrian and accumulating them to obtain a similarity sum.
Step S411, determining whether the similarity sum is greater than the preset total similarity; if so, executing step S412, and if not, executing step S413.
Step S412, for each human body region for which no face region was captured, updating the face count corresponding to that human body region in the feature library.
Step S413, end.
Through the above steps, this embodiment performs comparative analysis using the association between a human body and a face, among multiple human bodies, and between human bodies and surrounding objects, restoring and supplementing the face information. This keeps the precise-marketing snapshot record complete and improves the accuracy of precise-marketing analysis.
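The per-person branches of the preferred flow can be sketched together as below. This is a simplification: it treats each pedestrian independently rather than summing similarities across pedestrians as steps S410–S411 do, and the input format (a dict per person with optional `face`/`body` feature vectors) is assumed.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(library, key, feature, threshold):
    # Identifier of the most similar stored feature above the threshold.
    best_id, best_sim = None, threshold
    for uid, record in library.items():
        if key in record:
            sim = cosine_sim(feature, record[key])
            if sim >= best_sim:
                best_id, best_sim = uid, sim
    return best_id

def process_snapshot(library, persons, threshold=0.8):
    """S403-S407 per person: count by face when a face region was
    captured, otherwise fall back to body-region matching."""
    counted = []
    for person in persons:
        if person.get("face") is not None:
            uid = best_match(library, "first", person["face"], threshold)
        else:
            uid = best_match(library, "second", person["body"], threshold)
        if uid is not None:
            library[uid]["count"] = library[uid].get("count", 0) + 1
            counted.append(uid)
    return counted
```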
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, both connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
in step S201, a first image having a first passing object is acquired.
Step S202, a first region of interest and a second region of interest of the first passing object are extracted from the first image.
In step S203, in the case where the first region of interest and the second region of interest of the first passing object are extracted from the first image, the first feature is determined based on the first region of interest of the first passing object, and the second feature is determined based on the second region of interest of the first passing object.
Step S204, a first target passing object of the first passing object is matched in the feature library based on the first feature, and a second target passing object of the first passing object is matched in the feature library based on the second feature.
And step S205, under the condition that the first target passing object and the second target passing object are not matched, allocating a unique identifier for the first passing object, and storing the first characteristic and the second characteristic into a characteristic library by taking the unique identifier as an index.
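The enrollment flow of steps S201–S205 can be sketched as below; the dict layout, cosine-similarity matching, and caller-supplied `next_id` are all assumptions made for illustration.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(library, key, feature, threshold):
    # Identifier of the most similar stored feature above the threshold.
    best_id, best_sim = None, threshold
    for uid, record in library.items():
        if key in record:
            sim = cosine_sim(feature, record[key])
            if sim >= best_sim:
                best_id, best_sim = uid, sim
    return best_id

def enroll(library, first_feature, second_feature, next_id, threshold=0.8):
    """S204-S205: if neither feature matches an existing record, assign a
    fresh unique identifier and store both features under it."""
    m1 = best_match(library, "first", first_feature, threshold)
    m2 = best_match(library, "second", second_feature, threshold)
    if m1 is None and m2 is None:
        library[next_id] = {"first": first_feature,
                            "second": second_feature, "count": 0}
        return next_id
    return m1 or m2
```

Re-enrolling a known object returns its existing identifier instead of creating a duplicate record, which is the point of the matching in step S204.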
Optionally, in this embodiment, the processor may be further configured to execute, by the computer program, the following steps:
in step S301, a second image having a third passing object is acquired.
Step S302, a first region of interest of the third passing object is extracted from the second image.
In step S303, in a case where the first region of interest of the third passing object is not extracted, a second region of interest of the third passing object is extracted from the second image, and the third feature is extracted based on the second region of interest of the third passing object.
Step S304, a third target passing object of the third passing object is matched based on the third feature in the feature library, where the feature library is the feature library in the foregoing embodiment.
In step S305, when the third target passing object is matched from the feature library, the count value corresponding to the third target passing object is updated.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the methods provided in the above embodiments, a storage medium may also be provided in this embodiment. The storage medium has a computer program stored thereon; when executed by a processor, the computer program implements the method for building a feature library of any of the above embodiments, or the method for counting passing objects of any of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments express only several embodiments of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the patent. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. A method for building a database of a feature library of a passing object is characterized by comprising the following steps:
acquiring a first image with a first passing object;
extracting a first region of interest and a second region of interest of the first passing object from the first image;
in the case where a first region of interest and a second region of interest of the first passing object are extracted from the first image, determining a first feature based on the first region of interest of the first passing object, and determining a second feature based on the second region of interest of the first passing object;
matching a first target passing object of the first passing object based on the first feature in a feature library, and matching a second target passing object of the first passing object based on the second feature in the feature library;
and under the condition that the first target passing object and the second target passing object are not matched, allocating a unique identifier for the first passing object, and storing the first characteristic and the second characteristic into the characteristic library by taking the unique identifier as an index.
2. The method of claim 1, further comprising:
determining a second feature based on a second region of interest of the first passing object in the case where the first region of interest of the first passing object is not extracted from the first image and the second region of interest of the first passing object is extracted from the first image;
matching a second target passing object of the first passing object in the feature library based on the second feature;
and under the condition that the second target passing object is not matched, allocating a unique identifier for the first passing object, and storing the second characteristics into the characteristic library by taking the unique identifier as an index.
3. The method of claim 1, further comprising:
in the case where a first region of interest of the first passing object is extracted from the first image and a second region of interest of the first passing object is not extracted from the first image, determining a first feature based on the first region of interest of the first passing object;
matching a first target passing object of the first passing object in the feature library based on the first feature;
and under the condition that the first target passing object is not matched, allocating a unique identifier for the first passing object, and storing the first characteristics into the characteristic library by taking the unique identifier as an index.
4. The method of claim 1, further comprising:
under the condition that the first target passing object is matched and the second target passing object is not matched, the first characteristic and/or the second characteristic are/is stored into the characteristic library by taking the unique identification of the first target passing object as an index;
and under the condition that the first target passing object is not matched and the second target passing object is matched, storing the first characteristic and/or the second characteristic into the characteristic library by taking the unique identification of the second target passing object as an index.
5. The method of claim 1, further comprising:
under the condition that the first target passing object and the second target passing object are matched, judging whether the unique identifier of the first target passing object is the same as the unique identifier of the second target passing object or not;
and under the condition that the unique identification of the first target passing object is judged to be the same as the unique identification of the second target passing object, the first characteristic and/or the second characteristic are/is stored into the characteristic library by taking the same unique identification as an index.
6. The method of claim 5, further comprising:
and under the condition that the unique identification of the first target passing object is judged to be different from the unique identification of the second target passing object, determining the target passing object with higher similarity to the first passing object in the first target passing object and the second target passing object, and storing the first feature and/or the second feature into the feature library by taking the unique identification of the target passing object with higher similarity as an index.
7. The method of claim 1, wherein the first image further comprises a second passing object; and wherein, when the first target passing object and the second target passing object are not matched, after allocating a unique identifier to the first passing object and storing the first feature and the second feature into the feature library by taking the unique identifier as an index, the method further comprises:
generating associated information based on the unique identification of the first passing object and the unique identification of the second passing object;
and storing the associated information to the feature library by taking the unique identifier of the first passing object and/or the unique identifier of the second passing object as an index.
8. The method of claim 1, wherein the first pass object comprises at least one of: pedestrians, vehicles;
under the condition that the first passing object is a pedestrian, the first interested area is a human face area, and the second interested area is a human body area; the first characteristic is a face characteristic, and the second characteristic is a body characteristic;
under the condition that the first passing object is a vehicle, the first interested area is a license plate area, and the second interested area is a vehicle body area; the first characteristic is a license plate characteristic and the second characteristic is a vehicle body characteristic.
9. A method for counting objects passing through, comprising:
acquiring a second image with a third passing object;
extracting a first region of interest of the third passing object from the second image;
under the condition that the first region of interest of the third passing object is not extracted, extracting a second region of interest of the third passing object from the second image, and extracting a third feature based on the second region of interest of the third passing object;
matching a third target passing object of the third passing object based on the third feature in a feature library, wherein the feature library is the feature library of any one of claims 1 to 8;
and updating a count value corresponding to the third target passing object when the third target passing object is matched from the feature library.
10. The method of counting passing objects of claim 9, further comprising:
extracting a fourth feature based on the first region of interest of the third passing object in the case that the first region of interest of the third passing object is extracted;
matching, in the feature library, a third target passing object of the third passing object based on the fourth feature;
and updating a count value corresponding to the third target passing object when the third target passing object is matched from the feature library.
11. The method of counting passing objects of claim 9, wherein the second image further comprises a fourth passing object, the method further comprising:
in the case where the first region of interest of the fourth passing object is not extracted, extracting a second region of interest of the fourth passing object from the second image, and extracting a fourth feature based on the second region of interest of the fourth passing object;
acquiring association information corresponding to the unique identifier of the third passing object from the feature library based on the unique identifier of the third passing object;
determining a fifth passing object associated with the third passing object according to the associated information;
extracting a second region of interest of the fifth passing object, and extracting a fifth feature based on the second region of interest of the fifth passing object;
judging whether the similarity between the fourth feature and the fifth feature is greater than a preset similarity or not;
and updating a count value corresponding to the fourth passing object when the similarity between the fourth feature and the fifth feature is judged to be greater than a preset similarity.
12. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of building a feature library of a passing object according to any one of claims 1 to 8, or the method of counting a passing object according to any one of claims 9 to 11.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for building a feature library of a passing object according to any one of claims 1 to 8 or the steps of the method for counting a passing object according to any one of claims 9 to 11.
CN202110569170.1A 2021-05-25 2021-05-25 Feature library construction method for passing object, electronic device and storage medium Active CN113393265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110569170.1A CN113393265B (en) 2021-05-25 2021-05-25 Feature library construction method for passing object, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110569170.1A CN113393265B (en) 2021-05-25 2021-05-25 Feature library construction method for passing object, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113393265A true CN113393265A (en) 2021-09-14
CN113393265B CN113393265B (en) 2023-04-25

Family

ID=77618978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110569170.1A Active CN113393265B (en) 2021-05-25 2021-05-25 Feature library construction method for passing object, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113393265B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339391A (en) * 2010-07-27 2012-02-01 株式会社理光 Multiobject identification method and device
CN108491812A (en) * 2018-03-29 2018-09-04 百度在线网络技术(北京)有限公司 The generation method and device of human face recognition model
CN109740464A (en) * 2018-12-21 2019-05-10 北京智行者科技有限公司 The identification follower method of target
CN109977823A (en) * 2019-03-15 2019-07-05 百度在线网络技术(北京)有限公司 Pedestrian's recognition and tracking method, apparatus, computer equipment and storage medium
CN110781813A (en) * 2019-10-24 2020-02-11 北京市商汤科技开发有限公司 Image recognition method and device, electronic equipment and storage medium
CN110874583A (en) * 2019-11-19 2020-03-10 北京精准沟通传媒科技股份有限公司 Passenger flow statistics method and device, storage medium and electronic equipment
CN111738181A (en) * 2020-06-28 2020-10-02 浙江大华技术股份有限公司 Object association method and device, and object retrieval method and device
CN111783654A (en) * 2020-06-30 2020-10-16 苏州科达科技股份有限公司 Vehicle weight identification method and device and electronic equipment
CN112686178A (en) * 2020-12-30 2021-04-20 中国电子科技集团公司信息科学研究院 Multi-view target track generation method and device and electronic equipment
US20210117687A1 (en) * 2019-10-22 2021-04-22 Sensetime International Pte. Ltd. Image processing method, image processing device, and storage medium
CN112766230A (en) * 2021-02-09 2021-05-07 浙江工商大学 Video streaming personnel online time length estimation method and corresponding system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIAN Ge: "Research on Face Detection and Tracking Technology" *
ZHAO Qianqian: "Design and Implementation of a Face Recognition System Based on Statistical Features" *

Also Published As

Publication number Publication date
CN113393265B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN108091140B (en) Method and device for determining fake-licensed vehicle
CN108446681B (en) Pedestrian analysis method, device, terminal and storage medium
CN110363076A (en) Personal information correlating method, device and terminal device
CN111291682A (en) Method and device for determining target object, storage medium and electronic device
CN108563651B (en) Multi-video target searching method, device and equipment
CN110969215A (en) Clustering method and device, storage medium and electronic device
CN112770265B (en) Pedestrian identity information acquisition method, system, server and storage medium
CN112749652A (en) Identity information determination method and device, storage medium and electronic equipment
CN111091106A (en) Image clustering method and device, storage medium and electronic device
CN112528099A (en) Personnel peer-to-peer analysis method, system, equipment and medium based on big data
CN109784220B (en) Method and device for determining passerby track
CN111709382A (en) Human body trajectory processing method and device, computer storage medium and electronic equipment
CN110264497B (en) Method and device for determining tracking duration, storage medium and electronic device
CN113077018A (en) Target object identification method and device, storage medium and electronic device
CN113470013A (en) Method and device for detecting moved article
CN111104915B (en) Method, device, equipment and medium for peer analysis
CN111738181A (en) Object association method and device, and object retrieval method and device
CN113393265A (en) Method for establishing database of feature library of passing object, electronic device and storage medium
CN115391596A (en) Video archive generation method and device and storage medium
CN113343004B (en) Object identification method and device, storage medium and electronic device
CN112559583B (en) Method and device for identifying pedestrians
CN113591620A (en) Early warning method, device and system based on integrated mobile acquisition equipment
CN114255321A (en) Method and device for collecting pet nose print, storage medium and electronic equipment
CN113591713A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113743248A (en) Identity information extraction method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant