CN111126119A - Method and device for counting user behaviors arriving at store based on face recognition

Method and device for counting user behaviors arriving at store based on face recognition

Info

Publication number
CN111126119A
Authority
CN
China
Prior art keywords
user
face
store
library
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811296903.3A
Other languages
Chinese (zh)
Inventor
刘博文
姚向民
黄培
徐云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811296903.3A priority Critical patent/CN111126119A/en
Publication of CN111126119A publication Critical patent/CN111126119A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The application provides a method and device for counting store-arrival user behaviors based on face recognition. The method comprises the following steps: analyzing image data acquired by each camera acquisition device at the current moment to obtain user face images of each area in a store at the current moment; judging whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold; and if so, updating the store-arrival behavior of the second user in the first preset face library according to the area where the first user is currently located. Because the time interval between the store-arrival time of every user whose face image is stored in the first preset face library and the current moment is less than the threshold, the face image of the user acquired at the current moment is searched only within the first preset face library, which greatly improves processing speed, accuracy and real-time performance.

Description

Method and device for counting user behaviors arriving at store based on face recognition
Technical Field
The application relates to the technical field of face recognition, in particular to a method and a device for counting user behaviors arriving at a store based on face recognition.
Background
At present, in digital store-arrival behavior recognition scenarios for shopping malls, face detection, search and storage are performed on captured photos of arriving users, so as to produce store-level arrival statistics and track each user's historical store-arrival behavior, helping merchants formulate personalized marketing strategies.
However, as the number of arriving users increases, the store-level face library of arriving users grows steadily larger, so the processing speed, accuracy and real-time performance of face detection and search for arriving users deteriorate.
Disclosure of Invention
The application provides a method and a device for counting store-arrival user behaviors based on face recognition, which are used for solving the problem in the related art that, as the number of arriving users grows, face detection and search of arriving users in the store's face library become slow and lose accuracy and real-time performance.
An embodiment of one aspect of the application provides a method for counting store-arriving user behaviors based on face recognition, which includes:
analyzing image data acquired by each camera acquisition device at the current moment to obtain user face images of each area in a store at the current moment;
judging whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold;
and if so, updating the store-arrival behavior of the second user in the first preset face library according to the area where the first user is currently located.
According to the method for counting store-arrival user behaviors based on face recognition of the embodiment of the application, image data collected by each camera acquisition device at the current moment is analyzed to obtain user face images of each area in the store at the current moment; whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library is judged, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold; and if they match, the store-arrival behavior of the second user in the first preset face library is updated according to the area where the first user is currently located. In this embodiment, because the time interval between the store-arrival time of every user whose face image is stored in the first preset face library and the current moment is less than the threshold, only recently arrived users are stored there, so the face image acquired at the current moment is searched within this small library, which greatly improves processing speed, accuracy and real-time performance.
Another embodiment of the present application provides an apparatus for counting store-arrival user behaviors based on face recognition, including:
an analysis module, used for analyzing image data acquired by each camera acquisition device at the current moment to obtain user face images of each area in the store at the current moment;
a first judgment module, used for judging whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold;
and a first updating module, used for updating the store-arrival behavior of the second user in the first preset face library according to the area where the first user is currently located when the face image of the first user in the store at the current moment matches the face image of the second user in the first preset face library.
The apparatus for counting store-arrival user behaviors based on face recognition of the embodiment of the application analyzes image data collected by each camera acquisition device at the current moment to obtain user face images of each area in the store at the current moment, and judges whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold; if they match, the store-arrival behavior of the second user in the first preset face library is updated according to the area where the first user is currently located. In this embodiment, because the time interval between the store-arrival time of every user whose face image is stored in the first preset face library and the current moment is less than the threshold, only recently arrived users are stored there, so the face image acquired at the current moment is searched within this small library, which greatly improves processing speed, accuracy and real-time performance.
Another embodiment of the present application provides a computer device, including a processor and a memory;
wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the statistical method for the user behavior arriving at the store based on face recognition according to the embodiment of the above aspect.
Another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a statistical method for store-to-store user behavior based on face recognition as described in an embodiment of the above aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a method for counting store-to-store user behaviors based on face recognition according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another method for counting store-to-store user behaviors based on face recognition according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another method for counting store-to-store user behaviors based on face recognition according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a further method for counting store-to-store user behaviors based on face recognition according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a further store-to-store user behavior statistical method based on face recognition according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an arrival user behavior statistics apparatus based on face recognition according to an embodiment of the present application;
FIG. 7 illustrates a block diagram of an exemplary computer device suitable for implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes an arrival user behavior statistical method and apparatus based on face recognition according to an embodiment of the present application with reference to the drawings.
The embodiment of the application provides a method for counting store-arrival user behaviors based on face recognition, aimed at the problem in the related art that the face library of arriving users grows as the number of arriving users increases, so that face detection and search for arriving users become slow, inaccurate and poorly real-time.
According to the method for counting store-arrival user behaviors based on face recognition of the embodiment of the application, image data collected by each camera acquisition device at the current moment is analyzed to obtain user face images of each area in the store at the current moment; whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library is judged, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold; and if they match, the store-arrival behavior of the second user in the first preset face library is updated according to the area where the first user is currently located. In this embodiment, because the time interval between the store-arrival time of every user whose face image is stored in the first preset face library and the current moment is less than the threshold, only recently arrived users are stored there, so the face image acquired at the current moment is searched within this small library, which greatly improves processing speed, accuracy and real-time performance.
Fig. 1 is a schematic flow chart of an arrival store user behavior statistical method based on face recognition according to an embodiment of the present application.
As shown in fig. 1, the statistical method for the user behavior arriving at the store based on face recognition includes:
step 101, analyzing and processing image data acquired by each camera shooting acquisition device at the current moment to acquire user face images of each area in the store at the current moment.
In the embodiment of the present application, the store may be a specific store, or may be a closed area, such as a mall, a park, a scenic spot, and the like. Taking a store as an example, a plurality of areas such as a daily article area, a shoe monopoly area, a bag monopoly area, and the like can be set in the store. Taking a mall as an example, the mall may include a restaurant, a clothing store, a family store, and the like.
In this embodiment, camera capturing devices, such as cameras, may be disposed in each area in the store to capture images in each area. For example, cameras may be provided in various stores in a mall to capture images in the various stores.
Specifically, each camera acquisition device can be controlled to capture images of its area at preset time intervals; the images captured by each device at the current moment are obtained, the image data collected at the current moment is analyzed, and face images are extracted from it, thereby obtaining the user face images of each area in the store at the current moment.
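As a minimal illustration of this step (not part of the patent text), the Python sketch below assumes a hypothetical detect_and_encode helper standing in for whatever face-detection and feature-extraction model a deployment uses; it simply collects one embedding per detected face, tagged with the area the camera covers and the acquisition time.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

import numpy as np


@dataclass
class FaceObservation:
    area_id: str            # in-store area covered by the camera
    embedding: np.ndarray   # feature vector extracted from the face crop
    captured_at: datetime   # acquisition time of the frame


def detect_and_encode(frame: np.ndarray) -> List[np.ndarray]:
    """Hypothetical stand-in for a face detector plus feature extractor.

    A real system would run a detection/embedding model here; the stub
    returns no faces so the sketch stays self-contained and runnable.
    """
    return []


def collect_faces(frames_by_area: Dict[str, np.ndarray],
                  now: datetime) -> List[FaceObservation]:
    """Parse the frame captured by each area's camera at the current moment
    and return one observation per detected user face."""
    observations: List[FaceObservation] = []
    for area_id, frame in frames_by_area.items():
        for embedding in detect_and_encode(frame):
            observations.append(FaceObservation(area_id, embedding, now))
    return observations
```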
Step 102, judging whether the face image of the first user in the store at the current moment is matched with the face image of the second user in a first preset face library.
In this embodiment, images of users entering the store can be collected in real time by the camera acquisition device arranged at the store entrance, and the obtained user face images are stored in the first preset face library.
And the time interval between the store arrival time of the user corresponding to the face image contained in the first preset face library and the current moment is less than a threshold value. For example, a first preset face library may be used to store face images of users who arrive at the store on the current day.
It should be noted that the store-arrival time of a user whose face image is included in the first preset face library is the time at which that user was first detected entering the store.
After the face image of the user in the store at the current moment is obtained, the face image of the user in the store at the current moment can be matched with the face image of the user in the first preset face library.
Specifically, it is judged whether the face image of the first user in the store at the current moment matches the face image of a second user in the first preset face library. If the first preset face library contains at least one face image of the second user, the face image of the first user is matched against each face image of the second user, and if the matching degrees exceed a preset matching-degree threshold, the face image of the first user is considered to match the face image of the second user in the first preset face library.
In practical applications, a person changes little over a short time such as a day in terms of hairstyle, facial accessories, clothing and the like. In this embodiment, because the interval between each user's store-arrival time and the current moment for the face images stored in the first preset face library is required to be smaller than a threshold, the first preset face library holds only the face images of users who have arrived at the store recently. The similarity between a user's face image stored in the first preset face library and that user's newly captured face image is therefore much higher than the similarity, in the prior art, between a user's newly captured face image and that user's images in a full store-level face library. Consequently, matching the current snapshot against the first preset face library greatly improves processing speed and real-time performance, and also improves accuracy.
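The matching step can be pictured with the short sketch below. It is only an assumed implementation in which the "matching degree" is cosine similarity between face embeddings, and the first preset face library is a plain dict from user identifier to that user's stored embeddings; the threshold value is likewise an assumption.

```python
from typing import Dict, List, Optional

import numpy as np


def match_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, used here as the matching degree between two faces."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def find_matching_user(face: np.ndarray,
                       recent_library: Dict[str, List[np.ndarray]],
                       threshold: float = 0.75) -> Optional[str]:
    """Search the first preset face library (recent arrivals only) and return
    the identifier of the best-matching user, or None if no stored image
    exceeds the matching-degree threshold."""
    best_user: Optional[str] = None
    best_score = threshold
    for user_id, embeddings in recent_library.items():
        if not embeddings:
            continue
        score = max(match_score(face, e) for e in embeddings)
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user
```

Because the library only covers recent arrivals, this linear scan stays small; a full store-level library would need an index structure instead.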
Step 103, if they match, updating the store-arrival behavior of the second user in the first preset face library according to the area where the first user is currently located.
When the face image of the first user in the store at the current moment matches the face image of the second user in the first preset face library, the first user is the second user in the first preset face library, and the store-arrival behavior of the second user in the first preset face library can be updated according to the area where the first user is currently located.
For example, the area where the first user is currently located, the current purchasing behavior, the current moment, the face image captured at the current moment, and the like may be added to the store-arrival behavior record of the second user in the first preset face library.
In the embodiment of the application, when the face image of the user arriving at the store at the current moment is matched with the face image of the user in the first preset face library, the behavior of the user arriving at the store is updated to the first preset face library, and the first preset face library comprises the face image of the user arriving at the store within the preset time period, so that the real-time behavior of the user arriving at the store can be counted according to the first preset face library, and the real-time performance is high.
Based on the above embodiment, when the face image of a user in the store at the current moment is matched against the first preset face library, if the face image of the first user does not match the face image of any second user in the first preset face library, this indicates that the first user is arriving at the store for the first time within the period covered by the library, and a new user corresponding to the face image of the first user can be created in the first preset face library.
As an example, the first preset face library includes a corresponding relationship between a face image and a user identifier, so that a new user identifier may be created in the first preset face library, the face image of the first user is stored in the first preset face library, and the created new user identifier has a corresponding relationship with the face image of the first user. The user identifier is used to uniquely identify the user, and may be a user number or the like.
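A sketch of this bookkeeping is shown below, under the assumption that the first preset face library is the user_id -> embeddings mapping from the previous sketch and that store-arrival behavior is kept as a simple list of records per user; identifiers are generated with uuid purely for illustration, not as the patent's scheme.

```python
import uuid
from datetime import datetime
from typing import Callable, Dict, List, Optional

import numpy as np


def record_snapshot(face: np.ndarray,
                    area_id: str,
                    recent_library: Dict[str, List[np.ndarray]],
                    arrivals: Dict[str, List[dict]],
                    matcher: Callable[..., Optional[str]],
                    now: Optional[datetime] = None) -> str:
    """Update the first preset face library with the face captured just now.

    If the matcher (e.g. find_matching_user above) finds no second user whose
    images match, a new user identifier is created and associated with the
    captured face image; otherwise the existing user's record is extended."""
    now = now or datetime.now()
    user_id = matcher(face, recent_library)
    if user_id is None:
        user_id = uuid.uuid4().hex        # new user for a first-time arrival
        recent_library[user_id] = []
    recent_library[user_id].append(face)
    arrivals.setdefault(user_id, []).append({"area": area_id, "time": now})
    return user_id
```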
In practical applications, when the number of users in the store exceeds a preset number at a certain moment, letting more users in may degrade the shopping experience of those already inside. At present, user entry is controlled manually by staff at the store entrance, which is inflexible. In the embodiment of the application, the state of the store's entrance door can instead be controlled according to the number of users in the store at the current moment. Fig. 2 is a schematic flow chart of another store-arrival user behavior statistical method based on face recognition according to an embodiment of the present application.
As shown in fig. 2, the statistical method for the user behavior arriving at the store based on face recognition includes:
step 201, performing face recognition on the face images of the users in each area in the store at the current moment to determine the number of the users in the store at the current moment.
As another possible implementation manner, after the face images of users in the respective areas of the store at the current moment are acquired, face images whose similarity exceeds a preset threshold are found through face recognition and deduplicated, so that the number of face images remaining after deduplication is the number of users in the store at the current moment.
It can be understood that in the embodiment of the application, the number of users in the store at each time can be determined by performing face recognition on the face images acquired at each time. And then, the number of users entering the store within a certain time can be determined by matching and removing the weight of the face images collected at different moments.
For example, the face image of a first user acquired at the current moment is matched against the face images stored in the first preset face library. If it matches the face image of a certain user in the first preset face library, and the images acquired at moments before the current moment show that the first user has been in the store from the store-arrival time up to now, it can be determined that the passenger flow in the store between that store-arrival time and the current moment includes the first user.
Step 202, controlling the state of the user entrance doors in the store according to the number of the users in the store at the current time.
For example, if the counted number of users in the store at the current moment exceeds a preset threshold, the store's entrance door can be controlled to the closed state, so that waiting users are temporarily prevented from entering. After a period of time, if the number of users in the store falls below the preset threshold, the entrance door is controlled to the open state and users are allowed to enter the store.
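A minimal, self-contained sketch of the counting and door-control logic is given below; the cosine-similarity threshold and the notion of a fixed "capacity" are assumptions, since the patent only speaks of a preset threshold.

```python
from typing import List

import numpy as np


def count_users_in_store(embeddings: List[np.ndarray],
                         threshold: float = 0.75) -> int:
    """Deduplicate the faces seen across all areas at the current moment:
    two embeddings whose cosine similarity exceeds the threshold are treated
    as the same user, so the number of distinct clusters is the user count."""
    unique: List[np.ndarray] = []
    for e in embeddings:
        is_duplicate = any(
            float(np.dot(e, u) / (np.linalg.norm(e) * np.linalg.norm(u) + 1e-9)) > threshold
            for u in unique
        )
        if not is_duplicate:
            unique.append(e)
    return len(unique)


def entrance_door_state(current_count: int, capacity: int) -> str:
    """Close the entrance while the store is at or above capacity and reopen
    it once the count drops below the limit again."""
    return "closed" if current_count >= capacity else "open"
```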
According to the store-arrival user behavior statistical method based on face recognition of this embodiment, the user face images at the current moment are obtained from the image data collected by each camera acquisition device at the current moment and then deduplicated, so the number of user face images in each area of the store at the current moment, and hence the number of users in the store, can be determined accurately. Users waiting to enter are then admitted according to the real-time passenger flow, which improves control flexibility and saves labor costs.
In practice, a store may contain different areas, such as a luxury goods area and a regular-priced goods area. Entry to a luxury area in particular is currently controlled manually by staff at the entrance of the area according to the number of users inside, which is inflexible and increases labor costs.
In this embodiment, different entrance authorities can be set for different areas in the store. For example, whether the user has the right to enter a certain area can be determined according to the collected face image of the user.
Fig. 3 is a schematic flow chart of another store-to-store user behavior statistical method based on face recognition according to an embodiment of the present application.
As shown in fig. 3, after acquiring the face images of the users in the respective areas in the store at the current time, the method for counting the user behaviors to the store based on face recognition further includes:
step 301, determining a face image of a user in a store at the current moment.
In this embodiment, after the face images of the users in the respective areas in the current-time store are acquired, the face images of the users in the respective areas in the current-time store are subjected to deduplication processing through face recognition, so that the face images of the users in the current-time store can be determined. It can be understood that the number of face images of the user in the store at the current time is the number of the user in the store at the current time.
Step 302, determining whether the face image of the first user in the store matches with the face image of the third user in the second preset face library.
In this embodiment, a face library may be preset for one or more areas in a store as needed. The different areas can correspond to different face libraries, and users corresponding to face images in the face libraries have the authority to enter the in-store areas corresponding to the face libraries.
After the face image of the user in the store at the current moment is determined, whether the face image of the first user in the store is matched with the face image of the third user in the second preset face library or not can be judged. The face images in the second preset face library may be collected in advance or stored in advance.
Specifically, each user in the second preset face library corresponds to at least one image, the face image of the first user in the store can be matched with each face image of the third user in the second preset face library, and if the matching degrees exceed the preset matching degree threshold value, the face image of the first user in the store can be considered to be matched with the face image of the third user in the second preset face library.
Step 303, if the face image of the first user in the store matches with the face image of the third user in the second preset face library, determining that the first user has the right to enter the area corresponding to the second preset face library.
When the face image of the first user in the store is matched with the face image of the third user in the second preset face library, it is indicated that the first user in the store is the third user in the second preset face library, and then it can be determined that the first user has the right to enter the area corresponding to the second preset face library, and the first user is allowed to enter the area.
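The access check reduces to one more library lookup, sketched below; the similarity function and threshold are the same assumed ones as in the earlier sketches, and the second preset face library for an area is again assumed to be a user_id -> embeddings dict.

```python
from typing import Callable, Dict, List

import numpy as np


def has_area_access(face: np.ndarray,
                    area_library: Dict[str, List[np.ndarray]],
                    similarity: Callable[[np.ndarray, np.ndarray], float],
                    threshold: float = 0.75) -> bool:
    """Return True when the captured face matches any third user stored in
    the second preset face library of the area, i.e. the first user is
    allowed to enter that area."""
    return any(
        similarity(face, stored) > threshold
        for embeddings in area_library.values()
        for stored in embeddings
    )
```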
In the above embodiment, before the face image of the first user is matched with the face image of the third user in the second preset face library, the second preset face library corresponding to each area in the store may be predetermined.
Specifically, historical data of each store-arriving user, such as image data and consumption data of each area of the store where the user is located, may be acquired, and the historical data may be analyzed to determine the type of each store-arriving user. And then, according to the type of each user in the store, the function of each area in the store and the type of each sales item in each area in the store, determining a second preset face library corresponding to each area.
For example, an in-store area where a certain user frequently goes is determined according to historical data of a user who frequently goes to the store, the consumption level of the user can be determined according to the area where the user frequently goes and the consumption record, and the face image of the user can be stored in a second preset face library corresponding to the area according to the consumption level of the user, the function of the area and the type of goods sold in the area.
It should be noted that the face images of the same user may be stored in a second preset face library corresponding to different areas.
Or, the second preset face library corresponding to each region may be determined according to the consumption records of the users arriving at the store in each region. For example, the face image of the user who has been consumed in a certain area in the store is stored in the second preset face library corresponding to the area.
Alternatively, when users register as members in an area of the store, their face images are acquired and stored in the second preset face library corresponding to that area. For example, if a user registers as a member in the shoe-selling area, the face image of that user may be stored in the second preset face library corresponding to that area.
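Enrollment into an area's second preset face library can be as simple as the sketch below; the nested-dict layout (area -> user -> images) is an assumption, chosen so that the same user's images can sit in the libraries of several areas, as the text notes.

```python
from typing import Dict, List

import numpy as np

# area_id -> (user_id -> stored face embeddings)
AreaLibraries = Dict[str, Dict[str, List[np.ndarray]]]


def enroll_in_area(area_libraries: AreaLibraries,
                   area_id: str,
                   user_id: str,
                   face: np.ndarray) -> None:
    """Store a user's face image in the second preset face library of the
    area where, for example, a membership was just registered."""
    area_libraries.setdefault(area_id, {}).setdefault(user_id, []).append(face)
```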
Therefore, when a user enters a certain area in a store, the face image of the user acquired at the current moment can be matched with the face image of the user in the second preset face library corresponding to the area, and when the face image of the user is determined to be matched with the face of the user in the second preset face library, the user can be allowed to enter the area. Therefore, the control flexibility of personnel to be entered in each area in the store is improved, and the labor cost is saved.
In order to obtain historical behavior information of a user arriving at a store conveniently, in the embodiment of the application, a face image in a first preset face library in the store can be updated into a total face library. Fig. 4 is a schematic flow chart of another store-to-store user behavior statistical method based on face recognition according to an embodiment of the present application.
As shown in fig. 4, the statistical method for the user behavior arriving at the store based on face recognition further includes:
step 401, determining whether the current time meets an update condition of a first preset face library in the store.
In this embodiment, the first preset face library may be updated at preset time intervals, or it may be updated according to the sale state of goods in the store. The sale state of goods may include time-limited discounts, discounts applied by category or batch, and the like.
For example, the first preset face library may be reset once a day; that is, the first preset face library stores the face images of users who arrive at the store on the same day. Taking 23:00 each day as the update condition, it can be judged whether the current moment is 23:00, and if so, it is determined that the current moment meets the condition for resetting the first preset face library in the store.
For another example, if the store runs a time-limited discount on the vegetable area from 19:00 to 21:00, the first preset face library can be reset when the discount starts at 19:00, so that the face images of users collected during the discount period are stored in the first preset face library.
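A sketch of the update-condition check is shown below, assuming the reset points are configured as a list of clock times; the 23:00 daily reset and the 19:00 discount start from the examples above are just two such entries.

```python
from datetime import datetime, time
from typing import Iterable


def meets_reset_condition(now: datetime,
                          reset_points: Iterable[time] = (time(23, 0),)) -> bool:
    """Return True when the current moment coincides with one of the
    configured reset points for the first preset face library."""
    return any(now.hour == t.hour and now.minute == t.minute
               for t in reset_points)


# Example: check both the daily 23:00 reset and a 19:00 discount start.
if __name__ == "__main__":
    print(meets_reset_condition(datetime(2018, 11, 1, 23, 0),
                                (time(23, 0), time(19, 0))))
```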
Step 402, if the update condition of the first preset face library in the store is met, updating the total face library by using each face image in the first preset face library.
In this embodiment, when the current time meets the update condition of the first preset face library in the store, the total face library may be updated by using each face image in the first preset face library, and then the first preset face library is emptied.
For example, the total face library is updated each day with the face images accumulated in that day's first preset face library.
When the total face library is updated, whether the matching degree of the face image of the fourth user in the first preset face library and the face image of each user in the total face library is greater than a threshold value or not can be judged. And if the matching degree of the face image of the fourth user and the face image of each user in the total face library is smaller than the threshold value, creating a new user corresponding to the face image of the fourth user in the total face library. As an example, a new user identifier may be added to the total face library, and the face image of the fourth user is stored in the total face library, so that the face image of the fourth user corresponds to the new user identifier. And the new user identification is used for identifying the new user.
In this embodiment, the total face library may include at least one image of each user. When the face image of the fourth user is matched against the users in the total face library, it is compared with each face image of a user; only if the matching degree between the face image of the fourth user and every face image of a certain user in the total face library is greater than the threshold is the fourth user considered to match that user. Matching the face image of the fourth user against each stored face image of a user in the total face library in this way improves the accuracy of updating the total face library.
In one embodiment, if the first preset face library includes a plurality of face images of the fourth user, matching each face image of the fourth user with the face images of the users in the total face library, counting the number of the face images of the fourth user matched with the face images of the users in the total face library, taking the user in the total face library corresponding to the face image with the largest number as the user matched with the fourth user, and updating the face images to the user matched in the total face library.
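The merge into the total face library, including the voting rule for a fourth user with several snapshots, could look like the sketch below; the similarity callable and threshold are assumptions carried over from the earlier sketches, and the "all stored images must exceed the threshold" rule follows the paragraph above.

```python
import uuid
from typing import Callable, Dict, List

import numpy as np


def merge_into_total_library(recent_library: Dict[str, List[np.ndarray]],
                             total_library: Dict[str, List[np.ndarray]],
                             similarity: Callable[[np.ndarray, np.ndarray], float],
                             threshold: float = 0.75) -> None:
    """Fold the first preset face library into the total face library.

    Each snapshot of a recent user votes for the total-library users it
    matches; the user with the most votes absorbs the snapshots, and a new
    total-library user is created when nothing matches."""
    for snapshots in recent_library.values():
        votes: Dict[str, int] = {}
        for snap in snapshots:
            for total_id, images in total_library.items():
                # Every stored image of the candidate must exceed the threshold.
                if images and min(similarity(snap, img) for img in images) > threshold:
                    votes[total_id] = votes.get(total_id, 0) + 1
        target = max(votes, key=votes.get) if votes else uuid.uuid4().hex
        total_library.setdefault(target, []).extend(snapshots)
    recent_library.clear()   # the first preset face library is then emptied
```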
Further, in order to avoid that too many faces are stored in the total face library and the updating efficiency is affected, the number of face images corresponding to each user in the total face library is not more than a preset number, for example, not more than 5 face images.
When the face image of the fourth user matches a certain user in the total face library but the number of face images stored for that user has already reached the preset number, a high-quality face image of the fourth user can replace the stored face image of that user with the poorest image quality, so that the quality of the face images in the total face library is maintained.
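The per-user image cap can be sketched as follows; here the total library stores (embedding, quality) pairs, a slight variation on the earlier sketches, the cap of 5 comes from the example above, and the quality score (sharpness, pose, etc.) is an assumed input.

```python
from typing import Dict, List, Tuple

import numpy as np

MAX_IMAGES_PER_USER = 5   # the "preset number" from the example above


def add_face_with_cap(total_library: Dict[str, List[Tuple[np.ndarray, float]]],
                      user_id: str,
                      face: np.ndarray,
                      quality: float) -> None:
    """Keep at most MAX_IMAGES_PER_USER (face, quality) pairs per user in the
    total face library, replacing the lowest-quality stored image whenever a
    better snapshot of the matched user arrives."""
    images = total_library.setdefault(user_id, [])
    if len(images) < MAX_IMAGES_PER_USER:
        images.append((face, quality))
        return
    worst_index = min(range(len(images)), key=lambda i: images[i][1])
    if quality > images[worst_index][1]:
        images[worst_index] = (face, quality)
```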
Furthermore, in order to improve matching accuracy, the total face library may also be maintained periodically, for example by splitting or merging users in it. Specifically, for each user in the total face library, it can be analyzed whether the face images corresponding to that user include an image with low similarity to the others. If so, that low-similarity face image is matched against the face images of the other users in the total face library. If a matching user exists, the low-similarity face image is stored under that matched user; if no matching user exists, a new user identifier is added to the total face library and the low-similarity face image is associated with the added user identifier.
In order to improve the service quality in the store, in the embodiment of the present application, the total face library may include behavior data of each user in each area in the store, for example, the number of times of appearance in each area, the length of stay in each area at each time, and the like, so as to determine the use of each area in the store by using the data stored in the total face library, and further determine the selling area of the goods to be sold. Fig. 5 is a schematic flow chart of a method for counting user behavior arriving at a store based on face recognition according to an embodiment of the present application.
As shown in fig. 5, the statistical method for the user behavior arriving at the store based on face recognition further includes:
step 501, determining the use of each area in the store according to the historical sale product types of each area in the store and the behavior data of each user in each area.
For example, the store contains three areas, area A, area B and area C, used respectively for selling discounted products, offering products for trial experience, and selling newly launched products. If the behavior data of users in the three areas shows that most users stay longer in area A than in the other two areas and stay the shortest time in area C, it can be determined that area A should be used for hot-selling products, area B can serve as the experience-product area, and area C can serve as the area for selling recommended new products.
Step 502, determining the current selling area of each article to be sold according to the current type of each article to be sold in the store and the application of each area.
In this embodiment, the selling area of each item to be sold can be determined according to the use of each area. For example, if the item currently to be sold is a new product, it may be placed in the recommended-products area. For another example, if the item to be sold is offered as a trial or demonstration version, it can be placed in the experience area for users to try.
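As an illustration of how the stored behavior data could drive the area-use decision, the sketch below ranks areas by total dwell time; the record layout and the role assignment are assumptions for illustration, not the patent's fixed scheme.

```python
from typing import Dict, Iterable, List


def rank_areas_by_dwell(behaviour_records: Iterable[dict]) -> List[str]:
    """Aggregate per-area dwell time from the total face library's behavior
    data (e.g. {"area": "A", "dwell_s": 120}) and return the areas ordered
    from longest to shortest total stay."""
    totals: Dict[str, float] = {}
    for record in behaviour_records:
        totals[record["area"]] = totals.get(record["area"], 0.0) + record["dwell_s"]
    return sorted(totals, key=totals.get, reverse=True)


def assign_area_roles(ranked_areas: List[str]) -> Dict[str, str]:
    """Map the most-visited area to hot-selling goods, the next to experience
    goods, and the least-visited area to recommended new goods (as in the
    three-area example above)."""
    roles: Dict[str, str] = {}
    if ranked_areas:
        roles[ranked_areas[0]] = "hot-selling products"
    if len(ranked_areas) > 1:
        roles[ranked_areas[1]] = "experience products"
    if len(ranked_areas) > 2:
        roles[ranked_areas[-1]] = "recommended new products"
    return roles
```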
According to the method and the device of the embodiments of the application, the selling area of each item to be sold is determined from the behavior data of users in each area of the store and the historical categories of goods sold in each area, both stored in the total face library, so the accuracy of determining the selling area of the goods to be sold can be improved.
In order to implement the embodiment, the embodiment of the application further provides a device for counting the behavior of the user arriving at the store based on face recognition. Fig. 6 is a schematic structural diagram of an arrival user behavior statistics apparatus based on face recognition according to an embodiment of the present application.
As shown in fig. 6, the apparatus for counting store-arrival user behaviors based on face recognition comprises an analysis module 610, a first judging module 620 and a first updating module 630.
The analysis module 610 is configured to analyze image data acquired by each camera acquisition device at the current time to obtain a user face image of each area in the store at the current time;
the first judging module 620 is configured to judge whether a face image of a first user in a store at a current moment is matched with a face image of a second user in a first preset face library, where a time interval between a store arrival time of the user and the current moment, which corresponds to the face image included in the first preset face library, is smaller than a threshold;
the first updating module 630 is configured to update the store-to-store behavior of the second user in the first preset face library according to the current area of the first user when the face image of the first user in the store at the current time is matched with the face image of the second user in the first preset face library.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes:
and the creating module is used for creating a new user corresponding to the face image of the first user in the first preset face library when the face image of the first user in the store at the current moment is not matched with the face image of the second user in the first preset face library.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes:
the first determining module is used for carrying out face recognition on the face images of the users in all the areas in the store at the current moment so as to determine the number of the users in the store at the current moment;
and the control module is used for controlling the state of the entrance doors of the users in the store according to the number of the users in the store at the current moment.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes:
the second judgment module is used for judging whether the face image of the first user in the store at the current moment is matched with the face image of the third user in a second preset face library;
and the second determining module is used for determining that the first user has the authority to enter the area corresponding to the second preset face library when the face image of the first user in the shop at the current moment is matched with the face image of the third user in the second preset face library.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes:
the third determining module is used for analyzing the acquired historical data of each store-arriving user and determining the type of each store-arriving user;
and the fourth determining module is used for determining the second preset face library corresponding to each area according to the type of each arriving user, the function of each area in the store, and the categories of goods sold in each area of the store.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes:
the third judgment module is used for judging whether the current moment meets the updating condition of a first preset face library in the store;
and the second updating module is used for updating the total face library by utilizing each face image in the first preset face library when the current moment meets the updating condition of the first preset face library in the store.
In a possible implementation manner of the embodiment of the present application, the second updating module includes:
the judging unit is used for judging whether the matching degree of the face image of the fourth user in the first preset face library and the face image of each user in the total face library is greater than a threshold value or not;
and the creating unit is used for creating a new user corresponding to the face image of the fourth user in the total face library when the matching degree of the face image of the fourth user and the face image of any user in the total face library is smaller than a threshold value.
In a possible implementation manner of the embodiment of the application, the total face library includes at least one image of each user; and the judging unit is specifically used for judging whether the matching degrees of the face image of the fourth user and each face image of each user in the total face library are all larger than a threshold value.
In a possible implementation manner of the embodiment of the application, the total face library includes behavior data of each user in each area in a store; the device also includes:
the fifth determining module is used for determining the use of each area in the store according to the historical selling product types of each area in the store and the behavior data of each user in each area;
and the sixth determining module is used for determining the current selling areas of the various articles to be sold according to the current types of the articles to be sold in the store and the purposes of the areas.
It should be noted that the foregoing explanation of the embodiment of the method for counting the behavior of the user arriving at the store based on face recognition is also applicable to the apparatus for counting the behavior of the user arriving at the store based on face recognition in this embodiment, and therefore, the explanation is not repeated here.
The apparatus for counting store-arrival user behaviors based on face recognition of the embodiment of the application analyzes image data collected by each camera acquisition device at the current moment to obtain user face images of each area in the store at the current moment, and judges whether the face image of a first user in the store at the current moment matches the face image of a second user in a first preset face library, wherein the time interval between the store-arrival time of each user whose face image is contained in the first preset face library and the current moment is less than a threshold; if they match, the store-arrival behavior of the second user in the first preset face library is updated according to the area where the first user is currently located. In this embodiment, because the time interval between the store-arrival time of every user whose face image is stored in the first preset face library and the current moment is less than the threshold, only recently arrived users are stored there, so the face image acquired at the current moment is searched within this small library, which greatly improves processing speed, accuracy and real-time performance.
In order to implement the foregoing embodiments, an embodiment of the present application further provides a computer device, including a processor and a memory;
wherein the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the statistical method of the user behavior arriving at the store based on the face recognition according to the embodiment.
FIG. 7 illustrates a block diagram of an exemplary computer device suitable for implementing embodiments of the present application. The computer device 12 shown in fig. 7 is only an example and should not impose any limitation on the function or scope of use of the embodiments of the present application.
As shown in FIG. 7, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. Such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only memory (CD-ROM), a Digital versatile disk Read Only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via Network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the foregoing embodiments, the present application further proposes a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the statistical method for the user behavior arriving at the store based on face recognition as described in the foregoing embodiments.
In the description of the present specification, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
The storage medium mentioned above may be a read-only memory, a magnetic disk or optical disk, etc.

Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A store-arrival user behavior statistical method based on face recognition, characterized by comprising the following steps:
analyzing and processing image data acquired by each camera acquisition device at the current moment to acquire user face images of each area in a store at the current moment;
judging whether the face image of the first user in the store at the current moment is matched with the face image of the second user in a first preset face library, wherein the time interval between the store arrival time of the user corresponding to the face image contained in the first preset face library and the current moment is less than a threshold value;
and if so, updating the store-arrival behavior of the second user in the first preset face library according to the current area of the first user.
2. The method as claimed in claim 1, wherein after determining whether the face image of the first user in the store at the current time is matched with the face image of the second user in the first preset face library, the method further comprises:
and if not, creating a new user corresponding to the face image of the first user in the first preset face library.
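For illustration only, the matching step of claims 1-2 might look like the following minimal sketch. The embedding function, the cosine-similarity measure, the threshold values and the dictionary layout of the first preset face library are assumptions made for this sketch, not details disclosed by the application.

```python
import time
import numpy as np

MATCH_THRESHOLD = 0.8        # assumed face-similarity threshold
RECENT_WINDOW_S = 2 * 3600   # assumed time window of the "first preset face library"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_recent_arrival(face_embedding: np.ndarray, area: str,
                         recent_library: list, now: float | None = None) -> dict:
    """Match a captured face against users who arrived within the time window.

    recent_library entries are assumed to look like:
      {"user_id": ..., "embedding": np.ndarray, "arrival_time": float, "areas": [...]}
    """
    now = now if now is not None else time.time()
    # Only users whose arrival time falls within the window belong to the
    # first preset face library (claim 1).
    recent = [u for u in recent_library
              if now - u["arrival_time"] < RECENT_WINDOW_S]

    best, best_score = None, 0.0
    for user in recent:
        score = cosine_similarity(face_embedding, user["embedding"])
        if score > best_score:
            best, best_score = user, score

    if best is not None and best_score >= MATCH_THRESHOLD:
        # Matched: update the user's store-arrival behavior with the current area (claim 1).
        best["areas"].append({"area": area, "time": now})
        return best

    # No match: create a new user entry in the recent-arrivals library (claim 2).
    new_user = {"user_id": f"tmp-{int(now)}", "embedding": face_embedding,
                "arrival_time": now, "areas": [{"area": area, "time": now}]}
    recent_library.append(new_user)
    return new_user
```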
3. The method of claim 1, wherein after acquiring the face images of the users in the respective areas of the store at the current moment, the method further comprises:
performing face recognition on the face images of the users in all areas of the store at the current moment to determine the number of users in the store at the current moment;
and controlling the state of the user entrance doors of the store according to the number of users in the store at the current moment.
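A minimal sketch of the occupancy check in claim 3 is given below; the capacity limit and the string-valued door state are assumptions, and cross-area deduplication of faces is omitted for brevity.

```python
MAX_OCCUPANCY = 50  # assumed store capacity limit

def control_entrance_door(face_images_by_area: dict[str, list]) -> str:
    """Decide the entrance-door state from the current number of users in the store."""
    # The number of users is taken as the number of recognized faces across all
    # areas (a real system would deduplicate users seen in several areas).
    user_count = sum(len(faces) for faces in face_images_by_area.values())
    return "closed" if user_count >= MAX_OCCUPANCY else "open"
```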
4. The method of claim 1, wherein after acquiring the face images of the users in the respective areas of the store at the current moment, the method further comprises:
judging whether the face image of the first user in the store at the current moment is matched with the face image of a third user in a second preset face library;
and if so, determining that the first user has the authority to enter the area corresponding to the second preset face library.
5. The method of claim 4, wherein before determining whether the face image of the first user in the store at the current moment matches the face image of the third user in the second preset face library, the method further comprises:
analyzing the acquired historical data of each user arriving at the store, and determining the type of each user arriving at the store;
and determining a second preset face library corresponding to each area according to the type of each user arriving at the store, the function of each area in the store, and the category of the products sold in each area of the store.
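Claims 4-5 can be read as an area-level permission check against a per-area ("second preset") face library whose membership is derived from user types and the area's function. The sketch below is only illustrative; the field names, the allowed-type rule and the threshold are assumptions.

```python
import numpy as np

PERMISSION_THRESHOLD = 0.8  # assumed similarity threshold

def build_area_library(all_users: list, area_profile: dict) -> list:
    """Select the users whose type fits the area's function and product category (claim 5)."""
    return [u for u in all_users if u["type"] in area_profile["allowed_types"]]

def has_area_permission(face_embedding: np.ndarray, area_library: list) -> bool:
    """Return True if the captured face matches any user in the area's face library (claim 4)."""
    for user in area_library:
        sim = float(np.dot(face_embedding, user["embedding"]) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(user["embedding"])))
        if sim >= PERMISSION_THRESHOLD:
            return True
    return False
```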
6. The method of any of claims 1-5, further comprising:
judging whether the current moment meets the updating condition of the first preset face library or not;
and if so, updating the total face library by utilizing each face image in the first preset face library.
7. The method according to claim 6, wherein the updating of the total face library by utilizing each face image in the first preset face library comprises:
judging whether the matching degree of the face image of the fourth user in the first preset face library and the face image of each user in the total face library is greater than a threshold value or not;
and if the matching degree between the face image of the fourth user and the face image of each user in the total face library is smaller than the threshold value, creating a new user corresponding to the face image of the fourth user in the total face library.
8. The method of claim 7, wherein the total face library includes at least one face image for each user;
the determining whether the matching degree between the face image of the fourth user in the first preset face library and the face image of each user in the total face library is greater than a threshold value includes:
and judging whether the matching degrees of the face image of the fourth user and each face image of each user in the total face library are all larger than a threshold value.
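Claims 6-8 describe periodically folding the recent-arrivals library into the total face library: a face is treated as a known user only if its matching degree exceeds the threshold against every stored image of that user, and a new user is created when no existing user matches. A minimal sketch under assumed data structures:

```python
import numpy as np

MERGE_THRESHOLD = 0.8  # assumed similarity threshold

def _similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def merge_into_total_library(recent_library: list, total_library: list) -> None:
    """Update the total face library from the recent-arrivals library (claims 6-8)."""
    for candidate in recent_library:
        matched = False
        for user in total_library:
            # A user may have several stored face images (claim 8); a match requires
            # the candidate to exceed the threshold against all of them.
            if user["embeddings"] and all(
                    _similarity(candidate["embedding"], img) > MERGE_THRESHOLD
                    for img in user["embeddings"]):
                matched = True
                break
        if not matched:
            # No existing user matches: create a new user in the total library (claim 7).
            total_library.append({"user_id": candidate["user_id"],
                                  "embeddings": [candidate["embedding"]]})
```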
9. The method of any one of claims 7 to 8, wherein the total face library includes behavior data of each user in each area of the store;
the method further comprises the following steps:
determining the use of each area in the store according to the historical sales and the types of items sold in each area of the store and the behavior data of each user in each area;
and determining the selling area of each type of item currently to be sold according to the types of the items to be sold in the store and the uses of the areas.
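Claim 9 turns per-area behavior data and sales history into an assignment of item categories to areas. The aggregation below is only an illustrative sketch; the record layout and the "rank areas by visit count" rule are assumptions and deliberately ignore the sales-history input.

```python
from collections import Counter

def area_usage(behavior_records: list[dict]) -> dict[str, int]:
    """Count visits per area from records of the form {"user_id": ..., "area": ...}."""
    return dict(Counter(record["area"] for record in behavior_records))

def assign_selling_areas(item_categories: list[str], usage: dict[str, int]) -> dict[str, str]:
    """Assign item categories to areas ranked by visit count (illustrative rule only)."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    if not ranked:
        return {}
    return {category: ranked[i % len(ranked)]
            for i, category in enumerate(item_categories)}
```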
10. A store-arrival user behavior statistics device based on face recognition, characterized by comprising:
the analysis module is used for analyzing and processing image data acquired by each camera acquisition device at the current moment so as to acquire user face images of each area in the store at the current moment;
the first judgment module is used for judging whether the face image of the first user in the store at the current moment is matched with the face image of the second user in a first preset face library, wherein the time interval between the store arrival time of the user corresponding to the face image contained in the first preset face library and the current moment is less than a threshold value;
and the first updating module is used for updating the store-arrival behavior of the second user in the first preset face library according to the current area of the first user when the face image of the first user in the store at the current moment is matched with the face image of the second user in the first preset face library.
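The device of claim 10 simply mirrors the method steps as three cooperating modules. A minimal structural sketch follows; every class name, callable and field name is assumed for illustration.

```python
from typing import Callable

class ArrivalBehaviorStatisticsDevice:
    """Illustrative composition of the analysis, judgment and updating modules of claim 10."""

    def __init__(self, detect_faces: Callable, match_recent: Callable, recent_library: list):
        self.detect_faces = detect_faces      # analysis step: frame -> list of face embeddings
        self.match_recent = match_recent      # judgment step: (embedding, library) -> user or None
        self.recent_library = recent_library  # the "first preset face library"

    # Analysis module: extract user face images for each area at the current moment.
    def analyze(self, camera_frames: dict) -> dict:
        return {area: self.detect_faces(frame) for area, frame in camera_frames.items()}

    # First judgment module: check a face against the recent-arrivals library.
    def judge(self, embedding):
        return self.match_recent(embedding, self.recent_library)

    # First updating module: record the matched user's current area.
    def update(self, matched_user: dict, area: str) -> None:
        matched_user.setdefault("areas", []).append(area)
```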
11. A computer device comprising a processor and a memory;
wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the store-arrival user behavior statistical method based on face recognition according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the store-arrival user behavior statistical method based on face recognition according to any one of claims 1 to 9.
CN201811296903.3A 2018-11-01 2018-11-01 Method and device for counting user behaviors arriving at store based on face recognition Pending CN111126119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811296903.3A CN111126119A (en) 2018-11-01 2018-11-01 Method and device for counting user behaviors arriving at store based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811296903.3A CN111126119A (en) 2018-11-01 2018-11-01 Method and device for counting user behaviors arriving at store based on face recognition

Publications (1)

Publication Number Publication Date
CN111126119A true CN111126119A (en) 2020-05-08

Family

ID=70494529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811296903.3A Pending CN111126119A (en) 2018-11-01 2018-11-01 Method and device for counting user behaviors arriving at store based on face recognition

Country Status (1)

Country Link
CN (1) CN111126119A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592116A (en) * 2011-12-27 2012-07-18 Tcl集团股份有限公司 Cloud computing application method, system and terminal equipment, and cloud computing platform
CN102625081A (en) * 2012-02-20 2012-08-01 华焦宝 Method and system for tracking target in visit place
US20170099295A1 (en) * 2012-03-14 2017-04-06 Autoconnect Holdings Llc Access and portability of user profiles stored as templates
CN105488478A (en) * 2015-12-02 2016-04-13 深圳市商汤科技有限公司 Face recognition system and method
CN105787756A (en) * 2016-02-19 2016-07-20 龚齐伟 Human behavior analysis apparatus, method and system
CN106469296A (en) * 2016-08-30 2017-03-01 北京旷视科技有限公司 Face identification method, device and gate control system
CN206400637U (en) * 2016-12-22 2017-08-11 重庆文理学院 A kind of crowd's congestion prevention and control system recognized based on key node
CN108573333A (en) * 2017-03-14 2018-09-25 思凯睿克有限公司 The appraisal procedure and its system of the KPI Key Performance Indicator of entity StoreFront
CN107341892A (en) * 2017-08-04 2017-11-10 兰庆天 A kind of gate control system and its method of work for realizing people stream counting control
CN107563343A (en) * 2017-09-18 2018-01-09 南京甄视智能科技有限公司 The self-perfection method and system of FaceID databases based on face recognition technology
CN107679613A (en) * 2017-09-30 2018-02-09 同观科技(深圳)有限公司 A kind of statistical method of personal information, device, terminal device and storage medium
CN107992855A (en) * 2017-12-22 2018-05-04 中国科学院重庆绿色智能技术研究院 A kind of triple verification methods of airport security based on recognition of face
CN108288163A (en) * 2018-02-08 2018-07-17 东莞市藕丝人工智能科技有限公司 A kind of fast face settlement method based on video acquisition
CN108446681A (en) * 2018-05-10 2018-08-24 深圳云天励飞技术有限公司 Pedestrian's analysis method, device, terminal and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723678A (en) * 2020-05-27 2020-09-29 上海瀛之杰汽车信息技术有限公司 Human face passenger flow identification method, device, equipment and medium suitable for multi-person scene
CN111784387A (en) * 2020-06-23 2020-10-16 大连中维世纪科技有限公司 Multi-dimensional big data-based consumer brand loyalty analysis method
CN111950364A (en) * 2020-07-07 2020-11-17 北京思特奇信息技术股份有限公司 System and method for identifying face of tens of millions of base libraries in different libraries
CN111950364B (en) * 2020-07-07 2024-03-22 北京思特奇信息技术股份有限公司 System and method for identifying library-separating face of tens of millions of libraries
CN112559793A (en) * 2021-02-23 2021-03-26 成都旺小宝科技有限公司 Retrieval method of face image
CN112559793B (en) * 2021-02-23 2021-07-13 成都旺小宝科技有限公司 Retrieval method of face image
CN112818922A (en) * 2021-02-25 2021-05-18 上海数川数据科技有限公司 Store clerk identification method based on image
CN112818922B (en) * 2021-02-25 2022-08-02 上海数川数据科技有限公司 Shop assistant identification method based on image

Similar Documents

Publication Publication Date Title
CN111126119A (en) Method and device for counting user behaviors arriving at store based on face recognition
CN107909443B (en) Information pushing method, device and system
US8774462B2 (en) System and method for associating an order with an object in a multiple lane environment
US20180247361A1 (en) Information processing apparatus, information processing method, wearable terminal, and program
JP3584334B2 (en) Human detection tracking system and human detection tracking method
TWI723411B (en) Anomaly detection method, device and equipment in unmanned settlement scene
US8855364B2 (en) Apparatus for identification of an object queue, method and computer program
US9846811B2 (en) System and method for video-based determination of queue configuration parameters
CN111383039A (en) Information pushing method and device and information display system
CN104462530A (en) Method and device for analyzing user preferences and electronic equipment
US20220084102A1 (en) Commodity recommendation method, server, shopping cart and shopping system
CN110648186B (en) Data analysis method, device, equipment and computer readable storage medium
US20170330206A1 (en) Motion line processing system and motion line processing method
JP2015090579A (en) Behavior analysis system
US20160189170A1 (en) Recognizing Customers Requiring Assistance
CN111127066A (en) Mining application method and device based on user information
CN107730245B (en) Automatic checkout method based on unmanned store and unmanned store
CN110647825A (en) Method, device and equipment for determining unmanned supermarket articles and storage medium
CN111178116A (en) Unmanned vending method, monitoring camera and system
JP2010211485A (en) Gaze degree measurement device, gaze degree measurement method, gaze degree measurement program and recording medium with the same program recorded
CN108875677B (en) Passenger flow volume statistical method and device, storage medium and terminal
CN110246280B (en) Human-cargo binding method and device, computer equipment and readable medium
CN111339929B (en) Retail system of unmanned supermarket
CN114299388A (en) Article information specifying method, showcase, and storage medium
CN107871019B (en) Man-vehicle association search method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination