CN111414799A - Method and device for determining peer users, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN111414799A
Authority
CN
China
Prior art keywords
user
target
peer
identity information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010093589.XA
Other languages
Chinese (zh)
Inventor
倪峥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010093589.XA priority Critical patent/CN111414799A/en
Publication of CN111414799A publication Critical patent/CN111414799A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/12 Hotels or restaurants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the disclosure provide a peer user determination method, a peer user determination apparatus, an electronic device, and a computer-readable storage medium. The method includes the following steps: determining target identity information of a target user according to a target image of the target user in a designated area; determining, according to the target identity information, peer user images associated with the target user that are captured at a plurality of preset positions related to the target user's direction of movement; and determining, according to the peer user images, the identity information of the peer users corresponding to the target user. The embodiments of the disclosure effectively avoid the errors caused by differences between staff members or by staff turnover, and achieve high user-identification accuracy.

Description

Method and device for determining peer users, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the technical field of peer user determination, and in particular, to a peer user determination method, apparatus, electronic device, and computer-readable storage medium.
Background
In a restaurant, customers usually dine in groups, but when a user scans a code at the dining table or pays at the cashier, only the identity of the person who settles the bill can be obtained, and therefore only that person's dining preferences and spending power are known. If, on average, 400 people visit a store per day in parties of 4, the store operator can capture the dining preferences and consumption data of only 100 of them. The relationships, consumption preferences, spending power, and offline POI (Point of Interest) information of the people dining with the payer remain unknown, which greatly hinders efforts to improve the return rate of those users.
Current practice relies mainly on the store owner or clerks paying attention to the identifying characteristics of the companions at each meal, but this requires clerks to easily recognize returning companions, and the high turnover of restaurant staff makes such re-identification unreliable.
Disclosure of Invention
Embodiments of the present disclosure provide a method and an apparatus for determining peer users, an electronic device, and a computer-readable storage medium, which determine the identity information of a target user's peer users according to peer user images associated with the target user at preset positions, thereby avoiding the errors caused by differences between staff members or by staff turnover, and achieving high user-identification accuracy.
According to a first aspect of embodiments of the present disclosure, there is provided a peer user determination method, including:
determining target identity information of a target user according to a target image of the target user in a designated area;
determining, according to the target identity information, peer user images associated with the target user that are captured at a plurality of preset positions related to the target user's direction of movement; and
determining, according to the peer user images, peer user identity information of the peer users corresponding to the target user.
Optionally, the determining target identity information of the target user according to the target image of the target user in the designated area includes:
determining the target identity information of the target user according to the facial features of the target user in the target image.
Optionally, the determining, according to the target identity information, the peer user images associated with the target user that are captured at a plurality of preset positions related to the target user's direction of movement includes:
acquiring, according to the target identity information, a first peer user image containing the target user that is captured in advance at a first preset position;
determining, according to the first peer user image, basic user features of a first peer user corresponding to the target user;
acquiring a second peer user image containing the basic user features that is captured in advance at at least one second preset position; and
determining the first peer user image and the at least one second peer user image as the peer user images associated with the target user.
Optionally, the determining, according to the peer user images, the peer user identity information of the peer users corresponding to the target user includes:
determining second user identity information corresponding to a second peer user according to facial features of the second peer user contained in the first peer user image;
determining first user identity information corresponding to the first peer user according to facial features of the first peer user contained in the at least one second peer user image; and
determining the peer user identity information of the peer users corresponding to the target user based on the first user identity information and the second user identity information.
Optionally, after the determining, according to the peer user images, the peer user identity information of the peer users corresponding to the target user, the method further includes:
acquiring service data generated by the target user in the designated area;
and determining behavior preference data of the target user and the peer user according to the service data.
According to a second aspect of embodiments of the present disclosure, there is provided a peer user determination apparatus including:
the target identity information determining module is used for determining target identity information of a target user according to a target image of the target user in a designated area;
the peer user image acquisition module, configured to determine, according to the target identity information, the peer user images associated with the target user that are captured at a plurality of preset positions related to the target user's direction of movement;
and the peer user identity determination module, configured to determine, according to the peer user images, the peer user identity information of the peer users corresponding to the target user.
Optionally, the target identity information determining module includes:
and the target identity information determining unit is used for determining the target identity information of the target user according to the face features of the target user in the target image.
Optionally, the image obtaining module for the peer user includes:
the first peer user image acquisition unit is used for acquiring a first peer user image which is acquired in advance at a first preset position and contains the target user according to the target identity information;
a basic user feature determining unit, configured to determine, according to the first peer user image, a basic user feature of a first peer user corresponding to the target user;
the second peer user image acquisition unit is used for acquiring a second peer user image which is acquired in advance at least one second preset position and contains the basic user characteristics;
and the peer user image determining unit is used for determining the first peer user image and at least one second peer user image as the peer user image associated with the target user.
Optionally, the peer user identity determining module includes:
the second user identity determining unit is used for determining second user identity information corresponding to the second peer user according to the face features of the second peer user contained in the first peer user image;
the first user identity determining unit is used for determining first user identity information corresponding to the first peer user according to the face features of the first peer user contained in at least one second peer user image;
and the identity determination unit of the peer users is used for determining the identity information of the peer users corresponding to the target user based on the first user identity information and the second user identity information.
Optionally, the apparatus further includes:
a service data acquisition module, configured to acquire service data generated by the target user in the designated area;
and the behavior preference determining module is used for determining the behavior preference data of the target user and the peer user according to the service data.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing any of the peer user determination methods described above when executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform any one of the above-mentioned peer user determination methods.
According to the peer user determination scheme provided by the embodiments of the present disclosure, target identity information of a target user is determined according to a target image of the target user in a designated area; peer user images associated with the target user, captured at a plurality of preset positions related to the target user's direction of movement, are determined according to the target identity information; and peer user identity information of the peer users corresponding to the target user is determined according to the peer user images. Because the peer user images are obtained from a plurality of preset positions along the user's direction of movement, the embodiments of the present disclosure effectively avoid the errors caused by differences between staff members or by staff turnover, and achieve high user-identification accuracy.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a peer user determination method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating steps of a peer user determination method according to a second embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a peer user determination device according to a third embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a peer user determination device according to a fourth embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a target identity information determination module according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a peer user image obtaining module according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a peer user identity determining module according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the embodiments of the present disclosure.
Referring to fig. 1, a flowchart illustrating steps of a peer user determination method provided in an embodiment of the present disclosure is shown, and as shown in fig. 1, the peer user determination method may specifically include the following steps:
step 101: and determining the target identity information of the target user according to the target image of the target user in the designated area.
In the embodiments of the present disclosure, the target user is a user transacting business at a physical business premises, for example a user settling a bill or registering as a member. The physical business party is an entity that provides services, such as a brick-and-mortar restaurant.
The designated area is an area within the business premises where a face image of the target user can be captured, such as the area in front of a restaurant counter or another service-handling area.
The target image is an image captured in the designated area that contains the facial features of the target user. An image capture device can be installed in advance at the business premises to capture users' face images in the designated area as target images; for example, a smart camera installed behind a restaurant's cash desk captures and recognizes the face of the person settling the bill.
The target identity information is the identification information of the target user; in the present disclosure, the facial features of the target user are used as the target identity information.
It can be understood that each user's facial features are unique, so different users can be distinguished by their facial features.
A target image of the target user is captured in the designated area of the business premises, and the target identity information of the target user is determined from it. Specifically, face recognition technology can be used to obtain the target user's facial features from the target image: a camera collects an image or video stream containing a face, automatically detects and tracks the face in the image, and then recognizes the detected face.
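To make this identity-lookup step concrete, it can be sketched as a nearest-neighbour search over enrolled face embeddings. This is only an illustrative sketch: the embedding representation, the similarity threshold, and every name below are assumptions of this example, not part of the disclosure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(face_embedding, gallery, threshold=0.8):
    """Return the user id whose enrolled embedding best matches, or None.

    gallery: dict mapping user id -> enrolled face embedding. The query
    embedding would come from a face-recognition model applied to the
    image captured behind the cash desk (hypothetical pipeline).
    """
    best_id, best_sim = None, threshold
    for user_id, enrolled in gallery.items():
        sim = cosine_similarity(face_embedding, enrolled)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

With a toy gallery `{"u1": (1.0, 0.0), "u2": (0.0, 1.0)}`, the query `(0.9, 0.1)` resolves to `"u1"`, while an ambiguous query below the threshold yields `None`.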
After determining the target identity information of the target user according to the target image of the target user in the designated area, step 102 is performed.
Step 102: according to the target identity information, determining the co-line user images which are acquired at a plurality of preset positions related to the moving direction of the target user and are related to the target user.
The preset positions are a plurality of positions set in advance along the direction in which users move within the business premises, and the peer user images are images captured at those positions that are associated with the target user. For example, a smart camera may be installed about 5 to 8 meters inside the restaurant entrance to capture and recognize the faces of everyone entering the restaurant, and another smart camera may be installed along the restaurant's main aisle to capture face pictures along the physical path from the aisle to the dining tables. In this way, three pictures are captured before a party is seated, in a chronological order consistent with an average walking speed: at the entrance, in the aisle, and at the table.
It is to be understood that the above examples are only examples set forth for a better understanding of the technical solutions of the embodiments of the present disclosure, and are not to be taken as the only limitations on the embodiments of the present disclosure.
After the target identity information of the target user is obtained, images previously captured at the plurality of preset positions can be retrieved according to the target identity information to obtain the peer user images associated with the target user. Taking the three pictures captured at the entrance, the aisle, and the table as an example: from the image features of at least one user captured at the table together with the target user, images with similar or identical features can be retrieved from those captured at the entrance, the aisle, and other positions, and used as the peer user images associated with the target user.
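The retrieval across the entrance, aisle, and table cameras can be narrowed using the time ordering and average walking speed mentioned above. The sketch below estimates when a seated party passed each upstream camera and selects candidate captures in that window; the distances, walking speed, tolerance, and all names are illustrative assumptions, not values from the disclosure.

```python
def expected_pass_times(seated_time, distances, walking_speed=1.2):
    """Estimate when a party seated at `seated_time` (seconds) passed each
    upstream camera, given the distance in metres from that camera to the
    table and an assumed average walking speed in m/s."""
    return {pos: seated_time - d / walking_speed for pos, d in distances.items()}

def captures_in_window(captures, position, t_expected, tolerance=10.0):
    """Select captures, given as (position, timestamp, image_id) tuples,
    taken at `position` within `tolerance` seconds of the expected time."""
    return [c for c in captures
            if c[0] == position and abs(c[1] - t_expected) <= tolerance]
```

A party seated at t=205 s, with the entrance 14.4 m away, is expected to have passed the entrance camera around t=193 s, so only entrance captures near that instant need to be compared against the table image.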
After determining, according to the target identity information, the peer user images associated with the target user captured at the plurality of preset positions related to the target user's direction of movement, step 103 is executed.
Step 103: and determining the identity information of the peer users corresponding to the target user according to the peer user image.
The fellow user refers to a user who goes to the entity business side together with the target user to handle the business, for example, a user who goes to a restaurant to eat together with the target user, or the like.
The identity information of the peer user refers to identity identification information of the user in the same row as the target user, and in the embodiment of the disclosure, the face feature of the peer user can be used as the identity information of the peer user.
After the co-line user images associated with the target user and generated at a plurality of preset positions of the entity service party are acquired, a face recognition technology can be adopted to recognize face features of the co-line user from the co-line user images, and then the face features of the co-line user are used as identity recognition information of the co-line user, namely the identity information of the co-line user.
The embodiments of the present disclosure determine the identity information of the users accompanying the target user through the peer user images captured at a plurality of preset positions related to the target user's direction of movement, which effectively avoids the errors caused by differences between staff members or by staff turnover.
In summary, the peer user determination method provided by the embodiments of the present disclosure determines target identity information of a target user according to a target image of the target user in a designated area, determines the peer user images associated with the target user captured at a plurality of preset positions related to the target user's direction of movement according to the target identity information, and determines the peer user identity information of the peer users corresponding to the target user according to the peer user images. Because the identity of the accompanying users is determined from images at multiple positions along the user's path, errors caused by differences between staff members or by staff turnover are effectively avoided, and user-identification accuracy is high.
Referring to fig. 2, a flowchart illustrating steps of a peer user determination method provided in the second embodiment of the present disclosure is shown, and as shown in fig. 2, the peer user determination method may specifically include the following steps:
step 201: and determining the target identity information of the target user according to the face features of the target user in the target image.
In the embodiments of the present disclosure, the target user is a user transacting business at a physical business premises, for example a user settling a bill or registering as a member. The physical business party is an entity that provides services, such as a brick-and-mortar restaurant.
The designated area is an area within the business premises where a face image of the target user can be captured, such as the area in front of a restaurant counter or another service-handling area.
The target image is an image captured in the designated area that contains the facial features of the target user. An image capture device can be installed in advance at the business premises to capture users' face images in the designated area as target images; for example, a smart camera installed behind a restaurant's cash desk captures and recognizes the face of the person settling the bill.
The target identity information is the identification information of the target user; in the present disclosure, the facial features of the target user are used as the target identity information.
It can be understood that each user's facial features are unique, so different users can be distinguished by their facial features.
A target image of the target user is captured in the designated area of the business premises, and the target identity information of the target user is determined from it. Specifically, face recognition technology can be used to obtain the target user's facial features from the target image: a camera collects an image or video stream containing a face, automatically detects and tracks the face in the image, and then recognizes the detected face.
After determining the target identity information of the target user according to the target image of the target user in the designated area, step 202 is performed.
Step 202: and acquiring a first concurrent user image containing the target user, which is acquired in advance at a first preset position, according to the target identity information.
The first preset position is a position where a user who is in the same row with the target user can be directly identified in the entity business party, and the first preset position is a position where an image of the target user can be collected, which is set in a moving direction of the target user.
The first peer user image refers to a peer user image which is collected in advance at a first preset position and contains a target user.
After the target identity information of the target user is identified in the designated area, a first concurrent user image containing the target user and pre-collected at a first preset position can be acquired according to the target identity information.
After acquiring the first in-line user image containing the target user pre-acquired at the first preset position, step 203 is performed.
Step 203: and determining the basic user characteristics of the first online user corresponding to the target user according to the first online user image.
The face features of the users in the same row as the target user may be all recognized in the first in-line user image, and the identity information of the users in the same row as the target user may be determined according to the face features.
The first peer user is a user who is in the same line as the target user and cannot recognize the human face features in the peer user image collected at the first preset position, for example, taking a restaurant as an example, when the peer user and the target user are seated, only the shadow of a certain peer user can be recognized in the collected peer user image, and the peer user is taken as the first peer user.
The basic user features refer to basic features of the first user in the first in-line user image, such as wearing features, head shape features, behavior features, and the like of the first user.
After acquiring a first peer user image including a target user, which is pre-captured at a first preset position, a basic user characteristic of the first peer user in the first peer user image may be identified.
After determining the base user characteristics of the first peer user corresponding to the target user, step 204 is performed.
Step 204: and acquiring a second same-row user image which is acquired in advance at least one second preset position and contains the basic user characteristics.
The second preset position refers to a position which is collected in advance in the entity business side and can contain all the users coming and going, and the second preset position is also a position which is set in the moving direction of the target user and can collect the image of the target user, for example, taking a restaurant as an example, the second preset position can be a restaurant entrance, a restaurant passage and other positions.
The second image of the same-row user refers to the image of the same-row user which contains the basic user characteristics and is collected in advance at a second preset position. For example, taking a restaurant as an example, the image of the user in the same line collected at the restaurant entrance, the restaurant passage, and the like is the image of the user in the second same line.
It is to be understood that the above examples are only examples set forth for a better understanding of the technical solutions of the embodiments of the present disclosure, and are not to be taken as the only limitations on the embodiments of the present disclosure.
After determining the basic user characteristics of the first peer user corresponding to the target user, a second peer user image containing the basic user characteristics, which is pre-collected at least one second preset position, may be obtained according to the basic user characteristics of the first peer user.
After acquiring a second co-ordinate user image containing the basic user features, which is preset to be captured at least one second preset position, step 205 is performed.
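One simple way to realise this feature-based lookup, when no face is available, is set overlap on the extracted appearance attributes. The attribute vocabulary, the threshold, and all names below are assumptions of this sketch, not part of the disclosure.

```python
def feature_overlap(a, b):
    """Jaccard overlap between two sets of appearance attributes
    (e.g. clothing colour, headwear, build) extracted from body crops."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def matching_second_images(base_features, captures, threshold=0.6):
    """Return the captures from the second preset positions whose body
    attributes overlap the first peer user's basic features above the
    threshold. Each capture is a dict with 'image_id' and 'body_features'."""
    return [c for c in captures
            if feature_overlap(base_features, c["body_features"]) >= threshold]
```

For instance, if the first peer user seen at the table wears a red coat, glasses, and a backpack, an entrance capture sharing all three attributes is retained while a capture of someone in a blue coat and hat is discarded.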
Step 205: and determining the first in-line user image and at least one second in-line user image as in-line user images associated with the target user.
After acquiring the first in-line user image at the first preset position and the second in-line user image at the at least one second preset position, the acquired first in-line user image and the at least one second in-line user image may be determined as in-line user images associated with the target user.
After determining the co-line user image associated with the target user, step 206 is performed.
Step 206: and determining second user identity information corresponding to the second peer user according to the face features of the second peer user contained in the first peer user image.
The second peer user is a user of the face feature included in the first peer user image.
The second user identity information refers to identity identification information of a second peer user, and in the embodiment of the present disclosure, the second user identity information may be face feature information of the second peer user, that is, the face feature information of the second peer user is used as the identity identification information of the second peer user.
The first peer user image includes the face features of the second peer user, and after the first peer user image is acquired, the second user identity information corresponding to the second peer user can be determined according to the face features of the second peer user.
After determining the second user identity information corresponding to the second peer user according to the facial features of the second peer user included in the first peer user image, step 208 is executed.
Step 207: determining the first user identity information corresponding to the first peer user according to the face features of the first peer user contained in the at least one second peer user image.
After the second peer user image containing the basic user characteristics, which is collected in advance at at least one second preset position, is acquired, the face features of the first peer user that match the basic user characteristics can be extracted from the second peer user image, and these face features are used as the first user identity information corresponding to the first peer user.
After determining the first user identity information corresponding to the first peer user according to the facial features of the first peer user included in the at least one second peer user image, step 208 is executed.
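Step 207 above can be sketched as a scan over the second peer user images for a face matching the basic user features. The dict schema and the pluggable `match` predicate are assumptions for illustration:

```python
def identify_first_peer(second_images, basic_feature, match):
    # Scan the second peer user images for a detected face whose features
    # match the basic user features derived from the first peer user image;
    # that face's feature vector serves as the first user identity information.
    for img in second_images:
        for face in img["faces"]:
            if match(face, basic_feature):
                return face
    return None

second_images = [{"faces": [(0.0, 1.0)]},
                 {"faces": [(1.0, 0.0), (0.5, 0.5)]}]
found = identify_first_peer(second_images, (1.0, 0.0), lambda a, b: a == b)
```

In practice `match` would be a face-embedding similarity test rather than exact equality; exact equality is used here only to keep the sketch self-contained.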
Step 208: determining, based on the first user identity information and the second user identity information, the peer user identity information of the peer user corresponding to the target user.
After acquiring the first user identity information of the first peer user accompanying the target user and the second user identity information of the second peer user, the first user identity information and the second user identity information may be determined as the peer user identity information of the peer user corresponding to the target user.
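Since both identity sets are face-feature information, merging them in step 208 plausibly involves deduplicating near-identical embeddings. A minimal sketch, assuming cosine similarity with an arbitrary 0.9 threshold (neither specified in the disclosure):

```python
import math

def cosine(a, b):
    # Cosine similarity between two face-feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def merge_peer_identities(first_ids, second_ids, threshold=0.9):
    # Take the union of the identity info (face-feature vectors) of the
    # first and second peer users, dropping entries whose embeddings are
    # near-duplicates of one already kept.
    merged = []
    for vec in list(first_ids) + list(second_ids):
        if all(cosine(vec, kept) < threshold for kept in merged):
            merged.append(vec)
    return merged

peer_ids = merge_peer_identities([(1.0, 0.0)],
                                 [(0.99, 0.1), (0.0, 1.0)])
```

Here the second list's first vector is nearly parallel to an already-kept one and is dropped, so the merged set holds two distinct peer identities.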
Taking a restaurant as an example, the embodiment of the disclosure combines smart cameras with a cloud face recognition platform to capture, recognize, and cluster faces. By deploying devices at two key positions, the entrance/exit and the cash desk, the number and identity information of a dining party can be obtained automatically, which effectively solves the problem that, through the cashier system alone, current stores can only capture the payer's information and cannot know who the payer's companions are.
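The two-checkpoint idea described above can be sketched as a time-windowed join between the entrance log and the cash-desk log. The log schema `(timestamp, face_id, group_id)` and the one-hour window are assumptions for illustration:

```python
def companions_of_payer(entrance_log, cashier_log, payer_face, window=3600):
    # Find the payer's face at the cash desk, locate the entrance capture of
    # the same face within the time window, and report every face tagged
    # with the same entrance group (i.e. the people who walked in together).
    companions = set()
    pay_times = [t for t, face, _ in cashier_log if face == payer_face]
    for pay_t in pay_times:
        for t, face, group in entrance_log:
            if face == payer_face and abs(pay_t - t) <= window:
                companions |= {f for _, f, g in entrance_log
                               if g == group and f != payer_face}
    return companions

entrance_log = [(100, "A", 1), (101, "B", 1), (500, "C", 2)]
cashier_log = [(2000, "A", None)]
party = companions_of_payer(entrance_log, cashier_log, "A")
```

Person "B" entered in the same group as the payer "A", so the sketch recovers the companion that the cashier system alone could not see.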
After determining the identity information of the peer user corresponding to the target user, step 209 is performed.
Step 209: acquiring the service data generated by the target user in the designated area.
The business data refers to the data generated by the target user in the designated area. For example, in a restaurant, the dishes ordered and the money spent by the target user constitute the business data of the target user.
In some examples, the service data generated by the target user on the entity service side may be obtained according to the service data of the target user recorded by the entity service side.
In some examples, the data generated by the entity service party may be entered into a background database, and after the target identity information of the target user is obtained, the service data generated by the target user on the entity service party may be obtained from the background database according to the target identity information.
It is to be understood that the above examples are only examples set forth for a better understanding of the technical solutions of the embodiments of the present disclosure, and are not to be taken as the only limitations on the embodiments of the present disclosure.
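As a concrete illustration of the background-database lookup described above, a minimal in-memory sketch follows. The table name, columns, and rows are assumptions for illustration only:

```python
import sqlite3

# In-memory stand-in for the background database into which the entity
# service party's data is entered.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, item TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("u1", "noodles", 28.0), ("u1", "tea", 6.0),
                  ("u2", "rice", 15.0)])

def business_data_for(conn, target_id):
    # Look up the business data the target user generated at the entity
    # service party, keyed by the target identity information.
    cur = conn.execute(
        "SELECT item, amount FROM orders WHERE user_id = ? ORDER BY item",
        (target_id,))
    return cur.fetchall()

rows = business_data_for(conn, "u1")
```

The parameterized query keys the lookup on the target identity information, matching the flow of step 209.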
After acquiring the service data generated by the target user on the entity service side, step 210 is executed.
Step 210: determining behavior preference data of the target user and the peer users according to the service data.
After the business data generated by the target user in the designated area is obtained, the behavior preference data of the target user and the peer users can be determined from the business data. For example, by clustering and comparing faces in a face library hosted on the cloud platform, the same faces captured at the entrance and exit points within a specific time period are found, so that the number and identity information of the payer's companions are retrieved, and the party's consumption preferences, average spending power, and dining location for this meal can further be tagged.
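The preference tagging in step 210 can be sketched as a simple aggregation over the party's business data. The field names and the two tags chosen are illustrative assumptions:

```python
from statistics import mean

def tag_preferences(orders):
    # Derive simple behavior-preference tags for a dining party from its
    # business data: most-ordered item and average spend per order.
    items = [o["item"] for o in orders]
    return {
        "favorite": max(set(items), key=items.count),
        "avg_spend": round(mean(o["amount"] for o in orders), 2),
    }

tags = tag_preferences([
    {"item": "hotpot", "amount": 88.0},
    {"item": "hotpot", "amount": 92.0},
    {"item": "tea", "amount": 12.0},
])
```

A real system would likely add the dining geographic position and per-head spending power as further tags; the aggregation pattern stays the same.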
According to the peer user determination method provided by the embodiment of the disclosure, the identity information of the users accompanying the target user is determined through the peer user images, associated with the target user, that are acquired at a plurality of preset positions related to the target user, so that errors caused by missed or misidentified persons can be effectively avoided.
The peer user determination method provided by the embodiment of the disclosure determines the target identity information of a target user according to a target image of the target user in a designated area, determines, according to the target identity information, the peer user images associated with the target user that are acquired at a plurality of preset positions related to the moving direction of the target user, and determines, according to the peer user images, the peer user identity information of the peer user corresponding to the target user. According to the embodiment of the disclosure, the identity information of the users accompanying the target user is determined through the peer user images at the plurality of preset positions related to the moving direction of the target user, so that errors caused by missed or misidentified persons can be effectively avoided, and the accuracy of user identification is high.
Referring to fig. 3, which shows a schematic structural diagram of a peer user determination apparatus provided in a third embodiment of the present disclosure, as shown in fig. 3, the peer user determination apparatus 300 may include:
a target identity information determining module 310, configured to determine target identity information of a target user according to a target image of the target user in a specified area;
a peer user image obtaining module 320, configured to determine, according to the target identity information, peer user images associated with the target user, which are obtained at multiple preset positions related to the moving direction of the target user;
and the peer user identity determining module 330 is configured to determine, according to the peer user image, peer user identity information of a peer user corresponding to the target user.
The peer user determination device provided by the embodiment of the disclosure determines the target identity information of the target user according to the target image of the target user in the designated area, determines, according to the target identity information, the peer user images associated with the target user that are acquired at a plurality of preset positions related to the moving direction of the target user, and determines, according to the peer user images, the peer user identity information of the peer user corresponding to the target user. According to the embodiment of the disclosure, the identity information of the users accompanying the target user is determined through the peer user images associated with the target user at the plurality of preset positions related to the moving direction of the target user, so that errors caused by missed or misidentified persons can be effectively avoided, and the accuracy of user identification is high.
Referring to fig. 4, which shows a schematic structural diagram of a peer user determination apparatus according to a fourth embodiment of the present disclosure, as shown in fig. 4, the peer user determination apparatus 400 may include:
a target identity information determining module 410, configured to determine target identity information of a target user according to a target image of the target user in a specified area;
a peer user image obtaining module 420, configured to determine, according to the target identity information, peer user images associated with the target user, which are obtained at multiple preset positions related to the moving direction of the target user;
a peer user identity determining module 430, configured to determine, according to the peer user image, peer user identity information of a peer user corresponding to the target user;
a service data obtaining module 440, configured to obtain service data generated by the target user in the designated area;
a behavior preference determining module 450, configured to determine behavior preference data of the target user and the peer user according to the service data.
Optionally, as shown in fig. 5, the target identity information determining module 410 includes:
the target identity information determining unit 411 is configured to determine the target identity information of the target user according to the facial features of the target user in the target image.
Optionally, as shown in fig. 6, the peer user image obtaining module 420 includes:
a first peer user image obtaining unit 421, configured to obtain, according to the target identity information, a first peer user image that is acquired in advance at a first preset position and includes the target user;
a basic user feature determining unit 422, configured to determine, according to the first peer user image, a basic user feature of a first peer user corresponding to the target user;
a second peer user image obtaining unit 423, configured to obtain a second peer user image that includes the basic user feature and is acquired in advance at least one second preset position;
a peer user image determining unit 424, configured to determine the first peer user image and the at least one second peer user image as peer user images associated with the target user.
Optionally, as shown in fig. 7, the peer user identity determining module 430 includes:
a second user identity determining unit 431, configured to determine, according to the face features of the second peer user included in the first peer user image, second user identity information corresponding to the second peer user;
a first user identity determining unit 432, configured to determine, according to a face feature of the first peer user included in at least one second peer user image, first user identity information corresponding to the first peer user;
a peer user identity determining unit 433, configured to determine, based on the first user identity information and the second user identity information, peer user identity information of a peer user corresponding to the target user.
The peer user determination device provided by the embodiment of the disclosure determines the target identity information of the target user according to the target image of the target user in the designated area, determines, according to the target identity information, the peer user images associated with the target user that are acquired at a plurality of preset positions related to the moving direction of the target user, and determines, according to the peer user images, the peer user identity information of the peer user corresponding to the target user. According to the embodiment of the disclosure, through the peer user images associated with the target user at the plurality of preset positions of the entity service party, the identity information of the users accompanying the target user is determined, so that errors caused by missed or misidentified persons can be effectively avoided, and the accuracy of user identification is high.
An embodiment of the present disclosure also provides an electronic device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the peer user determination method of the foregoing embodiment when executing the program.
Embodiments of the present disclosure also provide a computer-readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the peer user determination method of the foregoing embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present disclosure are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the present disclosure as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the embodiments of the present disclosure.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the embodiments of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, claimed embodiments of the disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of an embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be understood by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a motion picture generating device according to an embodiment of the present disclosure. Embodiments of the present disclosure may also be implemented as an apparatus or device program for performing a portion or all of the methods described herein. Such programs implementing embodiments of the present disclosure may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit embodiments of the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure and is not to be construed as limiting the embodiments of the present disclosure, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the embodiments of the present disclosure are intended to be included within the scope of the embodiments of the present disclosure.
The above description is only a specific implementation of the embodiments of the present disclosure, but the scope of the embodiments of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present disclosure, and all the changes or substitutions should be covered by the scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A peer user determination method, comprising:
determining target identity information of a target user according to a target image of the target user in a designated area;
determining, according to the target identity information, the peer user images associated with the target user, which are acquired at a plurality of preset positions related to the moving direction of the target user;
and determining the identity information of the peer users corresponding to the target user according to the peer user image.
2. The method of claim 1, wherein determining the target identity information of the target user according to the target image of the target user in the designated area comprises:
and determining the target identity information of the target user according to the face features of the target user in the target image.
3. The method according to claim 1, wherein the determining, according to the target identity information, the peer user images associated with the target user, which are acquired at a plurality of preset positions related to the moving direction of the target user, comprises:
acquiring, according to the target identity information, a first peer user image containing the target user, which is collected in advance at a first preset position;
determining the basic user characteristics of the first peer user corresponding to the target user according to the first peer user image;
acquiring a second peer user image containing the basic user characteristics, which is collected in advance at at least one second preset position;
and determining the first peer user image and the at least one second peer user image as the peer user images associated with the target user.
4. The method according to claim 3, wherein the determining, according to the peer user image, peer user identity information of the peer user corresponding to the target user comprises:
determining second user identity information corresponding to the second peer user according to the face features of the second peer user contained in the first peer user image;
determining first user identity information corresponding to the first peer user according to the face features of the first peer user contained in at least one second peer user image;
and determining, based on the first user identity information and the second user identity information, the peer user identity information of the peer user corresponding to the target user.
5. The method according to claim 1, after said determining, according to the peer user image, peer user identity information of the peer user corresponding to the target user, further comprising:
acquiring service data generated by the target user in the designated area;
and determining behavior preference data of the target user and the peer user according to the service data.
6. A peer user determination device, comprising:
the target identity information determining module is used for determining target identity information of a target user according to a target image of the target user in a designated area;
the peer user image acquisition module is used for determining, according to the target identity information, the peer user images associated with the target user, which are acquired at a plurality of preset positions related to the moving direction of the target user;
and the peer user identity determining module is used for determining, according to the peer user images, the peer user identity information of the peer user corresponding to the target user.
7. The apparatus of claim 6, wherein the target identity information determination module comprises:
and the target identity information determining unit is used for determining the target identity information of the target user according to the face features of the target user in the target image.
8. The apparatus of claim 6, wherein the peer user image acquisition module comprises:
the first peer user image acquisition unit is used for acquiring a first peer user image which is acquired in advance at a first preset position and contains the target user according to the target identity information;
a basic user feature determining unit, configured to determine, according to the first peer user image, a basic user feature of a first peer user corresponding to the target user;
the second peer user image acquisition unit is used for acquiring a second peer user image which is acquired in advance at least one second preset position and contains the basic user characteristics;
and the peer user image determining unit is used for determining the first peer user image and at least one second peer user image as the peer user image associated with the target user.
9. The apparatus of claim 8, wherein the peer user identity determination module comprises:
the second user identity determining unit is used for determining second user identity information corresponding to the second peer user according to the face features of the second peer user contained in the first peer user image;
the first user identity determining unit is used for determining first user identity information corresponding to the first peer user according to the face features of the first peer user contained in at least one second peer user image;
and the peer user identity determining unit is used for determining, based on the first user identity information and the second user identity information, the peer user identity information of the peer user corresponding to the target user.
10. The apparatus of claim 6, further comprising:
a service data acquisition module, configured to acquire service data generated by the target user in the designated area;
and the behavior preference determining module is used for determining the behavior preference data of the target user and the peer user according to the service data.
11. An electronic device, comprising:
processor, memory and computer program stored on the memory and executable on the processor, the processor implementing the peer user determination method as claimed in any one of claims 1 to 5 when executing the program.
12. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the peer user determination method of any of claims 1 to 5.
CN202010093589.XA 2020-02-14 2020-02-14 Method and device for determining peer users, electronic equipment and computer readable medium Pending CN111414799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010093589.XA CN111414799A (en) 2020-02-14 2020-02-14 Method and device for determining peer users, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010093589.XA CN111414799A (en) 2020-02-14 2020-02-14 Method and device for determining peer users, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN111414799A true CN111414799A (en) 2020-07-14

Family

ID=71490951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010093589.XA Pending CN111414799A (en) 2020-02-14 2020-02-14 Method and device for determining peer users, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111414799A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269916A (en) * 2021-05-17 2021-08-17 武汉爱迪科技股份有限公司 Guest prejudging analysis method and system based on face recognition
CN113269916B (en) * 2021-05-17 2022-04-19 武汉爱迪科技股份有限公司 Guest prejudging analysis method and system based on face recognition

Similar Documents

Publication Publication Date Title
TWI778030B (en) Store apparatus, store management method and program
JP4125634B2 (en) Customer information collection management method and system
US11138420B2 (en) People stream analysis method, people stream analysis apparatus, and people stream analysis system
JP6444655B2 (en) Display method, stay information display system, display control device, and display control method
JP4717934B2 (en) Relational analysis method, relational analysis program, and relational analysis apparatus
JP2021039789A (en) Store device, store system, store management method, and program
US20150199698A1 (en) Display method, stay information display system, and display control device
Lee et al. Understanding customer malling behavior in an urban shopping mall using smartphones
JP6756473B2 (en) Behavior analyzers and systems and methods
EP2988473B1 (en) Argument reality content screening method, apparatus, and system
JP2008152810A (en) Customer information collection and management system
CA3014365C (en) System and method for gathering data related to quality of service in a customer service environment
CN110826610A (en) Method and system for intelligently detecting whether dressed clothes of personnel are standard
JP2015090579A (en) Behavior analysis system
JP2017130061A (en) Image processing system, image processing method and program
WO2019077559A1 (en) System for tracking products and users in a store
KR20160011804A (en) The method for providing marketing information for the customers of the stores based on the information about a customers' genders and ages detected by using face recognition technology
CN111414799A (en) Method and device for determining peer users, electronic equipment and computer readable medium
CN110246280B (en) Human-cargo binding method and device, computer equipment and readable medium
JP2019028559A (en) Work analyzing device, work analyzing method, and program
CN113792674B (en) Method and device for determining empty rate and electronic equipment
WO2022140879A1 (en) Identity recognition method, terminal, server, and system
JP6982168B2 (en) Face matching system
CN114360057A (en) Data processing method and related device
Lee et al. Understanding human-place interaction from tracking and identification of many users

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200714)