CN111680654A - Personnel information acquisition method, device and equipment based on article picking and placing event

Info

Publication number: CN111680654A
Application number: CN202010543558.XA
Authority: CN (China)
Prior art keywords: image, event, key points, target, hand
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111680654B (en)
Inventors: 张天琦, 程浩, 邹明杰, 吴昌建, 陈鹏, 戴华东, 龚晖, 张玉全, 张迪, 朱皓
Current Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010543558.XA
Publication of application CN111680654A; application granted; publication of grant CN111680654B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01G: WEIGHING
    • G01G 19/00: Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method, a device and equipment for acquiring personnel information based on an article pick-and-place event. The method comprises: after an article pick-and-place event is detected, identifying, in an image containing the event, a hand region associated with the event, and determining, in the image, a face region belonging to the same human body as the hand region; and acquiring, based on the face region, the information of the person who triggered the event. In this scheme, the personnel information is obtained based on the face region in the image, so the trajectory of the person does not need to be tracked; even if the person moves across different monitoring scenes, the information of the person who triggered the article pick-and-place event can still be acquired accurately.

Description

Personnel information acquisition method, device and equipment based on article picking and placing event
Technical Field
The invention relates to the technical field of retail, in particular to a method, a device and equipment for acquiring personnel information based on an article picking and placing event.
Background
In the traditional retail industry, dedicated salespeople and cashiers are generally required, resulting in high labor costs. With the development of technology, some shopping places, such as unmanned supermarkets and unmanned shopping malls, no longer need to be staffed with sales or checkout personnel.

In these shopping places, it is necessary to detect article pick-and-place events on the shelves and determine the person who picked up or put back an article; in other words, to determine whether an article on a shelf has been taken or put back by a customer, and which customer did so.

In a related scheme, the personnel information is obtained when a person enters the shopping place, and the person's movement trajectory is then tracked through video images acquired by cameras; when an article pick-and-place event is detected, the information of the person who triggered it is obtained from the tracked movement trajectory of each person.

Shopping places are large and generally comprise multiple monitoring scenes, and people usually move between different monitoring scenes. The process of tracking a person's movement trajectory therefore usually involves switching between monitoring scenes. For example, while person A is in monitoring scene 1, person A is tracked in the video images acquired by the camera of scene 1; after person A moves from scene 1 to scene 2, tracking must switch to the video images acquired by the camera of scene 2. This switching easily interrupts tracking, and once tracking is interrupted, the information of the person who triggered an article pick-and-place event cannot be acquired accurately.
Disclosure of Invention
The embodiment of the invention aims to provide a method, a device and equipment for acquiring personnel information based on an article taking and placing event so as to accurately acquire the personnel information triggering the article taking and placing event.
In order to achieve the above object, an embodiment of the present invention provides a method for acquiring personnel information based on an article pick-and-place event, including:
after an article taking and placing event is detected, identifying a hand area associated with the article taking and placing event in an image containing the article taking and placing event;
determining a human face region belonging to the same human body as the hand region in the image;
and acquiring personnel information triggering the article taking and placing event based on the face area.
Optionally, the method further includes:
detecting whether an article taking and placing event occurs or not based on a gravity value acquired by a gravity sensor arranged on a goods shelf;
if so, determining the position of the article taking and placing event in the shelf as a target position, and acquiring an image containing the article taking and placing event;
determining an image area corresponding to the target position in the image as a target image area;
the identifying, in the image containing the item pick and place event, a hand region associated with the item pick and place event comprises:
identifying a hand region located in the target image region as a hand region associated with the item pick and place event.
Optionally, the shelf includes a plurality of goods grids, each corresponding to a gravity sensor; the determining, as a target position, the position in the shelf where the article pick-and-place event occurs comprises:

determining, as a target goods grid, the goods grid of the gravity sensor whose collected gravity value has changed;

the determining, in the image, an image area corresponding to the target position as a target image area includes:

determining, in the image, a preset area corresponding to the target goods grid as the target image area.
Optionally, after the acquiring the image including the article pick-and-place event, the method further includes:
identifying hand key points, head key points and connection relations between the hand key points and the head key points in the image, wherein the connection relations represent that the hand key points and the head key points belong to the same human body;
the identifying a hand region located in the target image region comprises:
identifying hand key points in the target image area as target hand key points;
the determining, in the image, a face region belonging to the same human body as the hand region includes:
determining, based on the connection relation, the head key points belonging to the same human body as the target hand key points, as target head key points;

and identifying, in the image, the face region where the target head key points are located.
Optionally, the image includes: a two-dimensional image and a depth image; the identifying hand key points, head key points, and connection relationships between hand key points and head key points in the image comprises:
identifying hand key points, head key points and a connection relation between the hand key points and the head key points in the two-dimensional image;
obtaining three-dimensional coordinates of the key points of the hand based on the mapping relation between the two-dimensional image and the depth image;
the identifying, as a target hand keypoint, a hand keypoint located in the target image region includes:
matching the three-dimensional coordinates of the target image area with the three-dimensional coordinates of the hand key points, and identifying the hand key points in the target image area based on the matching result to serve as target hand key points; and the three-dimensional coordinate of the target image area is obtained by pre-calibration.
Optionally, after the determining, as the target goods grid, the goods grid of the gravity sensor whose collected gravity value has changed, the method further includes:

judging whether the gravity value has increased or decreased; if it has increased, determining the type of the article pick-and-place event to be a put-back event; if it has decreased, determining the type of the article pick-and-place event to be a take event;

determining the category of the articles placed on the target goods grid as the category of the picked-and-placed articles;

calculating the number of picked-and-placed articles based on the gravity change value of the gravity sensor and the unit weight of the articles placed on the target goods grid;
after acquiring the personnel information triggering the article taking and placing event based on the face area, the method further comprises the following steps:
and recording the person's behavior information based on the personnel information, the type of the article pick-and-place event, the category of the picked-and-placed articles, and the number of picked-and-placed articles.
Optionally, the method further includes:
acquiring an image in a preset scene;
detecting whether an article taking and placing event occurs in the image, and if so, determining the image as an image to be processed;
the identifying, in the image containing the item pick and place event, a hand region associated with the item pick and place event comprises:
identifying, in the image to be processed, a hand region associated with the item pick and place event.
Optionally, the acquiring, based on the face area, the person information triggering the event of taking and placing the article includes:
extracting the face features of the face area to serve as the face features to be searched;
and searching, in a pre-stored correspondence between face features and personnel information, for the personnel information corresponding to the face features to be searched, as the personnel information of the person who triggered the article pick-and-place event.
In order to achieve the above object, an embodiment of the present invention further provides a device for acquiring personnel information based on an article pick-and-place event, including:
the first identification module is used for identifying a hand area associated with an article taking and placing event in an image containing the article taking and placing event after the article taking and placing event is detected;
the first determining module is used for determining a human face area belonging to the same human body with the hand area in the image;
and the first acquisition module is used for acquiring personnel information triggering the article taking and placing event based on the face area.
Optionally, the apparatus further comprises:
the first detection module is used for detecting whether an article taking and placing event occurs or not based on a gravity value acquired by a gravity sensor arranged on the goods shelf; if yes, triggering a second determining module and a second obtaining module;
the second determining module is used for determining the position of the goods taking and placing event in the goods shelf as a target position;
the second acquisition module is used for acquiring an image containing the article taking and placing event;
a third determining module, configured to determine, in the image, an image area corresponding to the target position as a target image area;
the first identification module is specifically configured to: identifying a hand region located in the target image region as a hand region associated with the item pick and place event.
Optionally, the shelf includes a plurality of goods grids, each corresponding to a gravity sensor; the second determining module is specifically configured to:

determine, as a target goods grid, the goods grid of the gravity sensor whose collected gravity value has changed;

the third determining module is specifically configured to:

determine, in the image, a preset area corresponding to the target goods grid as the target image area.
Optionally, the apparatus further comprises:
the second identification module is used for identifying hand key points, head key points and connection relations between the hand key points and the head key points in the image, wherein the connection relations represent that the hand key points and the head key points belong to the same human body;
the first identification module is specifically configured to: identifying hand key points in the target image area as target hand key points;
the first determining module is specifically configured to:
determine, based on the connection relation, the head key points belonging to the same human body as the target hand key points, as target head key points; and identify, in the image, the face region where the target head key points are located.
Optionally, the image includes: a two-dimensional image and a depth image; the second identification module is specifically configured to:
identifying hand key points, head key points and a connection relation between the hand key points and the head key points in the two-dimensional image;
obtaining three-dimensional coordinates of the key points of the hand based on the mapping relation between the two-dimensional image and the depth image;
the first identification module is specifically configured to:
matching the three-dimensional coordinates of the target image area with the three-dimensional coordinates of the hand key points, and identifying the hand key points in the target image area based on the matching result to serve as target hand key points; and the three-dimensional coordinate of the target image area is obtained by pre-calibration.
Optionally, the apparatus further comprises:
the fourth determining module is used for judging whether the gravity value has increased or decreased; if it has increased, determining the type of the article pick-and-place event to be a put-back event; if it has decreased, determining the type of the article pick-and-place event to be a take event;

the fifth determining module is used for determining the category of the articles placed on the target goods grid as the category of the picked-and-placed articles;

the calculation module is used for calculating the number of picked-and-placed articles based on the gravity change value of the gravity sensor and the unit weight of the articles placed on the target goods grid;

and the recording module is used for recording the person's behavior information based on the personnel information acquired by the first acquiring module, the event type determined by the fourth determining module, the article category determined by the fifth determining module, and the article number calculated by the calculation module.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring an image in a preset scene;
the second detection module is used for detecting whether an article taking and placing event occurs in the image, and if so, the sixth determination module is triggered;
a sixth determining module, configured to determine the image as an image to be processed;
the first identification module is specifically configured to:
identifying, in the image to be processed, a hand region associated with the item pick and place event.
Optionally, the first obtaining module is specifically configured to:
extracting the face features of the face area to serve as the face features to be searched;
and searching, in a pre-stored correspondence between face features and personnel information, for the personnel information corresponding to the face features to be searched, as the personnel information of the person who triggered the article pick-and-place event.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, including a processor and a memory;
a memory for storing a computer program;
and the processor is used for implementing any one of the above methods for acquiring personnel information based on an article pick-and-place event when executing the program stored in the memory.
By applying the embodiment of the invention, after an article pick-and-place event is detected, a hand region associated with the event is identified in the image containing it, and a face region belonging to the same human body as the hand region is determined in the image; the information of the person who triggered the event is acquired based on the face region. In this scheme, the personnel information is obtained based on the face region in the image, so the trajectory of the person does not need to be tracked; even if the person moves across different monitoring scenes, the information of the person who triggered the article pick-and-place event can still be acquired accurately.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a first flowchart of a method for acquiring personnel information based on an article pick-and-place event according to an embodiment of the present invention;

fig. 2 is a schematic diagram of a camera mounting arrangement according to an embodiment of the present invention;

fig. 3 is a schematic diagram of another camera mounting arrangement according to an embodiment of the present invention;

fig. 4 is a second flowchart of a method for acquiring personnel information based on an article pick-and-place event according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a personnel information acquisition device based on an article pick-and-place event according to an embodiment of the present invention;

fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
In order to achieve the above object, embodiments of the present invention provide a method, an apparatus, and a device for acquiring personnel information based on an article pick-and-place event; the method and the apparatus may be applied to various electronic devices, without specific limitation.
Fig. 1 is a first flowchart of a method for acquiring personnel information based on an article pick-and-place event according to an embodiment of the present invention, including:
s101: after an article pick-and-place event is detected, a hand region associated with the article pick-and-place event is identified in an image containing the article pick-and-place event.
There are various ways to detect an article pick-and-place event: for example, whether such an event occurs may be detected by a gravity sensor, or it may be detected from images.
For example, an image containing the article pick-and-place event may be acquired after the event is detected. Alternatively, images may be acquired continuously, and after an article pick-and-place event is detected, the hand region associated with the event is identified in the image containing it.
In one embodiment, whether an article picking and placing event occurs can be detected based on a gravity value acquired by a gravity sensor arranged on a shelf; if so, determining the position of the article taking and placing event in the shelf as a target position, and acquiring an image containing the article taking and placing event; and determining an image area corresponding to the target position in the image as a target image area. Thus, S101 may include: identifying a hand region located in the target image region as a hand region associated with the item pick and place event.
For example, gravity sensors may be provided at various positions on the shelf to sense the specific position where an article pick-and-place event occurs. For convenience of description, the position in the shelf where the article pick-and-place event occurs is referred to as the target position. The image areas corresponding to the positions in the shelf can be calibrated in the image in advance; in this embodiment, after the target position is determined, the image area corresponding to it can be determined according to the calibration result. The hand region located in that image area is then taken as the hand region associated with the article pick-and-place event.
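As an illustrative sketch only (the function names and the noise threshold below are assumptions, not part of the patent), the per-position gravity comparison described above might look as follows in Python:

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical noise threshold in grams; an assumption for illustration.
    NOISE_THRESHOLD_G = 5.0

    @dataclass
    class PickPlaceEvent:
        grid_id: int      # goods grid (target position) where the event occurred
        delta_g: float    # gravity change: negative = take event, positive = put-back event

    def detect_event(grid_id: int, prev_g: float, curr_g: float) -> Optional[PickPlaceEvent]:
        """Compare two successive gravity readings of one goods grid; a change
        beyond the noise threshold is treated as an article pick-and-place event."""
        delta = curr_g - prev_g
        if abs(delta) < NOISE_THRESHOLD_G:
            return None  # within sensor noise: no event
        return PickPlaceEvent(grid_id=grid_id, delta_g=delta)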
Alternatively, in one embodiment, the shelf includes a plurality of goods grids, each corresponding to its own gravity sensor, so that the goods grid where an article pick-and-place event occurs can be sensed. For convenience of description, the goods grid where the article pick-and-place event occurs is referred to as the target goods grid.
In this embodiment, the goods grid of the gravity sensor whose collected gravity value has changed can be determined as the target goods grid; in the image containing the article pick-and-place event, the preset area (the target image area) corresponding to the target goods grid is determined, and a hand region located in that preset area is then identified.

For example, a preset area corresponding to each goods grid may be calibrated in the image in advance. In one case, referring to fig. 2, the goods grid may be expanded by 10 cm up, down, left and right, and by 30 cm toward the customer, to obtain a cuboid region, which may be used as the preset area corresponding to the goods grid.

The preset area of a goods grid is thus larger than the goods grid itself, which can improve recognition sensitivity. Alternatively, the goods grid may not be expanded, and the area occupied by the goods grid itself is used as its preset area. The specific setting of the preset area is not limited.
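For example, under the assumption that each goods grid is calibrated as an axis-aligned cuboid in the camera coordinate system (an assumption made only for this sketch; axis conventions are illustrative), the expansion described above could be computed as:

    def expand_grid_region(grid, side_margin=0.10, toward_customer=0.30):
        """Expand a calibrated goods-grid cuboid into its preset area.

        `grid` is (x_min, x_max, y_min, y_max, z_min, z_max) in meters; the grid
        is widened by `side_margin` (10 cm) up, down, left and right, and
        extended by `toward_customer` (30 cm) along the axis pointing out of
        the shelf, assumed here to be +z.
        """
        x_min, x_max, y_min, y_max, z_min, z_max = grid
        return (x_min - side_margin, x_max + side_margin,
                y_min - side_margin, y_max + side_margin,
                z_min, z_max + toward_customer)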
In one embodiment, the hand key points, the head key points, and the connection relations between the hand key points and the head key points in the image may be identified, where a connection relation indicates that a hand key point and a head key point belong to the same human body; the hand key points located in the target image area (the preset area corresponding to the target goods grid) are then identified as target hand key points.
For example, a bottom-up (from individual key points to whole bodies) key point detection algorithm based on a convolutional neural network can be adopted to detect the hand key points and head key points in the image and establish the connection relations between the hand key points and head key points of the same human body. In one case, three key points can be detected for each person in the image, for example the key points of the two hands and of the head; the number of key points is not limited. Alternatively, other target detection algorithms may be used to detect the hand region and face region of a person in the image; the specific detection mode is not limited.
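The detector output might be organized as in the following sketch, where grouping key points per person encodes the connection relation (a hypothetical structure, for illustration only):

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class PersonKeypoints:
        """Key points of one human body; grouping them in a single record encodes
        the connection relation, i.e., that these key points belong to the same body."""
        person_id: int
        head: Tuple[float, float]                      # (u, v) pixel coordinates
        hands: List[Tuple[float, float]] = field(default_factory=list)

    def head_of_hand(persons: List[PersonKeypoints],
                     hand: Tuple[float, float]) -> Optional[Tuple[float, float]]:
        """Return the head key point connected to a given hand key point."""
        for p in persons:
            if hand in p.hands:
                return p.head
        return None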
S102: in the image, a face region belonging to the same human body as the hand region is determined.
For example, referring to fig. 2, a camera may be disposed on the side of the shelf facing the customer to capture images of the customer. The acquired image may then contain both the customer's article pick-and-place action and the customer's face, with the face only slightly occluded, so a face region belonging to the same human body as the hand region can be determined in the image.
In the above embodiment, the hand key points, the head key points, and the connection relations between them are identified in the image, and the hand key points located in the preset area corresponding to the target goods grid are identified as target hand key points. In this embodiment, S102 may include: determining, based on the connection relations, the head key points belonging to the same human body as the target hand key points, as target head key points; and identifying, in the image, the face region where the target head key points are located.
For example, the target head key point may be expanded outward to obtain a target frame, and the region within the target frame is the face region. The shape and size of the target frame are not limited.
For example, a face position detection method based on a convolutional neural network may be used to obtain the target frame of the face. Subsequently, a feature extraction method based on a convolutional neural network can be used to extract features of the face region in the target frame, obtaining the feature vector of the face region.
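One way to expand a head key point into a target frame is sketched below; the half-width and half-height values are illustrative assumptions, since the patent leaves the shape and size of the target frame unlimited:

    def head_keypoint_to_face_box(u: float, v: float,
                                  half_w: float = 60, half_h: float = 80):
        """Expand a head key point (u, v) into a rectangular target frame
        (left, top, right, bottom) in pixels; the margins are assumptions."""
        return (u - half_w, v - half_h, u + half_w, v + half_h)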
In one embodiment, the hand key points may be represented by three-dimensional coordinates, so that the hand key points located in the preset area of the target goods grid can be identified more accurately.
In this embodiment, the image includes a two-dimensional image and a depth image; the hand key points, the head key points and the connection relation between the hand key points and the head key points in the two-dimensional image can be identified; and obtaining the three-dimensional coordinates of the key points of the hand based on the mapping relation between the two-dimensional image and the depth image.
In this embodiment, the three-dimensional coordinates of the target image area may be matched with the three-dimensional coordinates of the hand key points, and the hand key points located in the target image area are identified as target hand key points based on the matching result; the three-dimensional coordinates of the target image area are obtained by pre-calibration. As described above, the preset area corresponding to each goods grid may be calibrated in the image in advance; in this embodiment, the three-dimensional coordinates of the preset area, i.e., of the target image area, may also be calibrated. By matching the three-dimensional coordinates of the target image area with those of the hand key points, it can be judged whether a hand key point is located in the area corresponding to the target goods grid, thereby identifying the target hand key points.
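A minimal sketch of this matching step, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy), a depth image aligned with the two-dimensional image, and the preset area modeled as an axis-aligned cuboid (all assumptions for illustration, not requirements of the patent):

    def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
        """Back-project a pixel (u, v) with depth `depth_m` (meters) into a 3-D
        point in the camera coordinate system using the pinhole model."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m)

    def in_region(point, region):
        """Test whether a 3-D hand key point lies inside the pre-calibrated
        target image area, modeled as an axis-aligned cuboid."""
        x, y, z = point
        x_min, x_max, y_min, y_max, z_min, z_max = region
        return x_min <= x <= x_max and y_min <= y <= y_max and z_min <= z <= z_max

    # Hand key points inside the region are the target hand key points:
    # target_hands = [h for h in hand_points_3d if in_region(h, preset_region)]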
For example, a two-dimensional image including a shelf and a depth image may be acquired. For example, referring to fig. 2, a camera may be disposed on the side of the shelf facing the customer, and the camera captures images of the customer. In one case, the camera may be a multi-view camera, such as a binocular camera, a trinocular camera, and the like, without limitation. The multi-view image collected by the multi-view camera is a two-dimensional image, and a depth image can be calculated based on the multi-view image.
Assuming a binocular camera is used, the image captured by one of its two lenses may be selected, for example the image captured by the left-eye lens or that captured by the right-eye lens, and the hand key points, the head key points, and the connection relations between them are then identified in the selected image.
Or, in another case, the camera may be a depth camera, which performs image acquisition for the customer, resulting in a two-dimensional image and a depth image.
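For the multi-view case, the depth image can be computed from disparity with the standard pinhole-stereo relation Z = f * B / d; the sketch below is illustrative and its parameters are assumptions, not values from the patent:

    def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
        """Standard pinhole-stereo depth: Z = focal * baseline / disparity.

        focal_px is the focal length in pixels and baseline_m the distance
        between the two camera centers in meters.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px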
S103: and acquiring personnel information triggering the article taking and placing event based on the face area.
For example, the personnel information may be a face image region, face features, or the identity information of the person, without specific limitation. The face region determined in step S102 can be used directly as the personnel information of the person who triggered the article pick-and-place event; or face features can be extracted from the face region and used as this personnel information; or the person's identity information can be acquired based on the face region and used as this personnel information.
In one embodiment, S103 may include: extracting the face features of the face region as the face features to be searched; and searching, in a pre-stored correspondence between face features and personnel information, for the personnel information corresponding to the face features to be searched, as the personnel information of the person who triggered the article pick-and-place event.
For example, in one case, a shopping terminal with a face capture function may be provided at the entrance of the shopping place; when a person enters the shopping place, the terminal captures a face image and extracts face features from it; in addition, the person submits personnel information at the terminal, and the face features and the personnel information are stored. Thus, the shopping terminal stores the face features and personnel information of every person entering the shopping place.

Or, in another case, the person may install a shopping APP (application) on a mobile terminal; during registration, the APP collects a face image and personnel information, extracts face features from the face image, and stores the face features and personnel information at the server side. The server side thus stores the face features and personnel information of every person using the APP.
For example, after the face region is determined in S102, the feature vector of the face region may be extracted as the feature vector to be searched. Correspondingly, the shopping terminal or the server stores the feature vectors of face images and the corresponding personnel information; by comparing cosine similarity between feature vectors, the personnel information whose stored feature vector matches the feature vector to be searched can be found among the pre-stored data and used as the personnel information of the person who triggered the article pick-and-place event.
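The cosine-similarity search might be implemented as in the following sketch; the 0.8 matching threshold and the data layout are assumptions for illustration:

    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two face feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def find_person(query_vec, stored, threshold=0.8):
        """Search pre-stored (personnel info, feature vector) pairs for the entry
        most similar to the feature vector to be searched.

        `stored` maps personnel info to a feature vector; returns the matching
        personnel info, i.e., the person who triggered the event, or None.
        """
        best_info, best_sim = None, threshold
        for info, vec in stored.items():
            sim = cosine_similarity(query_vec, vec)
            if sim >= best_sim:
                best_info, best_sim = info, sim
        return best_info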
In one embodiment, the article pick-and-place event and the information of the person who triggered it are recorded, so that the recorded content can subsequently be used to analyze customers' shopping habits. Alternatively, when a customer settles the bill, the recorded content can be used for verification. The specific application of the recorded content is not limited.
In one embodiment, after the target goods grid is determined, it can be judged whether the gravity value has increased or decreased; if it has increased, the type of the article pick-and-place event is determined to be a put-back event; if it has decreased, the type is determined to be a take event. The category of the articles placed on the target goods grid is determined as the category of the picked-and-placed articles, and the number of picked-and-placed articles is calculated based on the gravity change value of the gravity sensor and the unit weight of the articles placed on the target goods grid.
In this embodiment, after S103, the person's behavior information may be recorded based on the personnel information, the type of the article pick-and-place event, the category of the picked-and-placed articles, and the number of picked-and-placed articles. For example, the behavior information can be used to analyze the person's shopping habits; alternatively, when the person settles the bill, the behavior information can be used for verification. The specific application of the behavior information is not limited.
In one case, the articles in the person's virtual shopping cart may be updated based on the personnel information, the type of the article pick-and-place event, the category of the picked-and-placed articles, and their number.
For example, assume the articles placed in goods grid 1 are chocolates, each weighing m. If the gravity value collected by the gravity sensor corresponding to goods grid 1 decreases by M, the event type corresponding to goods grid 1 is determined to be a take event, and M/m is calculated, i.e., the number of chocolates taken away by the person.
Suppose that, applying the embodiment of the invention, the person triggering the article pick-and-place event is determined to be person A, and person A's information is obtained. The personnel information may include, for example, the person's identity information and payment information. A virtual shopping cart may be created for the person and bound to the person's payment method. In the above example, M/m chocolates may be added to person A's virtual shopping cart.
If the gravity value acquired by the gravity sensor corresponding to goods grid 1 subsequently increases by N, the event type corresponding to goods grid 1 is determined to be a put-back event, and N/m, the number of chocolates put back by the person, is calculated. Assuming the person who triggered this event is still person A, N/m chocolates may be removed from person A's virtual shopping cart.
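The chocolate example reduces to simple arithmetic; the following sketch (with a hypothetical plain-dict cart structure) updates a virtual shopping cart from one gravity change:

    def update_cart(cart, category, delta_g, unit_weight_g):
        """Update a person's virtual shopping cart from one pick-and-place event.

        A decrease of M grams with unit weight m adds round(M/m) items (take
        event); an increase of N grams removes round(N/m) items (put-back
        event). `cart` is a dict {category: count}.
        """
        count = round(abs(delta_g) / unit_weight_g)
        if delta_g < 0:                      # take event
            cart[category] = cart.get(category, 0) + count
        else:                                # put-back event
            cart[category] = max(cart.get(category, 0) - count, 0)
        return cart

    # Example: chocolates of unit weight 50 g; sensor decreases by 150 g -> 3 taken.
    cart = update_cart({}, "chocolate", -150.0, 50.0)   # {"chocolate": 3}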
As described above, whether an article pick-and-place event occurs may also be detected from images. In one embodiment, an image of a preset scene may be acquired, and whether an article pick-and-place event occurs in the image is detected; if so, the image is determined as the image to be processed. In such an embodiment, the hand region associated with the article pick-and-place event is identified in the image to be processed. The specific identification method is described in detail above and is not repeated here.
For example, referring to fig. 2, a camera may be disposed on a side of the shelf facing the customer, and the camera captures an image of the customer, so that the captured image may include an event that the customer takes and places an item and a face of the customer.
Alternatively, in another case, two cameras may be provided: a first camera and a second camera. As shown in fig. 2, the first camera is disposed on a side of the shelf facing a customer, and the first camera captures an image of the customer, where the image captured by the first camera is used to identify a hand area associated with an item picking and placing event and a face area belonging to the same human body as the hand area.
The second camera may be placed above the shelf, and as shown in fig. 3, a vertically downward camera may be mounted at a position just in front of the shelf (customer facing side) about 3.5 meters from the ground. The second camera acquires images of the goods shelf, and the images acquired by the second camera are used for detecting whether an article taking and placing event occurs or not and the specific position of the article taking and placing event occurs.
In this case, calibration can be performed separately in the images collected by the two cameras to calibrate the position of each goods grid in the shelf. After the target goods grid of an article pick-and-place event is determined in the image acquired by the second camera, the preset area corresponding to the target goods grid can be determined in the image acquired by the first camera; the hand region in that preset area is then identified, and the face region belonging to the same human body as the hand region is determined.
By applying the embodiment of the invention, after an article pick-and-place event is detected, a hand region associated with the event is identified in the image containing it, and a face region belonging to the same human body as the hand region is determined in the image; the information of the person who triggered the event is acquired based on the face region. In this scheme, the personnel information is obtained based on the face region in the image, so the trajectory of the person does not need to be tracked; even if the person moves across different monitoring scenes, the information of the person who triggered the article pick-and-place event can still be acquired accurately.
In some related schemes, an infrared image is collected by an infrared camera, human actions in the infrared image are recognized by a skeleton recognition algorithm, and if an action is an article pick-and-place action, the identity of the person picking or placing the article is then determined. In this solution, first, the cost of the infrared camera is high; second, if multiple customers stand in front of the shelf, they occlude each other, and the pillars in front of the shelf may also occlude them, so the human actions in the infrared image cannot be recognized accurately by the skeleton recognition algorithm, and article pick-and-place events cannot be detected accurately.
In some embodiments of this scheme, whether an article pick-and-place event occurs is detected through the gravity values collected by gravity sensors arranged on the shelf. First, using gravity sensors costs less than using an infrared camera; second, occlusion by people or pillars does not affect the detection accuracy of article pick-and-place events; third, since the category and weight of the articles in each goods grid are calibrated in advance, the type of the pick-and-place event, the category of the picked-and-placed articles, and their number can be determined from the change in the collected gravity value.
In some related schemes, an RFID (Radio Frequency Identification) tag is attached to each article, and when a customer leaves the shopping place, an instrument automatically detects the RFID tags to determine which customer has taken which articles, for settlement and payment deduction. However, in this solution an RFID tag must be attached to every article, which is costly and wastes resources. By applying the embodiment of the invention, no RFID labels need to be attached to the articles, which reduces cost and waste.
Fig. 4 is a second flowchart of a method for acquiring personnel information based on an article pick-and-place event according to an embodiment of the present invention, where the method includes:
s401: detecting whether an article taking and placing event occurs or not based on a gravity value acquired by a gravity sensor arranged on a goods shelf; if so, S402 is performed.
S402: determining a goods lattice of the goods taking and placing event in the goods shelf as a target goods lattice; and acquiring an image containing the article pick-and-place event.
In this embodiment, the shelf includes a plurality of grids, and each grid corresponds to a gravity sensor. And if the gravity value acquired by the gravity sensor changes, determining that the goods taking and placing event occurs in the goods grid corresponding to the gravity sensor. For convenience of description, the compartment in which the article pick-and-place event occurs is referred to as a target compartment.
In this embodiment, after an article taking and placing event is detected, an image including the article taking and placing event is acquired. For example, referring to fig. 2, a camera is disposed on a side of the shelf facing the customer, and the camera may be controlled to capture an image when an item picking and placing event is detected, so that the captured image may include the item picking and placing event of the customer.
S403: and determining a preset area corresponding to the target goods grid in the image.
For example, a preset area corresponding to each cargo space may be previously calibrated in the image. In one case, referring to fig. 2, the shelf may be expanded by 10 cm up, down, left, and right, respectively, and the shelf may be expanded by 30 cm toward the customer, so as to obtain a cubic area, and the cubic area may be used as a preset area corresponding to the shelf.
Therefore, the corresponding preset area of the goods lattice is larger than the area of the goods lattice, and the identification sensitivity can be improved. Or, in other cases, the goods grid may not be expanded, and the area where the goods grid is located is used as the preset area corresponding to the goods grid. The specific grid area setting is not limited.
S404: hand key points, head key points, and connection relationships between the hand key points and the head key points in the image are identified. The connection relation indicates that the hand key point and the head key point belong to the same human body.
The sequence of S403 and S404 is not limited.
In one embodiment, the two-dimensional image and the depth image may be obtained by a multi-view camera, or the two-dimensional image and the depth image may be obtained by a depth camera. Identifying hand key points, head key points and a connection relation between the hand key points and the head key points in the two-dimensional image; and obtaining the three-dimensional coordinates of the key points of the hand based on the mapping relation between the two-dimensional image and the depth image.
In such an embodiment, the hand keypoints located in the target grid may be identified more accurately based on the three-dimensional coordinates of the hand keypoints.
For example, a bottom-up (from individual key points to whole bodies) key point detection algorithm based on a convolutional neural network can be adopted to detect the hand key points and head key points in the image and establish the connection relations between the hand key points and head key points of the same human body. In one case, three key points can be detected for each person in the image, for example the key points of the two hands and of the head; the number of key points is not limited. The specific detection method is not limited.
S405: in the image, hand key points located in a preset area corresponding to the target cargo space are identified as target hand key points.
In the above embodiment, the three-dimensional coordinates of the hand key points are obtained, and in this embodiment, the three-dimensional coordinates of the preset region corresponding to the target cargo space may be matched with the three-dimensional coordinates of the hand key points, and the hand key points located in the preset region corresponding to the target cargo space may be identified as the target hand key points based on the matching result. And the three-dimensional coordinates of the preset area corresponding to the target goods grid are obtained by pre-calibration. As described above, the preset region corresponding to each cargo space may be calibrated in the image in advance, and in this embodiment, the three-dimensional coordinates of the preset region may also be calibrated.
S406: and determining head key points belonging to the same human body with the target hand key points as target head key points based on the connection relation.
S407: in the image, a face region where a key point of a target head is located is identified.
For example, referring to fig. 2, a camera may be disposed on the side of the shelf facing the customer to capture images of the customer. The acquired image may then contain both the customer's article pick-and-place action and the customer's face, with the face only slightly occluded, so the face region where the target head key points are located can be identified in the image.
For example, the target head key point may be expanded outward to obtain a target frame, and the region within the target frame is the face region. The shape and size of the target frame are not limited.
S408: and extracting the face features of the face area to be used as the face features to be searched.
For example, a face position detection method based on a convolutional neural network may be used to obtain a target frame of a face. And then, extracting the features of the face region in the target frame by using a feature extraction method based on a convolutional neural network to obtain a feature vector of the face region.
S409: and searching the personnel information corresponding to the face features to be searched in the corresponding relationship between the pre-stored face features and the personnel information, and taking the personnel information as the personnel information for triggering the article taking and placing event.
For example, in one case, a shopping terminal with a face capture function may be provided at the entrance of the shopping place; when a person enters the shopping place, the terminal captures a face image and extracts face features from it; in addition, the person submits personnel information at the terminal, and the face features and the personnel information are stored. Thus, the shopping terminal stores the face features and personnel information of every person entering the shopping place.

Or, in another case, the person may install a shopping APP on a mobile terminal; during registration, the APP collects a face image and personnel information, extracts face features from the face image, and stores the face features and personnel information at the server side. The server side thus stores the face features and personnel information of every person using the APP.

For example, after the face region is identified in S407, the feature vector of the face region may be extracted as the feature vector to be searched. Correspondingly, the shopping terminal or the server stores the feature vectors of face images and the corresponding personnel information; by comparing cosine similarity between feature vectors, the personnel information whose stored feature vector matches the feature vector to be searched can be found among the pre-stored data and used as the personnel information of the person who triggered the article pick-and-place event.
In one embodiment, the person's behavior information is recorded based on the personnel information, the type of the article pick-and-place event, the category of the picked-and-placed articles, and the number of picked-and-placed articles. The recorded behavior information can subsequently be used to analyze customers' shopping habits; alternatively, when a customer settles the bill, the recorded behavior information can be used for verification. The specific application of the behavior information is not limited.
By applying the embodiment shown in fig. 4, first, the personnel information is obtained based on the face region in the image, so the trajectory of the person does not need to be tracked, and even if the person moves across different monitoring scenes, the information of the person who triggered the article pick-and-place event can still be acquired accurately. Second, whether an article pick-and-place event occurs is detected through the gravity values collected by the gravity sensors arranged on the shelf; using gravity sensors costs less than using an infrared camera, and occlusion by people or pillars does not affect the detection accuracy. Third, no RFID labels need to be attached to the articles, which reduces cost and waste.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a personnel information acquisition device based on an article pick-and-place event, as shown in fig. 5, including:
a first identifying module 501, configured to identify, after detecting an article picking and placing event, a hand region associated with the article picking and placing event in an image including the article picking and placing event;
a first determining module 502, configured to determine, in the image, a face region belonging to the same human body as the hand region;
a first obtaining module 503, configured to obtain, based on the face region, the personnel information of the person who triggered the article pick-and-place event.
In one embodiment, the apparatus further comprises: a first detection module, a second determination module, a second acquisition module, and a third determination module (not shown in the figure), wherein,
the first detection module is used for detecting whether an article taking and placing event occurs or not based on a gravity value acquired by a gravity sensor arranged on the goods shelf; if yes, triggering a second determining module and a second obtaining module;
the second determining module is used for determining the position of the goods taking and placing event in the goods shelf as a target position;
the second acquisition module is used for acquiring an image containing the article taking and placing event;
a third determining module, configured to determine, in the image, an image area corresponding to the target position as a target image area;
the first identifying module 501 is specifically configured to: identifying a hand region located in the target image region as a hand region associated with the item pick and place event.
In one embodiment, the shelf comprises a plurality of goods grids, each corresponding to a respective gravity sensor; the second determining module is specifically configured to:

determine, as a target goods grid, the goods grid of the gravity sensor whose collected gravity value has changed;

the third determining module is specifically configured to:

determine, in the image, a preset area corresponding to the target goods grid as the target image area.
In one embodiment, the apparatus further comprises:
a second identification module (not shown in the figure) for identifying the hand key points, the head key points and the connection relations between the hand key points and the head key points in the image, wherein the connection relations indicate that the hand key points and the head key points belong to the same human body;
the first identifying module 501 is specifically configured to: identifying hand key points in the target image area as target hand key points;
the first determining module 502 is specifically configured to:
determine, based on the connection relations, the head key points belonging to the same human body as the target hand key points, as target head key points; and identify, in the image, the face region where the target head key points are located.
In one embodiment, the image includes: a two-dimensional image and a depth image; the second identification module is specifically configured to:
identifying hand key points, head key points and a connection relation between the hand key points and the head key points in the two-dimensional image;
obtaining three-dimensional coordinates of the key points of the hand based on the mapping relation between the two-dimensional image and the depth image;
the first identifying module 501 is specifically configured to:
matching the three-dimensional coordinates of the target image area with the three-dimensional coordinates of the hand key points, and identifying, based on the matching result, the hand key points located in the target image area as target hand key points; wherein the three-dimensional coordinates of the target image area are obtained by pre-calibration.
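A sketch of this depth-based matching follows, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy) and a depth image aligned pixel-for-pixel with the two-dimensional image; the pre-calibrated three-dimensional extent of the target image area is modeled as an axis-aligned box, which is one possible calibration format rather than the one the patent mandates.

```python
import numpy as np

def lift_to_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project the 2D keypoint (u, v) into camera-frame 3D coordinates
    using the depth value at the corresponding depth-image pixel."""
    z = float(depth_map[int(v), int(u)])
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def is_target_hand_keypoint(point_3d, box_min, box_max):
    """Match the keypoint's 3D coordinates against the pre-calibrated
    3D box of the target image area."""
    return bool(np.all(point_3d >= box_min) and np.all(point_3d <= box_max))
```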
In one embodiment, the apparatus further comprises: a fourth determining module, a fifth determining module, a calculation module and a recording module (not shown in the figure), wherein,
the fourth determining module is used for judging whether the changed gravity value has become larger or smaller; if larger, determining that the type of the article taking and placing event is a put-back event; if smaller, determining that the type of the article taking and placing event is a take event;
the fifth determining module is used for determining the category of the articles placed in the target goods compartment as the category of the taken or placed articles;
the calculation module is used for calculating the quantity of the taken or placed articles based on the gravity change value of the gravity sensor and the unit weight of the articles placed in the target goods compartment;
and the recording module is used for recording the behavior information of the person based on the personnel information acquired by the first obtaining module, the type of the article taking and placing event determined by the fourth determining module, the category of the taken or placed articles determined by the fifth determining module, and the quantity of the taken or placed articles calculated by the calculation module.
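The gravity-based bookkeeping described above reduces to a few lines; in this sketch, the per-compartment calibration table (item category and unit weight) and the record layout are assumptions made for illustration.

```python
ITEM_CALIBRATION = {
    "compartment_0": ("cola_330ml", 355.0),   # (category, unit weight in grams)
    "compartment_1": ("chips_100g", 110.0),
}

def record_behavior(person_info, compartment, gravity_delta):
    """Derive the event type, item category, and item count from the
    gravity change, then assemble one behavior record for the person."""
    # A larger reading means articles were added (put-back); smaller means taken.
    event_type = "put_back" if gravity_delta > 0 else "take"
    category, unit_weight = ITEM_CALIBRATION[compartment]
    count = round(abs(gravity_delta) / unit_weight)  # quantity = |change| / unit weight
    return {"person": person_info, "event_type": event_type,
            "category": category, "count": count}

# Example: a 710 g drop on compartment_0 yields a "take" of two 355 g colas.
# record_behavior({"id": "u123"}, "compartment_0", -710.0)
```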
In one embodiment, the apparatus further comprises: a third acquisition module, a second detection module and a sixth determining module (not shown in the figures), wherein,
the third acquisition module is used for acquiring an image in a preset scene;
the second detection module is used for detecting whether an article taking and placing event occurs in the image; if so, the sixth determining module is triggered;
the sixth determining module is configured to determine the image as the image to be processed;
the first identifying module 501 is specifically configured to:
identifying, in the image to be processed, a hand region associated with the item pick and place event.
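A hedged sketch of this purely image-driven variant follows; `capture_frame`, `event_detector`, and `identify_hand_region` are placeholder callables, since the patent does not name concrete detection models.

```python
def monitor_scene(capture_frame, event_detector, identify_hand_region):
    """Yield (image to be processed, associated hand region) whenever an
    article taking and placing event is detected in a captured image."""
    while True:
        image = capture_frame()            # acquire an image in the preset scene
        if event_detector(image):          # does a taking/placing event occur in the image?
            to_process = image             # this image becomes the image to be processed
            yield to_process, identify_hand_region(to_process)
```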
In one embodiment, the first obtaining module 503 is specifically configured to:
extracting the face features of the face region, as the face features to be searched;
and searching, in the pre-stored correspondence between face features and personnel information, for the personnel information corresponding to the face features to be searched, as the personnel information that triggered the article taking and placing event.
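The face-feature lookup can be sketched as a nearest-neighbor search over the pre-stored correspondence; the feature extractor, the cosine-similarity metric, and the 0.6 acceptance threshold are assumptions for the example, not requirements of the patent.

```python
import numpy as np

def lookup_person(face_image, extract_features, feature_db, threshold=0.6):
    """Extract the face features to be searched, then return the personnel
    information whose stored features match them best, if close enough."""
    query = extract_features(face_image)
    best_person, best_score = None, -1.0
    for stored_feature, person_info in feature_db:  # pre-stored correspondence
        # cosine similarity between the query and a stored feature vector
        score = float(np.dot(query, stored_feature) /
                      (np.linalg.norm(query) * np.linalg.norm(stored_feature)))
        if score > best_score:
            best_person, best_score = person_info, score
    return best_person if best_score >= threshold else None
```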
By applying the embodiment of the invention, after an article taking and placing event is detected, a hand region associated with the event is identified in an image containing the event; a face region belonging to the same human body as the hand region is determined in the image; and the personnel information triggering the event is acquired based on the face region. In this scheme the personnel information is obtained from the face region in the image, so the person's trajectory does not need to be tracked: even if the person moves across different monitored scenes, the personnel information triggering the article taking and placing event can still be acquired accurately. In addition, the scheme requires no RFID labels on the articles, which reduces both cost and waste.
In some embodiments, whether an article taking and placing event occurs is detected through the gravity value acquired by a gravity sensor arranged on the shelf. In the first aspect, the cost of using a gravity sensor is lower than that of using an infrared camera; in the second aspect, the detection accuracy of the article taking and placing event is not affected by occlusion from people or shelf columns; in the third aspect, since the categories and unit weights of the articles placed in each goods compartment are calibrated in advance, the type of the article taking and placing event, the category of the taken or placed articles, and the quantity of the taken or placed articles can all be determined from the change in the gravity value collected by the gravity sensor.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601 and a memory 602,
a memory 602 for storing a computer program;
the processor 601 is configured to implement any one of the above methods for acquiring personnel information based on an article pick-and-place event when executing the program stored in the memory 602.
The memory mentioned in the above electronic device may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored; when executed by a processor, the computer program implements any one of the above methods for acquiring personnel information based on an article pick-and-place event.
In another embodiment of the present invention, a computer program product containing instructions is further provided, which, when run on a computer, causes the computer to execute any one of the above methods for acquiring personnel information based on an article pick-and-place event.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; the same and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus, device, computer-readable storage medium, and computer program product embodiments are described briefly because they are substantially similar to the method embodiments; where relevant, reference may be made to the corresponding descriptions of the method embodiments.
The above description covers only preferred embodiments of the present invention and is not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A personnel information acquisition method based on an article picking and placing event is characterized by comprising the following steps:
after an article taking and placing event is detected, identifying a hand area associated with the article taking and placing event in an image containing the article taking and placing event;
determining a human face region belonging to the same human body as the hand region in the image;
and acquiring personnel information triggering the article taking and placing event based on the face area.
2. The method of claim 1, further comprising:
detecting whether an article taking and placing event occurs or not based on a gravity value acquired by a gravity sensor arranged on a goods shelf;
if so, determining the position in the shelf where the article taking and placing event occurs as a target position, and acquiring an image containing the article taking and placing event;
determining an image area corresponding to the target position in the image as a target image area;
the identifying, in the image containing the item pick and place event, a hand region associated with the item pick and place event comprises:
identifying a hand region located in the target image region as a hand region associated with the item pick and place event.
3. The method of claim 2, wherein the shelf comprises a plurality of goods compartments, each compartment corresponding to a respective gravity sensor; the determining the position in the shelf where the article taking and placing event occurs as a target position comprises:
determining the goods compartment where the gravity sensor whose collected gravity value has changed is located, as a target goods compartment;
the determining an image area corresponding to the target position in the image as a target image area includes:
determining a preset area corresponding to the target goods compartment in the image, as a target image area.
4. The method of claim 3, wherein after acquiring the image containing the item pick and place event, further comprising:
identifying hand key points, head key points and connection relations between the hand key points and the head key points in the image, wherein the connection relations represent that the hand key points and the head key points belong to the same human body;
the identifying a hand region located in the target image region comprises:
identifying hand key points in the target image area as target hand key points;
the determining, in the image, a face region belonging to the same human body as the hand region includes:
determining the head key points belonging to the same human body as the target hand key points, based on the connection relations, as target head key points;
and identifying, in the image, the face region where the target head key points are located.
5. The method of claim 4, wherein the image comprises: a two-dimensional image and a depth image; the identifying hand key points, head key points, and connection relationships between hand key points and head key points in the image comprises:
identifying hand key points, head key points and a connection relation between the hand key points and the head key points in the two-dimensional image;
obtaining the three-dimensional coordinates of the hand key points based on the mapping relation between the two-dimensional image and the depth image;
the identifying, as a target hand keypoint, a hand keypoint located in the target image region includes:
matching the three-dimensional coordinates of the target image area with the three-dimensional coordinates of the hand key points, and identifying, based on the matching result, the hand key points located in the target image area as target hand key points; wherein the three-dimensional coordinates of the target image area are obtained by pre-calibration.
6. The method according to claim 3, wherein after the determining the goods compartment where the gravity sensor whose collected gravity value has changed is located as a target goods compartment, the method further comprises:
judging whether the changed gravity value has become larger or smaller; if larger, determining that the type of the article taking and placing event is a put-back event; if smaller, determining that the type of the article taking and placing event is a take event;
determining the category of the articles placed in the target goods compartment as the category of the taken or placed articles;
calculating the quantity of the taken or placed articles based on the gravity change value of the gravity sensor and the unit weight of the articles placed in the target goods compartment;
after acquiring the personnel information triggering the article taking and placing event based on the face area, the method further comprises the following steps:
and recording the behavior information of the person based on the personnel information, the type of the article taking and placing event, the category of the taken or placed articles, and the quantity of the taken or placed articles.
7. The method of claim 1, further comprising:
acquiring an image in a preset scene;
detecting whether an article taking and placing event occurs in the image, and if so, determining the image as an image to be processed;
the identifying, in the image containing the item pick and place event, a hand region associated with the item pick and place event comprises:
identifying, in the image to be processed, a hand region associated with the item pick and place event.
8. The method of claim 1, wherein the obtaining of the person information triggering the item pick-and-place event based on the face region comprises:
extracting the face features of the face region, as the face features to be searched;
and searching, in the pre-stored correspondence between face features and personnel information, for the personnel information corresponding to the face features to be searched, as the personnel information that triggered the article taking and placing event.
9. A personnel information acquisition device based on an article picking and placing event, characterized by comprising:
the first identification module is used for identifying a hand area associated with an article taking and placing event in an image containing the article taking and placing event after the article taking and placing event is detected;
the first determining module is used for determining a human face area belonging to the same human body with the hand area in the image;
and the first acquisition module is used for acquiring personnel information triggering the article taking and placing event based on the face area.
10. An electronic device comprising a processor and a memory;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 8 when executing a program stored in the memory.
CN202010543558.XA 2020-06-15 2020-06-15 Personnel information acquisition method, device and equipment based on article picking and placing event Active CN111680654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010543558.XA CN111680654B (en) 2020-06-15 2020-06-15 Personnel information acquisition method, device and equipment based on article picking and placing event

Publications (2)

Publication Number Publication Date
CN111680654A true CN111680654A (en) 2020-09-18
CN111680654B CN111680654B (en) 2023-10-13

Family

ID=72436102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010543558.XA Active CN111680654B (en) 2020-06-15 2020-06-15 Personnel information acquisition method, device and equipment based on article picking and placing event

Country Status (1)

Country Link
CN (1) CN111680654B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130039547A1 (en) * 2011-08-11 2013-02-14 At&T Intellectual Property I, L.P. Method and Apparatus for Automated Analysis and Identification of a Person in Image and Video Content
JP2015049582A (en) * 2013-08-30 2015-03-16 東芝テック株式会社 Commodity registration apparatus and program
CN106971130A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method using face as reference
WO2019033635A1 (en) * 2017-08-16 2019-02-21 图灵通诺(北京)科技有限公司 Purchase settlement method, device, and system
CN109426785A (en) * 2017-08-31 2019-03-05 杭州海康威视数字技术股份有限公司 A kind of human body target personal identification method and device
US20200012999A1 (en) * 2018-07-03 2020-01-09 Baidu Usa Llc Method and apparatus for information processing
CN109447619A (en) * 2018-09-20 2019-03-08 华侨大学 Unmanned settlement method, device, equipment and system based on open environment
CN111079478A (en) * 2018-10-19 2020-04-28 杭州海康威视数字技术股份有限公司 Unmanned goods selling shelf monitoring method and device, electronic equipment and system
CN110347772A (en) * 2019-07-16 2019-10-18 北京百度网讯科技有限公司 Article condition detection method, device and computer readable storage medium
CN110647825A (en) * 2019-09-05 2020-01-03 广州织点智能科技有限公司 Method, device and equipment for determining unmanned supermarket articles and storage medium
CN111127174A (en) * 2020-01-06 2020-05-08 鄂尔多斯市东驿科技有限公司 Intelligent unmanned supermarket control system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN Yuehua; SUN Jianming; YAO Yini; LI Zhao: "Human Eye Detection Technology for Supermarket Shelf Packaging", no. 02 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016528B (en) * 2020-10-20 2021-07-20 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium
CN115082163A (en) * 2022-07-19 2022-09-20 江苏创纪云网络科技有限公司 Intelligent retail transaction data management system and method based on third-party Internet
CN115082163B (en) * 2022-07-19 2022-11-11 江苏创纪云网络科技有限公司 Intelligent retail transaction data management system and method based on third-party Internet

Also Published As

Publication number Publication date
CN111680654B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
JP7229580B2 (en) Unmanned sales system
US20220198550A1 (en) System and methods for customer action verification in a shopping cart and point of sales
WO2019165894A1 (en) Article identification method, device and system, and storage medium
WO2019165891A1 (en) Method for identifying product purchased by user, and device and smart shelf system
CN111507315A (en) Article picking and placing event detection method, device and equipment
CN108320404B (en) Commodity identification method and device based on neural network and self-service cash register
CN111415461B (en) Article identification method and system and electronic equipment
CN109409291B (en) Commodity identification method and system of intelligent container and shopping order generation method
TWI578272B (en) Shelf detection system and method
US11049373B2 (en) Storefront device, storefront management method, and program
WO2014050518A1 (en) Information processing device, information processing method, and information processing program
EP3531341B1 (en) Method and apparatus for recognising an action of a hand
JP2009048430A (en) Customer behavior analysis device, customer behavior determination system, and customer buying behavior analysis system
CN112464697A (en) Vision and gravity sensing based commodity and customer matching method and device
JP2024051084A (en) Store device, store system, store management method, and program
CN111680654B (en) Personnel information acquisition method, device and equipment based on article picking and placing event
JP7264401B2 (en) Accounting methods, devices and systems
WO2018002864A2 (en) Shopping cart-integrated system and method for automatic identification of products
CN111263224A (en) Video processing method and device and electronic equipment
CN111079478A (en) Unmanned goods selling shelf monitoring method and device, electronic equipment and system
CN111831673A (en) Goods identification system, goods identification method and electronic equipment
CN109711498B (en) Target object behavior prediction method and device, processing equipment and intelligent commodity shelf
CN111260685B (en) Video processing method and device and electronic equipment
CN109243049A (en) A kind of the commodity access identifying system and method for sales counter
CN110443946B (en) Vending machine, and method and device for identifying types of articles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant