CN115546900B - Risk identification method, device, equipment and storage medium - Google Patents

Risk identification method, device, equipment and storage medium Download PDF

Info

Publication number
CN115546900B
CN115546900B (application CN202211490715.0A)
Authority
CN
China
Prior art keywords
behavior
behavior data
pedestrian
condition
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211490715.0A
Other languages
Chinese (zh)
Other versions
CN115546900A (en)
Inventor
冯昊
李斌
冯雪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shenxiang Intelligent Technology Co ltd
Original Assignee
Zhejiang Lianhe Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lianhe Technology Co., Ltd.
Priority to CN202211490715.0A
Publication of CN115546900A
Application granted
Publication of CN115546900B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Alarm Systems (AREA)

Abstract

The embodiments of the present application provide a risk identification method, apparatus, device, and storage medium. The method comprises: acquiring collected video data from a commodity display place; determining the motion trajectory of a pedestrian within the commodity display place based on the video data; determining, based on the motion trajectory of the pedestrian, whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display place, wherein the target behavior comprises the behaviors involved in a preset first behavior data condition and a preset second behavior data condition; and identifying, based on the determination result, pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition. The method and apparatus can shorten the time needed to find a thief, reduce labor cost, and improve efficiency.

Description

Risk identification method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a risk identification method, apparatus, device, and storage medium.
Background
Supermarkets are popular shopping venues where customers can shop freely, but merchandise is often lost, which causes losses to the supermarket.
Although most supermarkets detect theft by installing anti-theft alarm gates, this approach requires attaching an anti-theft tag to every item, which adds material cost and requires labor for tagging. Moreover, because a supermarket carries a very large number of items, and some items are unsuitable for tagging or their tags are easily damaged, supermarkets usually apply anti-theft tags to only part of the merchandise, and the untagged items are frequently stolen. For this reason, a thief usually still has to be found by manually reviewing surveillance video, which is labor-intensive, time-consuming, and inefficient.
Disclosure of Invention
The embodiments of the present application provide a risk identification method, apparatus, device, and storage medium, aiming to solve the problems in the prior art that a thief has to be found by manually reviewing surveillance video, which is labor-intensive, time-consuming, and inefficient.
In a first aspect, an embodiment of the present application provides a risk identification method, including:
acquiring video data in a commodity display place;
determining a trajectory of a pedestrian's motion within the merchandise display based on the video data;
determining whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display place based on the motion trail of the pedestrian, wherein the target behavior comprises behaviors involved in a first behavior data condition and a second behavior data condition which are preset;
based on the determination result, a pedestrian that satisfies the first behavior data condition but does not satisfy the second behavior data condition is identified.
In a second aspect, an embodiment of the present application provides a risk identification apparatus, including:
an acquisition module, configured to acquire the collected video data in the commodity display place;
a first determination module for determining a motion trajectory of a pedestrian within the merchandise display based on the video data;
the second determination module is used for determining whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display place based on the motion trail of the pedestrian, wherein the target behavior comprises behaviors involved in a preset first behavior data condition and a preset second behavior data condition;
an identification module for identifying a pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition based on a determination result.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; wherein the memory stores one or more computer instructions that, when executed by the processor, implement the method of any one of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed, implements the method according to any one of the first aspect.
Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the first aspect when the computer program is executed by a computer.
In the embodiments of the present application, the motion trajectory of a pedestrian within the commodity display place can be determined based on the collected video data of the commodity display place; whether behavior data of the target behavior exists in the behavior data of the pedestrian in the commodity display place is determined based on the motion trajectory of the pedestrian, the target behavior comprising the behaviors involved in a preset first behavior data condition and a preset second behavior data condition; and pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition are identified based on the determination result. Pedestrians with a theft risk in the commodity display place are thus found automatically from the video data of the commodity display place, which shortens the time needed to find a thief, reduces labor cost, and improves efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below illustrate only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a risk identification method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a risk identification method according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a risk identification device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two, but does not exclude the presence of at least one.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The words "if", as used herein, may be interpreted as "at \8230; \8230when" or "when 8230; \823030, when" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (a stated condition or event)" may be interpreted as "upon determining" or "in response to determining" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the article or system that comprises the element.
In addition, the sequence of steps in the embodiments of the methods described below is merely an example, and is not strictly limited.
In order to facilitate those skilled in the art to understand the technical solutions provided in the embodiments of the present application, a technical environment for implementing the technical solutions is described below.
Fig. 1 is a schematic view of an application scenario of the risk identification method according to the embodiment of the present application, and as shown in fig. 1, the application scenario may include a product display 11, a camera 12, and an electronic device 13. The product display 11 refers to any type of place that can be used for displaying products, and may be, for example, a shop counter or a store of a merchant. A camera 12 may be disposed within the product display 11 for capturing video data within the product display 11. The electronic device 13 may acquire video data collected by the camera 12 and identify a pedestrian at risk of theft based on the video data.
At present, a thief is discovered by manually backtracking a monitoring video, however, the method has the problems of high labor cost, long time consumption and low efficiency.
In order to solve the technical problems that a thief has to be found by manually reviewing surveillance video, with high labor cost, long time consumption, and low efficiency, in the embodiments of the present application the motion trajectory of a pedestrian within the commodity display place can be determined based on the collected video data of the commodity display place. Based on the motion trajectory of the pedestrian, it is determined whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display place, the target behavior comprising the behaviors involved in a preset first behavior data condition and a preset second behavior data condition. Pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition are then identified based on the determination result. In this way, pedestrians with a theft risk in the commodity display place are found automatically from the video data of the commodity display place, which shortens the time needed to find a thief, reduces labor cost, and improves efficiency.
It should be noted that fig. 1 uses the camera 12 as an example of the device that captures video data in the product display. It is understood that, in other embodiments, when the electronic device 13 has an image capturing function, the electronic device 13 may itself capture the video data in the product display.
It should be noted that the method provided by the embodiments of the present application may be applied to retail scenarios such as supermarkets, convenience stores, and brand stores, and may be used for risk identification in these scenarios.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments and features of the embodiments described below can be combined with one another without conflict.
Fig. 2 is a schematic flowchart of a risk identification method according to an embodiment of the present application, where the embodiment may be applied to the electronic device 13 in fig. 1, and may be specifically executed by a processor of the electronic device 13. As shown in fig. 2, the method of this embodiment may include:
step 21, acquiring the collected video data in the commodity display place;
step 22, determining the motion track of the pedestrian in the commodity display place based on the video data;
step 23, determining whether behavior data of target behaviors exist in the behavior data of the pedestrian in the commodity display place or not based on the motion trail of the pedestrian, wherein the target behaviors comprise behaviors related to a preset first behavior data condition and a preset second behavior data condition;
based on the determination result, the pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition is identified, step 24.
In the embodiments of the present application, as preparation, shooting devices may be installed in the commodity display place, or the positions of the existing shooting devices in the commodity display place may be adjusted, so that the shooting devices can cover all the scenes that need to be captured in the commodity display place, and an electronic device may be installed to acquire and analyze the videos of all the shooting devices. The number of shooting devices may be one or more; a shooting device may be, for example, a camera or a video camera; the commodity display place may be, for example, a supermarket; and the electronic device may be, for example, a computer device.
In the embodiments of the present application, after the video data in the commodity display place is obtained, the motion trajectory of a pedestrian in the commodity display place can be determined based on the video data. The motion trajectory describes the positions the pedestrian passes through in the whole commodity display place; it may be composed of a plurality of track points arranged in sequence, each track point describing a time point and a position.
When a single shooting device is deployed in the commodity display place, the video data collected by that shooting device is processed with a pedestrian tracking algorithm, and the motion trajectory of each pedestrian in the commodity display place can be obtained directly.
When multiple shooting devices are deployed in the commodity display place, the video data collected by each shooting device can first be processed with a pedestrian tracking algorithm to obtain the motion trajectories of pedestrians under that shooting device, and the trajectories of the same pedestrian under different shooting devices can then be combined to obtain the pedestrian's motion trajectory in the whole commodity display place. It should be noted that, for the specific implementation of the pedestrian tracking algorithm and of combining the trajectories of the same pedestrian under different shooting devices, reference may be made to the prior art, and details are not repeated herein.
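By way of a non-limiting illustration, the following Python sketch shows one possible way to represent track points and to combine per-camera tracks into a single trajectory per pedestrian. The data layout, field names, and the assumption that a separate association step supplies the mapping from per-camera track IDs to pedestrian IDs are assumptions made for illustration and are not part of the claimed method.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TrackPoint:
    timestamp: float  # time point of the track point, in seconds
    x: float          # position, e.g. on the store floor plan
    y: float
    camera_id: str    # shooting device that observed this point

@dataclass
class Trajectory:
    pedestrian_id: str        # global pedestrian ID after cross-camera association
    points: List[TrackPoint]  # track points ordered by timestamp

def merge_camera_tracks(camera_tracks: Dict[str, List[TrackPoint]],
                        same_person: Dict[str, str]) -> List[Trajectory]:
    """Combine per-camera tracks into one trajectory per pedestrian.

    camera_tracks: per-camera track ID -> list of track points
    same_person:   per-camera track ID -> global pedestrian ID, produced by
                   a re-identification / association step not shown here
    """
    merged: Dict[str, List[TrackPoint]] = {}
    for track_id, points in camera_tracks.items():
        merged.setdefault(same_person[track_id], []).extend(points)
    return [Trajectory(pid, sorted(pts, key=lambda p: p.timestamp))
            for pid, pts in merged.items()]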
It should be noted that, in a possible case, the motion trajectories of some pedestrians in the merchandise display are complete trajectories, that is, they fully describe the pedestrian's path from entering the merchandise display to leaving it, while the motion trajectories of other pedestrians may be incomplete.
In the embodiments of the present application, after the motion trajectory of a pedestrian in the product display is determined, whether behavior data of a target behavior exists in the behavior data of the pedestrian in the product display can be determined based on the motion trajectory. The target behavior may include the behaviors involved in the first behavior data condition and the second behavior data condition; the first behavior data condition describes behavior data characteristics specific to a thief in the product display, and the second behavior data condition describes behavior data characteristics specific to a worker in the product display. The worker may be, for example, a shop assistant or supplier staff working in the product display.
In one embodiment, step 23 may specifically be implemented by determining whether the pedestrian exhibits the target behavior in the product display, that is: determining, based on the pedestrian's motion trajectory, whether the pedestrian exhibits the target behavior in the merchandise display. If the pedestrian exhibits the target behavior in the commodity display place, this indicates that behavior data of the target behavior exists in the pedestrian's behavior data in the commodity display place; if the pedestrian does not exhibit the target behavior in the commodity display place, this indicates that no behavior data of the target behavior exists in the pedestrian's behavior data in the commodity display place.
In one embodiment, considering that the checkout behavior of a pedestrian in a merchandise display is not easily detected directly, the first behavior data condition may include the presence of behavior data of a pickup behavior and the absence of behavior data of a behavior considered to be checked out, so that a pedestrian at theft risk who has picked up merchandise but has not checked out can be detected. In this case, the target behavior may include the pickup behavior and the behavior considered to be checked out, where the behavior considered to be checked out may depend on the layout of the merchandise display.
For example, behavior data of a behavior considered to be checked out may include one or more of the following: behavior data of a behavior of operating a self-service checkout machine, behavior data of a behavior of passing through a manual checkout counter, or behavior data of a behavior of passing through the self-service checkout area with a stay duration less than a first duration threshold. The pickup behavior, the behavior of operating the self-service checkout machine, the behavior of passing through the manual checkout counter, the behavior of passing through the self-service checkout area, and the stay duration in the self-service checkout area can all be recognized from the trajectory by computer vision algorithms; for the specific implementation, reference may be made to the prior art, and details are not repeated herein.
In the case where a self-service checkout area is provided in the merchandise display, the behavior data regarded as the behavior of having settled may include behavior data of a behavior of passing through the self-service checkout area and having a stay time period shorter than a first time period threshold value, which may be 1 minute, for example. In the case where a self-service cash register is provided within the merchandise display, but a self-service cash register area is not provided, the behavior data of the behavior considered to be checkout may include behavior data of the behavior of operating the self-service cash register. Where an artificial cash register is provided within the merchandise display, the behavioral data that is considered to be a checked-out behavior may include behavioral data of a behavior of passing through the artificial cash register.
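As a minimal sketch of how the layout-dependent determination described above might be expressed, the following Python fragment uses illustrative event names and a first duration threshold of 60 seconds; none of these identifiers come from the patent itself and they are assumptions made only for illustration.

def considered_checked_out(events, layout, first_duration_threshold=60.0):
    """Decide whether a pedestrian's events contain a behavior considered
    to be checked out, given which checkout facilities exist in the site.

    events: list of (event_name, stay_duration_seconds) extracted from the
            pedestrian's trajectory, e.g. ("passed_self_checkout_area", 45.0);
            duration is 0.0 for events where it is not meaningful
    layout: set of facility names present in the site, e.g.
            {"self_checkout_area", "self_checkout_machine", "manual_counter"}
    """
    for name, duration in events:
        # Self-service checkout area provided: a pass through the area with a
        # stay shorter than the first duration threshold counts, as described above.
        if ("self_checkout_area" in layout and name == "passed_self_checkout_area"
                and duration < first_duration_threshold):
            return True
        # Self-service checkout machine provided (no dedicated area): operating it counts.
        if "self_checkout_machine" in layout and name == "operated_self_checkout":
            return True
        # Manual checkout counter provided: passing through it counts.
        if "manual_counter" in layout and name == "passed_manual_counter":
            return True
    return False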
In one embodiment, considering that in real life one person may pick up goods while a companion checks out for them, the first behavior data condition may further include that the pedestrian's companions also have no behavior data of a behavior considered to be checked out. In this way, pedestrians who shop together in the commodity display place are taken into account during recognition, false identification of the case where one person shops and another checks out is reduced, and the false recognition rate is lowered.
In one embodiment, where an employee lane and a customer lane are provided in the product display, the second behavior data condition may include the presence of behavior data of a behavior of passing through the employee lane and the absence of behavior data of a behavior of passing through the customer lane, so that workers who pass through the employee lane but not the customer lane are excluded. In this case, the target behavior may include the behavior of passing through the employee lane and the behavior of passing through the customer lane.
In the embodiment of the present application, after the determination result is obtained, a pedestrian that satisfies the first behavior data condition but does not satisfy the second behavior data condition may be identified based on the determination result. It should be understood that a pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition may be considered as a pedestrian at risk of theft.
It should be noted that the first behavior data condition may comprise multiple sub-conditions; a pedestrian is considered to satisfy the first behavior data condition only when the pedestrian's behavior data satisfies all of them. The second behavior data condition may likewise comprise multiple sub-conditions; a pedestrian is considered not to satisfy the second behavior data condition when the pedestrian's behavior data satisfies none of them.
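The all-of / none-of logic described above can be sketched in Python as follows; the predicate and label names are assumptions chosen for illustration and are not defined by the patent.

def satisfies_first_condition(behaviors, first_sub_conditions):
    # All sub-conditions of the first behavior data condition must hold.
    return all(cond(behaviors) for cond in first_sub_conditions)

def satisfies_second_condition(behaviors, second_sub_conditions):
    # The second behavior data condition holds if any of its sub-conditions holds.
    return any(cond(behaviors) for cond in second_sub_conditions)

def at_theft_risk(behaviors, first_sub_conditions, second_sub_conditions):
    """A pedestrian is flagged when the first condition is fully satisfied
    and the second condition is not satisfied at all."""
    return (satisfies_first_condition(behaviors, first_sub_conditions)
            and not satisfies_second_condition(behaviors, second_sub_conditions))

# Illustrative sub-conditions mirroring the examples in the text
first_sub_conditions = [
    lambda b: "pickup" in b,                      # picked up merchandise
    lambda b: "considered_checked_out" not in b,  # no checkout-like behavior
]
second_sub_conditions = [
    lambda b: "through_staff_lane" in b and "through_customer_lane" not in b,
]

Grouping the sub-conditions this way keeps the thief-specific evidence conjunctive, while any single staff-specific cue is enough to exclude a pedestrian.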
In one embodiment, the second behavior data condition may include the presence of behavior data of a behavior of staying in the display for longer than a second duration threshold, and/or the presence of behavior data of a behavior of still being in the display before it opens for business or after it closes. The second duration threshold may be, for example, 3 hours, and the time range during which a pedestrian stays in the merchandise display can be determined from the pedestrian's motion trajectory. In this case, the target behavior may include the behavior of staying in the display for longer than the second duration threshold and/or the behavior of still being in the display before it opens or after it closes.
In one embodiment, the behavior data of pedestrians whose motion trajectories are incomplete may be left out of the judgment, so as to avoid false identification caused by incomplete trajectories, which helps reduce the false recognition rate. On this basis, the target behavior may further include the behavior of passing through an entrance of the commodity display place and the behavior of passing through an exit of the commodity display place. Identifying, based on the determination result, the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition may then specifically include: identifying, based on the determination result, at least some pedestrians for whom both behavior data of a behavior of passing through an entrance of the merchandise display and behavior data of a behavior of passing through an exit of the merchandise display exist; and identifying, based on the determination result, the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition from among those pedestrians.
In one embodiment, the first behavior data condition may further include the presence of behavior data of a behavior considered to be carrying merchandise, in which case the target behavior may further include the behavior considered to be carrying merchandise. For example, behavior data of a behavior considered to be carrying merchandise may include one or more of the following: behavior data of a behavior of putting merchandise into a bag in the merchandise display, behavior data of a behavior of putting merchandise onto the body in the merchandise display, behavior data of having a bag on the body when leaving the merchandise display, or behavior data of having merchandise in a shopping cart when leaving the merchandise display.
Optionally, the method provided in the embodiments of the present application may further include: outputting alarm information for prompting the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition, so that staff can promptly learn that pedestrians with possible theft behavior exist and carry out a manual review. The specific manner of outputting the alarm information is not limited in the present application.
Alternatively, in another embodiment, the behavior of carrying merchandise may be treated as an additional consideration beyond the behaviors involved in the first behavior data condition, so that the degree of suspicion of persons who may have committed theft can be differentiated. On this basis, the target behavior may also include the behavior considered to be carrying merchandise. After identifying, based on the determination result, the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition, the method may further include: determining, based on the determination result, the pedestrians for whom behavior data of the behavior considered to be carrying merchandise exists among the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition. It is understood that a pedestrian for whom such behavior data exists has a higher degree of risk than a pedestrian for whom it does not.
Optionally, the method provided in the embodiments of the present application may further include: outputting alarm information for prompting the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition, in an order in which pedestrians with behavior data of the behavior considered to be carrying merchandise come first and pedestrians without such behavior data come after, so that staff can review the pedestrians with the higher degree of risk first.
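A minimal sketch of such an ordering is given below; the behavior labels standing in for the carrying behaviors are illustrative assumptions.

CARRYING_LABELS = {"goods_into_bag", "goods_onto_body", "bag_on_exit", "goods_in_cart_on_exit"}

def order_alerts(flagged):
    """Order flagged pedestrians so that those with behavior data of a
    carrying behavior are listed first for manual review.

    flagged: list of dicts such as
             {"pedestrian_id": "p17", "behaviors": {"pickup", "goods_into_bag"}}
    """
    def has_carrying_behavior(p):
        return bool(CARRYING_LABELS & p["behaviors"])
    # True sorts before False when reverse=True, so carrying pedestrians come first.
    return sorted(flagged, key=has_carrying_behavior, reverse=True)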
According to the risk identification method provided by this embodiment, the motion trajectory of a pedestrian in the commodity display place is determined based on the collected video data of the commodity display place; whether behavior data of the target behavior exists in the behavior data of the pedestrian in the commodity display place is determined based on the motion trajectory, the target behavior comprising the behaviors involved in a preset first behavior data condition and a preset second behavior data condition; and the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition are identified based on the determination result. Pedestrians with a theft risk in the commodity display place are thus found automatically from the video data of the commodity display place, which shortens the time needed to find a thief, reduces labor cost, and improves efficiency.
Taking a supermarket as an example of the product display, in one embodiment the overall processing flow of the risk identification method may include the four steps shown in fig. 3: step 31, system deployment; step 32, whole-trajectory pedestrian tracking; step 33, behavior recognition; and step 34, theft decision, in which pedestrians with a theft risk are also risk-graded. Step 31 can be understood as preparation, and steps 32, 33, and 34 can be implemented by three functional modules respectively.
Because the people in a real supermarket are diverse, the persons output by step 34 are not necessarily thieves, and a manual review may therefore also be performed. Whether the pedestrians with a theft risk output in step 34 have actually committed theft can finally be checked manually, and interception or further measures can then be taken.
Step 31, system deployment
The system needs to be deployed before running. Deployment may include installing cameras, or adjusting the positions of existing cameras, so that the cameras can cover all scenes in the supermarket and the video resolution and shooting directions satisfy the requirements of the computer vision algorithms, and installing a computer that can acquire and analyze the videos of all the cameras.
In addition, the content of each camera view can be annotated. An annotation specifies a particular region of the view and its meaning. The regions to be annotated may include: the self-checkout area, the manual checkout counters, the staff lane, the customer lane, and the shelf positions. The annotated self-checkout area, manual checkout counters, staff lane, and customer lane are used to recognize the behavior of pedestrians passing through those regions, and the annotated shelf positions are used to recognize pickup behavior.
Step 32, tracking the whole track of the pedestrian
In this step, the positions a pedestrian passes through between entering and leaving the supermarket are reconstructed. The computer acquires the videos of all cameras in the supermarket, detects all persons in each view, and tracks the motion trajectory of each person within the view. The computer then combines the motion trajectories of the same customer across different cameras, obtaining the complete motion trajectory of each pedestrian from entering the supermarket to leaving it.
Step 33, behavior recognition
The behavior recognition performed in this step may include pickup recognition, companion recognition, entry-area recognition, self-checkout-machine operation recognition, and recognition of other actions.
Pickup recognition is an action recognition algorithm running on the computer that judges, from the annotated shelf positions and the behavior characteristics of the person in the view, whether the pedestrian takes merchandise from a shelf. For example, a human skeleton motion model can be generated from the human body images obtained by detecting the images along the pedestrian's motion trajectory, where the skeleton can be represented by key points and connecting lines, and the pickup action can then be recognized from the distance between the skeleton and the annotated shelf and from whether the person is holding merchandise. Pickup recognition can also be implemented in other ways to obtain more accurate results, such as with depth cameras or weighing shelves, but at higher cost.
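A highly simplified sketch of the skeleton-to-shelf distance check follows, assuming a pose-estimation model has already produced named keypoints; the joint names, threshold, and vertex-based distance are illustrative assumptions rather than the method actually claimed.

import math

def detect_pickup(keypoints, shelf_polygon, reach_threshold=40.0):
    """Simplified pickup check: does a wrist keypoint of the detected skeleton
    come close enough to the annotated shelf region?

    keypoints:     dict mapping joint name -> (x, y) image coordinates,
                   e.g. output of a pose-estimation model
    shelf_polygon: list of (x, y) vertices of the annotated shelf area
    """
    def distance_to_shelf(point):
        # Distance to the nearest shelf vertex; a real system would measure the
        # distance to the polygon edges or use a learned action classifier.
        return min(math.hypot(point[0] - vx, point[1] - vy) for vx, vy in shelf_polygon)

    return any(joint in keypoints and distance_to_shelf(keypoints[joint]) < reach_threshold
               for joint in ("left_wrist", "right_wrist"))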
Companion recognition refers to identifying people who are shopping together in the supermarket, who may check out together or have one person check out on behalf of the others. For example, companion recognition may be performed on the motion trajectories of all pedestrians to obtain groups of pedestrians, and the pedestrians in the same group can be considered companions of one another.
Entry-area recognition refers to recognizing that a pedestrian enters a region annotated at system deployment, for example a customer entering the self-checkout area or a clerk passing through the staff lane. Entry-area recognition may be performed on the motion trajectory of a pedestrian to determine whether the pedestrian enters an annotated region (when the pedestrian enters an annotated region, a corresponding entry event can be generated) and the corresponding entry and exit time points.
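One possible way to derive such entry and exit intervals from a trajectory and an annotated polygon is a point-in-polygon test over the track points, as sketched below; this is a simplification made for illustration, not a description of the deployed algorithm.

def region_visits(trajectory_points, polygon):
    """Return (enter_time, leave_time) intervals during which the trajectory
    lies inside an annotated polygon, using a ray-casting point-in-polygon test.

    trajectory_points: list of (timestamp, x, y), ordered by timestamp
    polygon:           list of (x, y) vertices of the annotated region
    """
    def inside(x, y):
        crossings = 0
        for i in range(len(polygon)):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % len(polygon)]
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                crossings += 1
        return crossings % 2 == 1

    visits, entered_at = [], None
    for t, x, y in trajectory_points:
        if inside(x, y):
            if entered_at is None:
                entered_at = t              # entry event
        elif entered_at is not None:
            visits.append((entered_at, t))  # exit event
            entered_at = None
    if entered_at is not None:              # still inside at the last track point
        visits.append((entered_at, trajectory_points[-1][0]))
    return visits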
Self-checkout-machine operation recognition identifies the action of a pedestrian operating a self-checkout machine. For example, action recognition may be performed on the human body images detected along the pedestrian's motion trajectory to determine whether the pedestrian operates the self-checkout machine.
Recognition of other actions may include: putting merchandise into a bag or pouch, putting merchandise into clothing, and having merchandise in a shopping cart. For example, action recognition may be performed on the human body images detected along a pedestrian's motion trajectory to determine whether the pedestrian puts merchandise into a bag or pouch.
Step 34, theft action decision
In this step, whether a pedestrian is a thief is evaluated from the pedestrian's motion trajectory and behavior in the supermarket, and pedestrians are ranked by how suspicious their behavior is.
Basic features of a thief: picked up merchandise; did not pass through a manual checkout counter, did not pass through the self-checkout area, and did not operate a self-checkout machine; has events of entering and leaving the supermarket; in addition, the pedestrian's companions also did not pass through a manual checkout counter, did not pass through the self-checkout area, and did not operate a self-checkout machine. The events of entering and leaving the supermarket are used to exclude incomplete motion trajectories.
Excluding staff: the behavior of clerks, suppliers, supervisors, and other staff in the supermarket differs from that of ordinary customers and may be misjudged as suspected theft. Staff can be excluded with the following logic: passes through the staff lane but not the customer lane; or stays in the supermarket for more than 3 hours; or appears in the supermarket before it opens for business or after it closes. A staff recognition algorithm can also be used to distinguish staff from customers.
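A minimal sketch of this staff-exclusion logic is shown below; the opening and closing times and the behavior labels are assumed values used only for illustration.

def looks_like_staff(first_seen, last_seen, behaviors,
                     opening=8 * 3600, closing=22 * 3600,
                     max_customer_stay=3 * 3600):
    """Heuristic staff exclusion mirroring the three rules above.

    first_seen, last_seen: time of day (in seconds) of the first and last
                           track points of the pedestrian's trajectory
    behaviors:             set of behavior labels detected for this pedestrian
    """
    staff_lane_only = ("through_staff_lane" in behaviors
                       and "through_customer_lane" not in behaviors)
    stayed_too_long = (last_seen - first_seen) > max_customer_stay
    outside_business_hours = first_seen < opening or last_seen > closing
    return staff_lane_only or stayed_too_long or outside_business_hours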
On the premise of meeting the basic characteristics of the thief and excluding the clerk, a more suspicious thief can be further determined according to the behavior characteristics of the thief.
The behaviors used to further determine a more suspicious thief may include: putting merchandise into a bag or pouch in the supermarket, putting merchandise onto the body in the supermarket, having a bag on the body when leaving the supermarket, or having merchandise in the shopping cart when leaving the supermarket. It should be noted that satisfying any one of these four conditions may be enough to judge the pedestrian as highly suspected of theft, and the clerk may review such pedestrians first.
It should be noted that theft may also occur after a pedestrian enters the self-checkout area or operates the self-checkout machine (for example missed scans, false scans, or bagging items directly); theft at the checkout stage is not the concern of this embodiment. This embodiment mainly focuses on the case where a person takes merchandise directly from the sales floor and carries it out of the supermarket without paying.
In this embodiment, only ordinary surveillance cameras are used as the hardware, and the only requirement is that the pedestrian's pickup of merchandise be clearly visible. Theft is discovered through behavior logic, including theft in which merchandise is hidden directly on the body, in clothing, or in a bag and carried out of the supermarket through a no-purchase lane or another non-payment channel. No special shelf structure needs to be built, so the implementation and maintenance costs are low and deployment is convenient.
Fig. 4 is a schematic structural diagram of a risk identification apparatus according to an embodiment of the present application. Referring to fig. 4, this embodiment provides a risk identification apparatus that can execute the risk identification method provided in the foregoing embodiments. Specifically, the apparatus may include:
an obtaining module 41, configured to obtain video data in a commodity display place;
a first determining module 42, configured to determine a motion trajectory of each pedestrian within the merchandise display based on the video data;
a second determining module 43, configured to determine whether behavior data of a target behavior exists in behavior data of a pedestrian in the merchandise display based on a motion trajectory of the pedestrian, where the target behavior includes behaviors involved in a preset first behavior data condition and a preset second behavior data condition;
and the identification module 44 is used for identifying the pedestrians which meet the first behavior data condition but do not meet the second behavior data condition based on the determination result.
In a possible implementation, the second determining module 43 is specifically configured to: determine, based on the pedestrian's motion trajectory, whether the pedestrian exhibits the target behavior in the commodity display place; where the pedestrian exhibiting the target behavior in the commodity display place indicates that behavior data of the target behavior exists in the pedestrian's behavior data in the commodity display place, and the pedestrian not exhibiting the target behavior in the commodity display place indicates that no behavior data of the target behavior exists in the pedestrian's behavior data in the commodity display place.
In one possible implementation, the first behavior data condition includes the presence of behavior data for pickup behavior and the absence of behavior data for behavior considered checked out.
In one possible implementation, the behavior data for what is considered to be a checked-out behavior includes one or more of: the behavior data of the behavior of operating the self-service cash register, the behavior data of the behavior of passing through the manual cash register desk, or the behavior data of the behavior of passing through the self-service cash register area and having the stay time less than the first time threshold.
In one possible implementation, the first behavior data condition further includes that the peer does not have behavior data for what is considered checked-out behavior.
In one possible implementation, the second behavioral data condition includes the presence of behavioral data for a behavior through the employee channel and the absence of behavioral data for a behavior through the customer channel.
In one possible implementation, the second behavior data condition includes the presence of behavior data of a behavior of staying in the display for longer than a second duration threshold, and/or the presence of behavior data of a behavior of still being in the display before it opens for business or after it closes.
In one possible implementation, the second behavior data condition further includes the presence of behavior data considered as a behavior of carrying the article.
In one possible implementation, the behavior data considered to be a behavior of carrying merchandise includes one or more of the following: behavior data of a behavior of putting merchandise into a bag in the merchandise display, behavior data of a behavior of putting merchandise onto the body in the merchandise display, behavior data of having a bag on the body when leaving the merchandise display, or behavior data of having merchandise in a shopping cart when leaving the merchandise display.
In a possible implementation manner, the apparatus provided in this embodiment may further include a first warning module, configured to output warning information for prompting a pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition.
In one possible implementation, the target behavior further includes the behavior considered to be carrying merchandise; the identification module 44 is further configured to: determine, based on the determination result, the pedestrians for whom behavior data of the behavior considered to be carrying merchandise exists among the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition.
In a possible implementation, the apparatus provided in this embodiment may further include a second warning module, configured to output warning information for prompting the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition, in an order in which pedestrians with behavior data of the behavior considered to be carrying merchandise come first and pedestrians without such behavior data come after.
In a possible implementation manner, the identifying module 44 is specifically configured to: identifying at least a portion of the pedestrians for which there is behavior data of a behavior passing through an entrance at the merchandise display and behavior data of a behavior passing through an exit at the merchandise display based on the determination result; based on the determination result, pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition are identified from among at least some pedestrians.
The apparatus shown in fig. 4 can execute the method provided by the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part not described in detail in this embodiment. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
In one possible implementation, the structure of the apparatus shown in fig. 4 may be implemented as an electronic device. As shown in fig. 5, the electronic device may include: a processor 51 and a memory 52. The memory 52 stores a program that enables the electronic device to execute the method provided in the embodiment shown in fig. 2, and the processor 51 is configured to execute the program stored in the memory 52.
The program comprises one or more computer instructions which, when executed by the processor 51, are capable of performing the steps of:
acquiring collected video data in a commodity display place;
determining a trajectory of a pedestrian's movement within the merchandise display based on the video data;
determining whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display place based on the motion trail of the pedestrian, wherein the target behavior comprises behaviors involved in a first behavior data condition and a second behavior data condition which are preset;
based on the determination result, a pedestrian that satisfies the first behavior data condition but does not satisfy the second behavior data condition is identified.
Optionally, the processor 51 is further configured to perform all or part of the steps in the embodiment shown in fig. 2.
The electronic device may further include a communication interface 53 for communicating with other devices or a communication network.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the method according to the embodiment shown in fig. 2 is implemented.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented with the addition of a necessary general hardware platform, or of course by a combination of hardware and software. Based on this understanding, the above technical solutions, or the part thereof contributing to the prior art, may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement the information storage by any method or technology. The information may be computer readable instructions, linked lists, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A method for risk identification, comprising:
acquiring video data in a commodity display place;
determining a trajectory of a pedestrian's movement within the merchandise display based on the video data;
determining whether behavior data of a target behavior exists in the behavior data of the pedestrian in the commodity display based on the motion trail of the pedestrian, wherein the target behavior comprises behaviors related to a preset first behavior data condition and a preset second behavior data condition, the first behavior data condition is used for describing behavior data characteristics specific to a thief in the commodity display, and the second behavior data condition is used for describing behavior data characteristics specific to a worker in the commodity display;
based on the determination result, a pedestrian that satisfies the first behavior data condition but does not satisfy the second behavior data condition is identified.
2. The method of claim 1, wherein determining whether behavior data of a target behavior exists in the behavior data of the pedestrian within the merchandise display based on the motion trajectory of the pedestrian comprises:
determining whether the pedestrian has the target behavior within the merchandise display based on the pedestrian's motion profile; the presence of the target behavior of the pedestrian in the merchandise display place indicates the presence of behavior data of the target behavior in the behavior data of the pedestrian in the merchandise display place, and the absence of the target behavior of the pedestrian in the merchandise display place indicates the absence of the behavior data of the target behavior in the behavior data of the pedestrian in the merchandise display place.
3. The method of claim 1, wherein the first behavior data condition comprises the presence of behavior data for a pickup behavior and the absence of behavior data for a behavior considered to be checked out.
4. The method of claim 3, wherein the behavior data for behaviors considered to be checked-out includes one or more of: the behavior data of the behavior of operating the self-service cash register, the behavior data of the behavior of passing through the manual cash register desk, or the behavior data of the behavior of passing through the self-service cash register area and having the stay time less than the first time threshold.
5. The method of claim 3, wherein the first behavior data condition further comprises the absence of behavior data for the peer for the behavior considered checked-out.
6. The method of claim 1, wherein the second behavioral data condition comprises presence of behavioral data for behavior through a staff corridor and absence of behavioral data for behavior through a customer corridor.
7. The method of claim 1, wherein the second behavior data condition comprises behavior data of a behavior of staying within the display for longer than a second duration threshold and/or behavior data of a behavior of being within the display before the display opens for business or after it closes.
8. The method of claim 3, wherein the second behavior data condition further comprises the presence of behavior data that is considered a behavior for carrying merchandise.
9. The method of claim 8, wherein the behavior data considered to be a behavior of carrying merchandise comprises one or more of the following: behavior data of a behavior of putting merchandise into a bag in the merchandise display, behavior data of a behavior of putting merchandise onto the body in the merchandise display, behavior data of having a bag on the body when leaving the merchandise display, or behavior data of having merchandise in a shopping cart when leaving the merchandise display.
10. The method according to any one of claims 1-8, further comprising: outputting alarm information for prompting pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition.
11. The method according to any of claims 3-7, wherein the target behavior further comprises a behavior that is considered to be carrying merchandise; after the identifying, based on the determination result, the pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition, the method further includes:
determining, based on the determination result, the pedestrians for whom behavior data of the behavior regarded as carrying merchandise exists, from among the pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition.
12. The method of claim 11, further comprising: outputting alarm information for prompting pedestrians who satisfy the first behavior data condition but do not satisfy the second behavior data condition, in an order in which pedestrians having behavior data of the behavior regarded as carrying merchandise come first and pedestrians not having such behavior data come after.
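Claims 10-12 describe alarm output, with pedestrians showing carrying-merchandise behavior prompted first. One plausible way to express that ordering is a stable sort on a boolean key, as in the illustrative snippet below; the identifiers and the `has_carrying_behavior` mapping are made up for the example.

```python
def order_alarms(risky_pedestrian_ids, has_carrying_behavior):
    # Pedestrians with carrying-merchandise behavior data come first;
    # Python's sort is stable, so ties keep their original order.
    return sorted(risky_pedestrian_ids,
                  key=lambda pid: not has_carrying_behavior.get(pid, False))

# Illustrative usage with made-up identifiers:
print(order_alarms(["p1", "p2", "p3"], {"p2": True}))   # -> ['p2', 'p1', 'p3']
```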
13. The method of claim 1, wherein the target behavior further comprises a behavior of passing through an entrance of the merchandise display place and a behavior of passing through an exit of the merchandise display place, and wherein identifying, based on the determination result, a pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition comprises:
identifying, based on the determination result, at least some pedestrians for whom behavior data of the behavior of passing through the entrance of the merchandise display place and behavior data of the behavior of passing through the exit of the merchandise display place exist;
based on the determination result, a pedestrian that satisfies the first behavior data condition but does not satisfy the second behavior data condition is identified from among the at least some pedestrians.
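Claim 13 narrows the check to pedestrians whose behavior data shows both an entrance pass and an exit pass before the two conditions are applied. A minimal sketch follows, with assumed label names and the condition predicates passed in as callables.

```python
def entered_and_exited(behaviors) -> bool:
    # Pre-filter from claim 13: behavior data for passing both the entrance and the exit.
    return "pass_entrance" in behaviors and "pass_exit" in behaviors

def identify_from_completed_visits(records, satisfies_first, satisfies_second):
    # Apply the condition check only to pedestrians who completed an entrance-to-exit visit.
    candidates = [r for r in records if entered_and_exited(r["behaviors"])]
    return [r for r in candidates
            if satisfies_first(r["behaviors"]) and not satisfies_second(r["behaviors"])]
```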
14. A risk identification device, comprising:
an acquisition module for acquiring collected video data within the merchandise display place;
a first determination module for determining a motion trajectory of a pedestrian within the merchandise display place based on the video data;
a second determination module for determining whether behavior data of a target behavior exists in the behavior data of the pedestrian within the merchandise display place based on the motion trajectory of the pedestrian, wherein the target behavior comprises behaviors related to a preset first behavior data condition and a preset second behavior data condition, the first behavior data condition being used for describing behavior data characteristics specific to a thief in the merchandise display place, and the second behavior data condition being used for describing behavior data characteristics specific to a worker in the merchandise display place;
an identification module for identifying a pedestrian who satisfies the first behavior data condition but does not satisfy the second behavior data condition based on a determination result.
15. An electronic device, comprising: a memory, a processor; wherein the memory stores one or more computer instructions that, when executed by the processor, implement the method of any of claims 1-13.
16. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the method of any one of claims 1 to 13.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211490715.0A CN115546900B (en) 2022-11-25 2022-11-25 Risk identification method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115546900A (en) 2022-12-30
CN115546900B (en) 2023-03-31

Family

ID=84722307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211490715.0A Active CN115546900B (en) 2022-11-25 2022-11-25 Risk identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115546900B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241872A (en) * 2018-11-28 2020-06-05 杭州海康威视数字技术股份有限公司 Video image shielding method and device
CN115294525A (en) * 2022-08-06 2022-11-04 深圳进化动力数码科技有限公司 Market commodity supervisory systems

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018136731A (en) * 2017-02-22 2018-08-30 東芝テック株式会社 Information processing device and program
CN111263224B (en) * 2018-11-30 2022-07-15 阿里巴巴集团控股有限公司 Video processing method and device and electronic equipment
CN112257487A (en) * 2020-05-29 2021-01-22 北京沃东天骏信息技术有限公司 Identification method, equipment, security system and storage medium
CN114360182B (en) * 2020-09-27 2024-02-27 腾讯科技(深圳)有限公司 Intelligent alarm method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US11676387B2 (en) Method and apparatus for detecting suspicious activity using video analysis
JP5054670B2 (en) Method and apparatus for detecting suspicious behavior using video analysis
US7516888B1 (en) Method and apparatus for auditing transaction activity in retail and other environments using visual recognition
US9355308B2 (en) Auditing video analytics through essence generation
US8448858B1 (en) Method and apparatus for detecting suspicious activity using video analysis from alternative camera viewpoint
CN102881100B (en) Entity StoreFront anti-thefting monitoring method based on video analysis
US20050102183A1 (en) Monitoring system and method based on information prior to the point of sale
WO2002059836A2 (en) Monitoring responses to visual stimuli
JP5673888B1 (en) Information notification program and information processing apparatus
CN111263224A (en) Video processing method and device and electronic equipment
CN111260685B (en) Video processing method and device and electronic equipment
Rajpurkar et al. Alert generation on detection of suspicious activity using transfer learning
JP4159572B2 (en) Abnormality notification device and abnormality notification method
EP4053812A1 (en) Method and apparatus for the detection of suspicious behaviour in a retail environment
CN115546900B (en) Risk identification method, device, equipment and storage medium
CN115641548A (en) Abnormality detection method, apparatus, device and storage medium
CN115546703B (en) Risk identification method, device and equipment for self-service cash register and storage medium
WO2019151068A1 (en) Information processing method, information processing device, and recording medium
CN115565117B (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231219

Address after: Room 801-6, No. 528 Yan'an Road, Gongshu District, Hangzhou City, Zhejiang Province, 310000

Patentee after: Zhejiang Shenxiang Intelligent Technology Co.,Ltd.

Address before: Room 5034, building 3, 820 wenerxi Road, Xihu District, Hangzhou, Zhejiang 310000

Patentee before: ZHEJIANG LIANHE TECHNOLOGY Co.,Ltd.
