CN111985440B - Intelligent auditing method and device and electronic equipment - Google Patents

Intelligent auditing method and device and electronic equipment

Info

Publication number
CN111985440B
CN111985440B (application CN202010897100.4A)
Authority
CN
China
Prior art keywords
user
target
commodity
image
suspected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010897100.4A
Other languages
Chinese (zh)
Other versions
CN111985440A (en)
Inventor
邹明杰
张天琦
戴华东
程浩
朱皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010897100.4A
Publication of CN111985440A
Application granted
Publication of CN111985440B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0623 Item investigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0633 Lists, e.g. purchase orders, compilation or processing
    • G06Q30/0635 Processing of requisition or of purchase orders
    • G06Q30/0637 Approvals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides an intelligent auditing method and apparatus, and an electronic device. By introducing an intelligent auditing mechanism into a designated area such as an unmanned supermarket, anomalies can be corrected in time and the target data for correcting each anomaly obtained, so that the commodity order subsequently generated from the target data is accurate and the frictionless-payment shopping experience is improved.

Description

Intelligent auditing method and device and electronic equipment
Technical Field
The present application relates to image processing technologies, and in particular, to an intelligent auditing method and apparatus, and an electronic device.
Background
With the development of image recognition and payment technologies, frictionless payment is being applied more and more widely. Taking an unmanned supermarket as an example, after picking up goods a user can leave directly, without queuing to pay and settle; the intelligent system of the unmanned supermarket determines, based on an intelligent algorithm, which goods the user purchased and triggers settlement and deduction. Such frictionless payment is highly convenient for users.
However, in frictionless-payment applications the intelligent system often encounters anomalies during recognition, such as failing to bind a user newly entering a designated area (an unmanned supermarket, an unmanned warehouse, and the like are collectively referred to as designated areas) to a registered payment account, failing to associate a user with a commodity, or failing to determine the category, quantity, and other parameters of the commodities the user takes. These anomalies make the commodity order ultimately generated for the user in the designated area (e.g., an unmanned supermarket) inaccurate.
Disclosure of Invention
The embodiments of the application provide an intelligent auditing method and apparatus, and an electronic device, for ensuring the accuracy of commodity orders.
The technical solution provided by the application is as follows:
An intelligent auditing method, the method comprising:
acquiring the event type carried by a monitored auditing service request, wherein the auditing service request is triggered when an intelligent system configured in a designated area detects an anomaly while processing at least one acquired image based on a configured intelligent algorithm, and the event type characterizes the anomaly;
determining, according to the event type, an auditing policy matching the event type; and
auditing, according to the auditing policy, the alarm data carried by the auditing service request to obtain target data for correcting the anomaly.
Optionally, the event type is a first type indicating that a newly generated target user track failed to be associated with any known user ID, a known user ID being allocated to each user upon entering the designated area; the alarm data comprises the target user track and image information corresponding to the at least one acquired image; and the auditing policy is a user-track auditing policy.
The auditing, according to the auditing policy, of the alarm data carried by the auditing service request to obtain the target data for correcting the anomaly comprises the following steps:
extracting, from the at least one acquired image corresponding to the image information, a user image area corresponding to the target user track; and
determining, from at least one known user ID, a first target user ID matching the target user track according to the user image area corresponding to the target user track and obtained user image areas corresponding to the at least one known user ID.
Optionally, the determining, from the at least one known user ID, of the first target user ID matching the target user track comprises:
inputting the user image area corresponding to the target user track into a trained first convolutional neural network to obtain a target human feature model;
inputting the obtained user image areas corresponding to the at least one known user ID into the first convolutional neural network to obtain a candidate human feature model corresponding to each known user ID; and
determining, according to the similarity between the target human feature model and each candidate human feature model, the known user ID corresponding to one of the candidate human feature models as the first target user ID.
Optionally, the event type is a second type indicating a person-goods association anomaly; the alarm data comprises a target time, a target shelf slot, a target behavior, and commodity information of a target commodity, wherein the target shelf slot is the shelf slot where the target commodity is located, the target time is the time at which the target behavior is performed on the target commodity, and the target behavior is taking or putting back; and the auditing policy is a person-goods association auditing policy.
The auditing, according to the auditing policy, of the alarm data carried by the auditing service request to obtain the target data for correcting the anomaly comprises the following steps:
obtaining at least one first video image acquired by at least one designated acquisition device before the target time and having a first duration, and obtaining at least one second video image acquired by the at least one designated acquisition device after the target time and having a second duration, wherein a designated acquisition device is an acquisition device whose field of view at least contains the target shelf slot; and
determining, according to the obtained at least one first video image and at least one second video image, a second target user ID identifying who performed the target behavior on the target commodity.
Optionally, the determining, according to the obtained at least one first video image and at least one second video image, of the second target user ID that performed the target behavior on the target commodity comprises:
extracting a human body image sequence of each suspected user from the at least one first video image and the at least one second video image respectively, wherein a suspected user is a user located in a set area in front of the target shelf slot, and the human body image sequence of a suspected user consists of the image areas of that user in the first video images and in the second video images; determining, according to the human body image sequence of each suspected user, a behavior analysis result corresponding to that suspected user; and, if the behavior analysis result corresponding to one suspected user matches the target behavior, determining the user ID corresponding to that suspected user as the second target user ID; or,
extracting at least one first image area from the at least one first video image and at least one second image area from the at least one second video image, the first and second image areas corresponding to the set area in front of the target shelf slot; determining, according to the at least one first image area and the at least one second image area, a held-commodity analysis result corresponding to each suspected user before and after the target time, wherein a suspected user is a user in the set area in front of the target shelf slot; and, if the held-commodity analysis result corresponding to one suspected user matches the target behavior and the target commodity information, determining the user ID corresponding to that suspected user as the second target user ID.
Optionally, the determining, according to the obtained at least one first video image and at least one second video image, of the second target user ID that performed the target behavior on the target commodity comprises:
extracting a human body image sequence of each suspected user from the at least one first video image and the at least one second video image respectively; determining, according to the human body image sequence of each suspected user, a behavior analysis result corresponding to that suspected user; and, if the behavior analysis result corresponding to one suspected user matches the target behavior, determining the user ID corresponding to that suspected user as a first candidate user ID;
extracting at least one first image area from the at least one first video image and at least one second image area from the at least one second video image, the first and second image areas corresponding to the set area in front of the target shelf slot; determining, according to the at least one first image area and the at least one second image area, a held-commodity analysis result corresponding to each suspected user before and after the target time; and, if the held-commodity analysis result corresponding to one suspected user matches the target behavior and the target commodity information, determining the user ID corresponding to that suspected user as a second candidate user ID; and
if the first candidate user ID is consistent with the second candidate user ID, determining the second target user ID as the first candidate user ID (equivalently, the second candidate user ID).
Optionally, the determining, according to the human body image sequence of each suspected user, of the behavior analysis result corresponding to each suspected user comprises:
inputting the human body image sequence of each suspected user into a trained second convolutional neural network to obtain the behavior analysis result corresponding to that suspected user.
Optionally, the determining, according to the at least one first image area and the at least one second image area, of the held-commodity analysis result corresponding to each suspected user before and after the target time comprises:
inputting the at least one first image area into a trained third convolutional neural network to obtain at least one piece of first commodity information, the first commodity information at least comprising a correspondence of commodity category, commodity quantity, and commodity position; and determining, according to the hand track corresponding to an obtained suspected user ID, the first commodity information associated with that suspected user from the at least one piece of first commodity information;
inputting the at least one second image area into the third convolutional neural network to obtain at least one piece of second commodity information, the second commodity information at least comprising a correspondence of commodity category, commodity quantity, and commodity position; and determining, according to the hand track corresponding to an obtained suspected user ID, the second commodity information associated with that suspected user from the at least one piece of second commodity information; and
determining, according to the first commodity information and/or the second commodity information associated with the same suspected user, the held-commodity analysis result corresponding to that suspected user before and after the target time.
Optionally, the event type is a third type indicating a commodity identification anomaly; the alarm data comprises a target time, a target shelf slot, and a target behavior, wherein the target shelf slot is the shelf slot where the target commodity is located, the target time is the time at which the target behavior is performed, and the target behavior is taking or putting back; and the auditing policy is a commodity identification auditing policy.
The auditing, according to the auditing policy, of the alarm data carried by the auditing service request to obtain the target data for correcting the anomaly comprises the following steps:
obtaining at least one third video image acquired by at least one designated acquisition device before the target time and having a third duration, and obtaining at least one fourth video image acquired by the at least one designated acquisition device after the target time and having a fourth duration, wherein a designated acquisition device is an acquisition device whose field of view at least contains the target shelf slot; and
determining, according to the at least one third video image and the at least one fourth video image, commodity change information of commodities of the same category before and after the target time, and determining, according to the commodity change information, target commodity information of the target commodity on which the target behavior was performed, the target commodity information at least comprising a target commodity category and a target commodity quantity.
Optionally, the determining, according to the at least one third video image and the at least one fourth video image, of the commodity change information of commodities of the same category before and after the target time comprises:
extracting a third image area from the at least one third video image, the third image area corresponding to the set area in front of the target shelf slot, and inputting the at least one third image area into a trained fourth convolutional neural network to obtain third commodity information, the third commodity information at least comprising a correspondence between commodity category and commodity quantity;
extracting a fourth image area from the at least one fourth video image, the fourth image area corresponding to the set area in front of the target shelf slot, and inputting the at least one fourth image area into the fourth convolutional neural network to obtain fourth commodity information, the fourth commodity information at least comprising a correspondence between commodity category and commodity quantity; and
determining, according to at least one piece of third commodity information and/or at least one piece of fourth commodity information, the commodity change information of commodities of the same category before and after the target time.
The embodiment of the application provides an intelligent auditing device, comprising:
an obtaining unit, configured to obtain the event type carried by a monitored auditing service request, wherein the auditing service request is triggered when an intelligent system configured in a designated area detects an anomaly while processing at least one acquired image based on a configured intelligent algorithm, and the event type characterizes the anomaly;
a determining unit, configured to determine, according to the event type, an auditing policy matching the event type; and
an auditing unit, configured to audit, according to the auditing policy, the alarm data carried by the auditing service request to obtain target data for correcting the anomaly.
The embodiment of the application provides an electronic device, comprising a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor; and
the processor is configured to execute the machine-executable instructions to perform the method steps disclosed above.
According to the technical solution above, by introducing the intelligent auditing mechanism into a designated area such as an unmanned supermarket, anomalies can be corrected in time and the target data for correcting each anomaly obtained, so that a commodity order can subsequently be generated from the target data, the accuracy of the commodity order is ensured, and the frictionless-payment shopping experience is improved.
Furthermore, introducing the intelligent auditing mechanism into designated areas such as unmanned supermarkets removes the limitation that the intelligent system configured in such areas can only handle ideal, anomaly-free shopping behavior.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart of a method according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of step 103 provided in Embodiment 1 of the present application;
FIG. 3 is a flowchart of an implementation of step 202 provided in Embodiment 1 of the present application;
FIG. 4 is a flowchart of an implementation of step 103 provided in Embodiment 2 of the present application;
FIG. 5 is a flowchart of a first implementation of step 403 provided in Embodiment 2 of the present application;
FIG. 6 is a flowchart of a second implementation of step 403 provided in Embodiment 2 of the present application;
FIG. 7 is a flowchart of an implementation of determining the held-commodity analysis result in step 603 provided in Embodiment 2 of the present application;
FIG. 8 is a flowchart of a third implementation of step 403 provided in Embodiment 2 of the present application;
FIG. 9 is a flowchart of an implementation of step 103 provided in Embodiment 3 of the present application;
FIG. 10 is a flowchart of determining commodity change information in step 903 provided in Embodiment 3 of the present application;
FIG. 11 is a block diagram of a device according to an embodiment of the present application;
FIG. 12 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
To solve the problem of inaccurate commodity orders caused by the anomalies described in the Background, the application provides an intelligent auditing mechanism through which abnormal events are resolved so as to ensure the accuracy of commodity orders. To better understand the technical solution provided by the embodiments of the application, and to make the above objects, features, and advantages more apparent, the technical solution is described in further detail below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of a method provided in an embodiment of the present application. Optionally, this embodiment may be applied to an electronic device running the intelligent auditing mechanism described above; to distinguish it from the original intelligent system of the designated area, this electronic device may be referred to as the intelligent auditing system. As an example, the designated area may be an unmanned supermarket, an unmanned warehouse, or the like, which this embodiment does not specifically limit.
As shown in FIG. 1, the process may include the following steps:
Step 101: obtain the event type carried by a monitored auditing service request. The auditing service request is triggered when the intelligent system configured in the designated area detects an anomaly while processing at least one acquired image based on a configured intelligent algorithm; the event type characterizes the anomaly.
In one example, when the intelligent system configured in the designated area obtains at least one image (denoted an acquired image), it processes the acquired image based on the configured intelligent algorithm (for example, if the intelligent algorithm is a commodity identification algorithm, a commodity is identified from the acquired image), and then generates a corresponding commodity order from the processing result. However, the intelligent system may encounter anomalies while processing the acquired images. As an example, there are many types of anomalies: a user newly entering the designated area cannot be associated with a registered payment account; a commodity in the designated area matches several suspected users (a person-goods association failure, for short); a newly generated user track cannot be associated with any assigned user ID; the category, quantity, or other parameters of the commodities taken by a user cannot be identified; and so on.
If such an anomaly is not corrected, the subsequent commodity order is affected. For example, when a user newly entering the designated area cannot be associated with a registered payment account, the generated commodity order cannot be settled; when a commodity in the designated area matches several suspected users, an erroneous commodity order may be generated for one of them.
Based on this, in this embodiment, once the intelligent system encounters an anomaly while processing the acquired images, it triggers an auditing service request, which the intelligent auditing system eventually receives; this yields the monitored auditing service request in step 101. As described in step 101, after obtaining the auditing service request, the intelligent auditing system extracts from it the event type it carries. The event type characterizes the anomaly.
Step 102: determine, according to the event type, an auditing policy matching the event type.
In this embodiment, different anomalies have different auditing policies. Based on this, step 102 can determine the matching auditing policy from the event type obtained in step 101; examples are given below and are not repeated here.
Step 103: audit, according to the auditing policy, the alarm data carried by the auditing service request to obtain target data for correcting the anomaly.
Once step 102 has determined the auditing policy, step 103 can run that policy to audit the alarm data carried by the auditing service request and obtain the target data for correcting the anomaly. The alarm data is the data associated with the anomaly, such as the image information of the video images the intelligent system was processing when the anomaly was detected. The alarm data, and how step 103 audits it according to an auditing policy to obtain the target data, are described by example below and not detailed here. An illustrative dispatch sketch follows.
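As an illustration only, the dispatch in steps 101 to 103 could look like the following minimal Python sketch; the event names, request fields, and handler stubs are assumptions for illustration, not the patent's actual interface.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class AuditServiceRequest:
    event_type: str             # characterizes the anomaly (first/second/third type)
    alarm_data: Dict[str, Any]  # data associated with the anomaly

# Handler stubs standing in for the per-type auditing policies described below.
def audit_user_track(alarm_data: Dict[str, Any]) -> Any: ...
def audit_person_goods(alarm_data: Dict[str, Any]) -> Any: ...
def audit_goods_identification(alarm_data: Dict[str, Any]) -> Any: ...

# Step 102: each event type maps to one auditing policy (illustrative names).
AUDIT_POLICIES: Dict[str, Callable[[Dict[str, Any]], Any]] = {
    "track_association_failed": audit_user_track,               # first type
    "person_goods_association_failed": audit_person_goods,      # second type
    "goods_identification_failed": audit_goods_identification,  # third type
}

def handle_request(request: AuditServiceRequest) -> Any:
    policy = AUDIT_POLICIES[request.event_type]  # step 102: match the policy
    return policy(request.alarm_data)            # step 103: obtain target data
```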
This completes the flow shown in FIG. 1.
As can be seen from the flow shown in FIG. 1, by introducing the intelligent auditing mechanism of FIG. 1 into a designated area such as an unmanned supermarket, this embodiment corrects anomalies in time and obtains the target data for correcting them; a commodity order can then be generated from the target data, which ensures the accuracy of the commodity order and improves the frictionless-payment shopping experience.
Further, introducing the intelligent auditing mechanism of FIG. 1 into designated areas such as unmanned supermarkets removes the limitation that the intelligent system configured in such areas can only handle ideal, anomaly-free shopping behavior.
The flow shown in FIG. 1 is described below through three embodiments:
Embodiment 1:
As applied to Embodiment 1, the event type described above is a first type. Optionally, the first type indicates that a newly generated user track (denoted the target user track) failed to be associated with the user ID (denoted a known user ID) allocated to any user upon entering the designated area.
In Embodiment 1, every user is allocated a corresponding user ID (a known user ID) upon entering the designated area; the ID identifies the user and is associated with the user's track (the user track) within the designated area. Multiple acquisition devices (such as binocular cameras) are installed in the designated area to fully cover every location in it and to track each user who enters. During tracking, the anomaly above (denoted the first anomaly) often occurs: a newly generated user track (the target user track) fails to be associated with the known user ID allocated to any user on entering the designated area. The first anomaly can arise as follows: if tracking is based on faces, face snapshot recognition may fail for several reasons, such as the tracked user lowering their head or the head being occluded, which in turn causes the first anomaly; if tracking is full-field target tracking, a tracking interruption may cause it. This embodiment does not specifically limit the cause of the first anomaly; it is mainly aimed at how, once the first anomaly has occurred, to audit intelligently so as to obtain the target data for correcting it.
Optionally, in Embodiment 1, how step 103 audits the alarm data carried by the auditing service request according to the auditing policy to obtain the target data for correcting the anomaly is shown in the flow of FIG. 2.
Referring to FIG. 2, FIG. 2 is a flowchart of an implementation of step 103 provided in Embodiment 1 of the present application. As applied to the first anomaly, the auditing policy here is the user-track auditing policy; correspondingly, the flow shown in FIG. 2 is performed based on that policy.
In addition, as applied to the first anomaly, the alarm data in the flow of FIG. 2 is alarm data indicating the first anomaly. Optionally, the alarm data may include: image information and the target user track. The image information describes the at least one acquired image, for example the acquisition time point of each acquired image and the device identifier of the acquisition device that captured it; its ultimate purpose is to allow the acquired images to be obtained from it.
As shown in FIG. 2, the process may include the following steps:
Step 201: extract, from the at least one acquired image corresponding to the image information, the user image area corresponding to the target user track.
Taking image information that includes a device identifier and an acquisition time point as an example, step 201 can look up the corresponding acquisition device from the device identifier and then, from the acquisition time point, retrieve the images that device acquired at that time, thereby finding the at least one acquired image corresponding to the image information. Once found, the user image area corresponding to the target user track is extracted from the at least one acquired image, as sketched below.
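A hypothetical sketch of that lookup, assuming the acquired images are archived in a store keyed by device identifier and acquisition time point; the archive and all names are illustrative assumptions, not the patent's interface.

```python
from typing import Dict, List, Tuple

# Hypothetical frame archive: (device identifier, acquisition time point) -> image.
IMAGE_ARCHIVE: Dict[Tuple[str, float], object] = {}

def lookup_acquired_images(image_info: List[Tuple[str, float]]) -> List[object]:
    """Return the acquired images named by (device id, time point) pairs."""
    return [IMAGE_ARCHIVE[key] for key in image_info if key in IMAGE_ARCHIVE]
```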
Optionally, in this embodiment, the user image area corresponding to the target user track is the image area, within the at least one acquired image, occupied by the user at a specified position; the specified position may be one of the positions in the target user track.
Step 202: determine, from at least one known user ID, a first target user ID matching the target user track according to the user image area corresponding to the target user track and the obtained user image areas corresponding to the at least one known user ID.
Optionally, the user image areas corresponding to a known user ID may be: the image areas occupied by the user corresponding to that known user ID in all acquired images (captured by all acquisition devices installed in the designated area).
Through step 202, one of the allocated known user IDs (the first target user ID) can finally be matched to the target user track, and by means of the first target user ID the target user track can be joined to the user track originally associated with that ID, forming the complete track of the corresponding user within the designated area. As applied to the flow of FIG. 2, the first target user ID is the target data for correcting the first anomaly.
There are many ways to implement how step 202 of Embodiment 1 determines, from the at least one known user ID, the first target user ID matching the target user track according to the user image area corresponding to the target user track and the obtained user image areas corresponding to the at least one known user ID; FIG. 3 below illustrates one of them.
referring to fig. 3, fig. 3 is a flowchart of step 202 provided in embodiment 1 of the present application. As shown in fig. 3, the process may include the steps of:
Step 301: input the user image area corresponding to the target user track into a trained first convolutional neural network to obtain a target human feature model.
The target human feature model here corresponds to the target user track.
Optionally, in this embodiment the first convolutional neural network is mainly used for feature modeling. There are many possible implementations, for example a network such as InceptionNet, which this embodiment does not specifically limit. How the first convolutional neural network models features is similar to existing feature-modeling approaches and is not detailed here.
Step 302: input the obtained user image areas corresponding to at least one known user ID into the first convolutional neural network to obtain a candidate human feature model corresponding to each known user ID.
In the application, each known user ID has a corresponding user track (a candidate user track); the candidate human feature model corresponding to each known user ID can therefore also be regarded as the candidate human feature model corresponding to each candidate user track.
Step 303: determine, according to the similarity between the target human feature model and each candidate human feature model, the known user ID corresponding to one of the candidate human feature models as the first target user ID.
Optionally, step 303 may compute a similarity (such as cosine similarity) between the target human feature model and each candidate human feature model and then select one candidate model accordingly. Optionally, the similarity between the selected candidate human feature model and the target human feature model satisfies the condition of being greater than or equal to a set similarity threshold (e.g., 0.9). The known user ID corresponding to the selected candidate human feature model is then determined as the first target user ID; a minimal sketch follows.
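A minimal sketch of steps 301 to 303, assuming the first convolutional neural network is wrapped as an `embed` function that maps user image areas to a feature vector; the 0.9 threshold follows the text, everything else is an illustrative assumption.

```python
import numpy as np

SIM_THRESHOLD = 0.9  # the set similarity threshold from the text

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_track(embed, target_regions, known_regions_by_id):
    """Pick the known user ID whose candidate human feature model is most
    similar to the target human feature model, or None (-> manual audit)."""
    target_model = embed(target_regions)       # step 301
    best_id, best_sim = None, SIM_THRESHOLD
    for user_id, regions in known_regions_by_id.items():
        candidate = embed(regions)             # step 302
        sim = cosine(target_model, candidate)  # step 303
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```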
This completes the flow shown in FIG. 3.
The flow shown in FIG. 3 realizes how step 202 determines, from the at least one known user ID, the first target user ID matching the target user track according to the user image area corresponding to the target user track and the obtained user image areas corresponding to the at least one known user ID. Note that the flow of FIG. 3 is only one specific implementation of step 202 and is not limiting.
Embodiment 1 has been described above. Note that in Embodiment 1, if no first target user ID matching the target user track can finally be determined, the auditing service request may optionally be forwarded to an auditing client to trigger a manual audit, and an accurate commodity order is then generated based on the manual audit result.
Example 2:
As applied to Embodiment 2, the event type described above is a second type. Optionally, the second type indicates a person-goods association anomaly. In Embodiment 2, when a target behavior (taking or putting back) is performed on a commodity (denoted the target commodity) and the intelligent system configured in the designated area tries to determine which user performed it, it may be unable to accurately determine the final target user; this person-goods association anomaly is denoted the second anomaly.
In Embodiment 2, how step 103 audits the alarm data carried by the auditing service request according to the auditing policy to obtain the target data for correcting the anomaly can be described with reference to the flow shown in FIG. 4.
Referring to FIG. 4, FIG. 4 is a flowchart of an implementation of step 103 provided in Embodiment 2 of the present application. As applied to the second anomaly, the auditing policy here is the person-goods association auditing policy; correspondingly, the flow shown in FIG. 4 is performed based on that policy.
In addition, as applied to the second anomaly, the alarm data in the flow of FIG. 4 is alarm data indicating the second anomaly. Optionally, the alarm data may include: the target time, the target shelf slot, the target behavior, and the commodity information of the target commodity. Here, the target shelf slot is the shelf slot where the target commodity is located; the target time is the time at which the target behavior was performed on the target commodity; and the target behavior is taking or putting back. The commodity information of the target commodity may include its commodity category, commodity quantity, and the like. Optionally, the target time, target shelf slot, target behavior, and commodity information of the target commodity may be obtained from the monitoring information of a gravity sensor fitted to the target shelf slot; the specific monitoring manner is similar to existing gravity-sensor commodity monitoring and is not repeated here.
As shown in FIG. 4, the process may include the following steps:
Step 401: obtain at least one first video image acquired by at least one designated acquisition device before the target time and having a first duration.
Here, a designated acquisition device is an acquisition device whose field of view at least contains the target shelf slot.
Optionally, in Embodiment 2 the field of view of every installed acquisition device (such as a binocular camera) in the designated area is stored in advance. Based on this, the at least one designated acquisition device whose field of view contains the target shelf slot can be found, and the first video image acquired by it before the target time and having the first duration can then be obtained from the device itself or from a storage medium dedicated to storing the video it records.
Step 401 yields the first video images preceding the target time. Note that when the number of designated acquisition devices is greater than 1, the number of first video images obtained may also be greater than 1; that is, several first video images may appear.
Step 402: obtain at least one second video image acquired by the designated acquisition device after the target time and having a second duration.
Similarly to step 401, the second video image acquired by the at least one designated acquisition device after the target time and having the second duration can be obtained from the device or from the dedicated storage medium. Optionally, the second duration may be the same as or different from the first duration.
Step 402 yields the second video images following the target time; as with step 401, several second video images may appear when more than one designated acquisition device is involved. A sketch of both steps follows.
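Under assumed data structures, steps 401 and 402 could be sketched as follows: each device's field of view is pre-stored as the set of shelf slots it covers, and recorded video can be sliced by time range. `fetch_clip` is a placeholder for the archive access, not a real API.

```python
from typing import Dict, List, Set, Tuple

FIELD_OF_VIEW: Dict[str, Set[str]] = {}  # device id -> shelf slots it covers

def designated_devices(slot_id: str) -> List[str]:
    """Devices whose field of view at least contains the target shelf slot."""
    return [dev for dev, slots in FIELD_OF_VIEW.items() if slot_id in slots]

def fetch_clip(device_id: str, start: float, end: float) -> List[object]:
    """Placeholder: read archived frames of device_id in [start, end)."""
    raise NotImplementedError

def collect_video_images(slot_id: str, t: float, d1: float, d2: float
                         ) -> Tuple[List[List[object]], List[List[object]]]:
    """Step 401: clips of duration d1 before t; step 402: d2 after t."""
    first, second = [], []
    for dev in designated_devices(slot_id):
        first.append(fetch_clip(dev, t - d1, t))   # first video images
        second.append(fetch_clip(dev, t, t + d2))  # second video images
    return first, second
```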
Step 403: determine, according to the obtained at least one first video image and at least one second video image, the second target user ID that performed the target behavior on the target commodity.
Here, the second target user ID is the target data described above.
Through the flow shown in FIG. 4, it can finally be determined which user (the user corresponding to the second target user ID) performed the target behavior on the target commodity. The person-goods association is thereby established, and the corresponding commodity order can then be generated from it for settlement.
Optionally, there are many ways in which step 403 can determine, from the obtained at least one first video image and at least one second video image, the second target user ID that performed the target behavior on the target commodity; three of them are described below by example.
Mode 1:
Mode 1 uses behavior analysis to implement the determination, in step 403, of the second target user ID that performed the target behavior on the target commodity; see the flow shown in FIG. 5.
Referring to FIG. 5, FIG. 5 is a flowchart of the first implementation of step 403 provided in Embodiment 2 of the present application. As shown in FIG. 5, the process may include the following steps:
Step 501: extract a human body image sequence of each suspected user from the at least one first video image and the at least one second video image.
Here, a suspected user is a user who, in the first and second video images, is within a designated region corresponding to the set area in front of the target shelf slot. Correspondingly, the human body image sequence of a suspected user consists of that user's image areas in the first video images and in the second video images.
Step 502: determine, according to the human body image sequence of each suspected user, the behavior analysis result corresponding to that suspected user; if the behavior analysis result corresponding to one suspected user matches the target behavior, determine the user ID corresponding to that suspected user as the second target user ID.
Optionally, determining the behavior analysis results in step 502 may include: inputting the human body image sequence of each suspected user into a trained second convolutional neural network to obtain the behavior analysis result corresponding to that user. The second convolutional neural network produces each result based on a behavior analysis algorithm; in a concrete implementation it may be a three-dimensional convolutional network (C3D: 3D Convolutional Networks). How the second convolutional neural network applies the behavior analysis algorithm is similar to existing approaches and is not detailed here.
Optionally, in this embodiment, when the behavior analysis result corresponding to each suspected user is determined from the human body image sequences, the confidence of that result is also determined; for example, the second convolutional neural network may output, together with each behavior analysis result, its confidence. Based on this, before step 502 determines the user ID corresponding to a suspected user as the second target user ID, it may further check whether the confidence of that user's behavior analysis result is greater than or equal to a set threshold, e.g., 0.9; only if so is the user ID determined as the second target user ID, as sketched below.
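A sketch of steps 501 and 502 with the confidence check, assuming the second convolutional neural network (e.g., a C3D-style model) is wrapped as a `classify` callable returning an action label and a confidence; all names and types here are illustrative.

```python
from typing import Callable, Dict, List, Optional, Tuple

CONF_THRESHOLD = 0.9  # example confidence threshold from the text

def find_second_target_user(
    classify: Callable[[List[object]], Tuple[str, float]],
    sequences: Dict[str, List[object]],  # suspected user ID -> human image sequence
    target_behavior: str,                # "take" or "put_back"
) -> Optional[str]:
    for user_id, sequence in sequences.items():
        action, confidence = classify(sequence)  # behavior analysis result
        if action == target_behavior and confidence >= CONF_THRESHOLD:
            return user_id                       # second target user ID
    return None  # no confident match; e.g., fall back to manual audit
```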
This completes the flow shown in FIG. 5.
The flow shown in FIG. 5 finally realizes the determination, in step 403, of the second target user ID that performed the target behavior on the target commodity from the obtained at least one first video image and at least one second video image.
Mode 2:
in this mode 2, the second target user ID for executing the target behavior on the target commodity is determined in the step 403 by using handheld commodity analysis, and specifically, the flow shown in fig. 6 may be referred to as follows:
Referring to FIG. 6, FIG. 6 is a flowchart of the second implementation of step 403 provided in Embodiment 2 of the present application. As shown in FIG. 6, the process may include the following steps:
Step 601: extract a first image area from the at least one first video image, the first image area corresponding to the set area in front of the target shelf slot.
In step 601, the first image area is the image area in a first video image corresponding to the set area in front of the target shelf slot. When the number of first video images is greater than 1, the number of first image areas may also be greater than 1.
Step 602: extract a second image area from the at least one second video image, the second image area corresponding to the set area in front of the target shelf slot.
In step 602, the second image area is the image area in a second video image corresponding to the set area in front of the target shelf slot. When the number of second video images is greater than 1, the number of second image areas may also be greater than 1.
Step 603: determine, according to the at least one first image area and the at least one second image area, the held-commodity analysis result corresponding to each suspected user before and after the target time, a suspected user being a user in the set area in front of the target shelf slot; if the held-commodity analysis result corresponding to one suspected user matches the target behavior and the target commodity information, determine the user ID corresponding to that suspected user as the second target user ID.
Since the first video images were acquired before the target time, a first image area extracted from them shows the set area in front of the target shelf slot before the target time; likewise, since the second video images were acquired after the target time, a second image area shows that area after the target time. From these two views, the held-commodity analysis result corresponding to each suspected user before and after the target time can readily be determined. How step 603 does so from the at least one first image area and the at least one second image area is illustrated by the flow shown in FIG. 7.
Referring to FIG. 7, FIG. 7 is a flowchart of an implementation of determining the held-commodity analysis result in step 603 provided in Embodiment 2 of the present application. As shown in FIG. 7, the process may include the following steps:
Step 701: input the at least one first image area into a trained third convolutional neural network to obtain at least one piece of first commodity information, and determine, according to the hand track corresponding to each obtained suspected user ID, the first commodity information associated with that suspected user from the at least one piece of first commodity information.
Here, the third convolutional neural network performs commodity detection based on an algorithm suitable for commodity detection, such as Faster R-CNN or InceptionNet, to obtain the commodity information; the specific detection manner is similar to existing commodity detection and is not detailed here. The first commodity information at least includes a correspondence of commodity category, commodity quantity, and commodity position. Optionally, the commodity position may be represented by the position of the commodity's bounding rectangle. When the number of first image areas is greater than 1, the number of pieces of first commodity information may also be greater than 1.
As described above, a first image area is the image area in a first video image corresponding to the set area in front of the target shelf slot; the first commodity information obtained in step 701 is therefore the commodity information held by suspected users in that area before the target time. Since the first commodity information concerns commodities held in a suspected user's hand, the first commodity information associated with a suspected user can be determined from the at least one piece of first commodity information according to the hand track corresponding to that user's ID. Optionally, if the commodity position in a piece of first commodity information coincides with one of the positions in the hand track corresponding to a suspected user ID, or if the distance between that commodity position and the hand track of one suspected user ID is the smallest, the first commodity information may be considered associated with that suspected user ID (equivalently, with the corresponding suspected user). This realizes the determination, in step 701, of the first commodity information associated with each suspected user; a sketch of the association rule follows.
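The association rule of step 701 could be sketched as below, linking a detected commodity to the suspected user whose hand track contains, or passes closest to, the commodity position; the point-based data shapes are assumptions for illustration.

```python
import math
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def associate_commodity(
    commodity_pos: Point,                 # e.g., center of the bounding rectangle
    hand_tracks: Dict[str, List[Point]],  # suspected user ID -> hand track
) -> Optional[str]:
    best_id, best_dist = None, math.inf
    for user_id, track in hand_tracks.items():
        if not track:
            continue
        if commodity_pos in track:        # commodity position lies on the track
            return user_id
        dist = min(math.dist(commodity_pos, p) for p in track)
        if dist < best_dist:              # otherwise keep the closest track
            best_id, best_dist = user_id, dist
    return best_id
```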
Step 702: input the at least one second image area into the third convolutional neural network to obtain at least one piece of second commodity information, and determine, according to the hand track corresponding to each obtained suspected user ID, the second commodity information associated with that suspected user from the at least one piece of second commodity information.
Step 702 is similar to step 701 and is not repeated.
Step 703: determine, according to the first commodity information and/or the second commodity information associated with the same suspected user, the held-commodity analysis result corresponding to that suspected user before and after the target time.
For any suspected user, there may be no held-commodity information before the target time (no associated first commodity information) but held-commodity information after it (associated second commodity information). In that case, the held-commodity analysis result for that user before and after the target time is determined from the associated second commodity information; for example, the result includes the action (taking), the commodity category (that in the associated second commodity information), the commodity quantity (likewise), and so on. The symmetric case, with first commodity information only, would correspond to putting back. A sketch of this before/after comparison follows.
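A sketch of the before/after comparison in step 703, assuming the commodity information associated with a user is reduced to per-category counts; the count representation is an assumption, not the patent's data model.

```python
from typing import Dict, Iterator, Tuple

def held_commodity_change(
    before: Dict[str, int],  # category -> quantity held before the target time
    after: Dict[str, int],   # category -> quantity held after the target time
) -> Iterator[Tuple[str, str, int]]:
    """Yield (category, action, quantity) for each category that changed."""
    for category in set(before) | set(after):
        delta = after.get(category, 0) - before.get(category, 0)
        if delta > 0:
            yield category, "take", delta        # more in hand after: taken
        elif delta < 0:
            yield category, "put_back", -delta   # fewer in hand after: returned
```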
Thus, the flow shown in fig. 7 implements step 603: determining, from the at least one first image area and the at least one second image area, the handheld commodity analysis result corresponding to each suspected user before and after the target time.
After the handheld commodity analysis results corresponding to each suspected user before and after the target time are determined, step 603 proceeds as follows: if the handheld commodity analysis result of a suspected user matches the target behavior and the target commodity information (for example, the action behavior in the analysis result is consistent with the target behavior, and the commodity category and commodity number in the analysis result are consistent with the target commodity information), the analysis result is considered matched, and the user ID corresponding to that suspected user may be determined to be the second target user ID.
Optionally, in this embodiment, when the handheld commodity analysis result corresponding to each suspected user before and after the target time is determined from the at least one first image area and the at least one second image area, a confidence coefficient of each analysis result is also determined. Based on this, in step 603, before the user ID corresponding to a suspected user is determined to be the second target user ID, it may further be checked whether the confidence coefficient of that suspected user's handheld commodity analysis result is greater than or equal to a set threshold, for example 0.9; the user ID is determined to be the second target user ID only if it is. Optionally, the confidence coefficient of the handheld commodity analysis result may be calculated as follows: obtain the confidence coefficient of the first commodity information and/or the second commodity information used to determine the analysis result (the third convolutional neural network outputs these confidence coefficients together with the commodity information), then derive the analysis result's confidence coefficient from them. For example, when one confidence coefficient is obtained, it is used directly as the confidence coefficient of the analysis result; when two are obtained, a set operation (such as averaging or summing) is applied to the two to obtain the confidence coefficient of the analysis result.
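The confidence combination described above might be sketched as follows, assuming averaging as the set operation and 0.9 as the set threshold; both choices are examples from the text, not fixed by it.

```python
def passes_confidence_check(confidences, threshold=0.9):
    """Combine the confidence coefficients of the commodity information behind
    one analysis result (one or two values) and compare with the set threshold.
    Averaging is used here; summing is the other setting operation mentioned."""
    if not confidences:
        return False
    combined = sum(confidences) / len(confidences)  # a single value is used as-is
    return combined >= threshold
```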
So far, the description of mode 2 is completed.
Mode 2 thus implements step 403 above: determining, from the obtained at least one first video image and at least one second video image, the second target user ID for performing the target behavior on the target commodity.
Mode 3:
this mode 3 implements the determination, in step 403, of the second target user ID for performing the target behavior on the target commodity by combining modes 1 and 2 above; specifically, see the flow shown in fig. 8.
Referring to fig. 8, fig. 8 is a flowchart of step 403 provided in embodiment 2 of the present application. As shown in fig. 8, the process may include the steps of:
step 801, respectively extracting a human body image sequence of each suspected user from at least one first video image and at least one second video image; the suspected user is a user positioned in a set area in front of the target goods lattice, and the human body image sequence of the suspected user consists of an image area of the suspected user in a first video image and an image area of the suspected user in a second video image; and determining a behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user, and if one behavior analysis result corresponding to each suspected user is matched with the target behavior, determining the user ID corresponding to the suspected user as a first candidate user ID.
This step 801 is referred to in the above-mentioned mode 1, and will not be described here again.
Step 802, extracting at least one first image area from at least one first video image, wherein the first image area corresponds to a set area in front of the target goods lattice; extracting a second image area from at least one second video image, wherein the second image area corresponds to a set area in front of the target goods lattice; determining the corresponding handheld commodity analysis results of each suspected user before and after the target moment according to at least one first image area and at least one second image area, wherein the suspected user is a user in a set area in front of the target goods lattice; and if the handheld commodity analysis result corresponding to the suspected user is matched with the target behavior and the target commodity information, determining the user ID corresponding to the suspected user as a second candidate user ID.
This step 802 may refer to the above-mentioned mode 2, and will not be described herein.
Step 803, if the first candidate user ID is consistent with the second candidate user ID, determining that the second target user ID is the first candidate user ID or the second candidate user ID.
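Step 803 amounts to a consistency check between the two candidate IDs; a minimal sketch, assuming a missing candidate is represented as None:

```python
def second_target_user_id(first_candidate_id, second_candidate_id):
    """Step 803: accept a second target user ID only when the mode 1 candidate
    (behavior analysis) and the mode 2 candidate (handheld commodity analysis)
    agree; None means no candidate was found by that mode."""
    if first_candidate_id is not None and first_candidate_id == second_candidate_id:
        return first_candidate_id
    return None  # no consistent ID; the caller may fall back to manual audit
```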
Thus, the flow shown in fig. 8 is completed.
Through the flow shown in fig. 8, step 403 is finally implemented: determining, from the obtained at least one first video image and at least one second video image, the second target user ID for performing the target behavior on the target commodity.
Example 2 was described above. It should be noted that, in embodiment 2, if the second target user ID for performing the target behavior on the target commodity cannot ultimately be determined, the audit service request may optionally be forwarded to the audit client to trigger a manual audit, and an accurate commodity order may finally be generated based on the manual audit result.
Example 3:
in this embodiment 3, the event type is the third type. Optionally, the third type is used to indicate a commodity identification anomaly (noted as a third anomaly). As one example, the third anomaly may be a failure to identify a commodity, an anomalous commodity-taking event, or a commodity that cannot be identified.
In this embodiment 3, how to audit the alarm data carried by the audit service request according to the audit policy in the above step 103 to obtain the target data for correcting the anomaly may be described with reference to the flow shown in fig. 9:
referring to fig. 9, fig. 9 is a flowchart for implementing the above step 103 provided in embodiment 3 of the present application. In this flow, the third exception is applied, and the audit policy corresponds to a commodity identification audit policy. Correspondingly, the flow shown in fig. 9 is performed based on the item identification audit policy.
In addition, the flow shown in fig. 9 applies to the third anomaly, and the alarm data may be alarm data indicating the third anomaly. Optionally, the alarm data may include: target time, target goods lattice, and target behavior. Here, the target goods lattice refers to the goods lattice where the target commodity is located, the target time refers to the time when the target behavior is performed on the target commodity, and the target behavior is taking or putting back. Optionally, the target time, the target goods lattice, and the target behavior may be obtained from monitoring information of a gravity sensor configured on the target goods lattice; the gravity sensor can monitor information including the target time, the target goods lattice, and the target behavior, and the specific monitoring manner is similar to the goods monitoring manner of an existing gravity sensor and is not described here again.
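For illustration only, the alarm data for the third anomaly could be modeled as a simple record such as the following; the field names and types are assumptions, not prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class ThirdAnomalyAlarmData:
    """Alarm data carried by an audit service request for the third anomaly."""
    target_time: float          # when the target behavior occurred (a timestamp)
    target_goods_lattice: str   # identifier of the goods lattice involved
    target_behavior: str        # "taking" or "putting back", from the gravity sensor
```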
As shown in fig. 9, the process may include the steps of:
step 901, obtaining at least one third video image which is acquired by the designated acquisition device before the target time and has a third duration.
This step 901 is similar to step 401 described above and will not be described again here.
Optionally, the third duration is the same as one of the first duration and the second duration, or differs from both.
Step 902, obtaining at least one fourth video image which is acquired by the designated acquisition device, is after the target time and has a fourth duration.
This step 902 is similar to step 402 described above and will not be described again here.
Optionally, the fourth duration is the same as one of the first duration, the second duration, and the third duration, or differs from all three.
Step 903, determining commodity change information of the commodity of the same commodity category before and after the target moment according to the obtained at least one third video image and at least one fourth video image, and determining target commodity information of the target commodity of the target behavior according to the commodity change information.
The target commodity information here includes at least: target commodity category and target commodity quantity. As to how to determine commodity change information of commodities of the same commodity category around the target time based on the obtained at least one third video image and at least one fourth video image, the following flow chart of fig. 10 gives an embodiment:
as shown in fig. 10, the process may include:
Step 1001, extracting a third image area from at least one third video image, where the third image area corresponds to the set area in front of the target goods lattice, and inputting the at least one third image area to a trained fourth convolutional neural network to obtain at least one piece of third commodity information.
The extraction of the third image area from the at least one third video image in step 1001 is similar to step 601 described above and is not detailed here. The fourth convolutional neural network in step 1001 may be the same as or different from the third convolutional neural network described above; its ultimate purpose is to identify the third commodity information. Optionally, the third commodity information may include: a correspondence between commodity category and commodity number.
Step 1002, extracting a fourth image area from at least one fourth video image, where the fourth image area corresponds to the set area in front of the target goods lattice, and inputting the at least one fourth image area to the fourth convolutional neural network to obtain at least one piece of fourth commodity information.
The extraction of the fourth image area from the at least one fourth video image in step 1002 is similar to step 602 described above and is not detailed here. Optionally, the fourth commodity information includes at least: a correspondence between commodity category and commodity number.
Step 1003, determining commodity change information of commodities in the same commodity category before and after the target time according to at least one third commodity information and/or at least one fourth commodity information.
For example, if there is no third commodity information corresponding to a certain commodity category (referred to as commodity category a) before the target time and there is fourth commodity information corresponding to commodity category a after the target time, then commodity category a is considered to have changed before and after the target time, and the commodity change information may be the fourth commodity information corresponding to commodity category a.
For another example, if before the target time there is third commodity information including a commodity category (referred to as commodity category b), and after the target time there is fourth commodity information including commodity category b, but the commodity number corresponding to category b in the third commodity information is K1 while the commodity number corresponding to category b in the fourth commodity information is K2, with K2 different from K1, then commodity category b is considered to have changed before and after the target time, and the commodity change information may include commodity category b and the absolute value of the difference between K2 and K1.
Through the flow shown in fig. 10, the commodity change information of commodities of the same commodity category before and after the target time is finally determined from the obtained at least one third video image and at least one fourth video image. Optionally, as described in step 903 above, once this commodity change information is determined, a corresponding action behavior may be determined from it: taking commodity category a above as an example, the action behavior may be taking (the commodity is held only after the target time); taking commodity category b as an example, if K1 is greater than K2, the action behavior may be putting back. After the action behaviors are determined, if one action behavior is consistent with the target behavior, the commodity change information corresponding to that action behavior is determined to be the target commodity information. This implements the determination, described in step 903, of the target commodity information of the target commodity on which the target behavior is performed.
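A sketch of steps 1003 and 903 combined, under the assumption (consistent with the examples above) that the detected counts are the numbers of commodities held by the user before and after the target time; all names are illustrative.

```python
def target_commodity_info(third_counts, fourth_counts, target_behavior):
    """Compare per-category counts before (third_counts) and after
    (fourth_counts) the target time, infer an action behavior per changed
    category, and return the change matching the target behavior as the
    target commodity information (category and quantity)."""
    # third_counts / fourth_counts: {commodity_category: commodity_number}
    for category in set(third_counts) | set(fourth_counts):
        k1 = third_counts.get(category, 0)   # number held before the target time
        k2 = fourth_counts.get(category, 0)  # number held after the target time
        if k1 == k2:
            continue  # no change for this category
        # More held afterwards means the user took commodities; fewer means
        # commodities were put back (category b with K1 > K2 above).
        action = "taking" if k2 > k1 else "putting back"
        if action == target_behavior:
            return {"category": category, "number": abs(k2 - k1)}
    return None  # nothing matched; forward to manual audit
```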
Example 3 was described above. It should be noted that, in embodiment 3, if the target commodity information of the target commodity on which the target behavior is performed cannot ultimately be determined, the audit service request may optionally be forwarded to the audit client to trigger a manual audit, and an accurate commodity order may finally be generated based on the manual audit result.
The method provided by the present embodiment is described above. The following describes an apparatus provided in this embodiment:
referring to fig. 11, fig. 11 is a block diagram of an apparatus according to an embodiment of the present application. As shown in fig. 11, the apparatus may include:
the obtaining unit is used for obtaining the event type carried by a monitored audit service request; the audit service request is triggered when an intelligent system configured in a designated area monitors an anomaly while processing at least one acquired image based on a configured intelligent algorithm; the event type is used for characterizing the anomaly;
the determining unit is used for determining an audit strategy matched with the event type according to the event type;
and the auditing unit is used for auditing the alarm data carried by the auditing service request according to the auditing policy so as to obtain target data for correcting the abnormality.
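Taken together, the three units amount to a policy dispatch keyed on event type; a minimal sketch follows, with all names assumed for illustration.

```python
def audit(service_request, audit_policies):
    """Skeleton of the device of fig. 11: the obtaining unit reads the event
    type, the determining unit selects the matching audit policy, and the
    auditing unit applies it to the alarm data to produce target data."""
    event_type = service_request["event_type"]        # obtaining unit
    policy = audit_policies.get(event_type)           # determining unit
    if policy is None:
        raise ValueError(f"no audit policy matches event type {event_type!r}")
    return policy(service_request["alarm_data"])      # auditing unit
```

Here audit_policies would map the first, second, and third event types to the user track, person-goods association, and commodity identification audit policies, respectively.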
Optionally, the event type is a first type, and the first type is used for indicating that the newly generated target user track fails to be associated with the known user ID allocated when any user enters a designated area; the alarm data comprises image information corresponding to the at least one acquired image and the target user track; the auditing strategy is a user track auditing strategy;
the auditing unit auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
extracting a user image area corresponding to the target user track from at least one acquired image corresponding to the image information;
and determining a first target user ID matched with the target user track from the at least one known user ID according to the user image area corresponding to the target user track and the obtained user image area corresponding to the at least one known user ID.
Optionally, the determining, according to the user image area corresponding to the target user track and the obtained user image area corresponding to the at least one known user ID, the first target user ID matched with the target user track from the at least one known user ID includes:
Inputting a user image area corresponding to the target user track into a trained first convolutional neural network to obtain a target human body characteristic model;
inputting the obtained user image area corresponding to at least one known user ID into the first convolutional neural network to obtain candidate human body feature models corresponding to the known user IDs;
and determining the known user ID corresponding to one of the candidate human body feature models as the first target user ID according to the similarity between the target human body feature model and each candidate human body feature model.
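The similarity measure is not fixed by the text; the following is a minimal sketch using cosine similarity over the feature models, where both the measure and the minimum-similarity threshold are assumptions.

```python
import numpy as np

def match_target_user_track(target_feature, candidate_features, min_similarity=0.5):
    """Match the target user track against known user IDs by comparing human
    body feature models (embeddings) with cosine similarity, keeping the best."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_sim = None, min_similarity
    for user_id, feature in candidate_features.items():
        sim = cosine(target_feature, feature)  # similarity of the two models
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id  # the first target user ID, or None if nothing is similar enough
```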
Optionally, the event type is a second type for indicating a person-goods association abnormality; the alarm data comprise target time, target goods lattice, target behavior and commodity information of target commodity; the target goods lattice is the goods lattice where the target goods is located, the target moment is the moment when the target goods are executed with the target action, and the target action is taking or putting back; the auditing strategy is a human-cargo association auditing strategy;
the auditing unit auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
Acquiring at least one first video image which is acquired by the designated acquisition equipment before the target moment and has a first duration; obtaining at least one second video image which is acquired by the designated acquisition equipment after the target moment and has a second duration; the designated acquisition equipment refers to acquisition equipment whose field of view area at least comprises the target goods lattice;
a second target user ID for performing the target action on the target commodity is determined based on the obtained at least one first video image and at least one second video image.
Optionally, the determining the second target user ID for performing the target action on the target commodity according to the obtained at least one first video image and at least one second video image includes:
extracting a human body image sequence of each suspected user from at least one first video image and at least one second video image respectively; the suspected user is a user positioned in a set area in front of the target goods lattice, and the human body image sequence of the suspected user consists of an image area of the suspected user in a first video image and an image area of the suspected user in a second video image; determining a behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user, and determining a user ID corresponding to one suspected user as the second target user ID if the behavior analysis result corresponding to the suspected user is matched with the target behavior; or,
Extracting at least one first image area from at least one first video image, wherein the first image area corresponds to a set area in front of the target goods lattice; extracting a second image area from at least one second video image, wherein the second image area corresponds to a set area in front of the target goods lattice; determining a handheld commodity analysis result corresponding to each suspected user before and after the target moment according to at least one first image area and at least one second image area, wherein the suspected user is a user in a set area in front of the target goods lattice; and if the handheld commodity analysis result corresponding to one suspected user is matched with the target behavior and the target commodity information, determining that the user ID corresponding to the suspected user is the second target user ID.
Optionally, the determining the second target user ID for performing the target action on the target commodity according to the obtained at least one first video image and at least one second video image includes:
extracting a human body image sequence of each suspected user from at least one first video image and at least one second video image respectively; the suspected user is a user positioned in a set area in front of the target goods lattice, and the human body image sequence of the suspected user consists of an image area of the suspected user in a first video image and an image area of the suspected user in a second video image; determining a behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user, and determining a user ID corresponding to one suspected user as a first candidate user ID if the behavior analysis result corresponding to the suspected user is matched with the target behavior;
Extracting at least one first image area from at least one first video image, wherein the first image area corresponds to a set area in front of the target goods lattice; extracting a second image area from at least one second video image, wherein the second image area corresponds to a set area in front of the target goods lattice; determining the corresponding handheld commodity analysis results of each suspected user before and after the target moment according to at least one first image area and at least one second image area, wherein the suspected user is a user in a set area in front of the target goods lattice; if a handheld commodity analysis result corresponding to a suspected user is matched with the target behavior and the target commodity information, determining that the user ID corresponding to the suspected user is a second candidate user ID;
and if the first candidate user ID is consistent with the second candidate user ID, determining that the second target user ID is the first candidate user ID or the second candidate user ID.
Optionally, the determining, according to the human body image sequence of each suspected user, the behavior analysis result corresponding to each suspected user includes:
and inputting the human body image sequence of each suspected user to the trained second convolutional neural network to obtain a behavior analysis result corresponding to the suspected user.
Optionally, the determining, according to the at least one first image area and the at least one second image area, a handheld commodity analysis result corresponding to each suspected user before and after the target time includes:
inputting the at least one first image area into a trained third convolutional neural network to obtain at least one first commodity information, wherein the first commodity information at least comprises: correspondence of commodity category, commodity number and commodity position; determining first commodity information associated with the suspected user from at least one first commodity information according to the hand track corresponding to the obtained suspected user ID;
inputting at least one second image area into the third convolutional neural network to obtain at least one second commodity information, wherein the second commodity information at least comprises: correspondence of commodity category, commodity number and commodity position; determining second commodity information associated with the suspected user from at least one piece of second commodity information according to the hand track corresponding to the obtained suspected user ID;
and determining the handheld commodity analysis results corresponding to the same suspected user before and after the target moment according to the first commodity information and/or the second commodity information associated with the same suspected user.
Optionally, the event type is a third type for indicating that the commodity is abnormal in identification; the alarm data comprise target time, target goods lattice and target behavior; the target goods lattice is the goods lattice where the target goods is, the target moment is the moment of executing the target behavior, and the target behavior is taking or putting back; the auditing strategy is a commodity identification auditing strategy;
the auditing unit auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
acquiring at least one third video image which is acquired by the designated acquisition equipment before the target moment and has a third duration; obtaining at least one fourth video image which is acquired by the designated acquisition equipment after the target moment and has a fourth duration; the designated acquisition equipment refers to acquisition equipment whose field of view area at least comprises the target goods lattice;
determining commodity change information of commodities in the same commodity category before and after the target moment according to at least one third video image and at least one fourth video image; determining target commodity information of a target commodity subjected to the target behavior according to commodity change information; the target commodity information includes at least: target commodity category and target commodity quantity.
Optionally, the determining, according to the at least one third video image and the at least one fourth video image, commodity change information of the commodity of the same category occurring before and after the target time includes:
extracting a third image area from at least one third video image, wherein the third image area corresponds to a set area in front of the target goods lattice; inputting at least one third image area into a trained fourth convolutional neural network to obtain third commodity information, wherein the third commodity information at least comprises: correspondence between commodity categories and commodity numbers;
extracting a fourth image area from at least one fourth video image, wherein the fourth image area corresponds to a set area in front of the target goods lattice; inputting at least one fourth image area into the fourth convolutional neural network to obtain fourth commodity information; the fourth commodity information includes at least: correspondence between commodity categories and commodity numbers;
and determining commodity change information of commodities in the same commodity category before and after the target moment according to at least one third commodity information and/or at least one fourth commodity information.
The apparatus shown in fig. 11 is thus completed.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 11. Referring to fig. 12, the hardware structure may include: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where a number of computer instructions are stored, where the computer instructions can implement the method disclosed in the above example of the present application when the computer instructions are executed by a processor.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (11)

1. An intelligent auditing method is characterized by comprising the following steps:
acquiring the event type carried by a monitored audit service request; the audit service request is triggered when an intelligent system configured in a designated area monitors an anomaly while processing at least one acquired image based on a configured intelligent algorithm; the event type is used for characterizing the anomaly;
determining an audit policy matched with the event type according to the event type;
auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality;
wherein the event types include: a first type; the first type is used for indicating that the newly generated target user track fails to be associated with the known user ID allocated when any user enters a designated area; the alarm data comprises image information corresponding to the at least one acquired image and the target user track; the auditing strategy is a user track auditing strategy;
when the event type is the first type, the auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality includes:
Extracting a user image area corresponding to the target user track from at least one acquired image corresponding to the image information;
and determining a first target user ID matched with the target user track from the at least one known user ID according to the user image area corresponding to the target user track and the obtained user image area corresponding to the at least one known user ID.
2. The method of claim 1, wherein determining a first target user ID from the at least one known user ID that matches the target user track based on the user image area corresponding to the target user track and the obtained user image area corresponding to the at least one known user ID comprises:
inputting a user image area corresponding to the target user track into a trained first convolutional neural network to obtain a target human body characteristic model;
inputting the obtained user image area corresponding to at least one known user ID into the first convolutional neural network to obtain candidate human body feature models corresponding to the known user IDs;
and determining the known user ID corresponding to one of the candidate human body feature models as the first target user ID according to the similarity between the target human body feature model and each candidate human body feature model.
3. The method according to claim 1, characterized in that the method further comprises:
the event types further include: a second type for indicating a person-cargo association anomaly; the alarm data comprise target time, target goods lattice, target behavior and commodity information of target commodity; the target goods lattice is the goods lattice where the target goods is located, the target moment is the moment when the target goods are executed with the target action, and the target action is taking or putting back; the auditing strategy is a human-cargo association auditing strategy;
the auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
acquiring at least one first video image which is acquired by the designated acquisition equipment before the target moment and has a first duration; obtaining at least one second video image which is acquired by the designated acquisition equipment after the target moment and has a second duration; the designated acquisition equipment refers to acquisition equipment whose field of view area at least comprises the target goods lattice;
a second target user ID for performing the target action on the target commodity is determined based on the obtained at least one first video image and at least one second video image.
4. A method according to claim 3, wherein said determining a second target user ID for performing said target action on said target commodity based on said at least one first video image and said at least one second video image obtained comprises:
extracting a human body image sequence of each suspected user from at least one first video image and at least one second video image respectively; the suspected user is a user positioned in a set area in front of the target goods lattice, and the human body image sequence of the suspected user consists of an image area of the suspected user in a first video image and an image area of the suspected user in a second video image; determining a behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user, and determining a user ID corresponding to one suspected user as the second target user ID if the behavior analysis result corresponding to the suspected user is matched with the target behavior; or,
extracting at least one first image area from at least one first video image, wherein the first image area corresponds to a set area in front of the target goods lattice; extracting a second image area from at least one second video image, wherein the second image area corresponds to a set area in front of the target goods lattice; determining a handheld commodity analysis result corresponding to each suspected user before and after the target moment according to at least one first image area and at least one second image area, wherein the suspected user is a user in a set area in front of the target goods lattice; and if the handheld commodity analysis result corresponding to one suspected user is matched with the target behavior and the target commodity information, determining that the user ID corresponding to the suspected user is the second target user ID.
5. The method of claim 4, wherein determining the second target user ID for performing the target action on the target commodity based on the obtained at least one first video image and at least one second video image comprises:
extracting a human body image sequence of each suspected user from at least one first video image and at least one second video image respectively; the suspected user is a user positioned in a set area in front of the target goods lattice, and the human body image sequence of the suspected user consists of an image area of the suspected user in a first video image and an image area of the suspected user in a second video image; determining a behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user, and determining a user ID corresponding to one suspected user as a first candidate user ID if the behavior analysis result corresponding to the suspected user is matched with the target behavior;
extracting at least one first image area from at least one first video image, wherein the first image area corresponds to a set area in front of the target goods lattice; extracting a second image area from at least one second video image, wherein the second image area corresponds to a set area in front of the target goods lattice; determining the corresponding handheld commodity analysis results of each suspected user before and after the target moment according to at least one first image area and at least one second image area, wherein the suspected user is a user in a set area in front of the target goods lattice; if a handheld commodity analysis result corresponding to a suspected user is matched with the target behavior and the target commodity information, determining that the user ID corresponding to the suspected user is a second candidate user ID;
And if the first candidate user ID is consistent with the second candidate user ID, determining that the second target user ID is the first candidate user ID or the second candidate user ID.
6. The method according to claim 4 or 5, wherein determining the behavior analysis result corresponding to each suspected user according to the human body image sequence of each suspected user comprises:
and inputting the human body image sequence of each suspected user to the trained second convolutional neural network to obtain a behavior analysis result corresponding to the suspected user.
7. The method according to claim 4 or 5, wherein determining, according to the at least one first image area and the at least one second image area, a hand-held commodity analysis result corresponding to each suspected user before and after the target time comprises:
inputting the at least one first image area into a trained third convolutional neural network to obtain at least one first commodity information, wherein the first commodity information at least comprises: correspondence of commodity category, commodity number and commodity position; determining first commodity information associated with the suspected user from at least one first commodity information according to the hand track corresponding to the obtained suspected user ID;
Inputting at least one second image area into the third convolutional neural network to obtain at least one second commodity information, wherein the second commodity information at least comprises: correspondence of commodity category, commodity number and commodity position; determining second commodity information associated with the suspected user from at least one piece of second commodity information according to the hand track corresponding to the obtained suspected user ID;
and determining the handheld commodity analysis results corresponding to the same suspected user before and after the target moment according to the first commodity information and/or the second commodity information associated with the same suspected user.
8. The method according to claim 1, characterized in that the method further comprises:
the event types further include: a third type for indicating a commodity identification anomaly; the alarm data comprise target time, target goods lattice and target behavior; the target goods lattice is the goods lattice where the target goods is, the target moment is the moment of executing the target behavior, and the target behavior is taking or putting back; the auditing strategy is a commodity identification auditing strategy;
the auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
Acquiring at least one third video image which is acquired by the designated acquisition equipment before the target moment and has a third duration; obtaining at least one fourth video image which is acquired by the designated acquisition equipment after the target moment and has a fourth duration; the designated acquisition equipment refers to acquisition equipment whose field of view area at least comprises the target goods lattice;
determining commodity change information of commodities in the same commodity category before and after the target moment according to at least one third video image and at least one fourth video image; determining target commodity information of a target commodity subjected to the target behavior according to commodity change information; the target commodity information includes at least: target commodity category and target commodity quantity.
9. The method of claim 8, wherein determining merchandise change information for the same category of merchandise before and after the target time from the at least one third video image and the at least one fourth video image comprises:
extracting a third image area from at least one third video image, wherein the third image area corresponds to a set area in front of the target goods lattice; inputting at least one third image area into a trained fourth convolutional neural network to obtain third commodity information, wherein the third commodity information at least comprises: correspondence between commodity categories and commodity numbers;
Extracting a fourth image area from at least one fourth video image, wherein the fourth image area corresponds to a set area in front of the target goods lattice; inputting at least one fourth image area into the fourth convolutional neural network to obtain fourth commodity information; the fourth commodity information includes at least: correspondence between commodity categories and commodity numbers;
and determining commodity change information of commodities in the same commodity category before and after the target moment according to at least one third commodity information and/or at least one fourth commodity information.
10. An intelligent auditing device, characterized in that the device comprises:
the obtaining unit is used for obtaining the event type carried by a monitored audit service request; the audit service request is triggered when an intelligent system configured in a designated area monitors an anomaly while processing at least one acquired image based on a configured intelligent algorithm; the event type is used for characterizing the anomaly; wherein the event types include: a first type; the first type is used for indicating that the newly generated target user track fails to be associated with the known user ID allocated when any user enters a designated area;
The determining unit is used for determining an audit strategy matched with the event type according to the event type;
the auditing unit is used for auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality; when the event type is a first type, the alarm data comprises image information corresponding to the at least one acquired image and the target user track; the auditing strategy is a user track auditing strategy; the auditing the alarm data carried by the auditing service request according to the auditing policy to obtain target data for correcting the abnormality comprises the following steps:
extracting a user image area corresponding to the target user track from at least one acquired image corresponding to the image information;
and determining a first target user ID matched with the target user track from the at least one known user ID according to the user image area corresponding to the target user track and the obtained user image area corresponding to the at least one known user ID.
11. An electronic device, comprising: a processor and a machine-readable storage medium;
The machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to implement the method steps of any one of claims 1-9.
CN202010897100.4A 2020-08-31 2020-08-31 Intelligent auditing method and device and electronic equipment Active CN111985440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010897100.4A CN111985440B (en) 2020-08-31 2020-08-31 Intelligent auditing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111985440A CN111985440A (en) 2020-11-24
CN111985440B true CN111985440B (en) 2023-10-27

Family

ID=73439773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010897100.4A Active CN111985440B (en) 2020-08-31 2020-08-31 Intelligent auditing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111985440B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8862552B2 (en) * 2010-10-19 2014-10-14 Lanyon, Inc. Reverse audit system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006093972A (en) * 2004-09-22 2006-04-06 Tsukuba Multimedia:Kk Live camera image automatic acquired commodity database forming web server system
WO2018014839A1 (en) * 2016-07-21 2018-01-25 北京京东尚科信息技术有限公司 Method, device and system for monitoring order delivery anomaly based on gis technology
WO2019033635A1 (en) * 2017-08-16 2019-02-21 图灵通诺(北京)科技有限公司 Purchase settlement method, device, and system
CN108022080A (en) * 2017-11-24 2018-05-11 深圳市买买提乐购金融服务有限公司 One kind complaint processing method and relevant device
WO2019179256A1 (en) * 2018-03-23 2019-09-26 阿里巴巴集团控股有限公司 Self-service shopping risk control method and system
CN110047197A (en) * 2019-01-24 2019-07-23 阿里巴巴集团控股有限公司 A kind of data processing method, equipment, medium and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Green Supermarket Shopping System Based on RFID Technology; Ren Juanjuan; Adhesion (02); 64-67 *
Research on Storage Location Allocation in Intelligent Warehouses Based on Commodity Association Degree; Li Zhenping; Bu Xiaoqi; Chen Xingyi; Mathematics in Practice and Theory (05); 25-33 *

Also Published As

Publication number Publication date
CN111985440A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN108389316B (en) Automatic vending method, apparatus and computer-readable storage medium
US10185877B2 (en) Systems, processes and devices for occlusion detection for video-based object tracking
JP6411718B2 (en) Method for identifying tracked objects for use in processing hyperspectral data
US20070092110A1 (en) Object tracking within video images
US20190019207A1 (en) Apparatus and method for store analysis
CN104462530A (en) Method and device for analyzing user preferences and electronic equipment
CN110675426B (en) Human body tracking method, device, equipment and storage medium
JP2016143334A (en) Purchase analysis device and purchase analysis method
CN112464697A (en) Vision and gravity sensing based commodity and customer matching method and device
CN109447619A (en) Unmanned settlement method, device, equipment and system based on open environment
CN113468914B (en) Method, device and equipment for determining purity of commodity
CN110751116B (en) Target identification method and device
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
CN114783037A (en) Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium
CN111507792A (en) Self-service shopping method, computer readable storage medium and system
CN111985440B (en) Intelligent auditing method and device and electronic equipment
CN117437264A (en) Behavior information identification method, device and storage medium
EP1683108A2 (en) Object tracking within video images
KR101595334B1 (en) Method and apparatus for movement trajectory tracking of moving object on animal farm
CN112001349B (en) Data auditing method, system and electronic equipment
CN110610358A (en) Commodity processing method and device and unmanned goods shelf system
CN112990153A (en) Multi-target behavior identification method and device, storage medium and electronic equipment
CN110956644B (en) Motion trail determination method and system
CN111444757A (en) Pedestrian re-identification method, device, equipment and storage medium for unmanned supermarket
CN111988579B (en) Data auditing method and system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant