CN111985970A - Intelligent operation method of vending machine in market based on big data and video analysis - Google Patents

Intelligent operation method of vending machine in market based on big data and video analysis

Info

Publication number
CN111985970A
Authority
CN
China
Prior art keywords
commodity
vending machine
goods taking
detection
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010840971.2A
Other languages
Chinese (zh)
Inventor
赵霆霏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010840971.2A
Publication of CN111985970A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/24 Classification techniques
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q 20/00 Payment architectures, schemes or protocols
                    • G06Q 20/08 Payment architectures
                        • G06Q 20/18 Payment architectures involving self-service terminals [SST], vending machines, kiosks or multimedia terminals
                • G06Q 30/00 Commerce
                    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
                        • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/20 Image preprocessing
                        • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Social Psychology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Vending Devices And Auxiliary Devices For Vending Devices (AREA)

Abstract

The invention provides an intelligent operation method for vending machines in shopping malls based on big data and video analysis, which comprises the following steps: after the user successfully places an order and pays, commodity detection and pick-up behavior detection are performed; commodity detection obtains the types and quantities of the commodities at the pick-up port of the vending machine and compares them with the user's order information, while pick-up behavior detection judges whether the user has collected the goods after payment; intelligent operation of the vending machine is realized by combining the results of the two detections. The invention enables the vending machine to verify its own output: if fewer goods than ordered, or no goods at all, are dispensed after successful payment, fault information is sent to the merchant's back office so that the merchant can respond promptly, reducing the user's loss. It also prevents losses to the merchant caused by users who, for personal reasons, forget to collect their goods or collect them slowly.

Description

Intelligent operation method of vending machine in market based on big data and video analysis
Technical Field
The invention belongs to the field of big data and artificial intelligence, and particularly relates to an intelligent operation method for vending machines in shopping malls based on big data and video analysis.
Background
Although vending machines in shopping malls save a great deal of manpower, their high frequency of use means that faults are common: after payment, no goods or fewer goods than ordered may be dispensed even though the machine reports a successful pick-up, and so on. In such cases the user has to report the problem to the merchant and wait for back-office staff to confirm it manually, which is very bad for the user experience.
Disclosure of Invention
In order to solve the above problems, the invention provides an intelligent operation method for vending machines in shopping malls based on big data and video analysis, which comprises the following steps:
Step one, acquiring images of pedestrians near the vending machines, performing pose estimation on the acquired images with a multi-person pose estimation model to obtain pedestrian poses, projecting the midpoint of the line connecting a pedestrian's two instep key points in the pose information into a pre-constructed BIM of the shopping mall, and establishing ROI areas in the BIM, wherein each vending machine corresponds to one ROI area;
Step two, after the user's order and payment succeed, commodity detection and pick-up behavior detection are started simultaneously;
Commodity detection: turn on the ambient lamp, capture an image of the commodities at the pick-up port with the camera embedded in the vending machine, pass the captured commodity image through a keypoint detection network to obtain a keypoint heatmap, and post-process the heatmap to obtain the types and quantities of the commodities; compare the obtained types and quantities with the user's order information;
Pick-up behavior detection: obtain the BIM coordinates of the midpoint of the line connecting a pedestrian's two instep key points, classify the poses of the pedestrians whose midpoint coordinates fall inside the ROI area of the vending machine in use, and judge whether pick-up behavior occurs; record event A when no pick-up behavior occurs and event B when it does;
Analyzing the results of commodity detection and pick-up behavior detection:
if the comparison is inconsistent, the vending machine is faulty and fault information is sent to the staff;
if the comparison is consistent and event A occurs, the user is reminded to collect the goods and commodity detection is performed again after a certain time;
if the comparison is consistent and event B occurs, commodity detection is performed again; when no commodity is detected, the ambient lamp is turned off and commodity detection ends; when commodities are still detected, it is judged that the user has missed goods, the user is reminded to collect them, and commodity detection is performed again after a certain time.
The ambient lamp is deployed at the pick-up port.
The keypoint detection network comprises a keypoint encoder and a keypoint decoder: the keypoint encoder extracts features from the input image to obtain a feature map, and the keypoint decoder upsamples the feature map to generate the keypoint heatmap. The training of the network specifically comprises: collecting commodity images to build a training data set; generating labels by placing a key point at the center of each commodity, marking its pixel position, and then rendering a hot spot around it with a Gaussian kernel, with each commodity type corresponding to one channel; and training the network with a heatmap loss function.
Pedestrian poses are classified through a fully connected network whose input is the pose sequence of a pedestrian whose midpoint coordinates lie inside the ROI area of the vending machine in use and whose output is the corresponding pedestrian pose; the pedestrian poses comprise standing, squatting and stooping.
Pick-up behavior is judged as follows: the standing pose is mapped to state value 1 and the squatting and stooping poses to state value 2; if the state value jumps from 2 to 1 at least once within a certain time after the user's payment succeeds, pick-up behavior is judged to have occurred, and otherwise it is judged not to have occurred.
The beneficial effects of the invention are as follows:
1. Existing vending machines detect the commodities before they are dispensed, yet goods can get stuck or be dropped while being dispensed; the invention instead detects the commodities that actually reach the pick-up port.
2. The invention uses neural network techniques to obtain the types and quantities of the commodities at the pick-up port and compares them with the user's order information, allowing the vending machine to verify its own output; if the comparison is inconsistent, a prompt is sent to the merchant's back office so that the merchant can take the relevant measures, which reduces the user's loss and improves the user's experience with the vending machine.
3. The invention monitors the pick-up process after purchase through human pose estimation, preventing losses to the merchant caused by users who, for personal reasons, forget goods, take fewer goods than purchased, or take them slowly, and improving the vending efficiency of the machine. Judging the pick-up behavior first also reduces the power consumption of the camera embedded in the vending machine: the camera does not have to capture images of the pick-up port continuously, but only acquires and processes images after the user's pick-up behavior has occurred.
4. If goods become stuck in the vending machine, the method can accurately determine the type and quantity of the stuck goods, which effectively prevents malicious purchasing behavior and avoids losses to the merchant.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the following further description is provided in conjunction with the embodiments and the accompanying drawings.
The invention provides an intelligent operation method for vending machines in shopping malls based on big data and video analysis. It mainly performs automatic management and self-verification of the vending machines in each area of the mall: by combining the vending machine terminal with computer vision, the machine's output is verified and the user's pick-up process is monitored. The implementation flow is shown in Figure 1. Specifically, pedestrian poses are first detected in the images captured by the cameras throughout the mall to obtain pose information; after a user pays successfully, the pick-up port detection function is started and, at the same time, the position and pose of the pedestrian are evaluated; whether the goods (for example beverages) detected by the pick-up port camera match the user's order is then judged, thereby providing automated operational support for after-sales verification of the vending machine.
Embodiment:
the method comprises the steps of collecting pedestrian images near a vending machine by using a camera in a market, and carrying out posture estimation on the collected pedestrian images by using a multi-person posture estimation model to obtain pedestrian postures. The human body posture estimation technology belongs to common tasks in the field of computer vision, and the common posture estimation model is considered to have poor robustness to the environment, so that the model can be trained automatically, and the detection accuracy is improved.
Training the multi-person pose estimation model:
First, training image data are collected, preferably pedestrian images captured at the mall's vending machines. The collected images are normalized so that the values of the image matrix become floating point numbers in [0, 1], which helps the model converge.
Secondly, making a label, wherein the following method is adopted for marking so as to reduce the labor expenditure:
a) A generic multi-person pose estimation model is first used to estimate the poses in each image, and the resulting joint points are overlaid on the original image. Open-source models such as HRNet or HigherHRNet can be used as this generic model.
b) The detection results are then checked manually, and joint coordinates that are wrong or have large errors are corrected.
c) Finally, the coordinates of each class of joint point are convolved with a Gaussian kernel to obtain a heatmap of the pedestrian pose (a short sketch of this rendering follows this list). Details such as the choice of the Gaussian kernel radius are outside the scope of the invention.
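By way of illustration only, the Gaussian-kernel rendering in step c) might look like the following NumPy sketch; the function name, array layout and the value of sigma are assumptions, not part of the original disclosure.

```python
import numpy as np

def make_keypoint_heatmaps(keypoints, num_joints, height, width, sigma=2.0):
    """Render one Gaussian hot spot per annotated joint.

    keypoints: iterable of (joint_id, x, y) for every person in the image,
               with joint_id in [0, num_joints) and coordinates in heatmap pixels.
    Returns an array of shape (num_joints, height, width) with values in [0, 1].
    """
    heatmaps = np.zeros((num_joints, height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for joint_id, x, y in keypoints:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        # Keep the maximum where hot spots of the same joint type overlap.
        heatmaps[joint_id] = np.maximum(heatmaps[joint_id], g)
    return heatmaps
```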
The associative embedding technique is used to distinguish the different person instances in an image. It is a common means of instance discrimination: the tag values of all key points belonging to the same person are kept close together while the tag values of different persons are kept apart, with closeness measured by the distance between tag values. The reference tag of the n-th person is the mean of the predicted tag values at that person's key points:

$$\bar{h}_n = \frac{1}{K}\sum_{k=1}^{K} h_k\!\left(x_{nk}\right)$$

where n denotes the n-th person, k the k-th key point, $x_{nk}$ the pixel position of the real (ground-truth) k-th key point of the n-th person, and $h_k$ the predicted tag (label) heatmap for the k-th key point.
The specific training process of the model is as follows:
the method comprises the steps of training an attitude estimation encoder and an attitude estimation decoder end to end by using training images and label data which are subjected to normalization processing, outputting a first feature map after performing feature extraction on input images, and up-sampling the first feature map by the attitude estimation decoder to finally generate a pedestrian attitude thermodynamic diagram and an associated embedding diagram (Associative embedding).
The loss function is a weighted sum of the heatmap loss and the grouping loss, where the heatmap loss is:

$$L_{\mathrm{heatmap}} = -\frac{1}{N}\sum_{c,i,j}
\begin{cases}
\left(1 - P_{cij}\right)^{\alpha}\log\left(P_{cij}\right), & y_{cij} = 1 \\
\left(1 - y_{cij}\right)^{\beta}\, P_{cij}^{\alpha}\log\left(1 - P_{cij}\right), & \text{otherwise}
\end{cases}$$

Here $P_{cij}$ is the predicted score of a key point of class c at position (i, j); the higher the score, the more likely a key point is present. $y_{cij}$ is the ground-truth heatmap value, N is the number of key points in the ground truth, and α and β are hyper-parameters that have to be set manually.
The grouping loss is:

$$L_{\mathrm{group}} = \frac{1}{NK}\sum_{n}\sum_{k}\left(\bar{h}_n - h_k\!\left(x_{nk}\right)\right)^{2} + \frac{1}{N^{2}}\sum_{n}\sum_{n' \neq n}\exp\!\left(-\tfrac{1}{2}\left(\bar{h}_n - \bar{h}_{n'}\right)^{2}\right)$$

where N is the number of persons in the ground truth, K is the number of key points of each person, n denotes the n-th person, k the k-th key point, $x_{nk}$ the pixel position of the real key point, $\bar{h}_n$ the reference tag of the n-th person defined above, $h_k(x_{nk})$ the predicted tag value, and n' ranges over persons other than the n-th. The first term pulls the tag values of the key points of the same person together, and the second term pushes the tags of different persons apart.
The total loss is therefore:

$$L = \gamma\, L_{\mathrm{heatmap}} + \delta\, L_{\mathrm{group}}$$

where the weights γ and δ are likewise set manually so that the two loss values remain of comparable magnitude, which makes the convergence of the model easier to judge.
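The weighted loss could be implemented roughly as in the PyTorch sketch below, which assumes the focal-style heatmap loss and associative-embedding grouping loss reconstructed above; the function names, default weights and tensor layouts are assumptions.

```python
import torch

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """Focal loss over predicted heatmaps; pred and gt have shape (B, C, H, W),
    gt being the Gaussian label heatmap (value 1 exactly at annotated key points)."""
    pred = pred.clamp(eps, 1.0 - eps)
    pos = gt.eq(1.0).float()
    neg = 1.0 - pos
    pos_term = ((1.0 - pred) ** alpha) * torch.log(pred) * pos
    neg_term = ((1.0 - gt) ** beta) * (pred ** alpha) * torch.log(1.0 - pred) * neg
    num_keypoints = pos.sum().clamp(min=1.0)        # N in the formula above
    return -(pos_term.sum() + neg_term.sum()) / num_keypoints

def grouping_loss(tag_maps, joints):
    """Associative-embedding grouping loss for one image (at least one person assumed).
    tag_maps: (K, H, W) predicted tag maps; joints: list over persons, each a list
    of (k, x, y) ground-truth joint positions."""
    pulls, refs = [], []
    for person in joints:
        tags = torch.stack([tag_maps[k, y, x] for k, x, y in person])
        ref = tags.mean()                           # reference tag for this person
        refs.append(ref)
        pulls.append(((tags - ref) ** 2).mean())    # pull term for this person
    pull = torch.stack(pulls).mean()                # equals the 1/(NK) double sum above
    refs = torch.stack(refs)
    diff = refs.unsqueeze(0) - refs.unsqueeze(1)    # pairwise reference-tag differences
    push = torch.exp(-0.5 * diff ** 2)
    n = refs.numel()
    push = (push.sum() - n) / max(n * (n - 1), 1)   # drop the diagonal (n equals n')
    return pull + push

def total_loss(pred_heatmaps, gt_heatmaps, tag_maps, joints, gamma=1.0, delta=1e-3):
    # Weighted sum; gamma and delta correspond to the manually chosen weights above.
    return gamma * heatmap_focal_loss(pred_heatmaps, gt_heatmaps) + \
           delta * grouping_loss(tag_maps, joints)
```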
This completes the training of the multi-person pose estimation model.
The invention suggests initializing the multi-person pose estimation model with the published multi-person pose estimation network weights from the Associative Embedding paper and fine-tuning them, which further improves the robustness of the model.
It should be noted that, owing to the nature of heatmaps, the pixel values of the pedestrian pose heatmaps output by the model follow a Gaussian profile and lie in the range [0, 1].
The pedestrian pose heatmaps are post-processed to obtain the key points; the post-processing method is well known and is not repeated here. Key points whose tag values are close to one another belong to the same group, which determines the person instance each key point belongs to.
The pedestrian poses are thus obtained.
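For intuition only, a greatly simplified grouping sketch is shown below: each detected key point is greedily assigned to the person whose mean tag value is closest. The real post-processing (peak non-maximum suppression, top-k selection, optimal matching) is left to the well-known methods mentioned above; the thresholds here are arbitrary assumptions.

```python
import numpy as np

def decode_poses(heatmaps, tag_maps, score_thr=0.3, tag_thr=1.0):
    """Group detected key points into person instances by tag proximity.
    heatmaps, tag_maps: arrays of shape (K, H, W). Peak NMS is omitted for brevity."""
    people = []                                     # each person: {joint_id: (x, y, tag)}
    for k in range(heatmaps.shape[0]):
        ys, xs = np.where(heatmaps[k] >= score_thr)
        for x, y in zip(xs, ys):
            tag = float(tag_maps[k, y, x])
            best, best_d = None, tag_thr
            for person in people:                   # nearest mean tag wins, if close enough
                mean_tag = np.mean([t for (_, _, t) in person.values()])
                d = abs(mean_tag - tag)
                if d < best_d and k not in person:
                    best, best_d = person, d
            if best is None:                        # no compatible person: start a new one
                best = {}
                people.append(best)
            best[k] = (int(x), int(y), tag)
    return people
```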
Detecting pedestrian poses allows more accurate projection into the BIM and hence localization of each pedestrian. Specifically, the instep key points are selected from the detected key points, and the midpoint of the line connecting a pedestrian's two instep key points is projected into the pre-constructed BIM of the shopping mall. The BIM is a three-dimensional model of the mall building constructed from the mall's information data, containing internal facilities, sensors and other information; an implementer can model and design the mall BIM with software such as Revit. The projection into the BIM requires a homography matrix, which can be estimated with the four-point method from ground markings and their corresponding BIM coordinates. As the four-point method is common knowledge and easy to implement, it is not described in detail here.
ROI areas are then set up in the BIM, specifically according to the local area in front of each vending machine's screen that is covered by a mall camera; each vending machine is given its own corresponding ROI area.
and finishing the projection of the central point.
The commodity detection function is then controlled automatically according to the usage of the vending machine, so that the power consumption of the vending machine is minimized.
To avoid environmental influences, an ambient lamp is installed at the pick-up port to improve detection accuracy. The lamp is switched automatically by the vending machine: it is turned on after the user successfully places an order and pays, and turned off after detection finishes, which reduces energy consumption.
When the user's order and payment succeed, commodity detection and pick-up behavior detection are started simultaneously.
Commodity detection: the ambient lamp is turned on and the camera embedded in the vending machine captures an image of the commodities at the pick-up port. The captured commodity image is passed through a keypoint detection network to obtain a keypoint heatmap. The keypoint detection network comprises a keypoint encoder, which extracts features from the input image to obtain a feature map, and a keypoint decoder, which upsamples the feature map to generate the keypoint heatmap. The network is trained as follows: commodity images are collected to build a training data set; labels are generated by placing a key point at the center of each commodity, marking its pixel position, and then rendering a hot spot around it with a Gaussian kernel, with each commodity type assigned its own channel; the network is trained with a heatmap loss function.
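One possible shape of such an encoder-decoder keypoint network is sketched below in PyTorch, with one output heatmap channel per commodity type; the layer widths and depths are arbitrary assumptions rather than values taken from the patent.

```python
import torch.nn as nn

class CommodityKeypointNet(nn.Module):
    """Minimal encoder-decoder that outputs one heatmap channel per commodity type."""
    def __init__(self, num_commodity_types):
        super().__init__()
        self.encoder = nn.Sequential(      # downsample to a 1/4-resolution feature map
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(      # upsample back to the input resolution
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_commodity_types, 1),
            nn.Sigmoid(),                  # heatmap values in [0, 1]
        )

    def forward(self, x):                  # x: (B, 3, H, W) pick-up port image
        return self.decoder(self.encoder(x))
```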
The keypoint heatmap is post-processed to obtain the types and quantities of the commodities, which are compared with the user's order information so that the vending machine can verify its own output.
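Assuming one heatmap channel per commodity type, the post-processing and order comparison might be sketched as follows; the local-maximum window size and the score threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def count_commodities(heatmaps, score_thr=0.5, window=5):
    """heatmaps: (C, H, W) array, one channel per commodity type.
    Returns {channel_index: detected_quantity} for channels with at least one peak."""
    counts = {}
    for c in range(heatmaps.shape[0]):
        hm = heatmaps[c]
        peaks = (hm == maximum_filter(hm, size=window)) & (hm >= score_thr)  # local maxima
        n = int(peaks.sum())
        if n:
            counts[c] = n
    return counts

def matches_order(detected_counts, order_counts):
    """order_counts: {channel_index: quantity} derived from the user's order."""
    return detected_counts == order_counts
```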
And (3) detecting the goods taking action: the method comprises the steps of obtaining coordinates of a central point of a key point connecting line of two foot faces of pedestrians in the BIM, classifying postures of the pedestrians of which the central point coordinates are located in a region corresponding to a region of interest (ROI) of a used vending machine, wherein the classification is realized through a full-connection network, a cross entropy function is adopted as a loss function, the input of the network is a posture sequence of the pedestrians of which the central point coordinates are located in the region corresponding to the ROI of the used vending machine, and the output is a pedestrian posture, wherein the pedestrian posture comprises standing, squatting and stooping. Wherein, the pedestrian attitude sequence is obtained by a multi-person attitude estimation model.
Whether pick-up behavior occurs is judged from the obtained pose of each pedestrian; if it does not occur it is recorded as event A, and if it does it is recorded as event B. Specifically, the standing pose is mapped to state value 1 and the squatting and stooping poses to state value 2; if, within a certain time after the user's payment succeeds, the state value sequence of at least one pedestrian jumps from 2 to 1 at least once, pick-up behavior is judged to have occurred; otherwise it has not.
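The pose-to-state mapping and the 2-to-1 jump test can be sketched as below, together with a hypothetical fully connected classifier over a flattened pose sequence; the sequence length, joint count, layer sizes and label strings are assumptions.

```python
import torch.nn as nn

STANDING, SQUAT_OR_STOOP = 1, 2

def pose_to_state(pose_label):
    """'standing' maps to state 1; 'squatting' and 'stooping' map to state 2."""
    return STANDING if pose_label == "standing" else SQUAT_OR_STOOP

def pickup_occurred(pose_labels):
    """pose_labels: per-frame classifier outputs for one tracked pedestrian within the
    time window after payment. Event B iff the state jumps from 2 to 1 at least once."""
    states = [pose_to_state(p) for p in pose_labels]
    return any(a == SQUAT_OR_STOOP and b == STANDING for a, b in zip(states, states[1:]))

# Hypothetical fully connected classifier over a flattened pose sequence of
# T frames x J joints x 2 coordinates, trained with a cross-entropy loss.
T, J = 16, 17
pose_classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(T * J * 2, 128), nn.ReLU(inplace=True),
    nn.Linear(128, 3),          # logits for standing / squatting / stooping
)
```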
When several pedestrians are present in the ROI area, each pedestrian is tracked by means of the Euclidean distances between the center-point coordinates of consecutive frames. Each ROI area in the BIM corresponds to one point set, which contains all the center points inside that ROI area. In this embodiment the point set of the ROI area of the vending machine being used is denoted G; if one frame yields the point set G1 and the next frame yields G2, the Euclidean distance between every center point in G2 and every center point in G1 is computed, and each point in G2 is matched to the point in G1 at the smallest distance, the two being considered the same person instance. For example:
Suppose G1 contains 3 points and G2 contains 2 points, five points in total. If the Euclidean distances from the first point of G2 to the 3 points of G1 are [5, 7, 9], the first point of G2 and the first point of G1 are judged to be the same pedestrian; if the distances from the second point of G2 to the 3 points of G1 are [3, 8, 1], the second point of G2 and the third point of G1 are judged to be the same pedestrian.
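The frame-to-frame association just described amounts to a greedy nearest-neighbour match; a sketch reproducing the worked example is given below, where the function name and array layout are assumptions.

```python
import numpy as np

def match_points(prev_pts, cur_pts):
    """Greedy nearest-neighbour association of ROI centre points across two frames.
    prev_pts: (N, 2) array (G1), cur_pts: (M, 2) array (G2), both in BIM coordinates.
    Returns {index_in_cur: index_in_prev}."""
    matches = {}
    for j, p in enumerate(cur_pts):
        d = np.linalg.norm(prev_pts - p, axis=1)    # Euclidean distances to every G1 point
        matches[j] = int(np.argmin(d))              # smallest distance = same pedestrian
    return matches

# Worked example from the text: distances [5, 7, 9] match the first point of G2 to the
# 1st point of G1, and distances [3, 8, 1] match the second point of G2 to the 3rd point.
```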
This completes commodity detection and pick-up behavior detection.
Intelligent operation of the vending machine is then realized from the comparison result of the commodity detection and the judgment of the pick-up behavior, specifically (a control-flow sketch follows this list):
if the comparison is inconsistent, the vending machine is faulty; fault information is sent to the staff and the vending function of the machine is automatically disabled;
if the comparison is consistent and event A occurs, the user may simply be slow to collect the goods, so the user is reminded to collect them as soon as possible and commodity detection is performed again after a certain time;
if the comparison is consistent and event B occurs, commodity detection is performed again; when no commodity is detected, the ambient lamp is turned off and commodity detection ends; when commodities are still detected, it is judged that the user has missed goods, the user is reminded to collect them, and commodity detection is performed again after a certain time.
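A control-flow sketch tying the two detectors to the three outcomes above is given below; every callable, its name and the re-check delay are injected assumptions and not an actual vending-machine API.

```python
import time

def run_after_payment(order, detect_commodities, pickup_detected,
                      notify_staff, disable_vending, remind_user, set_ambient_lamp,
                      recheck_delay_s=30.0):
    """Ties commodity detection and pick-up detection to the three outcomes above.
    `order` and the return value of `detect_commodities` are assumed to be
    {commodity_type: quantity} dictionaries for the pick-up port."""
    set_ambient_lamp(True)
    if detect_commodities() != order:               # comparison inconsistent: machine fault
        notify_staff("dispensed items do not match the order")
        disable_vending()
        return
    if not pickup_detected():                       # event A: no pick-up behaviour observed
        remind_user("please collect your items")
        time.sleep(recheck_delay_s)
    # event B (or after the reminder above): re-check the pick-up port
    while detect_commodities():                     # items still present: missed goods
        remind_user("some items were left at the pick-up port")
        time.sleep(recheck_delay_s)
    set_ambient_lamp(False)                         # nothing left: detection finished
```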
Further, the commodities referred to in the invention include, but are not limited to, bottled or otherwise packaged beverages, bagged or boxed snacks, cosmetics, and the like.
The foregoing is intended to provide those skilled in the art with a better understanding of the invention, and is not to be construed as limiting the invention, since modifications may be made within the spirit and scope of the invention.

Claims (5)

1. An intelligent operation method of an in-market vending machine based on big data and video analysis is characterized by comprising the following steps:
step one, acquiring images of pedestrians near the vending machines, performing pose estimation on the acquired images with a multi-person pose estimation model to obtain pedestrian poses, projecting the midpoint of the line connecting a pedestrian's two instep key points in the pose information into a pre-constructed BIM of the shopping mall, and establishing ROI areas in the BIM, wherein each vending machine corresponds to one ROI area;
step two, after the user's order and payment succeed, starting commodity detection and pick-up behavior detection simultaneously;
commodity detection: turning on the ambient lamp, capturing an image of the commodities at the pick-up port with the camera embedded in the vending machine, passing the captured commodity image through a keypoint detection network to obtain a keypoint heatmap, and post-processing the heatmap to obtain the types and quantities of the commodities; comparing the obtained types and quantities with the user's order information;
pick-up behavior detection: obtaining the BIM coordinates of the midpoint of the line connecting a pedestrian's two instep key points, classifying the poses of the pedestrians whose midpoint coordinates fall inside the ROI area of the vending machine in use, and judging whether pick-up behavior occurs, recording event A when no pick-up behavior occurs and event B when it does;
analyzing the results of commodity detection and pick-up behavior detection:
if the comparison is inconsistent, the vending machine is faulty and fault information is sent to the staff;
if the comparison is consistent and event A occurs, reminding the user to collect the goods and performing commodity detection again after a certain time;
if the comparison is consistent and event B occurs, performing commodity detection again; turning off the ambient lamp and ending commodity detection when no commodity is detected; when commodities are still detected, judging that the user has missed goods, reminding the user to collect them, and performing commodity detection again after a certain time.
2. The method of claim 1, wherein the ambient lamp is deployed at the pick-up port.
3. The method of claim 1, wherein the keypoint detection network comprises a keypoint encoder and a keypoint decoder, the keypoint encoder extracting features from the input image to obtain a feature map and the keypoint decoder upsampling the feature map to generate the keypoint heatmap; the network is trained as follows: collecting commodity images to build a training data set; generating labels by placing a key point at the center of each commodity, marking its pixel position, and then rendering a hot spot around it with a Gaussian kernel, each commodity type corresponding to one channel; and training the network with a heatmap loss function.
4. The method of claim 1, wherein the poses of the pedestrians are classified through a fully connected network whose input is the pose sequence of a pedestrian whose midpoint coordinates lie inside the ROI area of the vending machine in use and whose output is the corresponding pedestrian pose, the pedestrian poses comprising standing, squatting and stooping.
5. The method of claim 4, wherein pick-up behavior is judged as follows: the standing pose is mapped to state value 1 and the squatting and stooping poses to state value 2; if the state value jumps from 2 to 1 at least once within a certain time after the user's payment succeeds, pick-up behavior is judged to have occurred, and otherwise it is judged not to have occurred.
Application CN202010840971.2A, filed 2020-08-20 (priority date 2020-08-20), publication CN111985970A (en), status Withdrawn: Intelligent operation method of vending machine in market based on big data and video analysis

Priority Applications (1)

Application number: CN202010840971.2A (published as CN111985970A (en)) · Priority date: 2020-08-20 · Filing date: 2020-08-20 · Title: Intelligent operation method of vending machine in market based on big data and video analysis

Applications Claiming Priority (1)

Application number: CN202010840971.2A (published as CN111985970A (en)) · Priority date: 2020-08-20 · Filing date: 2020-08-20 · Title: Intelligent operation method of vending machine in market based on big data and video analysis

Publications (1)

Publication number: CN111985970A (en) · Publication date: 2020-11-24

Family

ID=73442340

Family Applications (1)

Application number: CN202010840971.2A · Status: Withdrawn · Publication: CN111985970A (en)

Country Status (1)

Country: CN (1) · Publication: CN111985970A (en)

Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination
  • WW01: Invention patent application withdrawn after publication

Application publication date: 2020-11-24