CN110348915B - Grabbing amount prediction method and system based on doll placing posture - Google Patents

Grabbing amount prediction method and system based on doll placing posture

Info

Publication number
CN110348915B
Authority
CN
China
Prior art keywords
doll
machine
user
placing
grab
Prior art date
Legal status
Active
Application number
CN201910657655.9A
Other languages
Chinese (zh)
Other versions
CN110348915A (en)
Inventor
沈之锐 (Shen Zhirui)
Current Assignee
Guangzhou Xinqi Intelligent Technology Co.,Ltd.
Original Assignee
Shaoguan Qizhi Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shaoguan Qizhi Information Technology Co., Ltd.
Priority to CN201910657655.9A
Publication of CN110348915A
Application granted
Publication of CN110348915B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q 30/0202 Market predictions or forecasting for commercial activities
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Accounting & Taxation (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Multimedia (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Toys (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a grabbing amount prediction method based on the placing posture of dolls in a doll machine. When a user selects one of several doll machines, the selected machine is recorded. A camera mounted on the doll machine captures images of the dolls inside and of the stream of onlookers outside. When a user starts to grab on the selected machine, image analysis software analyzes the doll shapes, the stacking height, the distance from the nearest doll to the drop opening, and the placing postures to obtain the doll placing features. The number of grab attempts the user makes on the machine is counted. After a first user finishes playing, the method predicts, from the placing features and a pre-trained machine learning model, whether a second user will select that machine to grab; if so, the number of grab rounds is further predicted; if not, the dolls in the machine are repositioned. The invention can analyze the placing posture of the dolls in a doll machine, predict the probability that the machine will be selected under that posture and how many grab rounds a user will make, and adjust the placing posture according to the predicted values.

Description

Grabbing amount prediction method and system based on doll placing posture
Technical Field
The invention relates to the field of computer applications, and in particular to a grabbing amount prediction method and system based on doll placing posture.
Background
People who play a claw-style doll machine all want to obtain a doll inside it. Besides the price per grab and the doll shapes, users generally decide whether to grab according to how the dolls are placed: dolls that sit unstably, lie close to the drop opening, and look easier to grasp win the consumer's favor, so users are more willing to grab them. Players look at the placing posture of the dolls first, yet no doll machine currently analyzes the dolls inside it. In a shopping mall it often happens that people crowd around one machine while the neighboring machines get no business. Some machines attract no business because they hold too few dolls, or because the positions and postures of the dolls inside make users feel the probability of grabbing one is too low. If one could predict which placing arrangements increase user grabbing and which reduce it, merchants could correct the placement of a doll machine in a targeted way, raise the probability that the machine is visited by customers, attract more users to participate, and increase how much the machines are used.
Disclosure of Invention
The invention provides a doll placing posture-based grabbing amount prediction method, which mainly comprises the following steps:
when a user selects among a plurality of doll machines, recording the selected doll machine;
mounting a camera on the doll machine to obtain images of the dolls inside and of the stream of onlookers;
when a user starts to grab on the selected doll machine, analyzing the doll shapes, the stacking height, the distance from the nearest doll to the drop opening, and the placing postures through image analysis software to obtain the doll placing features;
counting the number of times the user grabs on the doll machine;
after a first user finishes playing a doll machine, predicting whether a second user will select that doll machine to grab, according to the doll placing features and a pre-trained machine learning model;
if yes, further predicting the number of grab rounds;
if not, repositioning the dolls in the doll machine.
Further optionally, in the method as described above, mounting a camera on the doll machine to obtain images of the dolls and of the stream of onlookers mainly comprises:
the camera mounted on the doll machine monitors the posture of the dolls inside and detects whether part of a doll's limbs is pinned by other dolls;
the image acquisition device mounted on the doll machine captures images of players outside the machine;
the doll machine counts how many people in the stream of onlookers actually play it, where an onlooker is anyone who watches the doll machine for longer than a preset duration threshold;
and according to how many people in the onlooker stream play the doll machines, counting the probability that each machine is selected by a user for grabbing under its current placing posture.
Further optionally, in the method as described above, analyzing the doll shape, the stacking height, the closest distance to the drop opening, and the doll posture through the image analysis software to obtain the doll placing features mainly comprises:
extracting the doll contours with OpenCV image processing based on HOG features to obtain the doll shapes;
estimating the stacked height of the dolls from the height of the machine's chassis and the proportion of the chassis that the stacked dolls occupy;
processing the image with OpenCV to locate the square or round drop opening of the doll machine;
identifying each doll lying on top of the pile, computing the distance from each to the drop opening, and taking the distance of the doll closest to the opening;
and obtaining the placing direction of each doll and analyzing its placing inclination angle from the image.
Further optionally, in the method as described above, the pre-trained machine learning model mainly comprises:
taking the doll placing features as feature data and whether the user selects the doll machine to grab as a binary classification label, training machine learning model I;
taking the doll placing features as feature data and the number of times the user grabs on the doll machine as the label, training machine learning model II;
machine learning model I uses a support vector machine as the classification model for training and prediction;
and machine learning model II uses a convolutional neural network as the classification model for training and prediction.
Further optionally, in the method described above, predicting whether the user will select the doll machine to grab and, if yes, further predicting the number of grab rounds mainly comprises:
preprocessing the doll shape, stacking height, closest distance to the drop opening, and placing posture feature data, inputting them into the trained binary classification support vector machine model, and predicting whether a user will select the current doll machine to grab;
if yes, preprocessing the same feature data, inputting them into the trained multi-class convolutional neural network model, and predicting how many rounds the user will grab.
Further optionally, in the method as described above, after predicting how many rounds the user will grab, the method further comprises:
if the predicted number of grab rounds is less than a preset number, reminding the merchant that the doll placing posture in the machine can be further optimized to increase the user's grabbing amount.
Further optionally, in the method as described above, repositioning the dolls in the doll machine mainly comprises:
if the prediction is that the user will not select the doll machine to grab, or that the number of grab rounds is below a preset threshold,
agitating and repositioning the dolls with the mechanical arm;
or prompting the user that the doll machine will grab with a larger grabbing force, to attract the user to grab later.
Further optionally, in the method as described above, agitating and repositioning the dolls with the mechanical arm mainly comprises:
identifying the positions of the dolls in the machine through image processing software, and automatically controlling the mechanical arm according to those positions to grab the dolls, so that dolls far from the drop opening move closer to it, or stably placed dolls are toppled and become easier to grab.
Further optionally, in the method as described above, agitating and repositioning the dolls with the mechanical arm further comprises:
if the prediction for the doll machine is that the user will not choose to grab:
increasing the number of dolls in the machine and stacking them higher; alternatively,
adjusting the grabbing mechanical arm, strengthening its grabbing force, and raising the probability of a successful grab.
The invention also discloses a grabbing amount prediction system based on doll placing posture, the system comprising:
a doll placing posture information acquisition module for acquiring the placing posture of the dolls in the doll machine;
a data training module for training the machine-selection model and the grab-count model according to the doll placing posture;
a grabbing prediction module for predicting the user's grabbing behavior according to the doll placing posture;
and a posture adjusting module for adjusting the placing posture of the dolls in the doll machine according to the prediction result.
The technical solution provided by the embodiments of the invention has the following beneficial effect:
the invention can analyze the placing posture of the dolls in a doll machine, predict the probability that the machine will be selected for grabbing under that posture and how many grab rounds a user will make, and adjust the placing posture of the dolls according to the predicted values.
Drawings
FIG. 1 is a flow chart of an embodiment of the doll placing posture-based grabbing amount prediction method of the present invention;
FIG. 2 is a block diagram of an embodiment of the doll placing posture-based grabbing amount prediction system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of the method of the present invention. As shown in FIG. 1, the doll placing posture-based grabbing amount prediction method of this embodiment may specifically comprise the following steps.
step 101, the doll machine acquires a doll image and an image of people around the doll machine.
The camera mounted on the doll machine monitors the posture of the dolls inside and detects whether part of a doll's limbs or attached parts is pinned by other dolls. Even when a doll sits in an easily grasped position, a limb or a part such as a string may be pressed under another doll, and in that case the user will not grab that doll.
The image acquisition device mounted on the doll machine captures images of players outside the machine.
The doll machine counts how many people in the stream of onlookers actually play it; an onlooker is anyone who watches the doll machine for longer than a preset duration threshold.
Step 102: analyze the doll shapes, the stacking height, the distance from the nearest doll to the drop opening, and the placing postures through image analysis software to obtain the doll placing features. This mainly comprises the following steps.
The doll contours are extracted with OpenCV image processing based on HOG features to obtain the doll shapes. The shape of a doll influences the user's desire to grab: cute dolls are favored by users, while unattractive ones are naturally not grabbed.
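As a rough, non-authoritative sketch of this step, the following Python code extracts a doll contour and a HOG shape descriptor with OpenCV; the frame path, Otsu segmentation, and 64x128 crop size are assumptions chosen for illustration, not values fixed by the patent.

```python
# Illustrative sketch: doll contour and HOG shape features with OpenCV.
# The frame path, segmentation method, and crop size are assumptions.
import cv2

frame = cv2.imread("doll_machine_frame.jpg")          # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Rough segmentation of the dolls from the cabinet background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# HOG descriptor over a fixed-size crop of the largest contour,
# used here as the "doll shape" feature vector.
hog = cv2.HOGDescriptor()                             # default 64x128 window
doll = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(doll)
crop = cv2.resize(gray[y:y + h, x:x + w], (64, 128))
shape_features = hog.compute(crop).flatten()
```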
The stacked height of the dolls is estimated from the height of the machine's chassis and the proportion of the chassis the stacked dolls occupy. The more dolls are stacked, the more choices a user has, and a higher stack also lets some dolls fall toward the outlet more easily, making them convenient to grab. Stacking height is thus another way to attract users.
The image is processed with OpenCV to locate the square or round drop opening of the doll machine. The drop opening is generally square, and its position can be recognized from the images captured by the image acquisition device.
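One plausible way to locate such an opening is sketched below, under the assumption that it appears as the most prominent circle or four-vertex contour in the frame; all parameter values are illustrative guesses.

```python
# Illustrative sketch: locating a square or round drop opening.
import cv2

def find_drop_opening(gray):
    """Return (cx, cy) of a square or circular opening, or None."""
    # Circular opening: Hough circle transform.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100,
                               param1=100, param2=40)
    if circles is not None:
        cx, cy, _ = circles[0][0]
        return int(cx), int(cy)

    # Square opening: largest contour that approximates to 4 vertices.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            x, y, w, h = cv2.boundingRect(approx)
            return x + w // 2, y + h // 2
    return None
```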
Each doll lying on top of the pile in the machine is identified, the distance from each to the drop opening is computed, and the distance of the doll closest to the opening is taken. The drop opening is the outlet through which a doll falls and can then be taken away by the user.
Dolls that lie close to the drop opening attract users more strongly, and a placement in this posture is highly appealing, since it looks easy to clip a doll in. In fact the merchant can adjust the grabbing force, so even when a doll lies close to the outlet, control over the probability of a successful grab stays in the merchant's hands.
The placing direction of each doll is obtained, and its placing inclination angle is analyzed from the image. The direction a doll faces is also related to how easily it can be grasped: a doll placed stably and neatly is hard to grab, so placing it face up, face down, or tilted favors grabbing. These are only simple examples based on common knowledge of which placing postures look easier to grasp and more attractive; in practice, repeated training and analysis with machine learning are needed to reach more accurate predictions.
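The patent does not fix how the inclination angle is measured; one plausible reading, sketched here, is the angle of the minimum-area rotated rectangle around the doll contour.

```python
# Hedged sketch: one possible measure of a doll's placement tilt angle.
import cv2

def placement_angle(contour):
    """Angle (degrees) of the rotated bounding box around a doll contour."""
    (_, _), (_, _), angle = cv2.minAreaRect(contour)
    return angle  # reported in [-90, 0) or [0, 90) depending on OpenCV version
```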
The doll shapes, stacking height, distance from the nearest doll to the drop opening, and placing postures analyzed by the image analysis software together form the doll placing features.
Step 103: count the probability that users play the doll machine, according to the doll placing features and analysis of the people flow.
According to how many people in the onlooker stream play each doll machine, the probability that each machine is selected for grabbing under its current placing posture is counted. With enough statistics on how many people look at a machine, or watch others grab at it, and then play it themselves, one can tell whether the dolls in a machine are attractive under a given placing posture, and analyze how many people each posture draws in. This statistical approach is simple yet fairly accurate, and it is the simplest way to predict the grabbing amount from the doll placing posture. For example, 50 people may pass and view a first doll machine while only 1 plays it, a play probability of 2%; 50 people may pass and view a second machine while 5 play it, a play probability of 10%. The second machine's dolls, or their placement, are evidently more attractive and more conducive to play.
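A minimal sketch of this statistic, using the figures from the example above (the function name is illustrative):

```python
# Play probability = players / onlookers, where an onlooker is anyone who
# watched the machine longer than the preset duration threshold.
def play_probability(onlookers: int, players: int) -> float:
    return players / onlookers if onlookers else 0.0

print(play_probability(50, 1))  # 0.02 -> 2%, first doll machine
print(play_probability(50, 5))  # 0.10 -> 10%, second doll machine
```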
Step 104: predict the user's grabbing behavior according to a pre-trained machine learning model.
Taking the doll placing features as feature data and whether the user selects the doll machine to grab as a binary classification label, machine learning model I is trained.
The condition of the dolls inside is a major factor in which machine a user selects. Because the doll machine can monitor the surrounding people flow, it can recognize whether nearby passers-by select the current machine to play. When a user selects one or more of the doll machines for grabbing, the selected machines are labeled positive and the unselected ones negative. These two labels train a binary classification machine learning model, whose training features are the placing features of the dolls in each machine's cabinet.
On the other hand, whether the user grabs at all is one question; how many times they grab is another.
Taking the doll placing features as feature data and the number of times the user grabs on the doll machine as the label, machine learning model II is trained.
Because a doll is generally not caught on the first try, multiple grabs are needed to reach a reasonable probability of success. A user typically decides how many times to grab based on how much they want the doll and how easy it looks to catch. Different dolls and different placing features determine the user's number of grab attempts. Using the grab count as the label for the doll features, the trained model can predict the number of grabs from the placing features.
The doll shape, stacking height, closest distance to the drop opening, and placing posture feature data are preprocessed and input into the trained binary classification support vector machine model to predict whether a user will select the current doll machine to grab. A support vector machine is used as the classification model for training and prediction; it is a binary classifier whose output is yes or no.
If the prediction is yes, the user is expected to grab the doll, and the number of grab rounds can be further predicted. The same feature data are preprocessed and input into the trained multi-class convolutional neural network model to predict how many rounds the user will grab. A convolutional neural network is used as the classification model for training and prediction; it is a multi-class model whose output is a number, i.e. how many rounds the user will grab. The support vector machine can be implemented with the open-source library scikit-learn, and the convolutional neural network with the open-source library PyTorch.
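For model II, a 1-D convolution over the feature vector is one plausible reading of "convolutional neural network" applied to these tabular features; the layer sizes and the cap of ten rounds below are assumptions.

```python
# Hedged sketch: model II (number of grab rounds, multi-class) in PyTorch.
import torch
import torch.nn as nn

NUM_FEATURES, MAX_ROUNDS = 4, 10          # illustrative dimensions

class GrabRoundsNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=2)   # slide over the features
        self.head = nn.Sequential(
            nn.ReLU(), nn.Flatten(),
            nn.Linear(8 * (NUM_FEATURES - 1), MAX_ROUNDS))

    def forward(self, x):                  # x: (batch, NUM_FEATURES)
        return self.head(self.conv(x.unsqueeze(1)))

net = GrabRoundsNet()
logits = net(torch.rand(1, NUM_FEATURES))
predicted_rounds = logits.argmax(dim=1).item() + 1   # classes map to 1..MAX_ROUNDS
```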
If the predicted number of grab rounds is smaller than a preset number, the merchant is reminded that the doll placing posture in the machine can be further optimized to increase the user's grabbing amount. For example, if the user is predicted to grab only once, the merchant's revenue remains small: the placement is not good enough, and the dolls could be arranged more attractively to increase the user's willingness to grab.
Step 105: if user participation is predicted to be low, reposition the dolls in the doll machine.
If the prediction for a doll machine is that the user will not select it to grab, or that the number of grabs will fall below a preset threshold, user participation is low.
In that case the dolls are agitated and repositioned by the mechanical arm, which mainly comprises the following.
and identifying the position of the doll in the doll machine through image processing software according to the doll position. After the position of the doll is identified, the control device is automatically positioned to the grabbing position, and the control device controls the mechanical arm to grab the doll in the doll machine. The automatic grabbing aims at automatically changing the placing position of the doll machine, reducing the trouble of manually opening a box to place and adjust the position, and simultaneously enabling the placing of the doll machine to be more natural and reasonable.
If the doll is not placed in an automatic mode, the placing effect of the doll machine can be changed by manually opening the box to place the doll machine.
The goal of repositioning is to bring dolls far from the drop opening closer to it, or to topple stably placed dolls so they are easier to grasp, since this attracts customers better.
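Every call on the `arm` object below is hypothetical (the patent specifies no arm control API); the sketch only makes the repositioning control flow concrete.

```python
# Hedged sketch of the repositioning step with a hypothetical arm API.
def reposition(dolls, drop_xy, arm, min_distance=30):
    """Move dolls that sit far from the drop opening closer to it."""
    for doll in dolls:                     # positions from image processing
        dx, dy = doll.x - drop_xy[0], doll.y - drop_xy[1]
        if (dx * dx + dy * dy) ** 0.5 > min_distance:
            arm.move_to(doll.x, doll.y)    # hypothetical positioning call
            arm.grab()                     # hypothetical claw close
            arm.move_to(*drop_xy)          # carry toward the drop opening
            arm.release()                  # hypothetical claw open
```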
Step 106: if user participation is predicted to be low, raise it by prompting the grabbing strength or by increasing the number of dolls.
If the prediction for the current doll machine is that the user will not choose to grab, the user can be attracted to participate by means other than agitating and repositioning the dolls with the mechanical arm. One is to increase the number of dolls in the machine and stack them higher: the more dolls are stacked, the more choices the user has, and a higher stack lets some dolls fall toward the outlet more easily, making them convenient to grab. Another is to adjust the grabbing arm, strengthen the grabbing force, and raise the probability of a successful catch. Since the machine's grabbing force and catch probability can be set and adjusted, increasing the force and prompting the user about it lets the user better judge their chances of success and raises participation.
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, or by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product stored in a non-volatile storage medium (which can be a CD-ROM, a USB disk, a removable hard disk, etc.), including several instructions that enable a computer device (a personal computer, a server, a network device, etc.) to execute the methods of the embodiments of the present invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A grabbing amount prediction method based on doll placing posture, characterized by comprising the following steps:
when a user selects among a plurality of doll machines, recording the selected doll machine;
mounting a camera on the doll machine to obtain images of the dolls inside and of the stream of onlookers;
when a user starts to grab on the selected doll machine, analyzing the doll shapes, the stacking height, the distance from the nearest doll to the drop opening, and the placing postures through image analysis software to obtain the doll placing features, which mainly comprises: extracting the doll contours with OpenCV image processing based on HOG features to obtain the doll shapes;
estimating the stacked height of the dolls from the height of the machine's chassis and the proportion of the chassis the stacked dolls occupy;
processing the image with OpenCV to locate the square or round drop opening of the doll machine;
identifying each doll lying on top of the pile, and computing the distance between the doll closest to the drop opening and the opening; the drop opening is the outlet through which a doll falls and can then be taken away by the user;
obtaining the placing direction of each doll and analyzing its placing inclination angle from the image;
counting the number of times the user grabs on the doll machine;
after a first user finishes playing a doll machine, predicting whether a second user will select that machine to grab, according to the doll placing features and a pre-trained machine learning model; the pre-trained machine learning model mainly comprises:
taking the doll placing features as feature data and whether the user selects the doll machine to grab as a binary classification label, training machine learning model I;
taking the doll placing features as feature data and the number of times the user grabs on the doll machine as the label, training machine learning model II;
machine learning model I using a support vector machine as the classification model for training and prediction;
machine learning model II using a convolutional neural network as the classification model for training and prediction;
predicting whether a second user will select the doll machine to grab comprises:
preprocessing the doll shape, stacking height, closest distance to the drop opening, and placing posture feature data, inputting them into the trained binary classification support vector machine model, and predicting whether a user will select the current doll machine to grab;
if yes, preprocessing the same feature data, inputting them into the trained multi-class convolutional neural network model, and predicting how many rounds the user will grab;
if not, repositioning the dolls in the doll machine.
2. The method of claim 1, wherein mounting a camera on the doll machine to obtain images of the dolls and of the stream of onlookers further comprises:
the camera mounted on the doll machine monitoring the posture of the dolls and detecting whether part of a doll's limbs is pinned by other dolls;
an image acquisition device mounted on the doll machine capturing images of players outside the machine;
the doll machine counting how many people in the stream of onlookers play it, where an onlooker is anyone who watches the doll machine for longer than a preset duration threshold;
and counting, according to how many people in the onlooker stream play the doll machines, the probability that each machine is selected by a user for grabbing under its current placing posture.
3. The method of claim 1, wherein after predicting how many rounds the user will grab, the method further comprises:
if the predicted number of grab rounds is smaller than a preset number, reminding the merchant that the doll placing posture in the machine can be further optimized to increase the user's grabbing amount.
4. The method of claim 1, wherein repositioning the dolls in the doll machine comprises:
if the prediction for the doll machine is that the user will not select it to grab, or that the number of grabs is below a preset threshold,
agitating and repositioning the dolls with the mechanical arm;
or prompting the user that the doll machine will grab with a larger grabbing force, to attract the user to grab later.
5. The method of claim 4, wherein agitating and repositioning the dolls with the mechanical arm comprises:
identifying the positions of the dolls in the machine through image processing software, and automatically controlling the mechanical arm according to those positions, so that dolls far from the drop opening move closer to it, or stably placed dolls are toppled and become easier to grab.
6. The method of claim 4, wherein agitating and repositioning the dolls with the mechanical arm further comprises:
if the prediction for the doll machine is that the user will not choose to grab:
increasing the number of dolls in the machine and stacking them higher; alternatively,
adjusting the grabbing mechanical arm, strengthening its grabbing force, and raising the probability of a successful grab.
7. A grabbing amount prediction system based on doll placing posture, characterized in that the system comprises:
a doll placing posture information acquisition module, wherein a camera is mounted on the doll machine to obtain images of the dolls inside and of the stream of onlookers;
when a user starts to grab on a selected doll machine, the doll shapes, the stacking height, the distance from the nearest doll to the drop opening, and the placing postures are analyzed through image analysis software to obtain the doll placing features, which mainly comprises: extracting the doll contours with OpenCV image processing based on HOG features to obtain the doll shapes;
estimating the stacked height of the dolls from the height of the machine's chassis and the proportion of the chassis the stacked dolls occupy;
processing the image with OpenCV to locate the square or round drop opening of the doll machine;
identifying each doll lying on top of the pile, and computing the distance between the doll closest to the drop opening and the opening; the drop opening is the outlet through which a doll falls and can then be taken away by the user;
obtaining the placing direction of each doll and analyzing its placing inclination angle from the image;
a data training module, which, after a first user finishes playing a doll machine, predicts whether a second user will select that machine to grab, according to the doll placing features and a pre-trained machine learning model; the pre-trained machine learning model mainly comprises:
taking the doll placing features as feature data and whether the user selects the doll machine to grab as a binary classification label, training machine learning model I;
taking the doll placing features as feature data and the number of times the user grabs on the doll machine as the label, training machine learning model II;
machine learning model I using a support vector machine as the classification model for training and prediction;
machine learning model II using a convolutional neural network as the classification model for training and prediction;
a grabbing prediction module for preprocessing the doll shape, stacking height, closest distance to the drop opening, and placing posture feature data, inputting them into the trained binary classification support vector machine model, and predicting whether a user will select the current doll machine to grab;
if yes, preprocessing the same feature data, inputting them into the trained multi-class convolutional neural network model, and predicting how many rounds the user will grab;
and a posture adjusting module for repositioning the dolls in the doll machine when the prediction result is negative.
CN201910657655.9A 2019-07-20 2019-07-20 Grabbing amount prediction method and system based on doll placing posture Active CN110348915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910657655.9A CN110348915B (en) 2019-07-20 2019-07-20 Grabbing amount prediction method and system based on doll placing posture

Publications (2)

Publication Number Publication Date
CN110348915A CN110348915A (en) 2019-10-18
CN110348915B 2021-11-12

Family

ID=68179480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910657655.9A Active CN110348915B (en) 2019-07-20 2019-07-20 Grabbing amount prediction method and system based on doll placing posture

Country Status (1)

Country Link
CN (1) CN110348915B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE1008665A6 (en) * 1994-09-05 1996-07-02 Rotero Belgium Besloten Vennoo Slot machine with game of skill for the user
CN108615310A (en) * 2018-05-07 2018-10-02 北京云点联动科技发展有限公司 A kind of control method of control doll machine crawl success rate
CN108635828A (en) * 2018-03-29 2018-10-12 上海掌门科技有限公司 A kind of doll machine operating method, equipment, system
CN109544821A (en) * 2018-11-21 2019-03-29 网易(杭州)网络有限公司 A kind of information processing method and long-range doll machine system


Also Published As

Publication number Publication date
CN110348915A (en) 2019-10-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230630

Address after: Room 424, Building 2, No. 318, Waihuan West Road, University Town, Xiaoguwei Street, Panyu District, Guangzhou, Guangdong 510000

Patentee after: Guangzhou Xinqi Intelligent Technology Co.,Ltd.

Address before: Room f101-12, No.1 incubation and production building, guanshao shuangchuang (equipment) center, Huake City, 42 Baiwang Avenue, Wujiang District, Shaoguan City, Guangdong Province, 512026

Patentee before: Shaoguan Qizhi Information Technology Co.,Ltd.