CN112138370A - Control method and device of AR doll machine and electronic device - Google Patents


Info

Publication number
CN112138370A
Authority
CN
China
Prior art keywords
doll
terminal
scene model
grabbing
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010946249.7A
Other languages
Chinese (zh)
Inventor
李晓燕
王欣捷
潘莎莎
曹敏力
曾波
赵怡华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixian Advanced Technology Co ltd
Original Assignee
Hangzhou Yixian Advanced Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yixian Advanced Technology Co ltd filed Critical Hangzhou Yixian Advanced Technology Co ltd
Priority to CN202010946249.7A
Publication of CN112138370A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25: Output arrangements for video game devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207: Discounts or incentives, e.g. coupons or rebates
    • G06Q30/0209: Incentive being awarded or redeemed in connection with the playing of a video game
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308: Details of the user interface
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082: Virtual reality

Abstract

The application relates to a control method and device of an AR doll machine and an electronic device. The control method comprises the following steps: acquiring a video stream and virtual resource information, and generating a scene model of the AR doll machine according to the virtual resource information; acquiring a three-dimensional coordinate point of the scene model from the video stream by using a VSLAM algorithm; entering a grabbing interaction mode of the AR terminal once the scene model is fixedly placed according to the three-dimensional coordinate point; and, in the grabbing interaction mode, obtaining the dropping result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine. The method and device solve the problem that a physical doll machine wastes resources and realize accurate control of the AR doll machine.

Description

Control method and device of AR doll machine and electronic device
Technical Field
The present disclosure relates to the field of Augmented Reality (AR), and in particular, to a control method and apparatus of an AR doll machine, and an electronic apparatus.
Background
A doll machine, also called a prize vending machine and commonly known as a claw machine, comprises in the related art a crown block, a gripper, a joystick, a coin slot, buttons, a cabinet body, and a transparent box body. Dolls (a general term for various toys) are displayed in the transparent box body; a player controls the crown block through the joystick to move horizontally inside the box, and when the player judges the position to be right and presses the button, the claw at the bottom of the crown block automatically lowers, performs a clamping action, and then carries the clamped prize to the outlet and drops it.
In the related art, the dolls in a doll machine are limited to the physical dolls stocked by the operator; the variety and styles are limited, and a player may not find a doll he or she likes, or the desired doll may be awkward to grab. Moreover, because a grabbed doll must be picked up physically, it is inconvenient for the player to carry and cannot be transferred immediately, which limits the number of times players experience the game. In addition, the operator must continually change the doll models in the machine and replenish the number of dolls at any time, which occupies a large amount of manpower and material resources; the physical doll machine therefore wastes considerable resources.
At present, no effective solution has been proposed in the related art for the problem that the physical doll machine wastes resources.
Disclosure of Invention
The embodiments of the present application provide a control method and device of an AR doll machine and an electronic device, so as to at least solve the problem in the related art that the physical doll machine wastes resources.
In a first aspect, embodiments of the present application provide a method for controlling an AR doll machine, the method comprising:
acquiring a video stream and virtual resource information, and generating a scene model of the AR doll machine according to the virtual resource information;
acquiring a three-dimensional coordinate point of the scene model from the video stream by using a Visual Simultaneous Localization And Mapping (VSLAM) algorithm;
entering a grabbing interaction mode of the AR terminal once the scene model is fixedly placed according to the three-dimensional coordinate point;
and, in the grabbing interaction mode, obtaining a dropping result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine.
In some of these embodiments, after obtaining the dropping result of the doll in the AR doll machine, the method further comprises:
controlling the AR terminal to push merchant coupon information to the user as a reward if the dropping result is successful.
In some embodiments, controlling the AR terminal to push the merchant coupon information to the user as a reward comprises:
waiting for and acquiring, using an asynchronous thread, the merchant coupon information returned by the server, or waiting for and acquiring the merchant coupon information returned by an application program of the AR terminal;
and updating the sending state of the merchant coupon information.
In some embodiments, after entering the grabbing interaction mode of the AR terminal and before obtaining the dropping result of the doll according to the moving position of the gripper in the AR doll machine, the method further comprises:
detecting a grabbing operation on the AR doll machine;
and, in response to the detected grabbing operation, acquiring the moving position according to the orientation vector of the AR terminal, wherein the moving position always lies in a plane parallel to the top surface of the AR doll machine.
In some embodiments, after acquiring the moving position and before obtaining the dropping result of the doll according to the moving position of the gripper in the AR doll machine, the method further comprises:
when a collision among the AR doll machine, the gripper, and the doll is detected according to the moving position, calculating the force condition of the gripper at the next moment according to force parameters;
when the gripper opens or closes, calculating the force condition according to gripper state parameters;
and calculating the grabbing state of the doll according to the force condition.
In some of these embodiments, obtaining the dropping result of the doll in the AR doll machine comprises:
when the doll is grabbed in the grabbing state, controlling the gripper to release so as to obtain the dropping result, the AR terminal performing a reminding operation according to the dropping result;
and sending the dropping result to a server, and/or sending the dropping result to an application program of the AR terminal for display.
In some embodiments, acquiring the three-dimensional coordinate point of the scene model from the video stream using the VSLAM algorithm comprises:
calculating the three-dimensional space structure of the video stream using the VSLAM algorithm, and acquiring the three-dimensional coordinate point according to the three-dimensional space structure.
In some embodiments, after acquiring the three-dimensional coordinate point according to the three-dimensional space structure, the method further comprises:
detecting a placement operation on the AR doll machine;
and placing the scene model at the position of the three-dimensional coordinate point in response to the detected placement operation.
In a second aspect, embodiments of the present application provide a control apparatus for an AR doll machine, the apparatus comprising: a generating module, a coordinate module, a grabbing module, and a dropping module;
the generating module is used for acquiring a video stream and virtual resource information, and generating a scene model of the AR doll machine according to the virtual resource information;
the coordinate module is used for acquiring a three-dimensional coordinate point of the scene model from the video stream by using a VSLAM algorithm;
the grabbing module is used for entering a grabbing interaction mode of the AR terminal once the scene model is fixedly placed according to the three-dimensional coordinate point;
and the dropping module is used for obtaining a dropping result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine in the grabbing interaction mode.
In a third aspect, embodiments of the present application provide an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for controlling an AR doll machine as described in the first aspect.
Compared with the related art, the control method and device of the AR doll machine and the electronic device provided by the embodiments of the present application acquire a video stream and virtual resource information and generate a scene model of the AR doll machine according to the virtual resource information; acquire a three-dimensional coordinate point of the scene model from the video stream by using a VSLAM algorithm; enter a grabbing interaction mode of the AR terminal once the scene model is fixedly placed according to the three-dimensional coordinate point; and, in the grabbing interaction mode, obtain the dropping result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine. The embodiments thereby solve the problem that a physical doll machine wastes resources and realize accurate control of the AR doll machine.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application;
fig. 2 is a flow diagram of a method of controlling an AR doll machine according to an embodiment of the present application;
FIG. 3 is a first schematic illustration of an AR doll machine game page according to an embodiment of the present application;
FIG. 4 is a second schematic illustration of an AR doll machine game page according to an embodiment of the present application;
FIG. 5 is a third schematic illustration of an AR doll machine game page according to an embodiment of the present application;
fig. 6 is a flow diagram of another AR doll machine control method according to an embodiment of the present application;
fig. 7 is a flow diagram of yet another AR doll machine control method according to an embodiment of the present application;
FIG. 8 is a fourth schematic illustration of an AR doll machine game page according to an embodiment of the present application;
FIG. 9A is a fifth schematic illustration of an AR doll machine game page according to an embodiment of the present application;
FIG. 9B is a sixth schematic illustration of an AR doll machine game page according to an embodiment of the present application;
fig. 10 is a block diagram of an AR doll machine control apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of the inside of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
In the present embodiment, an application scenario of an AR doll machine is provided. Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application. As shown in fig. 1, the AR terminal 12 may communicate with the server 14 through a network; the server 14 generates a scene model of the AR doll machine and a three-dimensional coordinate point of the scene model according to the video stream and the virtual resource information acquired by the AR terminal 12; after the server 14 fixedly places the scene model according to the three-dimensional coordinate point, the grabbing interaction mode of the AR terminal 12 is entered, and the dropping result of the doll in the AR doll machine is obtained according to the moving position of the gripper in the AR doll machine. The AR terminal 12 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or another AR device; the server 14 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In this embodiment, a method for controlling an AR doll machine is provided, and fig. 2 is a flowchart of a method for controlling an AR doll machine according to an embodiment of the present disclosure. As shown in fig. 2, the flow includes the following steps:
step S202, video stream and virtual resource information are obtained, and a scene model of the AR doll is generated according to the virtual resource information. Referring to fig. 1 and 2, upon entering a game page of a doll, a camera of the AR terminal 12 is turned on, and a video stream in a scene where the doll is to be placed is acquired by the camera. The virtual resource information can be loaded from local, the virtual resource information comprises pictures, a 3D doll model and the like to be displayed on a screen of the AR terminal 12, and the virtual resource information is packaged into a scene model of the AR doll; the scene model may be used for display on the AR terminal 12 and interaction with the user, and may also be uploaded to the server 14 for storage and synchronization.
Fig. 3 is a first schematic diagram of an AR doll machine game page according to an embodiment of the present application. After the AR terminal 12 enters the game page, the user may turn on the camera, and the game page is displayed as shown in fig. 3. A rolling bullet screen directly above the page shows the winning status of all users in the game, for example, "user V9 won a 15-yuan coupon". A button for calling the doll machine is displayed directly below the page; during initialization and plane detection, the button is grayed out and displays the text "……", and once initialization and plane detection finish it lights up, becomes clickable, and displays the "call the doll machine" text shown in fig. 3. The doll machine is placed after the user clicks this button. A plane-recognition prompt box is displayed above the button and disappears after the doll machine is successfully called; a text prompt is also displayed directly above the prompt box, for example, "the effect is better when the mobile phone is at 45 degrees to the desktop".
A return button is displayed at the upper left of the page; it responds to the user's touch, closes the camera, and exits the game. Directly below the return button are an earn-coins button, a my-rewards button, a screen-capture sharing button, and a recall button. Different displays are triggered by the user's operation of these buttons. For example, after the user clicks the earn-coins button, the number of gold coins is displayed on the screen in the format of the earn-coins button in fig. 3, with "earn coins" displayed below it; clicking "earn coins" opens a popup window that shows the ways of earning coins. After the user clicks the my-rewards button, the AR terminal 12 opens the H5 page and displays the rewards the user has obtained through this activity. When an operation on the screen-capture sharing button is detected, the current game page is captured and a sharing control pops up. In addition, the user may re-find the scene plane to place the doll machine by clicking the recall button.
Step S204, a three-dimensional coordinate point of the scene model is acquired from the video stream by using a VSLAM algorithm. The 3D space structure of the video stream is tracked by the VSLAM algorithm, so that the scene model of the AR doll machine always remains fixed at the same position in the scene.
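The anchoring behavior of step S204 can be sketched as a pair of pose transforms: once a VSLAM tracker reports the camera pose for each frame, a point picked in one frame's camera coordinates becomes a fixed world anchor, and that anchor is re-expressed in every later frame's camera coordinates for rendering. This is a minimal illustration with made-up poses, not the patent's actual tracking pipeline:

```python
import numpy as np

def camera_to_world(point_cam, R, t):
    """Map a 3D point from camera coordinates to world coordinates,
    given the camera pose (rotation R, translation t) reported by the
    VSLAM tracker for the current frame."""
    return R @ point_cam + t

def world_to_camera(point_world, R, t):
    """Inverse mapping: express a fixed world anchor in the current
    camera frame so it renders at the correct screen position."""
    return R.T @ (point_world - t)

# Anchor the scene model once: a point on the detected plane is hit in
# the camera frame where it was found (identity pose here for clarity).
R0, t0 = np.eye(3), np.zeros(3)
anchor_world = camera_to_world(np.array([0.1, -0.2, 1.5]), R0, t0)

# A later frame: the camera has translated, but the anchor's world
# coordinates are unchanged -- only its camera-frame position differs,
# which is exactly why the model appears fixed in the scene.
t1 = np.array([0.05, 0.0, 0.1])
anchor_cam1 = world_to_camera(anchor_world, R0, t1)
```

The invariant world anchor is what keeps the virtual doll machine pinned to one spot as the user walks around it.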
Step S206, the grabbing interaction mode of the AR terminal 12 is entered once the scene model is fixedly placed according to the three-dimensional coordinate point; in the grabbing interaction mode, the dropping result of the doll in the AR doll machine is obtained according to the moving position of the gripper in the AR doll machine.
After the three-dimensional coordinate point for placement is obtained, the grayed "preparing" button is replaced by the clickable "call the doll machine" button of fig. 3. After the user clicks the "call the doll machine" button, the 3D scene model is displayed at the position of the three-dimensional coordinate point in the display canvas of the AR terminal 12.
Fig. 4 is a second schematic diagram of an AR doll machine game page according to an embodiment of the present application. As shown in fig. 4, after the user clicks "call the doll machine" in fig. 3, a virtual AR doll machine appears on the plane at which the AR terminal 12 is aimed, and a "start game" button appears on the page showing the number of gold coins required per game; the button is hidden after being clicked, and the grabbing interaction mode is entered to start the game.
At this time, the user holds the AR terminal 12 and moves and rotates it slightly in space; the gyroscope on the AR terminal 12, together with the VSLAM algorithm applied to the video stream, calculates the movement of the terminal in real space, so that the user observes the change in relative position between the doll machine model and the smartphone. Using this characteristic, the doll machine model can be observed from all directions.
Fig. 5 is a third schematic diagram of an AR doll machine game page according to an embodiment of the present application. As shown in fig. 5, when the AR doll machine disappears from the screen because the camera lens of the AR terminal 12 has moved, an indication arrow appears in the page to indicate the direction of the AR doll machine.
Through steps S202 to S206, the scene model of the AR doll machine is generated from the virtual resource information, and the three-dimensional coordinate point of the scene model is obtained based on the VSLAM algorithm; accurate placement and grabbing of the scene model can then be performed according to the three-dimensional coordinate point. Data such as doll type and number are updated through the scene model and superimposed on the real scene in an augmented-reality manner, so the experience is not completely split from the original traditional one. To keep the learning cost of the augmented-reality interaction low for users, environment recognition based on the VSLAM algorithm is used to realize the placement of the doll machine and the grabbing and dropping of dolls, achieving a vivid, simulated experience in which the real environment and virtual objects are superimposed. This solves the problem that the doll machine wastes resources and realizes accurate AR doll machine control.
In some embodiments, a method of controlling an AR doll machine is provided, and fig. 6 is a flow chart of another method of controlling an AR doll machine according to an embodiment of the present disclosure; the method comprises the following steps:
in step S602, if the drop result is successful, the AR terminal 12 is controlled to push the coupon information to the user as a reward. The method comprises the steps that the merchant ticket information returned by a server can be waited and obtained by using an asynchronous thread; or, waiting for and acquiring the merchant ticket information returned by the application program of the AR terminal 12; the transmission status of the merchant ticket information is updated.
Specifically, when the doll is successfully dropped to the bottom of the outlet of the AR doll machine in the calculation mode, namely the doll is successfully grabbed in the game, starting a thread which asynchronously waits for receiving coupon information, and waiting for a background server or an upper application program to return the coupon information; and displaying the coupon information in the page at the moment, and displaying text information of 'the coupon is put into the corresponding account' to remind the user. The background server or the upper application program dispatches the coupons according to preset rules, namely selects an unsent coupon from the existing coupon pool, sends the coupon information to the doll catching game thread through a preset communication interface, sets the state of the coupon in the coupon pool as 'dispatched', and updates the coupon information in the user account.
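The asynchronous wait-and-dispatch flow above can be sketched with a thread and a queue standing in for the preset communication interface. The coupon pool, channel, and account structures are hypothetical; the patent does not specify the data layout:

```python
import threading
import queue

# Hypothetical coupon pool; in the patent this lives on the background
# server or in the upper-layer application.
coupon_pool = [{"id": "C001", "value": 15, "state": "unsent"}]
coupon_channel = queue.Queue()  # stand-in for the communication interface
user_account = {"coupons": []}

def dispatch_coupon():
    """Server side: select an unsent coupon from the pool, mark it
    dispatched, and push it to the game thread over the channel."""
    for coupon in coupon_pool:
        if coupon["state"] == "unsent":
            coupon["state"] = "dispatched"
            coupon_channel.put(coupon)
            return

def await_coupon(timeout=5.0):
    """Game side: asynchronously wait for the coupon returned by the
    server and credit it to the user's account."""
    coupon = coupon_channel.get(timeout=timeout)
    user_account["coupons"].append(coupon)

# The waiter thread is started as soon as the drop succeeds; the server
# responds at its own pace, and the game thread is never blocked.
waiter = threading.Thread(target=await_coupon)
waiter.start()
dispatch_coupon()
waiter.join()
```

The queue decouples the game loop from the server's response time, which is the point of using an asynchronous thread here.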
It should be noted that the AR doll machine based on merchant coupon pushing can also be applied on a mobile phone. While preserving the experience, the size of the mobile software package is reduced by 30%, so that 90% or more of mobile phone models can be covered while the phone runs smoothly without stuttering; the user base able to experience the AR application is thus wider, which greatly increases the possibility of sharing among users and viral e-commerce spread.
In the related art, common coupon-issuing forms, such as brand coupon distribution, cannot effectively attract user participation, and the conversion effect after a coupon is received is poor; this is unfavorable for brand promotion and cannot meet users' demand for a high-quality experience. In the embodiment of the present application, through step S602, the merchant coupon information is pushed to the user as a reward for a successful drop, and the merchant coupon information from the server or application program is returned using an asynchronous thread. An AR doll machine creative interaction mechanism based on e-commerce coupon issuing is thereby realized; the game mechanism makes e-commerce coupon issuing more attractive to users, solves the problem of the single application scenario of the doll machine in the related art, and improves the interest of the AR doll machine control method.
In some embodiments, a method of controlling an AR doll machine is provided, and fig. 7 is a flow chart of yet another method of controlling an AR doll machine according to an embodiment of the present disclosure; as shown in fig. 7, the method comprises the following steps:
step S702, in the grabbing interaction mode, detecting grabbing operation on the AR doll; in response to the detected grab operation, acquiring the moving position according to the orientation vector of the AR terminal 12; wherein the movement position is always in a plane parallel to the top surface of the AR doll.
Fig. 8 is a fourth schematic diagram of an AR doll machine game page according to an embodiment of the present disclosure. As shown in fig. 8, after the "start game" button is clicked, the button is hidden, a "gold coin −1" animation pops out of the earn-coins button at the left, a direction control comprising four direction buttons (front, back, left, right) is displayed at the lower left of the page, and a "grab" button with the text prompt "clamp" and a 30 s countdown prompt is displayed at the lower right. Clicking the front, back, left, and right buttons moves the gripper in the doll machine model accordingly. The movement rule is as follows: the gripper can only move in a plane D parallel to the top surface of the doll machine; the front direction is the projected unit vector of the current AR terminal 12 orientation in D, the back direction is the negative of the front direction, the left direction is the front direction rotated 90 degrees counter-clockwise in D, and the right direction is the negative of the left direction.
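The movement rule above can be written out directly as vector arithmetic. This sketch assumes the terminal's orientation vector and the normal of plane D are available as 3-vectors (the function name and the z-up convention are illustrative):

```python
import numpy as np

def move_directions(terminal_forward, plane_normal):
    """Derive the four gripper movement directions from the AR
    terminal's orientation vector, constrained to the plane D parallel
    to the doll machine's top surface."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Front: project the terminal's orientation onto D and normalise.
    proj = terminal_forward - np.dot(terminal_forward, n) * n
    front = proj / np.linalg.norm(proj)
    back = -front
    # Left: front rotated 90 degrees counter-clockwise within D, i.e.
    # about the plane normal; right is its negative.
    left = np.cross(n, front)
    right = -left
    return front, back, left, right
```

For example, with a z-up plane and the phone tilted downward at the machine, `move_directions(np.array([0.6, 0.0, -0.8]), np.array([0.0, 0.0, 1.0]))` yields front along +x and left along +y, matching the rule that the controls follow wherever the user is facing.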
The user can repeatedly click the four direction buttons to adjust the position of the gripper while moving the AR terminal 12 to observe the gripper and the dolls in the machine from various angles, so as to grab successfully. After adjusting the gripper position, the user clicks the "grab" button; the four direction buttons and the "grab" button of fig. 8 then become non-clickable, the gripper descends and grips, and after gripping it automatically rises, moves above the hole, and releases.
Step S704, in the case that a collision among the AR doll machine, the gripper, and the doll is detected at the moving position, calculating the stress condition of the gripper at the next moment according to the stress parameters; in the case that the gripper is opening or closing, calculating the stress condition according to the gripper state parameters; and calculating the grabbing state of the doll according to the stress condition.
It should be noted that, during the process of grabbing the doll, the gripper in the scene model opens and slowly descends until it reaches the bottom or is blocked by a doll. After dwelling for three seconds, the gripper closes inward and ascends; on reaching the top, it moves above the outlet box of the doll machine and then releases. This step involves physical calculations that accurately simulate the collision behavior of rigid bodies in the real world.
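The grab sequence described above (open, descend, dwell, close, ascend, move to the outlet, release) can be sketched as a small state machine. The state names and transition table below are illustrative, not taken from the patent:

```python
from enum import Enum, auto

class ClawState(Enum):
    OPENING = auto()
    DESCENDING = auto()
    PAUSED = auto()            # the three-second dwell at the bottom
    CLOSING = auto()
    ASCENDING = auto()
    MOVING_TO_OUTLET = auto()
    RELEASING = auto()         # terminal state: drop the doll

# Hypothetical transition table for the fixed grab sequence.
NEXT_STATE = {
    ClawState.OPENING: ClawState.DESCENDING,
    ClawState.DESCENDING: ClawState.PAUSED,
    ClawState.PAUSED: ClawState.CLOSING,
    ClawState.CLOSING: ClawState.ASCENDING,
    ClawState.ASCENDING: ClawState.MOVING_TO_OUTLET,
    ClawState.MOVING_TO_OUTLET: ClawState.RELEASING,
}

def advance(state):
    """Advance to the next phase of the grab sequence; RELEASING is terminal."""
    return NEXT_STATE.get(state, state)
```

A driver loop would call `advance` when each phase completes (bottom reached, dwell timer expired, top reached, and so on).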
Specifically, the gripper is a hinge structure, and the doll machine and the dolls are non-deformable objects (rigid bodies). When the gripper, the doll machine, and the dolls collide with one another, the computer calculates the stress condition of the gripper at the next moment from the stress parameters and updates the position and orientation of each object; when the gripper opens or closes, the computer calculates its stress condition from the gripper state parameters and updates its position and orientation. Because rigid bodies cannot interpenetrate, a doll may be scooped up and lifted by the gripper, or bounced away, during the physical simulation; whether the doll can be gripped is likewise determined by the result of the physical calculation. The stress parameters include gravity, the preset friction of each object, and the current acceleration; the gripper state parameters include the motion law of the hinge system and the preset rotation angle, resistance, and other parameters of the gripper.
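As a rough illustration of how stress parameters such as gravity, preset friction, and current acceleration might feed a per-frame update, here is a minimal semi-implicit Euler step for a single body. It is a sketch under stated assumptions, not the patent's engine: a real implementation needs a full rigid-body solver with hinge joints and collision resolution.

```python
def step_body(position, velocity, mass, applied_force, dt,
              gravity=-9.8, friction_coeff=0.2):
    """One semi-implicit Euler step for a point-mass stand-in for a rigid
    body. `friction_coeff` models a simple velocity-proportional drag --
    an assumption, since the patent only names friction as a preset value."""
    px, py, pz = position
    vx, vy, vz = velocity
    # Net force: applied force, plus gravity on Y, minus drag opposing motion.
    fx, fy, fz = applied_force
    fy += mass * gravity
    fx -= friction_coeff * vx
    fy -= friction_coeff * vy
    fz -= friction_coeff * vz
    # Update velocity first, then position (semi-implicit Euler).
    vx += fx / mass * dt
    vy += fy / mass * dt
    vz += fz / mass * dt
    return (px + vx * dt, py + vy * dt, pz + vz * dt), (vx, vy, vz)
```

Stepping every frame with the forces from collisions and the hinge constraints would yield the "position and orientation" updates the text describes (orientation is omitted in this point-mass sketch).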
Step S706, in the case that the grabbing state indicates the doll has been grabbed, controlling the gripper to release so as to obtain the drop result; wherein the AR terminal 12 performs a reminding operation according to the drop result, and the drop result is sent to a server and/or to an application program of the AR terminal 12 for display.
If the doll has been grabbed by the gripper, it falls from the gripper when the gripper releases and undergoes free fall; it is then determined whether the doll has dropped into the success zone, i.e., whether it has contacted the flat surface at the bottom of the outlet box. It should be noted that a falling doll may hit the wall of the outlet box and bounce out of the bottom plane. If the doll contacts the plane at the bottom of the outlet box, the game is judged successful, a "grab succeeded" prompt pops up, and a success status notification is sent to the background server or the upper-layer application program of the AR terminal 12. If the doll does not contact the plane at the bottom of the outlet box, the game is judged failed, a "grab failed" prompt pops up, and a failure status notification is sent to the background server or the upper-layer application program.
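The success test described here (contact with the flat bottom of the outlet box, with the caveat that a doll can bounce off a wall and leave the bottom plane) might be checked as follows. The spherical doll and axis-aligned box geometry are simplifying assumptions:

```python
def drop_succeeded(doll_pos, doll_radius, box_min, box_max, tolerance=1e-3):
    """Judge a grab successful when the doll rests on the outlet box floor.
    Assumed geometry: the doll is a sphere of `doll_radius`, the outlet box
    an axis-aligned region whose floor sits at box_min[1] (Y-up). The doll
    must touch the floor plane AND remain inside the box footprint, since
    a doll that hits the wall can bounce back out of the bottom plane."""
    x, y, z = doll_pos
    touching_floor = (y - doll_radius) <= (box_min[1] + tolerance)
    inside_footprint = (box_min[0] <= x <= box_max[0]
                        and box_min[2] <= z <= box_max[2])
    return touching_floor and inside_footprint
```

A positive result would trigger the "grab succeeded" prompt and the status notification to the server or upper-layer application.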
Fig. 9A is a schematic diagram of an AR doll machine game page according to an embodiment of the present disclosure. If the doll drops at the exit position, i.e., the doll is successfully captured, a success sound effect is played, a pop-up window is displayed at the center of the screen of the AR terminal 12 while the doll model moves, and the pop-up then disappears. As shown in Fig. 9A, the center of the pop-up displays a prompt for the captured doll, with different messages for different dolls, such as "Well done! The XX doll has been added!", together with a smiley-face icon; buttons for inviting friends to play, playing again, and viewing coupons are displayed at the bottom of the pop-up, and the user can click one of them to proceed with the subsequent flow.
Fig. 9B is a sixth schematic view of an AR doll machine game page according to an embodiment of the present disclosure. If a drop failure is detected, i.e., the user fails to catch the doll, a failure sound effect is played and a failure pop-up appears. As shown in Fig. 9B, the center of the failure pop-up displays a clip-failure text prompt, e.g., "Oops, you didn't catch it", together with a crying-face icon; the bottom of the failure pop-up likewise displays buttons for inviting friends to play, playing again, and viewing coupons.
Through steps S702 to S706, the stress condition of the gripper is calculated from parameters such as gravity, preset friction, and acceleration, and, when the gripper opens or closes, from parameters such as its rotation angle and resistance. The collision behavior of rigid bodies in the real world is thus accurately simulated by physical calculation; environment recognition is combined with a physics engine comprising physical calculation and physical simulation, further improving the interactive and simulated experience.
In some embodiments, acquiring the three-dimensional coordinate points of the scene model from the video stream using the VSLAM algorithm further includes: calculating the three-dimensional spatial structure of the video stream using the VSLAM algorithm, and acquiring the three-dimensional coordinate point designated by the AR terminal 12 according to the three-dimensional spatial structure, so that the positioning of the scene model in the control method of the AR doll machine is more accurate.
A placing operation on the AR doll machine is then detected; in response to the detected placing operation, the scene model is placed at the position of the three-dimensional coordinate point. The placing operation may be a user action such as clicking or touching a placement button on the AR terminal 12; after it is detected, the scene model of the AR doll machine is placed at the calculated three-dimensional coordinate point, improving both the accuracy of controlling the AR doll machine and the user's interactive experience.
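One plausible way to turn a tap on the AR terminal into the designated three-dimensional coordinate point is to cast a ray from the camera through the tap and pick the nearest VSLAM map point close to that ray. This is a generic sketch, not the patent's algorithm; production AR frameworks such as ARKit and ARCore expose equivalent hit-testing directly.

```python
import math

def hit_test(map_points, ray_origin, ray_dir, max_perp_dist=0.05):
    """Return the VSLAM map point nearest along a tap ray, or None.
    `map_points` is the sparse 3D point cloud produced by VSLAM; the ray
    comes from unprojecting the user's tap through the camera pose."""
    n = math.sqrt(sum(c * c for c in ray_dir))
    d = tuple(c / n for c in ray_dir)
    best, best_t = None, float("inf")
    for p in map_points:
        v = tuple(p[i] - ray_origin[i] for i in range(3))
        t = sum(v[i] * d[i] for i in range(3))   # distance along the ray
        if t <= 0:
            continue  # point is behind the camera
        perp_sq = sum(v[i] * v[i] for i in range(3)) - t * t
        if perp_sq <= max_perp_dist ** 2 and t < best_t:
            best, best_t = p, t
    return best  # anchor at which to place the scene model
```

The returned point would then serve as the fixed placement anchor for the doll machine's scene model.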
It should be noted that the steps illustrated in the above flow diagrams, or in the flow diagrams of the figures, may be performed in a computer system, e.g., as a set of computer-executable instructions, and that, although a logical order is shown in the flow diagrams, in some cases the steps illustrated or described may be performed in an order different from the one given here.
The present embodiment further provides a control device for an AR doll machine, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware with a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 10 is a block diagram of an AR doll machine control apparatus according to an embodiment of the present disclosure. As shown in Fig. 10, the apparatus includes: a generation module 102, a coordinate module 104, a grabbing module 106, and a dropping module 108.
The generating module 102 is configured to obtain a video stream and virtual resource information, and generate a scene model of the AR doll according to the virtual resource information; the coordinate module 104 is configured to obtain a three-dimensional coordinate point of the scene model according to the video stream by using a VSLAM algorithm; the capture module 106 is configured to enter a capture interaction mode of the AR terminal 12 under the condition that the scene model is fixedly placed according to the three-dimensional coordinate point; the dropping module 108 is configured to obtain a dropping result of the doll in the AR doll according to a moving position of the gripper in the AR doll in the gripping interaction mode.
Through this embodiment, the generation module 102 generates the scene model of the AR doll machine from the virtual resource information, and the coordinate module 104 acquires the three-dimensional coordinate points of the scene model via the VSLAM algorithm, so that the scene model can be accurately placed and grabbed according to those points. Data such as doll types and counts are updated based on the scene model and superimposed on the real scene in an AR manner, so the experience is not completely divorced from the original, traditional one. To keep the learning cost low for users in AR interaction, VSLAM-based environment recognition is used to realize placing the doll machine, grabbing a doll, and dropping it, achieving a vivid, realistic superposition of the real environment and virtual objects, solving the resource waste of physical doll machines, and realizing accurate AR doll machine control.
In some of these embodiments, the control device further comprises a coupon module; the coupon module is configured to control the AR terminal 12 to push the merchant coupon information to the user as a reward if the drop result is successful.
In some embodiments, the coupon module is further configured to wait for and acquire, on an asynchronous thread, the coupon information returned by the server, or to wait for and acquire the coupon information returned by the application program of the AR terminal 12; the coupon module then updates the sending state of the coupon information.
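The asynchronous wait for coupon information might look like the following sketch, using a worker thread and a queue so the game's UI thread never blocks. `request_coupon` and `on_received` are hypothetical callables standing in for the server or in-app request and the status update, neither of which the text specifies:

```python
import queue
import threading

def fetch_coupon_async(request_coupon, on_received, timeout=10.0):
    """Run the (possibly slow) coupon request on a daemon worker thread and
    wait up to `timeout` seconds for its result. Returns the coupon info on
    success, or None on timeout/error."""
    result_q = queue.Queue()

    def worker():
        try:
            result_q.put(request_coupon())   # e.g. HTTP call to the server
        except Exception as exc:             # surface the failure to the caller
            result_q.put(exc)

    threading.Thread(target=worker, daemon=True).start()
    try:
        result = result_q.get(timeout=timeout)
    except queue.Empty:
        return None                          # server did not answer in time
    if isinstance(result, Exception):
        return None
    on_received(result)                      # e.g. update the sending state
    return result
```

On success, `on_received` is where the module would mark the coupon's sending state as delivered.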
In some of these embodiments, the grabbing module 106 is further configured to detect a grabbing operation on the AR doll machine and, in response to the detected grabbing operation, obtain the moving position according to the orientation vector of the AR terminal 12; the moving position always lies in a plane parallel to the top surface of the AR doll machine.
In some embodiments, the grabbing module 106 is further configured to, when a collision among the AR doll machine, the gripper, and the doll is detected at the moving position, calculate the stress condition of the gripper at the next moment from the stress parameters; to calculate the stress condition from the gripper state parameters when the gripper opens or closes; and to calculate the grabbing state of the doll from the stress condition.
In some embodiments, the drop module 108 is further configured to control the gripper to release when the grabbing state indicates the doll has been grabbed, so as to obtain the drop result; the AR terminal 12 performs a reminding operation according to the drop result, and the drop module 108 sends the drop result to a server and/or to an application program of the AR terminal 12 for display.
In some embodiments, the coordinate module 104 is further configured to calculate a three-dimensional spatial structure of the video stream using the VSLAM algorithm, and obtain the three-dimensional coordinate point according to the three-dimensional spatial structure.
In some of these embodiments, the coordinate module 104 is also used to detect a placement operation on the AR doll machine; in response to the detected placement operation, the coordinate module 104 places the scene model at the position of the three-dimensional coordinate point.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For hardware implementations, the modules may all reside in the same processor, or be distributed across different processors in any combination.
In this embodiment, a computer device is provided, which may be a server. Fig. 11 is a diagram illustrating the internal structure of a computer device according to an embodiment of the present disclosure. As shown in Fig. 11, the device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for running the operating system and the computer program. The database stores scene model data. The network interface communicates with an external AR terminal 12 over a network connection. The computer program, when executed by the processor, implements the control method of the AR doll machine.
Those skilled in the art will appreciate that the architecture shown in Fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which these aspects apply; a particular computing device may include more or fewer components than shown, combine certain components, or arrange components differently.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring the video stream and the virtual resource information, and generating a scene model of the AR doll machine according to the virtual resource information.
S2, acquiring the three-dimensional coordinate points of the scene model from the video stream using a VSLAM algorithm.
S3, entering the grabbing interaction mode of the AR terminal 12 once the scene model is fixedly placed according to the three-dimensional coordinate point.
S4, acquiring the drop result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine in the grabbing interaction mode.
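Steps S1 to S4 can be strung together as in the following sketch. Every callable here is a hypothetical placeholder for a component the embodiment names (scene-model generation, VSLAM location, the grab session) but does not detail, and the scene model is represented as a plain dict:

```python
def run_ar_claw_game(get_video_stream, get_virtual_resources,
                     build_scene_model, vslam_locate, grab_session):
    """End-to-end flow of the control method (S1-S4)."""
    stream = get_video_stream()                          # S1: video stream
    scene = build_scene_model(get_virtual_resources())   # S1: scene model
    anchor = vslam_locate(stream)                        # S2: 3D coordinate point
    if anchor is None:
        return None            # environment not yet recognized; nothing placed
    scene["anchor"] = anchor   # S3: fix the model, enter grab-interaction mode
    return grab_session(scene)  # S4: run the session, return the drop result
```

With stubbed-in components this runs end to end, which makes the control flow easy to unit-test before real VSLAM and physics are wired in.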
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the control method of the AR doll machine in the above embodiments, the embodiments of the present application further provide a storage medium. The storage medium stores a computer program; when executed by a processor, the computer program implements the control method of the AR doll machine of any of the above embodiments.
It should be understood by those skilled in the art that the features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination is described; nevertheless, any combination of these features that involves no contradiction should be considered within the scope of the present disclosure.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. Several variations and improvements can be made by a person skilled in the art without departing from the concept of the present application, and all of these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of controlling an AR doll machine, the method comprising:
acquiring a video stream and virtual resource information, and generating a scene model of the AR doll machine according to the virtual resource information;
acquiring three-dimensional coordinate points of the scene model according to the video stream by using a VSLAM algorithm;
entering a capturing interaction mode of the AR terminal under the condition that the scene model is fixedly placed according to the three-dimensional coordinate point;
and in the grabbing interaction mode, obtaining a dropping result of the doll in the AR doll machine according to the moving position of the hand grip in the AR doll machine.
2. The control method of claim 1, wherein after obtaining the dropping result of the doll in the AR doll machine, the method further comprises:
and controlling the AR terminal to push the merchant coupon information to the user as a reward under the condition that the drop result is successful.
3. The method of claim 2, wherein the controlling the AR terminal to push coupon information to the user as the reward comprises:
waiting and acquiring the merchant ticket information returned by the server by using an asynchronous thread; or waiting and acquiring the merchant ticket information returned by the application program of the AR terminal;
and updating the sending state of the merchant ticket information.
4. The control method according to claim 1, wherein after entering the grabbing interaction mode of the AR terminal and before obtaining a dropping result of the doll in the AR doll machine according to the moving position of the hand grip in the AR doll machine, the method further comprises:
detecting a grabbing operation on the AR doll machine;
responding to the detected grabbing operation, and acquiring the moving position according to the orientation vector of the AR terminal; wherein the moving position always lies in a plane parallel to the top surface of the AR doll machine.
5. The control method of claim 4, wherein after the obtaining the movement position, before obtaining a result of the drop of the doll in the AR doll machine according to the movement position of the hand grip in the AR doll machine, the method further comprises:
under the condition that collision among the AR doll machine, the hand grip and the doll is detected according to the moving position, calculating according to stress parameters to obtain the stress condition of the hand grip at the next moment;
under the condition that the hand grip is opened or closed, calculating according to hand grip state parameters to obtain the stress condition;
and calculating the grabbing state of the doll according to the stress condition.
6. The control method of claim 5, wherein the obtaining of the result of the drop of the doll into the AR doll machine comprises:
under the condition that the doll is grabbed in the grabbing state, controlling the hand grab to be loosened so as to obtain the dropping result; the AR terminal performs reminding operation according to the drop result;
and sending the dropping result to a server, and/or sending the dropping result to an application program of the AR terminal and displaying the dropping result.
7. The control method according to claim 1, wherein the obtaining three-dimensional coordinate points of the scene model from the video stream using the VSLAM algorithm comprises:
and calculating the three-dimensional space structure of the video stream by using the VSLAM algorithm, and acquiring the three-dimensional coordinate points according to the three-dimensional space structure.
8. The control method according to claim 7, wherein after the obtaining of the three-dimensional coordinate points from the three-dimensional spatial structure, the method further comprises:
detecting a placement operation on the AR doll machine;
placing the scene model at the position of the three-dimensional coordinate point in response to the detected placing operation.
9. A control apparatus for an AR doll machine, the apparatus comprising: a generating module, a coordinate module, a grabbing module, and a dropping module;
the generating module is used for acquiring the video stream and virtual resource information and generating a scene model of the AR doll machine according to the virtual resource information;
the coordinate module is used for acquiring a three-dimensional coordinate point of the scene model according to the video stream by using a VSLAM algorithm;
the grabbing module is used for entering a grabbing interaction mode of the AR terminal under the condition that the scene model is fixedly placed according to the three-dimensional coordinate point;
the dropping module is used for acquiring a dropping result of the doll in the AR doll machine according to the moving position of the gripper in the AR doll machine in the gripping interaction mode.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of controlling an AR doll machine as recited in any one of claims 1 to 8.
CN202010946249.7A 2020-09-10 2020-09-10 Control method and device of AR doll machine and electronic device Pending CN112138370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010946249.7A CN112138370A (en) 2020-09-10 2020-09-10 Control method and device of AR doll machine and electronic device


Publications (1)

Publication Number Publication Date
CN112138370A true CN112138370A (en) 2020-12-29

Family

ID=73890864




Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030042738A (en) * 2001-11-23 2003-06-02 현대디지털엔터테인먼트 주식회사 Method for providing doll catch game in wireless internet
US20160019016A1 (en) * 2014-07-16 2016-01-21 Stello Girls Ltd. Augmented reality doll
CN109350966A (en) * 2018-10-12 2019-02-19 深圳仓谷创新软件有限公司 A kind of mobile phone A R grabs doll machine system and exchange method
CN109364470A (en) * 2018-11-08 2019-02-22 北京顶喜乐科技有限公司 A kind of doll machine based on AR technology
WO2019045437A1 (en) * 2017-08-30 2019-03-07 주식회사 엠비젼 Method and system for advertising by remotely controlling device
CN110533719A (en) * 2019-04-23 2019-12-03 以见科技(上海)有限公司 Augmented reality localization method and device based on environmental visual Feature point recognition technology
CN111061374A (en) * 2019-12-20 2020-04-24 京东方科技集团股份有限公司 Method and device for supporting multi-person mode augmented reality application


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870418A (en) * 2021-09-28 2021-12-31 苏州幻塔网络科技有限公司 Virtual article grabbing method and device, storage medium and computer equipment
CN113870418B (en) * 2021-09-28 2023-06-13 苏州幻塔网络科技有限公司 Virtual article grabbing method and device, storage medium and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination