CN115984023A - Dynamic publishing method and device of social network and electronic equipment

Info

Publication number
CN115984023A
Authority
CN
China
Prior art keywords
emoticon, emoticons, target, virtual, card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111189326.XA
Other languages
Chinese (zh)
Inventor
陈佳钰
吴霄
王鹤
钟庆华
何芬
陈颖滨
潘红
刘镇伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cyber Tianjin Co Ltd
Original Assignee
Tencent Cyber Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cyber Tianjin Co Ltd filed Critical Tencent Cyber Tianjin Co Ltd
Priority to CN202111189326.XA
Publication of CN115984023A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a social network dynamic publication method, a social network dynamic publication device, an electronic device, a computer program product and a computer-readable storage medium; the method comprises the following steps: displaying the virtual card in a human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics; controlling the plurality of emoticons to move in the virtual card based on the effect of gravity; in response to a target emoticon moving into the virtual container, sending a dynamic associated with the target emoticon to a social network; wherein the target emoticon is any one of the emoticons. Through the application, the man-machine interaction diversity of the dynamic publishing process can be improved.

Description

Dynamic publishing method and device of social network and electronic equipment
Technical Field
The present application relates to social networking technologies, and in particular, to a method, an apparatus, an electronic device, a computer program product, and a computer-readable storage medium for dynamic publication of a social network.
Background
With the development of internet technology, the interaction modes for connecting multiple users through a social network are becoming increasingly rich; for example, users of a social network can send real-time messages to each other, and each login user can also publish his or her own dynamic information on his or her own social network page for other users to view, thereby realizing a multi-dimensional social interaction mode.
However, in the related art the publishing mode of dynamic information is similar to the sending mode of real-time messages, for example editing corresponding text and adding images or emoticons; the publishing mode of dynamic information is therefore monotonous, and the human-computer interaction mode lacks diversity.
Disclosure of Invention
The embodiment of the application provides a dynamic publishing method and device of a social network, electronic equipment, a computer program product and a computer readable storage medium, which can improve the man-machine interaction diversity of a dynamic publishing process.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a dynamic publishing method of a social network, which comprises the following steps:
displaying a virtual card in a human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics;
controlling the plurality of emoticons to move in the virtual card based on the effect of gravity;
in response to a target emoticon moving into the virtual container, sending a dynamic associated with the target emoticon to a social network; wherein the target emoticon is any one of the plurality of emoticons.
An embodiment of the present application provides a dynamic publishing apparatus of a social network, including:
the display module is used for displaying the virtual card in the human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics;
the moving module is used for controlling the plurality of emoticons to move in the virtual card based on the action of gravity;
a sending module, configured to send a dynamic associated with a target emoticon to a social network in response to the target emoticon moving into the virtual container; wherein the target emoticon is any one of the plurality of emoticons.
In the above solution, before controlling the emoticons to move in the virtual card based on the gravity, the moving module is further configured to: when the automatic moving condition is met, automatically switching to the step of controlling the plurality of emoticons to move in the virtual card based on the gravity action; wherein the automatic moving condition includes: at least part of the virtual card has been displayed in the human-machine interaction interface; or responding to a first body feeling operation, and executing the step of controlling the plurality of emoticons to move in the virtual card based on the gravity; the first body feeling operation is used for changing the posture of the electronic equipment displaying the man-machine interaction interface.
In the foregoing solution, the moving module is further configured to: controlling the plurality of emoticons to move in a gravity direction from an initial position in the virtual card; wherein the initial positions of the emoticons in the virtual card are located above the virtual container, the upper position is referred to the gravity direction, and the gravity direction is the attraction direction of the gravity action.
In the above scheme, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, the moving module is further configured to: in response to any one of the emoticons colliding with a collision object, controlling the emoticon and the collision object to move in the direction of the rebound action respectively; wherein the collision object comprises at least one of: edges of the virtual card, other emoticons, the virtual container.
In the above scheme, the virtual card further comprises text materials; the text material is used for prompting that the target emoticon is selected from the emoticons; when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, the moving module is further configured to: and responding to the collision of any one expression symbol and the text material, and controlling the expression symbol to move along the direction of the rebound action.
In the above scheme, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, the moving module is further configured to: responding to selection operation, and displaying that the selected target emoticon is in a selected state; moving the target emoticon to the top of the virtual container in response to a moving operation for the target emoticon; wherein the upper part of the virtual container is taken as a reference in the gravity direction; in response to the movement operation being released, controlling the target emoticon to move in a direction of gravity into the virtual container.
In the above solution, the moving module is further configured to: in response to the moving operation, performing at least one of the following setting operations: setting a display style different from other emoticons for the target emoticon; wherein the display style comprises at least one of: size, color, special effect; and setting a background image matched with the target emoticon for the virtual card.
In the foregoing solution, the moving module is further configured to: in response to the target emoticon being moved above the virtual container by the moving operation and the moving operation being released, continue to maintain the setting result of the setting operation; and in response to the target emoticon not being moved above the virtual container by the moving operation and the moving operation being released, cancel the setting result of the setting operation.
In the foregoing solution, the display module is further configured to: when the target emoticon starts to move from the upper part of the virtual container to the inlet of the virtual container, or when the target emoticon starts to enter the virtual container through the inlet of the virtual container, at least one of the following setting operations is carried out: setting a display style different from other emoticons for the target emoticon; wherein the display style comprises at least one of: size, color, special effect; and setting a background image matched with the target emoticon for the virtual card.
In the above scheme, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, the moving module is further configured to: in response to a second somatosensory operation, control the plurality of emoticons to move in a direction based on the second somatosensory operation, and control the target emoticon to move in the gravity direction into the virtual container when the target emoticon moves above the virtual container; wherein the second somatosensory operation is used to change the posture of the electronic device displaying the human-computer interaction interface, and the position above the virtual container is determined with reference to the direction of gravity.
In the foregoing solution, the moving module is further configured to: take the emoticon, among the plurality of emoticons, that meets a dynamic adaptation condition as the target emoticon, and move the target emoticon above the virtual container; wherein the dynamic adaptation condition includes: the target emoticon is located in the direction of the second somatosensory operation, and the distance between the target emoticon and the virtual container is positively correlated with the magnitude of the second somatosensory operation.
In the foregoing solution, the moving module is further configured to: take the emoticon, among the plurality of emoticons, that is located in the direction of the second somatosensory operation and has the smallest distance from the virtual container as the target emoticon, and move the target emoticon above the virtual container.
In the foregoing solution, the moving module is further configured to: acquiring historical dynamic of the login account and historical operation data of the login account; based on the historical dynamic state and the historical operation data, calling a neural network model to determine the probability that each emoticon is matched with the current state of the login account; and taking the emoticon with the highest probability as the target emoticon, and moving the target emoticon to the upper part of the virtual container.
In the above solution, when the target emoticon moves into the virtual container, the moving module is further configured to: outputting feedback information; wherein the feedback information comprises at least one of: audio information; a somatosensory signal.
In the foregoing solution, after sending the dynamic associated with the target emoticon to a social network, the sending module is further configured to: display a special effect animation of flipping the virtual card, and display at least one of the following in the flipped virtual card: at least one of the following pieces of information of at least one social account of the social network: an identification and a dynamic; a sharing button, configured to trigger sending of the flipped virtual card to at least one social account of the social network; the target emoticon in a target display style, wherein the target display style is significantly more prominent than the display style of the target emoticon before the flipping, the display style includes at least one of the following: size, color, special effect, and the background image of the virtual card is adapted to the target emoticon.
In the foregoing scheme, the sending module is further configured to: in response to a triggering operation for the identification of the target social account number, displaying at least one of the following buttons: a message button for triggering to jump from displaying the virtual card to displaying a chat interface with the target social account; the interaction button is used for sending a reminding message to the target social account under the condition of keeping the virtual card; wherein the target social account is any one of the at least one social account.
In the above scheme, a viewing button is further displayed in the flipped virtual card, and the sending module is further configured to: display a dynamic list in response to a trigger operation on the viewing button; wherein the dynamic list includes each emoticon and the social account whose dynamic is associated with each emoticon.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the dynamic publishing method of the social network provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions and is used for implementing, when executed by a processor, the dynamic publishing method for a social network provided by the embodiment of the present application.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the dynamic publishing method of the social network provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
the method includes the steps that a virtual card comprising a virtual container and a plurality of emoticons is directly displayed in a human-computer interaction interface, so that a user does not need to trigger editing operation to trigger display of the emoticons for representing dynamic emoticons, movement of the emoticons in the virtual card is executed based on gravity, visual representation with automation and diversity is achieved, a target emoticon in the emoticons is responded to move to enter the virtual container, the dynamic state associated with the target emoticon is sent to a social network, a dynamic sending mode is triggered based on a movement result of the emoticons, and the diversity and interestingness of human-computer interaction can be effectively improved.
Drawings
FIG. 1 is a schematic diagram of an architecture of a dynamic publishing system of a social network provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIGS. 3A-3C are schematic flow diagrams of a dynamic publishing method of a social network provided by an embodiment of the present application;
FIGS. 4A-4L are display interface diagrams of a dynamic publishing method of a social network provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a dynamic publishing method of a social network provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a dynamic publishing method of a social network provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a dynamic publishing method of a social network provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a dynamic publishing method of a social network provided by an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second" and "third" are only used to distinguish similar objects and do not denote a particular order; it is understood that "first", "second" and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Virtual card: an electronic card displayed in the human-computer interaction interface; in response to a user's recording operation on the electronic card, the user's dynamic information is recorded in the electronic card on a periodic basis, where the dynamic information includes at least one of the following: mood, feeling, attitude.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more operations performed may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Emoticon: a static or dynamic image used to represent a personalized expression, for example a smiley-face emoticon, a crying-face emoticon, or the like.
In the related art, the interaction modes for connecting multiple users through a social network are increasingly abundant; for example, users of a social network can send real-time messages to each other, and each login user can also publish his or her own dynamic information on his or her own social network page for other users to view, thereby realizing a multi-dimensional social interaction mode.
In view of the technical problems in the related art that the publishing mode of dynamic information is monotonous and the human-computer interaction mode lacks diversity, embodiments of the present application provide a dynamic publishing method and apparatus of a social network, an electronic device, a computer program product and a computer-readable storage medium, which can improve the diversity of human-computer interaction in generating and sending dynamics.
An exemplary application of the electronic device provided by the embodiments of the present application is described below. The electronic device for implementing the dynamic publishing method of a social network provided by the embodiments of the present application may be implemented as various types of user terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device).
Referring to fig. 1, fig. 1 is a schematic structural diagram of a dynamic publishing system of a social network according to an embodiment of the present disclosure, in order to support a social application, a terminal 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
A virtual card is displayed in a human-computer interaction interface of the terminal 400, where the virtual card includes a virtual container and a plurality of emoticons that are located outside the virtual container and are respectively associated with different dynamics. The plurality of emoticons are controlled to move in the virtual card based on the action of gravity. In response to a target emoticon moving into the virtual container, the terminal 400 sends the dynamic associated with the target emoticon to the server 200 of the social network, where the target emoticon is any one of the plurality of emoticons. The server 200 associates the dynamic associated with the target emoticon with the account logged in to the social network through the terminal 400, and returns the association relationship to the terminal 400 for display.
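As an illustrative, non-limiting sketch of the interaction between the terminal 400 and the server 200 described above, the following Kotlin fragment shows a dynamic being published and associated with the login account. The names MoodDynamic, SocialServer and publishDynamic are assumptions introduced only for illustration and are not defined by the present application.

```kotlin
// Sketch of the publish flow between terminal 400 and server 200 (assumed names).
data class MoodDynamic(val emoticonId: String, val timestampMs: Long)

class SocialServer {
    // account id -> list of published dynamics
    private val feed = mutableMapOf<String, MutableList<MoodDynamic>>()

    // Associates the dynamic with the logged-in account and returns the stored
    // association so the terminal can display it.
    fun publishDynamic(accountId: String, dynamic: MoodDynamic): Pair<String, MoodDynamic> {
        feed.getOrPut(accountId) { mutableListOf() }.add(dynamic)
        return accountId to dynamic
    }
}

fun main() {
    val server = SocialServer()
    // Terminal side: the target emoticon has moved into the virtual container,
    // so the dynamic associated with it is sent to the social network.
    val result = server.publishDynamic("login-account-1", MoodDynamic("happy", System.currentTimeMillis()))
    println("Published ${result.second.emoticonId} for ${result.first}")
}
```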
In some embodiments, the terminal or the server may implement the dynamic publishing method provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as an instant messaging APP; an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module or plug-in.
The embodiments of the present application can be implemented by means of Cloud Technology (Cloud Technology), which refers to a hosting Technology that unifies series resources such as hardware, software, and network in a wide area network or a local area network to implement calculation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied based on the cloud computing business model; it can form a resource pool to be used on demand, and is flexible and convenient. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and a terminal 400 shown in fig. 2 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in FIG. 2.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc., wherein the general purpose Processor may be a microprocessor or any conventional Processor, etc.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display screen, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating with other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in this embodiment of the present application may be implemented in software, and fig. 2 illustrates a dynamic publishing apparatus 455 of a social network, which is stored in a memory 450, and may be software in the form of programs and plug-ins, and includes the following software modules: a display module 4551, a move module 4552 and a send module 4553, which are logical and thus may be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
Next, the dynamic publishing method of a social network provided by an embodiment of the present application is described by taking execution by the terminal 400 in FIG. 2 as an example. Referring to FIG. 3A, FIG. 3A is a flowchart illustrating a dynamic publishing method of a social network according to an embodiment of the present application, and the method will be described with reference to the steps shown in FIG. 3A. Steps 101-103 are performed by the electronic device.
In step 101, a virtual card is displayed in a human-computer interaction interface.
As an example, the virtual card includes a virtual container and a plurality of emoticons that are located outside the virtual container and are respectively associated with different dynamics. Referring to FIG. 4A, which is a display interface diagram of a dynamic publishing method of a social network provided by an embodiment of the present application, a dynamic record entry of the login account of the social network client is displayed in the human-computer interaction interface; in response to an operation triggered on the dynamic record entry, a mood card 402A (virtual card) is displayed in the human-computer interaction interface 401A, a plurality of emoticons 403A are displayed at the top position of the mood card 402A, and a virtual container 404A is also displayed in the mood card 402A.
In step 102, the plurality of emoticons are controlled to move in the virtual card based on the action of gravity.
As an example, the plurality of emoticons are controlled to move and fall within the virtual card based on the effect of gravity.
In step 103, in response to the target emoticon moving into the virtual container, the dynamics associated with the target emoticon is sent to the social network.
As an example, in response to the target emoticon moving into the virtual container, the dynamic associated with the target emoticon is taken as the dynamic of the login account and the dynamic associated with the target emoticon is sent to the social network, the target emoticon being any one of the plurality of emoticons.
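As an illustrative, non-limiting sketch of steps 101-103, the following Kotlin fragment advances a gravity simulation each frame and publishes the dynamic once an emoticon has moved into the virtual container. The types Vec2, Emoticon and Container, and the callbacks stepGravity and publish, are assumptions introduced only for illustration.

```kotlin
// Minimal per-frame sketch of steps 102 and 103 (assumed 2D geometry types).
data class Vec2(val x: Float, val y: Float)
data class Emoticon(val id: String, var pos: Vec2, val dynamicText: String)
data class Container(val center: Vec2, val radius: Float)

fun insideContainer(e: Emoticon, c: Container): Boolean {
    val dx = e.pos.x - c.center.x
    val dy = e.pos.y - c.center.y
    return dx * dx + dy * dy <= c.radius * c.radius
}

fun onFrame(
    emoticons: List<Emoticon>,
    container: Container,
    stepGravity: (Emoticon) -> Unit,   // step 102: move the emoticon under gravity
    publish: (String) -> Unit          // step 103: send the associated dynamic
) {
    for (e in emoticons) {
        stepGravity(e)
        if (insideContainer(e, container)) {
            publish(e.dynamicText)     // the emoticon that entered becomes the target
            return
        }
    }
}
```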
A virtual card including a virtual container and a plurality of emoticons is displayed directly in the human-computer interaction interface, so that the user does not need to trigger an editing operation in order to display the emoticons used to represent dynamics; the movement of the emoticons in the virtual card is performed based on gravity, realizing a visual representation that is both automated and diverse; and in response to a target emoticon among the plurality of emoticons moving into the virtual container, the dynamic associated with the target emoticon is sent to the social network, so that the sending of a dynamic is triggered by the movement result of the emoticon, which can effectively improve the diversity and interest of human-computer interaction.
In some embodiments, referring to FIG. 3B, which is a flowchart illustrating a dynamic publishing method of a social network according to an embodiment of the present application, before step 102 of controlling the plurality of emoticons to move in the virtual card based on gravity, step 104 or step 105 is performed.
In step 104, when at least part of the virtual card has been displayed in the human-computer interaction interface, the step of controlling the plurality of emoticons to move in the virtual card based on the action of gravity is performed.
In step 105, in response to a first somatosensory operation, the step of controlling the plurality of emoticons to move in the virtual card based on the action of gravity is performed.
As an example, the first somatosensory operation is an operation that changes the posture of the electronic device displaying the human-computer interaction interface.
When the proportion of the part of the virtual card displayed in the human-computer interaction interface to the whole virtual card reaches a set proportion, the plurality of emoticons are triggered to start falling under the action of gravity, which realizes a visual representation that is both automated and diverse; because the falling simulates a real physical situation, a more lifelike visual effect can be provided for the user. Triggering the plurality of emoticons to start falling under the action of gravity through the first somatosensory operation enriches the human-computer interaction modes and improves the user's sense of participation and experience.
As an example, referring to FIG. 4A, a mood card 402A (virtual card) is displayed in the human-computer interaction interface 401A, and a plurality of emoticons 403A are displayed at the top position of the mood card 402A. In response to a moving operation on the human-computer interaction interface 401A, the mood card 402A is moved upward in the human-computer interaction interface; as a result of the upward movement, at least part of the mood card 402A has been displayed in the human-computer interaction interface 401A, for example, the proportion of the displayed part of the mood card 402A to the whole mood card 402A reaches the set proportion, so the automatic moving condition is satisfied, and the emoticons 403A in the mood card 402A automatically start to fall from top to bottom based on gravity sensing, leaving the top position of the mood card 402A. Referring to FIG. 4B, which is a display interface diagram of the dynamic publishing method of a social network according to an embodiment of the present application, a mood card 402B (virtual card) is displayed in the human-computer interaction interface 401B, and a plurality of emoticons 403B are displayed at the top position of the mood card 402B. Although at least part of the mood card 402B has already been displayed in the human-computer interaction interface 401B, for example the part of the mood card 402B displayed in the human-computer interaction interface reaches the set proportion (for example, 50%) of the whole card, the emoticons 403B do not fall automatically; only in response to a first somatosensory operation on the terminal, for example a shaking operation or a tilting operation, do the emoticons 403B in the mood card 402B start to fall from top to bottom based on gravity sensing, leaving the top position of the mood card 402B.
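As an illustrative, non-limiting sketch of the two triggers described above, the following Kotlin fragment starts the gravity-driven movement either when the displayed proportion of the virtual card reaches a set proportion (step 104) or when a posture change of the device exceeds a threshold (step 105). The numeric thresholds and the sensor reading are assumptions introduced only for illustration.

```kotlin
// Sketch of the automatic-move condition and the first somatosensory trigger.
const val AUTO_MOVE_FRACTION = 0.5f   // assumed "set proportion" of the card on screen
const val SHAKE_THRESHOLD = 2.5f      // assumed magnitude for the first somatosensory operation

var falling = false                   // whether the emoticons have started to fall

fun onCardScrolled(visibleFraction: Float) {
    if (!falling && visibleFraction >= AUTO_MOVE_FRACTION) {
        falling = true                // step 104: start the gravity-driven movement automatically
    }
}

fun onDevicePostureChanged(shakeMagnitude: Float) {
    if (!falling && shakeMagnitude >= SHAKE_THRESHOLD) {
        falling = true                // step 105: the first somatosensory operation starts the movement
    }
}
```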
In some embodiments, the step 102 of controlling the plurality of emoticons to move in the virtual card based on the gravity action may be implemented by the following technical solutions: controlling the plurality of emoticons to move in the gravity direction from the initial positions in the virtual card; the initial positions of the emoticons in the virtual card are located above the virtual container, the upper position is based on the gravity direction as a reference, and the gravity direction is the attraction direction of the gravity action.
As an example, the plurality of emoticons are controlled to move in the gravity direction starting from their initial positions in the virtual card; the initial positions of the emoticons in the virtual card are located above the entrance of the virtual container, where the position above is determined with reference to the gravity direction, and the gravity direction is the direction of attraction of gravity. Referring to FIG. 4A and FIG. 4B, regardless of whether the falling of the emoticons is triggered by the proportion of the on-screen part of the virtual card (the part of the virtual card displayed in the human-computer interaction interface) or by the first somatosensory operation, the falling process of the emoticons from their initial positions is controlled based on gravity sensing; the whole falling process simulates a real physical environment, so a more lifelike visual effect can be provided for the user.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, in response to any one of the emoticons colliding with the collision object, the emoticon and the collision object are controlled to move in the direction of the bounce action, respectively; wherein the collision object comprises at least one of: edges of virtual cards, other emoticons, virtual containers. The real physical environment can be simulated through a collision mechanism, so that randomness and diversity of the movement track of the emoticons can be realized, and more vivid visual effect and richer and diversified interactive experience can be provided for a user.
As an example, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, in response to any emoticon colliding with a collision object, the emoticon and the collision object are controlled to move in directions opposite to their original movement directions. Referring to FIG. 4E, which is a display interface diagram of the dynamic publishing method of a social network according to an embodiment of the present application, a mood card 402E (virtual card) is displayed in the human-computer interaction interface 401E, and emoticons 403E are displayed in the mood card 402E. After leaving the top position of the mood card 402E, an emoticon 403E gradually falls toward the bottom of the mood card based on gravity sensing. Collisions and bounces may occur during the falling, for example between an emoticon and the card edge, the card bottom or the virtual container 404E, or between emoticons; after colliding with the bottom of the card, an emoticon bounces and then continues to fall. Taking the bottom of the card as the collision object as an example, in response to the emoticon 403E colliding with the bottom of the mood card 402E, the emoticon 403E is controlled to move in the direction of the rebound.
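As an illustrative, non-limiting sketch of the gravity-driven falling and the bounce against a collision object, the following Kotlin fragment advances one emoticon per frame and reflects it off the card edges. The gravity value, the restitution coefficient and the axis-aligned card bounds are assumptions introduced only for illustration; collisions with other emoticons, the virtual container or the text material would be handled analogously.

```kotlin
// Per-frame physics sketch: fall under gravity and bounce off the card edges.
data class Body(var x: Float, var y: Float, var vx: Float, var vy: Float, val r: Float)

const val GRAVITY = 980f        // px/s^2, along the gravity direction (+y is downward)
const val RESTITUTION = 0.6f    // fraction of speed kept after a bounce

fun step(b: Body, dt: Float, cardW: Float, cardH: Float) {
    b.vy += GRAVITY * dt        // accelerate along the gravity direction
    b.x += b.vx * dt
    b.y += b.vy * dt
    // Rebound when the emoticon collides with an edge of the virtual card.
    if (b.y + b.r > cardH) { b.y = cardH - b.r; b.vy = -b.vy * RESTITUTION }
    if (b.x - b.r < 0f)    { b.x = b.r;         b.vx = -b.vx * RESTITUTION }
    if (b.x + b.r > cardW) { b.x = cardW - b.r; b.vx = -b.vx * RESTITUTION }
}
```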
In some embodiments, the virtual card further includes a text material, where the text material is used to prompt that the target emoticon is to be selected from the plurality of emoticons. When the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, in response to any emoticon colliding with the text material, the emoticon is controlled to move in the direction of the rebound. Setting the text material as a collision object as well enriches the movement tracks of the emoticons, and can provide a more lifelike visual effect and a richer, more diversified interactive experience for the user.
As an example, in response to any emoticon colliding with the text material, the emoticon and the text material are controlled to move in directions opposite to their original movement directions. Referring to FIG. 4D, which is a display interface diagram of the dynamic publishing method of a social network provided by an embodiment of the present application, a mood card 402D (virtual card) is displayed in the human-computer interaction interface 401D, and an emoticon 403D and a text material 404D are displayed in the mood card 402D; for example, the text material 404D is "Hi, what is today's mood". The text material has a blocking and rebounding effect; that is, when the plurality of emoticons 403D are controlled to move in the mood card 402D based on gravity, in response to any emoticon 403D colliding with the text material 404D, the emoticon 403D and the text material 404D are respectively controlled to move in the directions of the rebound, or the text material 404D is fixed and the emoticon 403D is controlled to move in the direction of the rebound.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the gravity, the emoticon which is already positioned above the virtual container is used as a target emoticon, the target emoticon is kept still for a set time, and if no operation is received within the set time, the target emoticon is controlled to move into the virtual container along the gravity direction.
In some embodiments, referring to fig. 3C, fig. 3C is a flowchart illustrating a dynamic publication method of a social network provided in an embodiment of the present application, and when the plurality of emoticons are controlled to move in the virtual card based on gravity, steps 106 to 108 may also be performed.
In step 106, in response to the selection operation, the selected target emoticon is displayed in a selected state.
In step 107, the target emoticon is moved to the top of the virtual container in response to the move operation for the target emoticon.
As an example, the upper part of the virtual container is referred to the direction of gravity.
In step 108, in response to the move operation being released, the control target emoticon moves in the direction of gravity into the virtual container.
In the moving process of the emoticons, the user operation process in the real physical environment can be simulated through the selection release operation of the user, so that a more vivid visual effect and richer and diversified interactive experience are provided for the user.
As an example, in response to the selection operation, the selected target emoticon is displayed in a selected state, for example the color of the target emoticon is inverted. Referring to FIG. 4G, which is a display interface diagram of the dynamic publishing method of a social network provided by an embodiment of the present application, a mood card 402G is displayed in the human-computer interaction interface 401G, and emoticons 403G are displayed in the mood card 402G. When a certain emoticon 403G (the target emoticon) is pressed and dragged, the emoticon 403G moves along with the finger; after the finger is released, a physical effect is simulated and the emoticon falls following gravity sensing. In response to a moving operation on the emoticon 403G, for example a press-and-drag operation or a click-and-drag operation, the emoticon 403G is moved above the entrance of the virtual container 404G (mood pot); if the moving operation is released there, the emoticon 403G falls into the virtual container 404G and the check-in is completed. If the emoticon 403G is released without being above the virtual container 404G, it falls back inside the mood card and does not enter the virtual container 404G.
In some embodiments, in response to the moving operation, at least one of the following setting operations is performed: setting, for the target emoticon, a display style different from that of the other emoticons, where the display style includes at least one of the following: size, color, special effect, so that the display style of the target emoticon is obviously more prominent than that of the other emoticons; and setting, for the virtual card, a background image adapted to the target emoticon, for example a background color adapted to the theme color of the target emoticon. By adjusting the color of the virtual card and the display style of the target emoticon, the visual representation of the target emoticon can be associated with the visual representation of the virtual card, so that an immersive visual representation can be provided for the user; displaying the target emoticon in a distinguishing manner can also provide a prompting effect for the user.
For example, referring to FIG. 4F, which is a display interface diagram of the dynamic publishing method of a social network provided by an embodiment of the present application, a mood card 402F (virtual card) is displayed in the human-computer interaction interface 401F, and emoticons 403F are displayed in the mood card 402F. When a certain emoticon 403F (the target emoticon) is held and dragged, the emoticon 403F moves along with the finger, and while the emoticon is being dragged the background color of the mood card 402F changes along with the color of the emoticon; for example, the background color is consistent with the color of the emoticon, belongs to the same color family as the color of the emoticon, or is otherwise adapted to the color of the emoticon. In response to the moving operation on the emoticon 403F, a display style more prominent than that of the other emoticons is set for the target emoticon 403F, for example the target emoticon is lit up and enlarged at the same time so as to be distinguished from the styles of the other emoticons 404F; the display change of the target emoticon may also occur when it is selected again.
In some embodiments, in response to the target emoticon being moved above the virtual container by the moving operation and the moving operation being released, the setting result of the setting operation continues to be maintained; in response to the target emoticon not being moved above the virtual container by the moving operation and the moving operation being released, the setting result of the setting operation is canceled. Continuing to maintain the setting result provides a continuous visual experience for the user, while canceling the setting result prompts the user to reselect the target emoticon.
As an example, when the target emoticon is moved above the virtual container and then released, the setting result of the setting operation continues to be maintained during the release and fall; that is, the display style of the target emoticon remains the same as before the release, and the background color of the virtual card remains the same as before the release. When the target emoticon is released without having been moved above the virtual container, the setting result of the setting operation is canceled after the release; that is, the target emoticon is set to the same display style as the other emoticons, and the color of the virtual card is restored to the color before the setting operation.
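As an illustrative, non-limiting sketch of the select / drag / release flow (steps 106-108) and of keeping or canceling the style set during the drag, the following Kotlin fragment is provided. The type P, the class DragController and the geometric test for "above the container" are assumptions introduced only for illustration.

```kotlin
// Sketch of selecting, dragging and releasing the target emoticon.
data class P(val x: Float, val y: Float)

class DragController(private val containerTop: P, private val snapRadius: Float) {
    var selectedId: String? = null
    var styleApplied = false           // prominent style + matching card background

    fun onSelect(id: String) {         // step 106: show the selected state
        selectedId = id
        styleApplied = true
    }

    fun onDrag(pos: P) {
        // The emoticon follows the finger while the moving operation lasts.
    }

    // step 108: if released above the container the emoticon falls in and the
    // style is kept; otherwise the style set during the drag is canceled.
    fun onRelease(pos: P): Boolean {
        val dx = pos.x - containerTop.x
        val overContainer = dx * dx <= snapRadius * snapRadius && pos.y <= containerTop.y
        if (!overContainer) styleApplied = false
        selectedId = null
        return overContainer           // true: drop into the container and publish the dynamic
    }
}
```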
In some embodiments, when the target emoticon starts moving from above the virtual container towards the entrance of the virtual container, or when the target emoticon starts entering the virtual container through the entrance of the virtual container, at least one of the following setting operations is performed: setting a display style different from other emoticons for the target emoticon; wherein the display style of the target emoticon is more prominent than the display styles of other emoticons, and the display style includes at least one of: size, color, special effect; the background image matched with the target emoticon is set for the virtual card, for example, the background color matched with the theme color of the target emoticon is set for the virtual card, the visual representation of the target emoticon can be associated with the visual representation of the virtual card by adjusting the color of the virtual card and the display style of the target emoticon, so that the immersive visual representation can be provided for the user, the target emoticon is displayed in a distinguishing manner, and a prompt effect can be provided for the user.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, in response to a second somatosensory operation, the plurality of emoticons are controlled to move in a direction based on the second somatosensory operation, and when the target emoticon moves above the virtual container, the target emoticon is controlled to move into the virtual container in the gravity direction; the second somatosensory operation is used to change the posture of the electronic device displaying the human-computer interaction interface, and the position above the virtual container is determined with reference to the gravity direction. Moving the target emoticon above the virtual container through the second somatosensory operation can simulate the user's operation process in a real physical environment, thereby providing the user with a more lifelike visual effect and a richer, more diversified interactive experience.
As an example, the second somatosensory operation is shaking or tilting the terminal. Referring to FIG. 4I, which is a display interface diagram of the dynamic publishing method of a social network provided by an embodiment of the present application, a mood card 402I (virtual card) is displayed in the human-computer interaction interface 401I, and emoticons 403I are displayed in the mood card 402I. Based on gravity sensing, the emoticons 403I gradually fall toward the bottom of the mood card 402I; collisions and bounces may occur during the falling, and after a bounce the falling resumes. In response to a second somatosensory operation on the terminal, for example shaking or tilting the terminal, the plurality of emoticons are controlled to move in the direction of the second somatosensory operation; that is, the positions of the emoticons 403I follow gravity sensing and a physical effect is simulated, so that the emoticons 403I are shaken within the accommodating space of the mood card 402I, where the accommodating space is the space in the mood card other than the virtual container. When a target emoticon 403I moves above the virtual container 404I, the target emoticon 403I is controlled to move into the virtual container 404I in the gravity direction; that is, the target emoticon is shaken into the virtual container 404I, and the check-in is completed.
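As an illustrative, non-limiting sketch of moving the emoticons according to the second somatosensory operation and handing the target over to the gravity drop once it is above the container, the following Kotlin fragment is provided. The sensor value tiltX, the class ShakeMover and the proximity threshold are assumptions introduced only for illustration.

```kotlin
// Sketch of the second somatosensory operation: shift emoticons along the
// operation direction and detect when one of them is above the container.
import kotlin.math.abs

data class Em(val id: String, var x: Float, var y: Float)

class ShakeMover(private val containerX: Float, private val containerTopY: Float) {
    // tiltX: signed horizontal component reported by a device posture sensor.
    fun onPostureChanged(emoticons: List<Em>, tiltX: Float, dt: Float): Em? {
        for (e in emoticons) e.x += tiltX * dt          // move along the operation direction
        // If an emoticon ends up above the container entrance, it becomes the
        // target and is then dropped into the container along the gravity direction.
        return emoticons.firstOrNull { abs(it.x - containerX) < 20f && it.y < containerTopY }
    }
}
```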
In some embodiments, the controlling of the movement of the plurality of emoticons based on the direction of the second somatosensory operation may be implemented by the following technical solution: taking the emoticon, among the plurality of emoticons, that meets a dynamic adaptation condition as the target emoticon, and moving the target emoticon above the virtual container; the dynamic adaptation condition includes: the target emoticon is located in the direction of the second somatosensory operation, and the distance between the target emoticon and the virtual container is positively correlated with the magnitude of the second somatosensory operation. Determining the target emoticon through the dynamic adaptation condition and moving it directly above the virtual container improves the efficiency of determining the target emoticon and thus the efficiency of completing the check-in.
As an example, the distance between the target emoticon and the virtual container is the same as, or in a set proportion to, the amplitude of the second somatosensory operation, and a suitable emoticon is selected using the direction and amplitude of the shaking. For example, if the direction of the second somatosensory operation is to the left, the target emoticon is located to the right of the virtual container; and if the amplitude of the second somatosensory operation is very large, the emoticon farthest from the virtual container may be used as the target emoticon. If the terminal is shaken to the left by a set amplitude, the emoticon corresponding to that amplitude among the emoticons on the right of the virtual container is moved above the virtual container as the emoticon to fall into the virtual container. For example, a mood card 402I (virtual card) is displayed in the human-computer interaction interface 401I, and emoticons 403I are displayed in the mood card 402I; after a shake, the emoticon 403I corresponding to the shake amplitude is determined and moved above the virtual container 404I, and the distance between that emoticon 403I and the virtual container 404I is the same as, or in a set proportion to, the shake amplitude.
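As an illustrative, non-limiting sketch of the dynamic adaptation condition, the following Kotlin fragment selects, on the side of the container indicated by the operation direction, the emoticon whose distance from the container best matches a distance derived from the operation magnitude. The scale factor relating magnitude to distance is an assumption introduced only for illustration.

```kotlin
// Sketch: the selected emoticon's distance from the container grows with the
// magnitude of the second somatosensory operation (positive correlation).
import kotlin.math.abs

data class Emo(val id: String, val x: Float)

fun pickByMagnitude(
    emoticons: List<Emo>,
    containerX: Float,
    directionLeft: Boolean,   // direction of the second somatosensory operation
    magnitude: Float
): Emo? {
    // Shaking to the left selects from the right side of the container, and vice versa.
    val side = emoticons.filter { if (directionLeft) it.x > containerX else it.x < containerX }
    val wantedDistance = magnitude * 40f   // assumed positive correlation
    return side.minByOrNull { abs(abs(it.x - containerX) - wantedDistance) }
}
```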
In some embodiments, the controlling of the movement of the plurality of emoticons based on the direction of the second somatosensory operation may alternatively be implemented by the following technical solution: taking the emoticon, among the plurality of emoticons, that is located in the direction of the second somatosensory operation and has the smallest distance from the virtual container as the target emoticon, and moving the target emoticon above the virtual container. Taking the emoticon closest to the virtual container in the direction of the second somatosensory operation as the target emoticon and moving it directly above the virtual container improves the efficiency of determining the target emoticon and thus the efficiency of completing the check-in.
For example, referring to FIG. 4J, which is a display interface diagram of the dynamic publishing method of a social network provided by an embodiment of the present application, a mood card 402J (virtual card) is displayed in the human-computer interaction interface 401J, and emoticons 403J are displayed in the mood card 402J. After each shake of the terminal, the emoticon 403J closest to the virtual container 404J in the mood card 402J is automatically switched to be above the virtual container 404J in the human-computer interaction interface 401J. The direction of the shake is used as the direction for selecting an emoticon: after each shake, the emoticons on the corresponding side of the virtual container move a fixed distance in the direction of the shake, so that the target emoticon is selected from the emoticons located in the direction of the second somatosensory operation; regardless of the magnitude of the second somatosensory operation, the emoticon closest to the virtual container is used as the target emoticon.
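As an illustrative, non-limiting sketch of this simpler rule, the following Kotlin fragment picks, regardless of the operation magnitude, the emoticon on the operation side that is closest to the virtual container. The names are assumptions introduced only for illustration.

```kotlin
// Sketch: take the closest emoticon on the side indicated by the operation direction.
import kotlin.math.abs

data class E(val id: String, val x: Float)

fun pickNearest(emoticons: List<E>, containerX: Float, directionLeft: Boolean): E? =
    emoticons
        .filter { if (directionLeft) it.x > containerX else it.x < containerX }
        .minByOrNull { abs(it.x - containerX) }
```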
In some embodiments, the controlling of the movement of the plurality of emoticons based on the direction of the second somatosensory operation may also be implemented by the following technical solution: acquiring the historical dynamics of the login account and the historical operation data of the login account; calling a neural network model, based on the historical dynamics and the historical operation data, to determine the probability that each emoticon matches the current state of the login account; and taking the emoticon with the highest probability as the target emoticon and moving the target emoticon above the virtual container. Because the target emoticon is obtained based on historical dynamics and historical operation data, it is the emoticon that best matches the user's real mood, that is, the currently most suitable emoticon; the accuracy of the target emoticon is thus improved in an intelligent way, and the efficiency of the user's human-computer interaction is improved.
As an example, the neural network model is obtained by training based on historical dynamic samples and historical operation data samples. A virtual card is displayed in the human-computer interaction interface and emoticons are displayed in the virtual card; each time a second somatosensory operation is triggered on the terminal, the most frequently used emoticon, or the emoticon predicted to be most likely to be used, is moved above the virtual container. When predicting the emoticon most likely to be used, the neural network model is called, based on the historical dynamics and the historical operation data, to determine the probability that each emoticon matches the current state of the login account; the emoticon with the highest probability is the emoticon most likely to be used, and the data on which the prediction is based includes, for example, the social dynamics published before.
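As an illustrative, non-limiting sketch of choosing the target emoticon from model scores, the following Kotlin fragment is provided. The trained neural network model is assumed to be available behind the MoodModel interface; the interface and its scoreEmoticons function are assumptions introduced only for illustration and do not specify any particular model architecture.

```kotlin
// Sketch: the emoticon with the highest predicted probability becomes the target.
fun interface MoodModel {
    // Returns, per emoticon id, the probability that it matches the account's current state.
    fun scoreEmoticons(historyDynamics: List<String>, historyOps: List<String>): Map<String, Float>
}

fun pickMostLikely(
    model: MoodModel,
    historyDynamics: List<String>,   // historical dynamics of the login account
    historyOps: List<String>         // historical operation data of the login account
): String? =
    model.scoreEmoticons(historyDynamics, historyOps)
        .entries
        .maxByOrNull { it.value }
        ?.key
```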
In some embodiments, when the target emoticon moves into the virtual container, feedback information is output; wherein the feedback information comprises at least one of: audio information; a somatosensory signal. Feedback information of different dimensions enriches the dimensions of interaction with the user, thereby providing the user with an immersive human-computer interaction experience.
As an example, referring to fig. 4H, fig. 4H is a display interface diagram of a dynamic publication method of a social network according to an embodiment of the present disclosure, after an emoticon falls into a virtual container, a mood card 402H (virtual card) is displayed in a human-computer interaction interface 401H, an emoticon 403H is displayed in the mood card 402H, and when the emoticon 403H moves into the virtual container, feedback information 404H is output, where the feedback information 404H includes at least one of: the audio information, the motion sensing signal, such as the feedback information 404H, is a motion sensing signal that represents the vibration of the terminal.
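A minimal sketch of outputting such feedback in a web client; navigator.vibrate and the Audio constructor are standard browser APIs, while the audio file path and the function name are assumptions:

// Output feedback information when the target emoticon enters the virtual container.
function onEmoticonEnteredContainer() {
  new Audio('./audio/drop.mp3').play();   // audio information (file path is an assumption)
  if (navigator.vibrate) {
    navigator.vibrate(200);               // somatosensory signal: vibrate for 200 ms where supported
  }
}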
In some embodiments, after the dynamic associated with the target emoticon is sent to the social network, a special effect animation of flipping the virtual card is displayed, and at least one of the following is displayed in the flipped virtual card: at least one of the following information of at least one social account of the social network: identification and dynamic; a sharing button used for triggering the flipped virtual card to be sent to at least one social account of the social network; the target emoticon in a target display style, wherein the target display style is significantly different from the display style of the target emoticon before flipping, and the display style includes at least one of: size, color, and special effect; and the background image of the flipped virtual card is adapted to the target emoticon. The identification can be an avatar, a nickname, or the like of the social account, the dynamic can be a mood dynamic, a work dynamic, or the like of the social account, and the special effect animation and the flipped virtual card provide the user with a rich interactive visual effect.
As an example, a special effect animation of flipping the virtual card is displayed, and at least one of the following information of at least one social account of the social network is displayed in the flipped virtual card: an identification and a dynamic, where the social account has a social relationship with the login account of the social network client. Referring to fig. 4K, fig. 4K is a display interface diagram of a dynamic publishing method of the social network provided in an embodiment of the present application. A mood card 402K (virtual card) is displayed in a human-computer interaction interface 401K, and an emoticon 403K is displayed in the mood card 402K. After the emoticon 403K falls into a virtual container 404K, the mood card 402K is automatically flipped (displayed in the form of a card-flipping animation). After the card punching, the emoticon 403K is displayed in the mood card 402K of the human-computer interaction interface 401K, the display style of the emoticon 403K in the flipped mood card is significantly different from its display style in the mood card before flipping, and the background image of the flipped virtual card is adapted to the target emoticon. A sharing button 405K is further displayed in the mood card 402K; in response to a trigger operation for the sharing button 405K, the flipped mood card 402K is sent to at least one social account of the social network. Friends 406K who have punched the card today and their dynamic emoticon marks 407K are also displayed in the mood card 402K in the form of avatars.
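A minimal sketch of the card-flip special effect using the Web Animations API; the element id and duration are assumptions:

// Play a flip animation on the mood card element.
function flipMoodCard() {
  const card = document.getElementById('mood-card');   // assumed element id for the mood card
  card.animate(
    [{ transform: 'rotateY(0deg)' }, { transform: 'rotateY(180deg)' }],
    { duration: 600, fill: 'forwards' }                 // keep the flipped state after the animation ends
  );
}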
In some embodiments, after the dynamic associated with the target emoticon is sent to the social network, a special effect animation in which the other emoticons and the virtual container in the virtual card disappear is displayed, and at least one of the following is displayed in the virtual card after the disappearance processing: at least one of the following information of at least one social account of the social network: identification and dynamic; a sharing button used for triggering the virtual card after the disappearance processing to be sent to at least one social account of the social network; the target emoticon in a target display style, wherein the target display style is significantly different from the display style of the target emoticon before the disappearance processing, and the display style includes at least one of: size, color, and special effect; and the background image of the virtual card after the disappearance processing is adapted to the target emoticon. The identification can be an avatar, a nickname, or the like of the social account, the dynamic can be a mood dynamic, a work dynamic, or the like of the social account, and the special effect animation and the virtual card after the disappearance processing provide the user with a rich interactive visual effect.
In some embodiments, in response to a trigger operation for the identification of a target social account, at least one of the following buttons is displayed: a message button used for triggering a jump from displaying the virtual card to displaying a chat interface with the target social account; an interaction button used for sending a reminder message to the target social account while keeping the virtual card displayed; wherein the target social account is any one of the at least one social account. The message button or the interaction button provides the user with a more direct social function, thereby improving the interaction efficiency between users.
As an example, referring to fig. 4L, at least one of the following information of at least one social account of the social network is displayed in the flipped mood card 402L in the human-computer interaction interface 401L: an identification 403L and a dynamic 404L. Clicking a certain friend's avatar (identification 403L) pops up that friend's mood card 408L on top of the displayed virtual card, and the emoticon 407L of the friend's card is displayed. In response to the trigger operation for the identification 403L of the social account, a message button 405L ("send message") is displayed below the mood card 408L and is used for triggering a jump from displaying the mood card to displaying a chat interface with the target social account; clicking the message button 405L jumps to a conversation window. An interaction button 406L ("poke him") is also displayed below the mood card 408L and is used for sending a reminder message to the target social account while keeping the mood card displayed; the interaction button may also trigger a lightweight interaction such as a nudge. Clicking the interaction button 406L sends a lightweight interaction message; the page does not jump at this time, but displays a prompt message indicating that the friend has been poked, and the friend perceives the poke message in the chat interface.
In some embodiments, a view button is also displayed in the flipped virtual card, and a dynamic list is displayed in response to a trigger operation for the view button; wherein the dynamic list includes each emoticon and the social accounts whose dynamics are associated with that emoticon. The mood dynamics of friends can be displayed in the emoticon dimension through the dynamic list, so that the user can learn the dynamics of all friends, and the information display efficiency is improved.
As an example, referring to fig. 4L, clicking a view button 409L (which shows the number of friends whose avatars are not displayed) enters a friend mood list page that presents the mood dynamics of friends in the emoticon dimension. The view button 409L is displayed in the flipped mood card 402L, and in response to a trigger operation for the view button 409L, a dynamic list 410L is displayed; the dynamic list 410L includes each emoticon and the social accounts whose dynamics are associated with that emoticon.
In the following, an exemplary application of the embodiment of the present application in an application scenario of a social network will be described.
In some embodiments, a virtual card is displayed in a human-computer interaction interface of a social client of a terminal, wherein the virtual card comprises a virtual container and a plurality of emoticons which are located outside the virtual container and are respectively associated with different dynamic states, the emoticons are controlled to move in the virtual card based on the action of gravity, in response to the movement of a target emoticon into the virtual container, the terminal sends a dynamic state associated with the target emoticon to a server of the social client, wherein the target emoticon is any one of the emoticons, and the server associates the dynamic state associated with the target emoticon with an account which logs in the social client through the terminal and returns the association relationship to the terminal for display.
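As a minimal sketch, the terminal-to-server step could look like the following; the endpoint URL and payload fields are assumptions and not an actual interface of the social client:

// Send the dynamic associated with the target emoticon to the server of the social client.
async function publishDynamic(targetEmoticon, account) {
  const response = await fetch('/api/dynamics', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ account: account, emoticonId: targetEmoticon.id })
  });
  return response.json();  // the server returns the association for the terminal to display
}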
In some embodiments, referring to fig. 4A, a dynamic record entry of a login account of a social network client is displayed in a human-computer interaction interface, in response to a trigger operation for the dynamic record entry, a mood card 402A (virtual card) is displayed in the human-computer interaction interface 401A, a plurality of emoticons 403A are displayed at a top position of the mood card 402A, in response to a move operation for the human-computer interaction interface 401A, the mood card 402A is moved upward in the human-computer interaction interface, and a result of the upward movement is as shown in fig. 4A, at least a part of the mood card 402A is already displayed in the human-computer interaction interface 401A, for example, a displayed part of the mood card 402A in the human-computer interaction interface occupies a set proportion (for example, 50%) of the mood card 402A as a whole, and the emoticons 403A in the mood card 402A automatically start to fall from top to bottom based on gravity sensing.
In some embodiments, referring to fig. 4B, a mood card 402B is displayed in the human-computer interaction interface 401B, a plurality of emoticons 403B are displayed at the top position of the mood card 402B, although at least part of the mood card 402B has been displayed in the human-computer interaction interface 401B, the emoticons 403B do not automatically drop, and in response to a first body feeling operation for the terminal, the emoticons 403B in the mood card 402B automatically drop from top to bottom from the top position of the mood card 402B based on gravity sensing, that is, no emoticon automatically drops regardless of how the position of the mood card moves, and shaking the terminal is required to trigger the emoticon to drop.
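A minimal sketch of detecting such a first somatosensory operation in a web client; devicemotion is a standard browser event, while the threshold value and the startFallingAnimation() function are assumptions:

// Trigger the gravity-based fall only after the terminal is shaken.
let dropTriggered = false;
window.addEventListener('devicemotion', (event) => {
  const a = event.accelerationIncludingGravity;
  if (!a || dropTriggered) return;
  if (Math.hypot(a.x, a.y, a.z) > 20) {  // rough shake threshold in m/s^2 (assumption)
    dropTriggered = true;
    startFallingAnimation();             // assumed function that starts the top-to-bottom fall
  }
});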
In some embodiments, referring to fig. 4C, fig. 4C is a display interface diagram of a dynamic publishing method of a social network provided in an embodiment of the present application, where the human-computer interface 401C displays a dropping process of the emoticon 403C in the mood card 402C, and if the terminal is tilted, a position of the emoticon 403C changes according to a gravity-sensing simulation physical effect, so that a dropping track changes, that is, when a plurality of emoticons are controlled to move in the mood card based on gravity, in response to a second body sensation operation for the terminal, the second body sensation operation is a shaking or tilting operation, the plurality of emoticons 403C are controlled to move in a direction based on the second body sensation operation, for example, the emoticon 403C originally drops toward the bottom of the mood card 402C, and the emoticon drops toward a side of the mood card 402C due to the tilting of the terminal.
In some embodiments, referring to fig. 4D, a mood card 402D is displayed in the human-computer interaction interface 401D, an emoticon 403D and a text material 404D are displayed in the mood card 402D, for example, the text material 404D is "Hi, what is today's mood", and the text material has blocking and bouncing effects, that is, when a plurality of emoticons 403D are controlled to move in the mood card 402D based on gravity, in response to collision between any emoticon 403D and the text material 404D, the emoticon 403D and the text material 404D are controlled to move in the bouncing direction respectively, or the text material 404D is fixed and the emoticon 403D is controlled to move in the bouncing direction.
In some embodiments, referring to fig. 4E, a mood card 402E is displayed in a human-computer interaction interface 401E, an emoticon 403E is displayed in the mood card 402E, the emoticon 403E gradually falls to the bottom of the mood card based on gravity sensing after leaving the top of the mood card 402E, collision and bounce may occur during falling, for example, collision between the emoticon and the edge, the bottom, and the pot of the mood card, collision between the emoticon and the emoticon, bottom-touch bounce may occur after the emoticon collides with the bottom of the mood card, and the emoticon continues to fall after bottom-touch bounce, which is exemplified by taking the collision object as the bottom of the mood card, and in response to collision between any emoticon 403E and the collision object 404E, the emoticon 403E and the collision object 404E are controlled to move along the direction of the bounce action, respectively.
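A minimal sketch of the collision-and-rebound check between two circular emoticons (a simplified elastic bounce; the field names follow the moving-circle code later in this description and are assumptions):

// Reverse both velocities when two circular emoticons overlap, so they move apart along the rebound direction.
function bounceIfColliding(a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  if (Math.hypot(dx, dy) <= a.radius + b.radius) {
    a.mx = -a.mx; a.my = -a.my;
    b.mx = -b.mx; b.my = -b.my;
  }
}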
Referring to fig. 4F, a mood card 402F is displayed in the human-computer interaction interface 401F, an emoticon 403F is displayed in the mood card 402F, a certain emoticon 403F is held and dragged, the emoticon 403F moves along with a finger, while dragging an emoticon, the background color of the mood card 402F changes with the color of the emoticon, for example, the background color is consistent with the color of the emoticon, or the background color is in the same color system as the color of the emoticon, or the background color is adapted to the color of the emoticon, and in response to a selection operation for the emoticon 403F, the selected emoticon is displayed in a selected state, and the emoticon 403F emits light and becomes larger and is distinguished from the styles of other emoticons 404F.
In some embodiments, referring to fig. 4G, fig. 4G is a display interface diagram of a dynamic publication method of a social network provided in an embodiment of the present application. A mood card 402G is displayed in a human-computer interaction interface 401G, and an emoticon 403G is displayed in the mood card 402G. A certain emoticon 403G is pressed and dragged, and the emoticon 403G moves with the finger; after the hand is released, a physical effect is simulated and the emoticon drops under gravity sensing. In response to a movement operation for the emoticon 403G, for example a press-and-drag operation or a click-and-drag operation, the emoticon 403G is moved above the entrance of the virtual container 404G (mood jar); after the hand is released, the emoticon 403G drops into the virtual container 404G. If, when the hand is released, the emoticon is not above the entrance of the virtual container 404G, the emoticon drops outside the virtual container 404G and does not enter it.
In some embodiments, referring to fig. 5, fig. 5 is a schematic diagram illustrating a dynamic publishing method of a social network according to an embodiment of the present disclosure. In an initial state before the emoticons fall, a fixed coordinate value is set for each emoticon, and the emoticons are statically attached to the top of the mood card area in the human-computer interaction interface. When the mood card slides into the human-computer interaction interface and is completely displayed, a coordinate change under gravity sensing is triggered, the emoticons fall in the human-computer interaction interface, bounce after touching the bottom, and fall again; a displacement of a picture element can also be achieved by gesture dragging. The area of the mood card is defined as an accommodating space, a coordinate boundary of the accommodating space is set, a two-dimensional rectangular coordinate system is established, and the center of gravity of each emoticon 501 is used as its coordinate point; for example, the coordinates of the emoticons are a1(x1, y1), a2(x2, y2). The area of the virtual container 502 is defined as the coordinate range that triggers the mood card to flip; according to the picture edges, this area extends from (m1, n1) to (m2, n2), where (m1, n1) and (m2, n2) characterize the diagonal vertices of the area. When the coordinates of an emoticon 501 enter this area, the mood card flip can be triggered. When an emoticon is dragged by a finger, the displacement of the finger is converted into a coordinate displacement of the emoticon; when the emoticon is dragged into the area of the virtual container, that is, into the coordinate range from (m1, n1) to (m2, n2) that triggers the mood card to flip, and the coordinates a1(x1, y1) of the emoticon enter that range, the mood card flip action can be triggered. Alternatively, the emoticon is dragged to a coordinate range above the area of the virtual container, and after the hand is released it falls vertically downward into the area from (m1, n1) to (m2, n2) of the virtual container, which also triggers the mood card flip action.
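A minimal sketch of the coordinate-range test that triggers the mood card flip, following the (m1, n1) to (m2, n2) region described above (field names are assumptions):

// Return true when the emoticon's coordinate point lies inside the virtual container's area.
function isInContainerRegion(emoticon, region) {
  // region holds the diagonal vertices (m1, n1) and (m2, n2) of the virtual container area
  return emoticon.x >= region.m1 && emoticon.x <= region.m2 &&
         emoticon.y >= region.n1 && emoticon.y <= region.n2;
}
// If this returns true during a drag or a fall, the mood card flip action is triggered.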
In some embodiments, referring to fig. 4H, after the emoticon falls into the virtual container, along with sound effects and terminal vibration, a mood card 402H is displayed in the human-computer interaction interface 401H, an emoticon 403H is displayed in the mood card 402H, and when the emoticon 403H moves into the virtual container, feedback information 404H is output, where the feedback information 404H includes at least one of: the audio information, the motion sensing signal, such as the feedback information 404H, is a motion sensing signal that characterizes the vibration of the terminal.
In some embodiments, referring to fig. 4I, a mood card 402I is displayed in a human-computer interaction interface 401I, and an emoticon 403I is displayed in the mood card 402I. The emoticon 403I gradually falls to the bottom of the mood card 402I based on gravity sensing; collision and rebound may occur during the fall, and the fall may restart after the emoticon 403I touches the bottom. In response to a second body sensing operation on the terminal, such as shaking or tilting of the terminal, the plurality of emoticons are controlled to move in the direction of the second body sensing operation; that is, the position of the emoticon 403I follows gravity sensing to simulate a physical effect, and the emoticon 403I shakes within the accommodating space of the mood card 402I. When a target emoticon moves above the virtual container, the target emoticon 403I is controlled to move into the virtual container in the gravity direction; that is, a random emoticon 403I is shaken to the middle position of the mood card 402I, above the virtual container 404I, and the drop into the virtual container is accompanied by a sound effect and vibration of the mobile phone.
In some embodiments, referring to fig. 6, fig. 6 is a schematic diagram illustrating a dynamic publication method of a social network according to an embodiment of the present application. A mood card 602 in a human-computer interaction interface 601 is moved until it is completely displayed; the emoticon 603 either drops automatically or remains stationary at a fixed position, and the terminal is then shaken to trigger the gyroscope of the mobile phone, so that the emoticon 603 drops and rebounds under gravity sensing. To simulate the gravity and rebound effects, front-end code such as the following is built:
// Construct the layer object <div id="demo"></div> and style it:
#demo {
  width: 100px;                              // define the element width
  height: 100px;                             // define the element height
  background: url("./images/langiu.png");    // define the background image link
  background-size: 100px 100px;              // define the background image size
  position: absolute;                        // define absolute positioning
  left: 0px;                                 // initial left offset (left edge of the mood card area)
  top: 0px;                                  // initial top offset (top of the mood card area)
  border-radius: 50%;                        // define rounded corners so the element is rendered as a circle
}
In some embodiments, shaking the terminal causes the emoticon to roll and bounce back and forth in the human-computer interaction interface via the acceleration sensor and under the influence of the gravity direction; when the emoticon is circular, this can be implemented with code that moves the circle, for example as follows:
// Move the circle
ball.x += ball.mx; // advance the x coordinate by the horizontal movement distance
ball.y += ball.my; // advance the y coordinate by the vertical movement distance
// If the x coordinate plus the movement distance exceeds the canvas width (right boundary reached) or falls below 0 (left boundary reached)
if (ball.x + ball.mx > canvas.width || ball.x + ball.mx < 0) {
  ball.mx = -ball.mx; // reverse the horizontal movement direction
}
// If the y coordinate plus the movement distance exceeds the canvas height (bottom boundary reached) or falls below 0 (top boundary reached)
if (ball.y + ball.my > canvas.height || ball.y + ball.my < 0) {
  ball.my = -ball.my; // reverse the vertical movement direction
}
// Recursively schedule the current method (assumes non-strict mode; in strict mode use a named function instead of arguments.callee)
window.requestAnimationFrame(arguments.callee); // tell the browser that another animation frame is desired
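As a complementary sketch, the acceleration sensor reading could be fed into the movement above; devicemotion is a standard browser event, and the scaling factor and sign convention are assumptions (they vary across devices):

// Let tilting or shaking the terminal change the circle's velocity components.
window.addEventListener('devicemotion', (event) => {
  const acc = event.accelerationIncludingGravity;
  if (!acc) return;
  ball.mx += acc.x * 0.1;  // tilt/shake along x nudges the horizontal velocity (scale is an assumption)
  ball.my -= acc.y * 0.1;  // screen y grows downward, so the sensor's y component is subtracted
});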
In some embodiments, to improve the shaking efficiency of shaking the terminal such that the emoticon falls into the virtual container, the emoticon may be selected to fall into the virtual container during shaking according to the following strategy.
As an example, an appropriate emoticon is selected using the direction and magnitude of the shake, and when the terminal shakes a set magnitude to the left, an emoticon corresponding to the magnitude among the emoticons on the right side of the virtual container is moved to the upper side of the virtual container as an emoticon to be dropped into the virtual container. For example, a mood card 402I is displayed in the human-computer interaction interface 401I, an emoticon 403I is displayed in the mood card 402I, after a certain shake, the emoticon 403I corresponding to the shake amplitude is determined, the emoticon 403I corresponding to the shake amplitude is moved to the upper side of the virtual container 404I, and the distance between the emoticon 403I and the virtual container 404I is positively correlated with the shake amplitude.
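A minimal sketch of this amplitude-based selection; the mapping from shake amplitude to distance and the field names are assumptions:

// When shaking left, the candidates are the emoticons on the right side of the container (as in the example above);
// the emoticon whose distance to the container best matches the shake amplitude is selected.
function pickByAmplitude(emoticons, container, direction, amplitude) {
  const candidates = emoticons.filter(e =>
    direction === 'left' ? e.x > container.x : e.x < container.x);
  if (candidates.length === 0) return null;
  const wantedDistance = amplitude * 10;  // distance positively correlated with the shake amplitude (scale is an assumption)
  return candidates.reduce((best, e) =>
    Math.abs(Math.hypot(e.x - container.x, e.y - container.y) - wantedDistance) <
    Math.abs(Math.hypot(best.x - container.x, best.y - container.y) - wantedDistance) ? e : best);
}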
For example, referring to fig. 4J, for example, a mood card 402J is displayed in a human-computer interaction interface 401J, an emoticon 403J is displayed in the mood card 402J, after the terminal is shaken each time, the emoticon 403J closest to a virtual container 404J in the mood card 402J is automatically switched to be above the virtual container 404J regardless of the amplitude of the terminal, the direction of shaking is used as the direction of selecting the emoticon, and the emoticon on the right side of the virtual container moves a fixed distance in the direction of shaking after shaking each time.
As an example, a mood card is displayed in the human-computer interaction interface, emoticons are displayed in the mood card, after the terminal is shaken each time, emoticons used at high frequency or predicted emoticons most probably used are moved to the upper side of the virtual container, when the predicted emoticons most probably used are predicted, the possible moods of the user can be predicted according to the context use data of the user, so that the emoticons corresponding to the predicted moods are determined, and the context use data comprises previously published social dynamics.
In some embodiments, the emoticons can automatically enter the virtual container. Even without shaking the terminal, the terminal only needs to be tilted so that an emoticon is located above the virtual container, and the emoticon then falls into it under gravity. Alternatively, a random emoticon, or the emoticon predicted to be most likely to be used according to the context usage data of the user, is automatically moved above the virtual container and falls into it. During this process, an emoticon can also be moved above the virtual container in response to a movement operation by the user, which is equivalent to manual intervention.
In some embodiments, referring to fig. 4K, a mood card 402K is displayed in a human-computer interaction interface 401K, and an emoticon 403K is displayed in the mood card 402K. After the emoticon 403K falls into a virtual container 404K, the mood card 402K is automatically flipped (displayed in the form of a card-flipping animation), so that the card punching is completed. The emoticon 403K is displayed in the flipped mood card 402K of the human-computer interaction interface 401K, the display style of the emoticon 403K in the flipped mood card is significantly different from its display style in the mood card before flipping, and the background image of the flipped virtual card is adapted to the target emoticon. A sharing button 405K is further displayed in the mood card 402K, and the flipped mood card 402K is sent to at least one social account of the social network in response to a trigger operation for the sharing button 405K. Friends 406K who have punched the card today and their dynamic emoticon marks 407K are also displayed in the mood card 402K in the form of avatars.
In some embodiments, referring to fig. 4L, at least one of the following information of at least one social account of the social network is displayed in the flipped mood card 402L in the human-computer interaction interface 401L: an identification 403L and a dynamic 404L. Clicking a certain friend's avatar (identification 403L) pops up that friend's mood card 408L, and the emoticon 407L of the friend's card is displayed. In response to a trigger operation for the identification 403L of the social account, a message button 405L is displayed below the mood card 408L and is used for triggering a jump from displaying the mood card to displaying a chat interface with the target social account; clicking the message button 405L ("send message") jumps to a conversation window. An interaction button 406L ("poke him") is displayed below the mood card 408L and is used for sending a reminder message to the target social account while keeping the mood card displayed; clicking the interaction button 406L sends a lightweight interaction message, the page does not jump at this time, but displays a prompt message indicating that the friend has been poked, and the friend perceives the poke message in the chat interface. Clicking a view button 409L (which shows the number of friends whose avatars are not displayed) enters a friend mood list page that presents the moods of friends in the emoticon dimension; the view button 409L is displayed in the flipped mood card 402L, and in response to a trigger operation for the view button 409L, a dynamic list 410L is displayed, where the dynamic list 410L includes each emoticon and the social accounts whose dynamics are associated with that emoticon.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a principle of a dynamic publishing method of a social network according to an embodiment of the present disclosure. In a human-computer interaction interface 701, an emoticon 703 may gradually fall to the bottom of a mood card 702 based on gravity sensing, and collisions and rebounds may occur during the fall, for example collisions between the emoticon and the edges and bottom of the mood card, collisions between the emoticon and the jar, and collisions between emoticons; a bottom-touch rebound occurs after the emoticon collides with the edges and bottom of the mood card, and the emoticon continues to fall after the rebound. During the fall, if the terminal is tilted or shaken, the position of the emoticon follows gravity sensing to simulate a physical effect, and the falling trajectory of the emoticon changes. Pressing and dragging an emoticon moves it along with the finger, and while it is dragged, the background color of the mood card changes with the color of the emoticon; after the hand is released, the physical effect is simulated and the emoticon drops under gravity sensing. If the emoticon is dragged above the virtual container, it drops into the virtual container after the hand is released, completing the mood card punching, and the drop is accompanied by a sound effect and vibration of the mobile phone.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a principle of a dynamic publication method of a social network according to an embodiment of the present disclosure. In the human-computer interaction interface, an emoticon may gradually fall to the bottom of the mood card based on gravity sensing, and collisions and rebounds may occur during the fall, for example collisions between an emoticon and the edges and bottom of the mood card, collisions between an emoticon and the jar (the virtual container), and collisions between emoticons; a bottom-touch rebound occurs after an emoticon collides with the edges and bottom of the mood card, and the emoticon continues to fall after the rebound. The mobile phone is then shaken, and the position of the emoticon follows gravity sensing to simulate a physical effect, shaking within the accommodating space of the mood card. When a certain random emoticon is shaken to the middle position of the accommodating space, namely above the virtual container, the emoticon falls into the virtual container and the mood card punching is completed; when the emoticon falls into the virtual container, this can be accompanied by a sound effect and vibration of the mobile phone.
According to the dynamic publishing method of the social network provided by the embodiment of the application, the mood card comprising the virtual container and the emoticons is directly displayed in the human-computer interaction interface, so that the user does not need to trigger an editing operation in order to display the emoticons used for representing the dynamic; because the movement of the emoticons in the mood card is executed based on gravity, a visual representation with both automation and diversity is realized; and in response to a target emoticon among the emoticons moving into the virtual container, the dynamic associated with the target emoticon is sent to the social network, so that a way of sending a dynamic is triggered based on the movement result of the emoticons, which can effectively improve the diversity and interest of human-computer interaction.
Continuing with the exemplary structure of the social networking dynamic publication device 455 provided by the embodiments of the present application as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the social networking dynamic publication device 455 of the memory 450 may include: the display module 4551 is used for displaying the virtual cards in the human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics; a moving module 4552, configured to control the plurality of emoticons to move in the virtual card based on gravity; a sending module 4553, configured to send, to the social network, a dynamic associated with the target emoticon in response to the target emoticon moving into the virtual container; wherein the target emoticon is any one of a plurality of emoticons.
In some embodiments, before controlling the plurality of emoticons to move in the virtual card based on the gravity, the moving module 4552 is further configured to: when the automatic moving condition is met, automatically switching to a step of controlling a plurality of emoticons to move in the virtual card based on the action of gravity; wherein the automatic moving conditions include: at least part of the virtual card has been displayed in the human-computer interaction interface; or responding to the first body feeling operation, and turning to the step of controlling the plurality of emoticons to move in the virtual card based on the gravity action; the first body feeling operation is used for changing the posture of the electronic equipment displaying the man-machine interaction interface.
In some embodiments, the moving module 4552 is further configured to: controlling the plurality of emoticons to move in the gravity direction from the initial positions in the virtual card; the initial positions of the emoticons in the virtual card are located above the virtual container, the upper position is referred to the gravity direction, and the gravity direction is the attraction direction of gravity.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the gravity, the moving module 4552 is further configured to: in response to the collision of any emoticon with a collision object, controlling the emoticon and the collision object to move along the direction of the rebound action respectively; wherein the collision object comprises at least one of: edges of virtual cards, other emoticons, virtual containers.
In some embodiments, the virtual card further includes text material; the text material is used for prompting selection of a target emoticon from the emoticons; the moving module 4552 is further configured to, when controlling the plurality of emoticons to move in the virtual card based on the gravity action: and in response to the collision between any emoticon and the text material, controlling the emoticon to move along the rebound action direction.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the gravity, the moving module 4552 is further configured to: responding to the selection operation, and displaying that the selected target emoticon is in a selected state; moving the target emoticon to the upper part of the virtual container in response to the moving operation for the target emoticon; wherein, the upper part of the virtual container is taken as the reference in the gravity direction; in response to the movement operation being released, the control target emoticon is moved in the direction of gravity into the virtual container.
In some embodiments, the moving module 4552 is further configured to: in response to the moving operation, at least one of the following setting operations is performed: setting a display style different from other emoticons for the target emoticon; wherein the display style includes at least one of: size, color, special effect; and setting a background image matched with the target emoticon for the virtual card.
In some embodiments, the mobile module 4552 is further configured to: in response to the target emoticon being moved above the virtual container by the moving operation and the moving operation being released, continuing to maintain the setting result of the setting operation; and in response to the target emoticon not being moved above the virtual container by the moving operation and the moving operation being released, canceling the setting result of the setting operation.
In some embodiments, the display module 4551 is further configured to: when the target emoticon starts moving from the upper part of the virtual container to the entrance of the virtual container, or when the target emoticon starts entering the virtual container through the entrance of the virtual container, at least one of the following setting operations is performed: setting a display style different from other emoticons for the target emoticon; wherein the display style comprises at least one of: size, color, special effect; and setting a background image matched with the target emoticon for the virtual card.
In some embodiments, when the plurality of emoticons are controlled to move in the virtual card based on the gravity, the moving module 4552 is further configured to: controlling the plurality of emoticons to move based on the direction of the second body sensation manipulation in response to the second body sensation manipulation, and controlling the target emoticon to move into the virtual container in the direction of gravity when the target emoticon moves above the virtual container; the second body sensation operation is used for changing the posture of the electronic equipment displaying the human-computer interaction interface, and the upper part of the virtual container is referred to by the gravity direction.
In some embodiments, the moving module 4552 is further configured to: taking the emoticons meeting the dynamic adaptation condition in the emoticons as target emoticons, and moving the target emoticons to the upper part of the virtual container; wherein the dynamic adaptation condition comprises: the target emoticon is located in the direction of the second volume sensing operation, and the distance between the target emoticon and the virtual container is positively correlated with the magnitude of the second volume sensing operation.
In some embodiments, the moving module 4552 is further configured to: and taking the emoticon which is positioned in the direction of the second body feeling operation and has the smallest distance with the virtual container in the plurality of emoticons as a target emoticon, and moving the target emoticon to the upper part of the virtual container.
In some embodiments, the mobile module 4552 is further configured to: acquiring historical dynamic of a login account and historical operation data of the login account; based on historical dynamic and historical operation data, calling a neural network model to determine the probability that each emoticon is matched with the current state of the login account; and taking the emoticon with the highest probability as a target emoticon, and moving the target emoticon to the upper part of the virtual container.
In some embodiments, when the target emoticon moves into the virtual container, the moving module 4552 is further configured to: output feedback information; wherein the feedback information comprises at least one of: audio information; a somatosensory signal.
In some embodiments, after sending the dynamic associated with the target emoticon to the social network, the sending module 4553 is further configured to: display a special effect animation of flipping the virtual card, and display at least one of the following in the flipped virtual card: at least one of the following information of at least one social account of the social network: identification and dynamic; a sharing button used for triggering the flipped virtual card to be sent to at least one social account of the social network; the target emoticon in a target display style, wherein the target display style is significantly different from the display style of the target emoticon before flipping, and the display style includes at least one of: size, color, and special effect; and the background image of the virtual card is adapted to the target emoticon.
In some embodiments, the sending module 4553 is further configured to: in response to a triggering operation for the identification of the target social account number, displaying at least one of the following buttons: the message button is used for triggering the step from displaying the virtual card to displaying a chat interface with the target social account; the interaction button is used for sending a reminding message to the target social account under the condition of keeping the virtual card; wherein the target social account is any one of the at least one social account.
In some embodiments, a view button is further displayed in the flipped virtual card, and the sending module 4553 is further configured to: displaying a dynamic list in response to a trigger operation for the view button; wherein the dynamic list includes each emoticon, the social account number that is in a dynamic state associated with each emoticon.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the dynamic publishing method of the social network described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method for dynamic publication of a social network, such as the method for dynamic publication of a social network shown in fig. 3A-3C.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of a program, software module, script, or code written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the application, the virtual card comprising the virtual container and the emoticons is directly displayed in the human-computer interaction interface, so that a user does not need to trigger an editing operation to trigger and display the emoticons used for representing the dynamic emoticons, and because the movement of the emoticons in the virtual card is executed based on gravity, the visual representation with both automation and diversity is realized, a target emoticon in the emoticons is responded to move into the virtual container, the dynamic state associated with the target emoticon is sent to a social network, and a dynamic sending mode is triggered based on the movement result of the emoticons, so that the diversity and interestingness of human-computer interaction can be effectively improved.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (21)

1. A method for dynamic publication of a social network, the method comprising:
displaying the virtual card in a human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics;
controlling the plurality of emoticons to move in the virtual card based on the effect of gravity;
in response to a target emoticon moving into the virtual container, sending a dynamic associated with the target emoticon to a social network; wherein the target emoticon is any one of the plurality of emoticons.
2. The method of claim 1, wherein prior to controlling the plurality of emoticons to move within the virtual card based on gravity, the method further comprises:
when at least part of the virtual card is displayed in the human-computer interaction interface, the step of controlling the plurality of emoticons to move in the virtual card based on the gravity action is carried out; or alternatively
Responding to a first body feeling operation, and turning to the step of controlling the plurality of emoticons to move in the virtual card based on the gravity; the first body feeling operation is used for changing the posture of the electronic equipment displaying the man-machine interaction interface.
3. The method of claim 1,
the controlling the plurality of emoticons to move in the virtual card based on the effect of gravity comprises:
controlling the plurality of emoticons to move in the direction of gravity starting from an initial position in the virtual card; wherein the initial positions of the emoticons in the virtual card are located above the virtual container, the upper position is referred to the gravity direction, and the gravity direction is the attraction direction of the gravity action.
4. The method of any of claims 1 to 3, wherein when controlling the plurality of emoticons to move in the virtual card based on gravity, the method further comprises:
in response to any one of the emoticons colliding with a collision object, controlling the emoticon and the collision object to move in the direction of the rebound action respectively; wherein the collision object comprises at least one of: edges of the virtual card, other emoticons, the virtual container.
5. The method of claim 1,
the virtual card also comprises text materials; the text material is used for prompting that the target emoticon is selected from the emoticons;
when the plurality of emoticons are controlled to move in the virtual card based on the action of gravity, the method further comprises:
and responding to the collision of any one expression symbol and the text material, and controlling the expression symbol to move along the direction of the rebound action.
6. The method of claim 1, wherein when controlling the plurality of emoticons to move in the virtual card based on gravitational effects, the method further comprises:
responding to selection operation, and displaying that the selected target emoticon is in a selected state;
moving the target emoticon to the top of the virtual container in response to a moving operation for the target emoticon; wherein, the upper part of the virtual container is taken as a reference in the gravity direction;
controlling the target emoticon to move in a gravity direction into the virtual container in response to the moving operation being released.
7. The method of claim 6, further comprising:
in response to the moving operation, performing at least one of the following setting operations:
setting a display style different from other emoticons for the target emoticon; wherein the display style comprises at least one of: size, color, special effect;
and setting a background image matched with the target emoticon for the virtual card.
8. The method of claim 7, further comprising:
in response to the target emoticon being moved above the virtual container by the moving operation and the moving operation being released, continuing to hold the setting result of the setting operation;
in response to the target emoticon not being moved above the virtual container by the move operation and the move operation being released, overriding the set result of the set operation.
9. The method of claim 1, further comprising:
when the target emoticon starts to move from the upper part of the virtual container to the inlet of the virtual container, or when the target emoticon starts to enter the virtual container through the inlet of the virtual container, at least one of the following setting operations is carried out:
setting a display style different from other emoticons for the target emoticon; wherein the display style comprises at least one of: size, color, special effect;
and setting a background image matched with the target emoticon for the virtual card.
10. The method of claim 1, wherein when controlling the plurality of emoticons to move in the virtual card based on gravity, the method further comprises:
in response to the second body sensation manipulation, controlling the plurality of emoticons to move based on a direction of the second body sensation manipulation, an
When the target emoticon moves to the upper part of the virtual container, controlling the target emoticon to move along the gravity direction to enter the virtual container; wherein the second body sensation operation is used for changing the posture of the electronic equipment displaying the human-computer interaction interface, and the upper part of the virtual container is referred to the gravity direction.
11. The method of claim 10, wherein the controlling the plurality of emoticons to move based on a direction of a second body sensation operation comprises:
taking the emoticons meeting the dynamic adaptation condition in the emoticons as the target emoticon, and moving the target emoticon to the upper part of the virtual container;
wherein the dynamic adaptation condition comprises: the target emoticon is located in the direction of the second body sensation operation, and the distance between the target emoticon and the virtual container is positively correlated with the amplitude of the second body sensation operation.
12. The method of claim 10, wherein said controlling the movement of the plurality of emoticons based on the direction of the second body sensation operation comprises:
and taking the emoticon which is positioned in the direction of the second body feeling operation and has the smallest distance with the virtual container in the plurality of emoticons as the target emoticon, and moving the target emoticon to the upper part of the virtual container.
13. The method of claim 10, wherein said controlling the movement of the plurality of emoticons based on the direction of the second body sensation operation comprises:
acquiring historical dynamic of the login account and historical operation data of the login account;
based on the historical dynamic state and the historical operation data, calling a neural network model to determine the probability that each emoticon is matched with the current state of the login account;
and taking the emoticon with the highest probability as the target emoticon, and moving the target emoticon to the upper part of the virtual container.
14. The method of claim 1, wherein when the target emoticon moves into the virtual container, the method further comprises:
outputting feedback information; wherein the feedback information comprises at least one of: audio information; a somatosensory signal.
15. The method of claim 1, wherein after sending the dynamic associated with the target emoticon to a social network, the method further comprises:
displaying a special effect animation for turning the virtual card, and displaying at least one of the following in the turned virtual card:
at least one of the following information for at least one social account of the social network: identification and dynamic;
the sharing button is used for triggering the virtual card after turning to be sent to at least one social account of the social network;
the target emoticon in a target display style, wherein the target display style is significantly different from a display style of the target emoticon before flipping, and the display style includes at least one of: the size, the color and the special effect, and the background image of the virtual card is matched with the target expression symbol.
16. The method of claim 15, further comprising:
in response to a triggering operation for the identification of the target social account number, displaying at least one of the following buttons: a message button for triggering to jump from displaying the virtual card to displaying a chat interface with the target social account; the interaction button is used for sending a reminding message to the target social account under the condition of keeping the virtual card; wherein the target social account is any one of the at least one social account.
17. The method of claim 15, wherein a view button is also displayed in the flipped virtual card, the method further comprising:
displaying a dynamic list in response to a trigger operation for the view button; wherein the dynamic list includes each of the emoticons, the social account number that is dynamic in association with each of the emoticons.
18. An apparatus for dynamic publication of a social network, the apparatus comprising:
the display module is used for displaying the virtual card in the human-computer interaction interface; the virtual card comprises a virtual container and a plurality of emoticons which are positioned outside the virtual container and are respectively associated with different dynamics;
the moving module is used for controlling the plurality of emoticons to move in the virtual card based on the action of gravity;
a sending module, configured to send a dynamic associated with a target emoticon to a social network in response to the target emoticon moving into the virtual container; wherein the target emoticon is any one of the emoticons.
19. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor configured to implement the method of dynamic publication of a social network of any of claims 1 to 17 when executing executable instructions stored in the memory.
20. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the method for dynamic publication of a social network of any of claims 1 to 17.
21. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the method for dynamic publication of a social network according to any one of claims 1 to 17.
CN202111189326.XA 2021-10-12 2021-10-12 Dynamic publishing method and device of social network and electronic equipment Pending CN115984023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111189326.XA CN115984023A (en) 2021-10-12 2021-10-12 Dynamic publishing method and device of social network and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111189326.XA CN115984023A (en) 2021-10-12 2021-10-12 Dynamic publishing method and device of social network and electronic equipment

Publications (1)

Publication Number Publication Date
CN115984023A true CN115984023A (en) 2023-04-18

Family

ID=85970594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111189326.XA Pending CN115984023A (en) 2021-10-12 2021-10-12 Dynamic publishing method and device of social network and electronic equipment

Country Status (1)

Country Link
CN (1) CN115984023A (en)

Similar Documents

Publication Publication Date Title
US20210383720A1 (en) Systems and methods for programming instruction
EP3040807B1 (en) Virtual sensor in a virtual environment
US20110215998A1 (en) Physical action languages for distributed tangible user interface systems
JP7447299B2 (en) Adaptive display method and device for virtual scenes, electronic equipment, and computer program
Sreedharan et al. 3D input for 3D worlds
CN115984023A (en) Dynamic publishing method and device of social network and electronic equipment
CN108292193A (en) Animated digital ink
CN114053693B (en) Object control method and device in virtual scene and terminal equipment
Bhagi Android game development with AppInventor
CN113144583A (en) Electronic equipment, key and virtual scene interaction control method, device and medium
Gerini et al. Gamified Virtual Reality for Computational Thinking
CN114425159A (en) Motion processing method, device and equipment in virtual scene and storage medium
CN110604918B (en) Interface element adjustment method and device, storage medium and electronic equipment
Yamamura et al. A development framework for RP-type serious games in a 3D virtual environment
WO2024060888A1 (en) Virtual scene interaction processing method and apparatus, and electronic device, computer-readable storage medium and computer program product
KR101190904B1 (en) Educational quiz marble game module system
WO2024060924A1 (en) Interaction processing method and apparatus for virtual scene, and electronic device and storage medium
CN117138345A (en) Game editing method, game control device and electronic equipment
WO2024037139A1 (en) Method and apparatus for prompting information in virtual scene, electronic device, storage medium, and program product
Krastev et al. Controlling a 2D computer game with a Leap Motion
Karilainen Creation of 3D scoreboard in virtual reality
Sapundzhi et al. Mobile Game Development Using Unity Engine
Rodriguez Improving the experience of exploring a virtual museum
CN117826993A (en) Information display method, information display device, electronic equipment and storage medium
CN113552985A (en) Message processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40084268

Country of ref document: HK