CN113434237A - User generated content display method, device and storage medium - Google Patents

User generated content display method, device and storage medium

Info

Publication number
CN113434237A
Authority
CN
China
Prior art keywords
user
virtual object
track
generated content
virtual
Prior art date
Legal status
Granted
Application number
CN202110770199.6A
Other languages
Chinese (zh)
Other versions
CN113434237B (en)
Inventor
张智
龙宪焜
朱辉颖
周颖枝
翟安东
苏智威
郭诗雅
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co Ltd
Priority to CN202110770199.6A
Publication of CN113434237A
Application granted
Publication of CN113434237B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

An example of the present application discloses a method for displaying user-generated content, comprising: generating and displaying a frame image of a virtual space at preset time intervals, so as to display dynamic effects in the virtual space. The virtual space contains a second virtual object corresponding to the current user and a plurality of first virtual objects, each corresponding to a publisher of user-generated content. The first virtual objects surround the second virtual object, and for any first virtual object among them, the closer it is to the second virtual object, the closer the relationship between the current user and the publisher of the associated user-generated content, or the higher the popularity of that content. A keyword of the corresponding user-generated content is displayed on each first virtual object. The examples of the present application also provide a corresponding apparatus and storage medium.

Description

User generated content display method, device and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for displaying user-generated content, and a storage medium.
Background
With the development of internet technology, the network has gradually become an important source of information. In particular, since the internet entered the Web 2.0 era, users are no longer merely browsers of website content but also producers of it. For example, a user may publish logs, photos, and the like, while also browsing content published by other users.
When a user browses content published by other users, the content published by different users is displayed by sliding a scroll bar up and down. For example, when a user browses user-generated content published by others, the content is displayed in a list, and different items are browsed by sliding up and down.
For example, the post-95 generation, the main user group of social products, has a strong desire to express itself and be noticed, and needs a safe and private space to show the side of itself that it does not want acquaintances to know. In some applications for publishing "little secrets", a user can publish such secrets anonymously while browsing the secrets published by other users. When the user browses the secrets published by others, they are displayed in list form, and the user browses different secrets by sliding up and down.
Disclosure of Invention
The examples of the present application provide a user-generated content display method, apparatus, and storage medium, which can display more user-generated content within a limited interface and make it easy for users to find the user-generated content that interests them.
The application example provides a method for displaying user-generated content, which comprises the following steps:
generating and displaying a frame image of a virtual space at preset time intervals, so as to display dynamic effects in the virtual space; the virtual space contains a second virtual object corresponding to the current user and a plurality of first virtual objects corresponding to publishers of user-generated content; the first virtual objects surround the second virtual object, and for any one of them, the closer the first virtual object is to the second virtual object, the closer the relationship between the current user and the publisher of the associated user-generated content, or the higher the popularity of that content; and a keyword of the corresponding user-generated content is displayed on each first virtual object.
An example of the present application provides a user-generated content display apparatus, including:
a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, implement the user-generated content display method described above.
The present examples also provide a non-transitory computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the method described above.
With the scheme provided by the examples of the present application, one or more first virtual objects are displayed in a virtual space, each first virtual object corresponds to an item of user-generated content, and the keyword of that content is displayed on the object. Multiple items of user-generated content can thus be displayed within a limited interface, and the user can quickly locate content of interest according to the keywords displayed on the first virtual objects. Displaying user-generated content as first virtual objects in a virtual space also makes the presentation more engaging.
Drawings
In order to more clearly illustrate the technical solutions in the examples or prior art of the present application, the drawings needed to be used in the description of the examples or prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only examples of the present application, and it is obvious for a person skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a system architecture diagram to which the present application relates;
FIG. 2 is a schematic flow chart diagram of a method for user-generated content presentation in some examples of the present application;
FIGS. 3a and 3b are schematic flow diagrams of a user-generated content presentation method in some examples of the present application;
FIG. 4 is a schematic illustration of a coordinate system in virtual space in some examples of the present application;
FIG. 5 is a schematic illustration of determining a position of a first virtual object in a virtual space in some examples of the application;
FIG. 6 is a schematic illustration of generating a virtual space corresponding image in some examples of the present application;
FIG. 7a is an interface schematic of a virtual space interface shown in some examples of the present application;
FIG. 7b is a schematic illustration of a user-generated content detail interface in some examples of the application;
FIG. 8 is an interaction diagram of a user-generated content presentation method in some examples of the application;
FIG. 9 is a schematic diagram of a user-generated content presentation device in some examples of the application; and
FIG. 10 is a block diagram of a computing device in some examples of the present application.
Detailed Description
The technical solutions in the examples of the present application will be clearly and completely described below with reference to the drawings in the examples of the present application, and it is obvious that the described examples are only a part of the examples of the present application, and not all examples. All other examples, which can be obtained by a person skilled in the art without making any inventive step based on the examples in this application, are within the scope of protection of this application.
Generally, when a user browses contents published by other users, the contents published by different users are displayed by sliding up and down scroll bars. For example, when a user browses user-generated content published by other users, the user-generated content is presented in a list form, and different user-generated content can be browsed by scrolling the list up and down.
However, this approach cannot display much user-generated content within a limited interface, does not let a user locate content of interest by keyword, and, because content published by other users is shown in a list, cannot reflect the closeness between each publisher and the current user.
Therefore, the examples of the present application provide a method for displaying user-generated content. In this method, keywords corresponding to different items of user-generated content are drawn on different virtual objects (for example, objects displayed as planets), and the virtual objects are displayed in a virtual space. Multiple items of user-generated content can thus be displayed within a limited interface, and the user can quickly locate content of interest according to the keywords displayed on the virtual objects. Displaying user-generated content as virtual objects in a virtual space also makes the presentation more engaging.
Fig. 1 is a system architecture diagram according to an example of the present application. As shown in Fig. 1, the system architecture 100 includes terminal devices 104 (e.g., terminal devices 104a-c) and a server 102, with the terminal devices 104 and the server 102 communicatively coupled via a network 106. The server 102 is configured to provide a user-generated content presentation service. User-generated content (UGC) generally refers to original content produced by a user, which may be presented or provided to other users through an internet platform.
Each user connects to the server 102 through a client application 108 (e.g., client applications 108a-c) or a browser on the terminal device 104. The client application 108 may be an instant messaging application (e.g., a QQ client, a WeChat client, or an MSN client), a browser, or a social application (e.g., a microblog client).
When the server 102 provides the user-generated content presentation service, the client application 108 in the terminal device 104 sends a user-generated content page data request to the server 102, and the server 102 sends keywords of the user-generated content to be presented to the client application 108. The user generated content to be displayed may include user generated content published by a friend of the current user, user generated content published by a friend of the friend, user generated content published by a stranger, and the like, wherein the user generated content published by the stranger may be selected according to an interest characteristic of the user.
The client application 108 on the terminal device 104 establishes a three-dimensional virtual space, and then the client application 108 arranges first virtual objects in the virtual space, the number of which is the same as that of the user-generated contents to be displayed, and one user-generated content corresponds to one first virtual object. The terminal device 104 may draw the keyword of the user-generated content on the surface of or around the first virtual object corresponding to the user-generated content, and may also draw the first virtual object corresponding to the user-generated content into a texture of a different color according to the gender of the publisher of the user-generated content. And finally, displaying the three-dimensional virtual space according to a preset viewpoint and a preset visual angle, wherein the viewpoint is similar to a perspective camera, the visual angle is similar to the shooting angle of the perspective camera, and the image in the virtual space is shot by the perspective camera similar to the three-dimensional virtual space at the shooting angle and displayed. In addition, after clicking a first virtual object, the client application 108 requests the server 102 for user-generated content corresponding to the clicked first virtual object, and displays the returned user-generated content.
In some instances, examples of the terminal device 104 include, but are not limited to, a palmtop computer, a wearable computing device, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a mobile phone, a smartphone, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a media player, a gaming console, a television, a smart terminal, or a combination of any two or more of these or other data processing devices.
In some instances, examples of the one or more networks 106 include a Local Area Network (LAN) and a Wide Area Network (WAN) such as the internet. In some examples, one or more of the networks 106 may be implemented using any network protocol, including various wired or wireless protocols, such as ethernet, Universal Serial Bus (USB), FIREWIRE, global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), bluetooth, WiFi, voice over IP (VoIP), Wi-MAX, or any other suitable communication protocol.
Fig. 2 is a flowchart of a method for presenting user-generated content according to an example of the present application, where the method may be executed by the terminal device 104 shown in fig. 1, and as shown in fig. 2, the method includes the following steps:
s201: receiving related data of at least one user generated content, wherein the related data comprises keywords of the user generated content.
The at least one user may include friends of the current user, friends of friends, strangers, and so on. Friends and friends of friends can be determined from the user relationship chain; strangers can be selected as users who share an interest tag with the current user and are within a preset distance range, or in other ways. One or more items of user-generated content published by a single user may be acquired.
S202: determining the number of first virtual objects in the virtual space according to the related data.
In some examples, the number of keywords may be determined from keywords of the user-generated content included in the related data, and the number of keywords may be taken as the number of first virtual objects.
Alternatively, the related data may include the number of user-generated contents, and the number may be set as the number of first virtual objects.
In some examples, the first virtual object may be a sphere or other solid shape. For example, when the first virtual object is a sphere, the surface texture of the sphere may be rendered into the effect of a planet.
In some examples, the virtual space further includes second virtual objects, each first virtual object being located on at least one track around the second virtual object.
At this time, the related data may further include a correspondence between the identifier of each user-generated content and the at least one track, and in this case, the number of the user-generated contents corresponding to each track may be determined according to the correspondence; and determining the number of the first virtual objects on each track according to the number of the user generated contents corresponding to each track.
S203: the position of each first virtual object in the virtual space is determined.
In some examples, the first virtual objects may be randomly arranged in the virtual space, so as to achieve the effect of scattering the first virtual objects. Alternatively, the first virtual object may be arranged in the virtual space according to a certain rule, for example, the first virtual object may be arranged on a track of an arbitrary shape.
In the case where each first virtual object is located on at least one track around the second virtual object, the first virtual objects may be randomly distributed on their corresponding tracks when their positions are determined. Alternatively, the first virtual objects may be uniformly distributed on each track according to their number, so that the central angle subtended by the arc between every two adjacent first virtual objects is the same. Specifically, the central angle corresponding to each first virtual object on a track is determined from the number of first virtual objects on that track, and the position of each first virtual object on the track is then determined from its central angle.
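The uniform-distribution rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the choice of the horizontal plane for tracks, and the optional phase offset are all assumptions; the central angle per object is 2π divided by the object count, as described.

```javascript
// Hypothetical sketch of uniformly distributing n first virtual objects on a
// circular track of the given radius: adjacent objects are separated by the
// same central angle (2 * PI / n). Tracks are assumed to lie in the y = 0 plane.
function placeObjectsOnTrack(radius, count, phase = 0) {
  const step = (2 * Math.PI) / count; // central angle between adjacent objects
  const positions = [];
  for (let i = 0; i < count; i++) {
    const angle = phase + i * step;
    positions.push({
      x: radius * Math.cos(angle),
      y: 0,
      z: radius * Math.sin(angle),
      angle,
    });
  }
  return positions;
}
```

With four objects on a track of radius 10, the central angle is 90 degrees, so the objects sit on the two axes of the orbital plane.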
S204: keywords of each user-generated content are respectively mapped to a first virtual object.
In some examples, the order of steps S203 and S204 is not limited, and step S203 may be executed first, or step S204 may be executed first.
In the case where step S203 is performed first, the position of the first virtual object in the virtual space is determined, and then the keyword of the user-generated content is drawn on the first virtual object. In the case where step S204 is performed first, the keyword of the user-generated content is drawn on the first virtual object, and then the position of the first virtual object in the virtual space is determined.
S205: and displaying the virtual space containing the first virtual objects according to the positions of the first virtual objects in the virtual space.
In some examples, to present the virtual space, an image corresponding to the virtual space may be generated by three-dimensional projection, and the virtual space is then presented by displaying that image. For example, an image may be generated and displayed according to a preset viewpoint position and view angle and the position of each first virtual object in the virtual space. The viewpoint is analogous to a perspective camera and the view angle to that camera's shooting angle: the displayed image is as if the three-dimensional virtual space were photographed by the perspective camera at that angle.
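The perspective-camera analogy above amounts to a pinhole projection. The sketch below is an assumption-laden simplification (a camera on the z-axis looking down it, with a hypothetical focal length standing in for the view angle), not the patent's rendering code; in practice a library such as three.js would perform this projection.

```javascript
// Minimal pinhole-projection sketch: a camera at (0, 0, cameraZ) looking
// toward -z maps a 3D point in the virtual space onto a 2D image plane.
// All parameter names are illustrative.
function projectPoint(point, cameraZ, focalLength) {
  const depth = cameraZ - point.z; // distance from the camera to the point
  if (depth <= 0) return null;     // behind the camera: not visible
  return {
    x: (point.x * focalLength) / depth,
    y: (point.y * focalLength) / depth,
  };
}
```

Points farther from the camera project closer to the image center, which is what makes outer tracks appear smaller than inner ones.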
In some examples, after the keywords of the user-generated content are drawn on the first virtual objects, a correspondence between the identifier of each item of user-generated content and each first virtual object may be established. In response to a subsequent operation on a first virtual object, for example a click, the terminal device 104 determines from this correspondence the identifier of the user-generated content corresponding to the clicked object, requests the corresponding content from the server 102 according to that identifier, and displays it. The content may be shown on the interface presenting the virtual space, or a new page may be created, for example a user-generated content detail page on which the content is displayed.
In some instances, where the virtual space includes a second virtual object, the first virtual object being located on at least one orbit around the second virtual object, the first virtual object on the orbit may rotate around the second virtual object.
In order to exhibit the effect of rotation, a new image may be generated by three-dimensional projection at predetermined time intervals, and the new image is exhibited, thereby exhibiting the effect of rotation. Here, the new image may be referred to as a frame image, and the predetermined time interval may be referred to as a frame time. Thus, when generating an image of the virtual space of the next frame, it is necessary to determine the position of each first virtual object on each track at the time of the next frame.
In some examples, the rotation angle of the first virtual objects on a track within one frame time may be determined from the radius of that track. Specifically, the ratio of a preset rotation coefficient to the track radius may be used as the rotation angle within one frame time. The preset rotation coefficient adjusts the rotation speed: the larger the coefficient, the faster the first virtual objects rotate; the smaller the coefficient, the slower they rotate.
After the rotation angle of each first virtual object within one frame time is determined, the position of each first virtual object in the next frame is determined according to the position of each first virtual object in the current frame and the rotation angle, and an image of the virtual space of the next frame is generated and displayed according to the position of each first virtual object in the next frame.
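The per-frame update described above can be sketched as a one-line rule: the angle advanced in one frame is the preset rotation coefficient divided by the track radius, so objects on inner tracks sweep a larger angle per frame. The function name and the wrap-around at 2π are illustrative assumptions.

```javascript
// Sketch of the per-frame rotation update: within one frame time, an object
// on a track advances by (rotationCoefficient / trackRadius) radians; the
// result is wrapped into [0, 2 * PI).
function nextFrameAngle(currentAngle, trackRadius, rotationCoefficient) {
  const deltaPerFrame = rotationCoefficient / trackRadius;
  return (currentAngle + deltaPerFrame) % (2 * Math.PI);
}
```

The next-frame position of each object is then obtained from its track radius and this updated angle, and a new image of the virtual space is generated and displayed.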
In some instances, in response to a swipe operation (e.g., a user swipe operation on a screen), the first virtual object on each track may rotate accordingly.
The terminal device 104 may obtain a sliding distance and a sliding direction corresponding to the sliding operation, and determine a target position of each first virtual object on each track according to the sliding distance, the sliding direction, and the current position of each first virtual object on each track. Wherein the target position is a position to which the first virtual object finally rotates in response to the sliding operation. For determining the target position of each first virtual object, the rotation angle of each first virtual object can be determined according to the sliding distance; and then determining the target position of each first virtual object according to the current position and the rotation angle of each first virtual object.
When the rotating process is displayed, the process that the first virtual object rotates to the corresponding target position may be displayed through one frame of image or through multiple frames of images. When the presentation is performed by one frame of image, an image of the rotated virtual space may be generated according to the target position of each first virtual object for presentation. When the display is performed through the multi-frame images, several intermediate positions of the first virtual object in the process of rotating to the target position need to be determined, then corresponding images are respectively generated according to the intermediate positions, and then the images are displayed one by one to display the process of gradually rotating.
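The multi-frame display of a swipe-driven rotation can be sketched as a simple interpolation from the current angle to the target angle. This is an assumed linear interpolation for illustration; the patent does not specify how the intermediate positions are chosen.

```javascript
// Hypothetical sketch: given a start angle and the target angle an object
// should rotate to in response to a swipe, produce the intermediate angles
// for a gradual, multi-frame animation (linear interpolation).
function interpolateRotation(startAngle, targetAngle, frames) {
  const step = (targetAngle - startAngle) / frames;
  const angles = [];
  for (let i = 1; i <= frames; i++) {
    angles.push(startAngle + i * step); // angle at frame i; last one is the target
  }
  return angles;
}
```

One image is generated per intermediate angle and the images are displayed one by one, producing the gradual rotation effect.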
In some examples, the planets corresponding to the user-generated content may be drawn in different colors according to the gender of the publisher. For example, a planet corresponding to content published by a male user is drawn in blue, and one corresponding to content published by a female user is drawn in red.
With the method provided by the examples of the present application, one or more first virtual objects are displayed in a virtual space, each first virtual object corresponds to an item of user-generated content, and the keyword of that content is displayed on the object. Multiple items of user-generated content can thus be displayed within a limited interface, and the user can quickly locate content of interest according to the keywords displayed on the first virtual objects. Displaying user-generated content as first virtual objects in a virtual space also makes the presentation more engaging.
The method for displaying user-generated content provided by the examples of the present application may be implemented by the terminal device executing JS code, which may be integrated in the client application 108 or acquired from the server 102 when the terminal device needs to display the virtual space. For example, the server 102 may transmit the JS code together with the relevant data to the terminal device 104; having the server transmit it reduces the size of the installation package of the client application 108. The JS code may invoke the interface of three.js, and runs on the client application 108 (for example, in a browser embedded in the client application 108), combining the information of the user-generated content in the relevant data to display the virtual space.
The following describes, with reference to fig. 3a and 3b, a user-generated content presentation method provided by an example of the present application, where the method may be executed by the terminal device 104 shown in fig. 1, and as shown in fig. 3a and 3b, the method includes the following steps:
s301, the terminal device sends a user generated content page data request to the server.
In some examples, the user-generated content page data request carries an account of the current user, e.g., an instant application account or a social application account.
S302: data relating to at least one user-generated content is received. The related data includes keywords of the user-generated content.
At least one user in step S302 may include friends of the current user, friends of those friends, strangers, and so on. The server can determine the friends of the current user from the current user's relationship chain and query the user-generated content they have published. After the friends are determined, friends of friends are determined from those friends' relationship chains, and the content they have published is queried. User-generated content of strangers can also be retrieved; when doing so, content published by users who share an interest tag with the current user may be acquired. The content acquired from a friend may be that friend's most recently published content, and one or more items may be acquired per friend. After the content published by friends, friends of friends, and/or strangers is obtained, keywords can be extracted from it. Keywords may be extracted in various ways, for example using TF-IDF, where TF is the term frequency and IDF is the inverse document frequency: TF = (number of times a word appears in the user-generated content) / (total word count of that content), IDF = log(total number of items of user-generated content in the corpus / (number of items containing the word + 1)), and TF-IDF = TF * IDF. The word with the largest TF-IDF value is selected as the keyword of the user-generated content.
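The TF-IDF selection described above can be sketched directly from its formulas. This is a minimal illustration under assumptions (each item of content is pre-tokenized into an array of words; the function name is hypothetical), not the server's actual implementation.

```javascript
// Sketch of TF-IDF keyword selection for one item of user-generated content.
//   TF  = occurrences of the word in the content / total words in it
//   IDF = log(total items in the corpus / (items containing the word + 1))
// The word with the largest TF * IDF is returned as the keyword.
function selectKeyword(docWords, corpus) {
  const total = docWords.length;
  let best = null;
  let bestScore = -Infinity;
  for (const word of new Set(docWords)) {
    const tf = docWords.filter((w) => w === word).length / total;
    const containing = corpus.filter((d) => d.includes(word)).length;
    const idf = Math.log(corpus.length / (containing + 1));
    const score = tf * idf;
    if (score > bestScore) {
      bestScore = score;
      best = word;
    }
  }
  return best;
}
```

Words that appear in every item of the corpus get a low (even negative) IDF, so a word that is frequent in one item but rare elsewhere wins, which matches the intent of picking a distinguishing keyword.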
S303: determining the number of first virtual objects in the virtual space according to the related data.
In some examples, the first virtual object may be a sphere or other solid shape. For example, when the first virtual object is a sphere, the surface texture of the sphere may be rendered into the effect of a planet.
In some examples, the number of first virtual objects may be determined in any of the following manners S3031-S3033.
S3031: the number of user-generated contents in the related data is taken as the number of first virtual objects in the virtual space.
In some examples, the amount of user-generated content may be included in the related data, and at this time, the amount of user-generated content in the related data may be used as the amount of the first virtual object in the virtual space.
S3032: determining the number of keywords according to keywords of the user-generated content included in the related data; and taking the determined number of the keywords as the number of the first virtual objects in the virtual space.
For the case where the number of user-generated contents is not included in the related data, the number of keywords may be determined from the keywords included in the related data; and taking the determined number of the keywords as the number of the first virtual objects in the virtual space.
S3033: determining the number of the user generated contents corresponding to each track according to the corresponding relation between the identification of each user generated content and the at least one track; and determining the number of the first virtual objects on each track according to the number of the user generated contents corresponding to each track.
For the case where the first virtual object is located on a track surrounding the second virtual object, the correlation data further includes a correspondence between an identification of each user-generated content and the at least one track. In this case, the number of user-generated contents corresponding to each track may be determined according to the correspondence; and then determining the number of the first virtual objects on each track according to the number of the user generated contents corresponding to each track.
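Deriving the per-track object counts from the correspondence in the related data is a simple tally, sketched below. The shape of the correspondence records (`contentId`, `trackId`) is an assumption for illustration; the patent only says the related data maps content identifiers to tracks.

```javascript
// Hypothetical sketch: count how many first virtual objects each track needs,
// given the correspondence between content identifiers and track identifiers.
function countObjectsPerTrack(correspondence) {
  const counts = {};
  for (const { trackId } of correspondence) {
    counts[trackId] = (counts[trackId] || 0) + 1;
  }
  return counts;
}
```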
The correspondence between the user-generated content in the related data and the track may be determined by the server, and the server may correspond the user-generated content of the friend to the track close to the second virtual object (i.e., the track of the inner layer, hereinafter referred to as the first track), and correspond the user-generated content published by the friend of the friend or the user-generated content published by the stranger to the track of the outer layer. Wherein the second virtual object represents the current user, and the terminal device can draw the head portrait of the current user on the second virtual object.
In this way, the closer a first virtual object is to the second virtual object, the more intimate the relationship between the publisher of the corresponding user-generated content and the current user. In the case that the number of first virtual objects on the first track has an upper limit (for example, a first set value), the server may further judge whether the number of the friends' user-generated contents is greater than the first set value. When it is not greater than the first set value, the user-generated contents published by the friends are directly corresponded to the first track; when it is greater than the first set value, user-generated contents whose number equals the first set value may be selected from them. For example, the friends' user-generated contents may be sorted according to their popularity, and the top contents whose number equals the first set value are selected to correspond to the first track. The popularity of a user-generated content may be determined according to the number of comments on it.
The server may correspond the user-generated content published by the friends of the friends with other tracks outside the first track (e.g., a second track, a third track, etc.). If the maximum value of the number of the first virtual objects on the second track is a second set value and the maximum value of the number of the first virtual objects on the third track is a third set value, the server can firstly judge whether the number of the user generated contents issued by the friends of the friends is greater than the second set value, and if the number of the user generated contents issued by the friends of the friends is not greater than the second set value, the user generated contents issued by the friends of the friends are directly corresponding to the second track; and if the number of the user generated contents released by the friends of the friends is greater than a second set value, sequencing the user generated contents released by the friends of the friends according to the user generated content heat, and selecting the user generated contents with the number of the second set value in the front of the sequencing to correspond to the second track.
The server can further judge the number of the remaining user generated contents in the user generated contents released by the friends of the friends, when the number of the remaining user generated contents is smaller than a third set value, the remaining user generated contents are corresponding to a third track, otherwise, the user generated contents of the friends, which are ranked in the front and are the third set value, are selected to be corresponding to the third track according to the rank ordering of the user generated contents. By analogy, the server can also correspond the user generated content published by the stranger to the track, and the corresponding mode is similar to the corresponding mode of the user generated content of the friend to the track, and is not described again here.
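The tiered assignment described above can be sketched in code. This is an illustrative sketch only: the function name, the use of comment counts as the popularity measure, and the capacity values are assumptions, not part of the patent.

```python
def assign_to_tracks(friend_ugc, fof_ugc, caps):
    """Assign user-generated content (UGC) ids to tracks.

    friend_ugc / fof_ugc: lists of (content_id, comment_count) for the
    current user's friends and friends-of-friends respectively.
    caps: (first, second, third) set values, i.e. the maximum number of
    first virtual objects on each track.
    Returns a dict mapping content_id -> track index (0 = innermost).
    """
    def by_heat(items):
        # Sort by popularity (here: comment count), most popular first.
        return sorted(items, key=lambda c: c[1], reverse=True)

    mapping = {}
    # Friends' UGC fills the inner (first) track; if there is more than
    # the first set value, only the most popular contents are kept.
    for cid, _ in by_heat(friend_ugc)[:caps[0]]:
        mapping[cid] = 0
    # Friends-of-friends fill the second track first; the remaining
    # contents spill over to the third track, again ranked by popularity.
    ranked = by_heat(fof_ugc)
    for cid, _ in ranked[:caps[1]]:
        mapping[cid] = 1
    for cid, _ in ranked[caps[1]:caps[1] + caps[2]]:
        mapping[cid] = 2
    return mapping
```

For example, with caps of (2, 1, 1), the two most-commented friend posts land on the first track, the most popular friend-of-friend post lands on the second, and the next one spills to the third.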
After receiving the corresponding relationship between the identifier of each user generated content and the at least one track, the terminal device may determine the number of the user generated content corresponding to each track according to the track corresponding to the user generated content, and use the number of the user generated content corresponding to each track as the number of the first virtual objects on the track.
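On the terminal side, deriving the per-track object counts from the received correspondence amounts to a counting step; the id-to-track mapping below is purely illustrative.

```python
from collections import Counter

# Correspondence between UGC ids and tracks, as received from the
# server (illustrative values).
correspondence = {"ugc1": 1, "ugc2": 1, "ugc3": 2, "ugc4": 3, "ugc5": 2}

# The number of first virtual objects on a track equals the number of
# user-generated contents corresponding to that track.
objects_per_track = Counter(correspondence.values())
```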
S304: the position of each first virtual object in the virtual space is determined.
In some examples, the position of each first virtual object in the virtual space may be determined by any of the following steps S3041-S3043.
S3041: the first virtual objects are randomly distributed in the virtual space, for example, the virtual space effect of stars in the night sky is realized.
S3042: the first virtual object is arranged in the virtual space according to a predetermined rule, for example, on trajectories of various shapes.
In a case where each first virtual object is located on at least one track around the second virtual object, determining the position of each first virtual object specifically includes the following step:
S3043: determining the position of the first virtual objects on each track according to the number of the first virtual objects on each track.
The first virtual objects on each track may be randomly arranged on the corresponding track, or may be evenly distributed on the corresponding track. For the case that the first virtual objects are evenly distributed on the track, first, as shown in fig. 4, a 3D coordinate system is constructed, which may be a right-hand coordinate system with the positive z-axis pointing out of the paper. The viewpoint may be placed off the z-axis, for example at (-x, -y, z), where z may be the radius of the outermost orbit, so that the displayed virtual space has the effect of a celestial sphere.
When the position of the first virtual object on each track is determined, three tracks are constructed on a plane formed by the x/y axes, and the three tracks are a first track, a second track and a third track from inside to outside respectively. The coordinate system center point (0, 0, 0) is the coordinate of the central second virtual object, and the central second virtual object represents the current user. The first virtual objects on each track are evenly distributed on the tracks. For example, as shown in fig. 5, for the third orbit, there are 12 stars on the third orbit, and the 12 stars are evenly distributed on the third orbit, and the central angle between every two stars is 30 degrees.
Specifically, when determining the position of the first virtual object on each track, the following steps may be included:
s30431: for each track, determining a central angle corresponding to each first virtual object on the track according to the number of the first virtual objects on the track; and determining the position of each first virtual object on the track according to the central angle corresponding to each first virtual object on the track.
S305: and respectively drawing the keywords of each user-generated content on a first virtual object.
In some instances, the keyword on the first virtual object may be drawn by canvas. The drawing of the keyword on the first virtual object means that the keyword is drawn on the surface of the first virtual object or drawn on the periphery of the first virtual object. The texture of the first virtual object can be drawn through canvas, the texture color of the first virtual object corresponding to each user generated content is determined according to the gender of the publishing user of each user generated content, and the corresponding color texture is drawn on each first virtual object. For example, a first virtual object corresponding to user-generated content published by a male is rendered as a blue texture or other color texture, and a first virtual object corresponding to user-generated content published by a female is rendered as a pink texture or other color texture.
When the first virtual objects are located on at least one track around the second virtual object, the method includes the following steps:
s3051: for each user generated content, determining a track corresponding to the user generated content according to the corresponding relation between each user generated content and the at least one track; keywords of the user-generated content are rendered onto a first virtual object on the track.
Wherein the keywords of the user-generated content can be drawn on any one of the first virtual objects on the corresponding track.
S306: and displaying the virtual space containing the first virtual objects according to the positions of the first virtual objects in the virtual space.
When the virtual space is displayed according to the position of the first virtual object, the method specifically comprises the following steps:
s3061: and generating and displaying an image of the virtual space according to the preset viewpoint position and view angle and the position of each first virtual object in the virtual space.
As shown in fig. 6, in the virtual space in which the positive z-axis points out of the paper, an image S of the virtual space is generated with the viewpoint A and the angle of view θ, and the image S is transmitted to the display device of the terminal device for display.
In some examples, the terminal device may generate and display a frame of image of the virtual space at intervals of one frame time, thereby displaying a dynamic effect in the virtual space. For example, when the first virtual object and the second virtual object are stars, the displayed virtual space includes a plurality of stars 701 and keywords 702 on the stars in the displayed virtual space image as shown in fig. 7 a.
In some examples, after the keywords of each user-generated content are respectively mapped to a first virtual object, the method further comprises the steps of:
s307: establishing a corresponding relation between the identification of each user generated content and each first virtual object; responding to the click operation of any first virtual object, and determining the identifier of the user-generated content corresponding to the first virtual object according to the corresponding relation; sending a user generated content acquisition request to a server, wherein the user generated content acquisition request carries an identifier of the user generated content; and receiving the user generated content which is sent by the server and corresponds to the identification of the user generated content, and displaying the user generated content.
After the keywords of the user-generated content are drawn to the first virtual object, the correspondence between the identifier of the user-generated content and the first virtual object may be determined according to the correspondence between the keywords of the user-generated content and the first virtual object. And then, responding to an operation on one first virtual object, such as a click operation, acquiring an identifier of the user generated content corresponding to the first virtual object, and requesting the server for the user generated content corresponding to the identifier of the user generated content and displaying the user generated content.
When the user-generated content is displayed, it may be shown on the interface that displays the virtual space, or a new page may be created, for example a user-generated content detail page that shows the topic content. The user-generated content detail interface may be as shown in fig. 7b, on which user-generated content details 703 and comments 704 on the user-generated content may be presented.
In some instances, the first virtual objects may rotate around the second virtual object while each first virtual object is located on at least one track around the second virtual object.
In order to exhibit the effect of rotation, a new image may be generated by three-dimensional projection at predetermined time intervals, and the new image may be exhibited to exhibit the effect of rotation. Here, the new image may be referred to as a frame image, and the predetermined time interval may be referred to as a frame time. When generating the image of the next frame, it is necessary to determine the position of each first virtual object in the virtual space at the time of the next frame. Specifically, the method comprises the following steps S308-S309.
S308: for each track, determining a rotation angle of each first virtual object on the track around the second virtual object within one frame time interval according to the radius of the track.
When determining the rotation angle of each first virtual object on each track within one frame time interval, the method comprises the following steps:
s3081: and determining the rotation angle of each first virtual object on the track rotating around the second virtual object within one frame time interval according to the ratio of a preset rotation coefficient to the radius of the track.
In some examples, the preset rotation coefficient is used to adjust the rotation speed of the first virtual object, the first virtual object rotates faster when the preset rotation coefficient is larger, and the first virtual object rotates slower when the preset rotation coefficient is smaller.
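The radius-dependent speed rule of S3081 amounts to a single division; the coefficient and radius values used in the test are assumptions for illustration:

```python
def rotation_per_frame(rotation_coeff, radius):
    """Angle an object on a track of the given radius rotates around
    the second virtual object in one frame interval (S3081): the ratio
    of the preset rotation coefficient to the track radius, so
    larger-radius (outer) tracks rotate more slowly."""
    return rotation_coeff / radius
```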
S309: determining the rotated position of the first virtual object on the track according to the position of each first virtual object on the track and the rotation angle of each first virtual object; and displaying the rotated virtual space according to the rotated position of the first virtual object on each track.
In some examples, the rotated position may be determined according to the current position, the rotation angle, and the radius of the track where each first virtual object is located.
The position of the first virtual object may be represented by an angle with respect to one coordinate axis of the 3D coordinate system, for example, as shown in fig. 5, the position of the first virtual object may be represented by an angle with respect to the x-axis of the position of the first virtual object on the track. And determining the rotated angle according to the current angle and the rotation angle, and determining the rotated position according to the rotated angle and the track radius. The position of the viewpoint is fixed and the parameters of the view angle are also fixed, and when the position of the first virtual object in the virtual space changes, the image in the virtual space generated according to the viewpoint and the view angle also changes. And generating a frame of image in the virtual space at intervals of one frame time and sending the frame of image to the display device for displaying, thereby showing the effect of dynamic rotation of the first virtual object in the virtual space.
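Representing each object's position as an angle about the x-axis, one frame of the rotation can be sketched as follows; the radii and coefficient values are illustrative:

```python
import math

def step_frame(angles, radii, coeff):
    """Advance each first virtual object by coeff/radius radians (its
    per-frame rotation angle) and convert the new angle back to x/y
    coordinates on its track. Returns (new_angles, coordinates)."""
    new_angles = [(a + coeff / r) % (2 * math.pi)
                  for a, r in zip(angles, radii)]
    coords = [(r * math.cos(a), r * math.sin(a), 0.0)
              for a, r in zip(new_angles, radii)]
    return new_angles, coords
```

Regenerating the projected image from the returned coordinates once per frame produces the dynamic-rotation effect described in the text.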
In some examples, after the virtual space is presented on the terminal device, the user may rotate the virtual space through a gesture (e.g., a swipe gesture).
In response to a sliding operation, for example, an operation of a user sliding a screen of the terminal device, each first virtual object correspondingly rotates. Mainly comprising the following steps S310-S314.
S310: and responding to the sliding operation of the virtual space, and acquiring the sliding distance and the sliding direction corresponding to the sliding operation. Wherein the rotation direction of the first virtual object is the same as the sliding direction.
S311: and determining the target position of each first virtual object on each track according to the sliding distance, the sliding direction and the current position of each first virtual object on each track.
Determining the target position of each first virtual object includes the following steps S3111-S3112.
S3111: and determining the rotation angle of each first virtual object according to the sliding distance.
Determining the rotation angle according to the following equation (1):
θ = S × MAX(S, S_m) × α (1)

where S represents the sliding distance, S_m is a preset minimum value of the sliding distance, α is a preset parameter, and MAX(S, S_m) represents taking the maximum of S and S_m.
S3112: and determining the target position of each first virtual object according to the current position and the rotation angle of each first virtual object.
The moving track arc length of the first virtual object on the track can be determined according to the rotation angle and the radius of the track where the first virtual object is located, and the target position is determined according to the current position of the first virtual object and the track arc length. The current position and the target position may also be represented in terms of angles, for example, in fig. 5, the angle of the first virtual object on the trajectory with respect to the x-axis. And determining the angle of the target position relative to the x axis according to the angle of the current position relative to the x axis and the rotation angle, and further determining the target position.
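Using equation (1) as printed, the slide-to-target computation may be sketched as follows, treating positions as angles about the x-axis; the function name and parameter values are assumptions:

```python
import math

def target_angle(current_angle, slide_dist, s_min, alpha, direction=1.0):
    """Rotation angle from equation (1), theta = S * MAX(S, S_m) * alpha,
    applied in the slide direction to obtain the target position
    (expressed as an angle about the x-axis)."""
    theta = slide_dist * max(slide_dist, s_min) * alpha
    return (current_angle + direction * theta) % (2 * math.pi)
```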
S312, displaying the rotated virtual space according to the target position of each first virtual object on each track.
When the rotation process is displayed, the process of the first virtual object rotating to the corresponding target position may be shown through one frame of image or through multiple frames of images. When shown through one frame, an image of the rotated virtual space may be generated directly from the target position of each first virtual object. When shown through multiple frames, several intermediate positions of the first virtual object on its way to the target position need to be determined, corresponding images are generated from those intermediate positions, and the images are displayed one by one to show the gradual rotation. For the case of showing the rotation over multiple frames, the following steps S3121-S3122 are specifically included.
S3121: for each first virtual object on each track, at least one intermediate position of the first virtual object from the current position to the target position is determined from the target position of the first virtual object.
Each intermediate position may be determined by the following equation (2):

P_{l,i}^{mid} = P_{l,i}^{last} + β × (P_{l,i}^{target} − P_{l,i}^{last}) (2)

where P_{l,i}^{mid} represents the intermediate position of the ith first virtual object on the lth track, P_{l,i}^{target} represents the target position of the ith first virtual object on the lth track, P_{l,i}^{last} represents the last position of the ith first virtual object on the lth track, and β is a preset parameter.
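Assuming equation (2) is the usual exponential-easing update, in which each intermediate frame moves a fixed fraction β of the remaining distance toward the target, a minimal sketch:

```python
def ease_toward(last, target, beta):
    """One step of equation (2): next intermediate position =
    last + beta * (target - last)."""
    return last + beta * (target - last)

def intermediate_positions(start, target, beta, frames):
    """Iterate the easing step to produce the intermediate positions of
    S3121; each frame closes a fraction beta of the remaining gap."""
    positions, pos = [], start
    for _ in range(frames):
        pos = ease_toward(pos, target, beta)
        positions.append(pos)
    return positions
```

With β = 0.5, an object halves its remaining distance to the target each frame, giving the gradual-rotation effect the text describes.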
S3122: presenting the rotated virtual space according to the at least one intermediate position and the target position of each first virtual object on each track.
Images of multiple frames of the virtual space are respectively generated according to each intermediate position and the target position of each first virtual object, and the images are displayed in sequence.

In some examples, the first virtual object is a planet in the virtual space, the second virtual object is the planet at the center of the virtual space, and the other planets are located on orbits around the central planet. The central planet represents the current user, and the other planets represent user-generated content published by the current user's friends or by friends of those friends. The user-generated content may be a small secret posted by the user, a microblog posted by the user, or the like. In this example, it is assumed that the number of orbits around the central planet is 3: the first orbit, the second orbit, and the third orbit, from inside to outside. The message interaction diagram of this example is shown in fig. 8 and mainly includes the following steps.
S801: the terminal device 104 sends a user-generated content page data acquisition request to the server 102 through the application client 108, where the request carries an account of the current user.
In some examples, the account may be a QQ number, a WeChat ID, a Weibo account, or the like.
S802: the server 102 determines the friend of the current user according to the account of the current user, and queries the user generated content issued by the friend of the current user.
When determining the friends of the current user, the method can obtain the friends according to the user relationship chain of the current user. The inquired user generated content released by the friend is the user generated content released by the friend most recently, and one user generated content or a plurality of user generated contents released by one friend can be obtained.
S803: and judging whether the quantity of the user generated contents of the friends is greater than a first set value, wherein the first set value is the maximum value of the quantity of the stars on the first track. When the first setting value is greater than the first setting value, step S805 is performed, and when the first setting value is not greater than the first setting value, step S804 is performed.
In this example, the planets on the first track are associated with user-generated content published by the current user's friends, and the publishers of the user-generated content associated with the planets on the first track have the closest relationship to the current user. The planets on the second and third tracks are associated with user-generated content published by friends of the current user's friends, where the popularity of the user-generated content corresponding to the planets on the second track is higher than that of the user-generated content corresponding to the planets on the third track. In the case where there are multiple tracks, the closer a track is to the central planet, the more intimate the relationship between the publisher of the user-generated content associated with the planets on that track and the current user, or the higher the popularity of that user-generated content.
S804: and taking the acquired user generated content of the friend as first user generated content, extracting keywords of the first user generated content, and acquiring the gender of a publisher of the first user generated content.
S805: the method comprises the steps of sequencing user generated contents of friends according to the popularity of the user generated contents, selecting the user generated contents with the quantity equal to a first set value in the front sequencing as first user generated contents, extracting keywords of the first user generated contents, and inquiring the gender of a publisher of each first user generated content.
S806: a correspondence between the ID of the first user-generated content and the first track is established.
S807: and determining friends of all friends in the friends of the current user, and acquiring user generated content released by the friends of all friends.
S808: and judging whether the quantity of the user generated contents issued by the friends of the friends is greater than a second set value. Wherein the second set value is the maximum value of the number of stars on the second orbit. And executing the step S810 when the quantity of the user generated contents of the friend is greater than a second set value, otherwise, executing the step S809.
S809: and taking the obtained user generated content of the friend as second user generated content, extracting keywords of the second user generated content, and inquiring the gender of a publisher of the second user generated content.
S810: and sequencing the user generated contents issued by the friends of the friends according to the popularity of the user generated contents, selecting the user generated contents with the quantity equal to a second set value in the top sequencing as second user generated contents, extracting keywords of the second user generated contents, and inquiring the gender of the issuer of each second user generated content.
S811: a correspondence between the ID of the second user-generated content and the second track is established.
S812: and judging whether the number of the user generated contents issued by the friends of the remaining friends is greater than a third set value, wherein the third set value is the maximum value of the number of the stars on the third track. When the user generated content of the remaining friends of the friends is greater than the third setting value, step S814 is performed, otherwise, step S813 is performed.
S813: and taking the user generated content issued by the friends of the rest friends as third user generated content, extracting keywords of the third user generated content, and inquiring the gender of the issuer of the third user generated content.
S814: and according to the ranking of the popularity of the user generated contents issued by the friends of the friends, selecting the user generated contents with the number equal to a third set value from the rest user generated contents as third user generated contents, extracting keywords of the third user generated contents, and inquiring the gender of the issuer of each third user generated content.
S815: and establishing the corresponding relation between the ID of the third user generated content and the third track.
The steps S802-S806 and the steps S807-S815 are not necessarily executed in sequence: S802-S806 may be executed first and S807-S815 afterwards; or S807-S815 first and S802-S806 afterwards; or both may be executed in parallel.
S816: the correspondence between the ID of the first user-generated content and the first track, the correspondence between the ID of the second user-generated content and the second track, the correspondence between the ID of the third user-generated content and the third track, the keyword of each user-generated content, the ID of each user-generated content, and the sex of the publisher of each user-generated content are transmitted to the terminal device 104.
S817: and determining the number of the user generated contents corresponding to the first track, the number of the user generated contents corresponding to the second track and the number of the user generated contents corresponding to the third track according to the corresponding relation.
S818: and determining the number of the stars on each track and the positions of the stars according to the number of the user generated contents corresponding to each track. The number of the user generated contents corresponding to each track is used as the number of the stars on each track, and the positions of the stars on each track are determined according to the number of the stars on each track. The stars on each track can be randomly distributed on the track or uniformly distributed on the track. The specific determination of the position of the planet can refer to the corresponding operations in the example of fig. 3.
S819: and drawing the keywords of the user generated content on a planet on the track corresponding to the user generated content, and drawing the texture of the corresponding planet according to the gender of the publisher of the user generated content. The texture and keywords for specifically drawing the planet can refer to the corresponding operations in the example of fig. 3.
S820: after the keywords of the user-generated content are drawn on the planet, the corresponding relation between the user-generated content ID and the planet can be established. Specifically, a user generated content ID is determined according to keywords of the user generated content, and a corresponding relation between the user generated content ID and the planet is determined according to a corresponding relation between the keywords of the user generated content and the planet.
S821: and displaying the virtual space. And when the virtual space is displayed, generating an image of the virtual space according to the viewpoint position and the visual angle. The operation of specifically presenting the virtual space may refer to the corresponding operation in the example of fig. 3.
S822: and determining the user-generated content ID corresponding to the planet in response to clicking operation on one planet. For example, in the interface diagram shown in fig. 7a, when the user clicks one of the stars 701, the user-generated content ID corresponding to the clicked star is determined according to the correspondence between the user-generated content ID and the star established in step S820.
S823: the server 102 is requested for the corresponding user-generated content based on the user-generated content ID.
S824: the server 102 returns the user-generated content to the terminal device 104. And after receiving the returned user generated content, the terminal device 104 displays the user generated content. The specific presentation of the user-generated content may refer to the corresponding operations in the example of fig. 3.
S825: the rotation angle of the planet on each orbit within each frame time interval is determined. Wherein the stars on each orbit spin around the central star. The tracks in the virtual space may or may not be visible. In addition, each planet can rotate around the central axis of the planet while rotating around the planet at the center.
The rotation of the three orbits of the stars around the z-axis is controlled by default on the principle that the farther away from the central star the slower the speed. And taking the ratio of the preset rotation coefficient to the radius of each track as the rotation angle of the planet on each track in each frame time interval, wherein the larger the radius of the track is, the slower the planet on the track rotates, and the smaller the radius of the track is, the faster the planet on the track rotates. The specific determination of the rotation angle may refer to the corresponding operations in the example of fig. 3.
S826: and determining the positions of the rotated stars on the tracks according to the rotation angles of the stars on the tracks, and displaying the rotated virtual space according to the positions of the rotated stars on the tracks. The specific determination of the position of the star after rotation can refer to the corresponding operations in the example of fig. 3.
S827: responding to the sliding operation, and acquiring the sliding distance and direction; and determining the target position according to the sliding distance, the sliding direction and the current position of each planet.
When the user uses the terminal device 104, the star in the virtual space rotates accordingly when the user slides the screen. The sliding direction of the planet is the same as the sliding direction of the gesture of the user, and the position change generated by the planet sliding is adaptive to the sliding distance of the gesture. The specific determination of the target position of the planet can refer to the corresponding operations in the example of fig. 3.
S828: and displaying the rotating virtual space according to the target position of each planet. The presentation of the virtual space in particular according to the target position of the planet may refer to the corresponding operations in the example of fig. 3.
The present embodiment also provides a user-generated content presentation apparatus 900, as shown in fig. 9, the apparatus includes:
a receiving unit 901, configured to receive related data of at least one user-generated content, where the related data includes keywords of the user-generated content;
a first determining unit 902, configured to determine, according to the related data, a number of first virtual objects in a virtual space;
a second determining unit 903, configured to determine a position of each first virtual object in the virtual space;
a drawing unit 904, configured to draw each keyword of the user-generated content to a first virtual object;
a display unit 905, configured to display a virtual space containing each first virtual object according to the position of each first virtual object in the virtual space.
In some examples, the relevant data further includes an identification of each user-generated content;
the apparatus further comprises a user-generated content presentation unit 906 for:
establishing a corresponding relation between the identification of each user generated content and each first virtual object;
after the virtual space containing each first virtual object is displayed according to the position of each first virtual object in the virtual space, responding to the click operation of any first virtual object, and determining the identifier of the user generated content corresponding to the first virtual object according to the corresponding relation;
sending a user generated content acquisition request to a server, wherein the user generated content acquisition request carries an identifier of the user generated content;
and receiving the user generated content which is sent by the server and corresponds to the identification of the user generated content, and displaying the user generated content.
In some examples, the virtual space further includes a second virtual object; each first virtual object is located on at least one track around the second virtual object; and the related data further includes a correspondence between the identification of each user-generated content and the at least one track;
the first determining unit 902 is configured to:
determining the number of the user generated contents corresponding to each track according to the corresponding relation between the identification of each user generated content and the at least one track;
and determining the number of the first virtual objects on each track according to the number of the user generated contents corresponding to each track.
In some examples, the drawing unit 904 is configured to:
for each user-generated content,
determine the track corresponding to the user-generated content according to the correspondence between the user-generated content and the at least one track; and
draw the keywords of the user-generated content onto a first virtual object on the track.
In some examples, the second determining unit 903 is configured to:
for each track,
determining a central angle corresponding to each first virtual object on the track according to the number of the first virtual objects on the track;
and determining the position of each first virtual object on the track according to the central angle corresponding to each first virtual object on the track.
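A minimal sketch of this even-spacing rule: with n first virtual objects on a track, each adjacent pair subtends a central angle of 2π/n, which fixes each object's position on the circle (the z = 0 plane is assumed here for simplicity):

```python
import math

def positions_on_track(radius: float, count: int):
    """Evenly distribute `count` objects on a circular track: the central
    angle between adjacent objects is 2*pi / count."""
    step = 2 * math.pi / count
    return [(radius * math.cos(i * step), radius * math.sin(i * step))
            for i in range(count)]
```

For example, four objects on a track of radius 1 sit at central angles 0, 90, 180 and 270 degrees.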
In some examples, the user-generated content presentation device 900 further includes a first rotation unit 907 for:
at predetermined time intervals, the following operations are performed:
for each track, determining a rotation angle of each first virtual object on the track around the second virtual object within the preset time interval according to the radius of the track; determining the rotated position of the first virtual object on the track according to the position of each first virtual object on the track and the rotation angle of each first virtual object;
and displaying the rotated virtual space according to the rotated position of the first virtual object on each track.
In some examples, the first rotation unit 907 is to:
and determining the rotation angle of each first virtual object on the track rotating around the second virtual object within the preset time interval according to the ratio of a preset rotation coefficient to the radius of the track.
In some examples, the user-generated content presentation device 900 further comprises a second rotation unit 908 for:
responding to the sliding operation on the virtual space, and acquiring the sliding distance and the sliding direction corresponding to the sliding operation;
determining the target position of each first virtual object on each track according to the sliding distance, the sliding direction and the current position of each first virtual object on each track;
and displaying the rotating virtual space according to the target position of each first virtual object on each track.
In some examples, the second rotation unit 908 is to:
for each first virtual object on each track, determining at least one intermediate position of the first virtual object from the current position to the target position, in accordance with the target position of the first virtual object;
presenting the rotated virtual space according to the at least one intermediate position and the target position of each first virtual object on each track.
In some examples, each of the at least one intermediate position is determined by the following formula:

P_mid(l,i) = P_prev(l,i) + β · (P_tgt(l,i) − P_prev(l,i))

wherein P_mid(l,i) represents the intermediate position of the ith first virtual object on the lth track, P_tgt(l,i) represents the target position of the ith first virtual object on the lth track, P_prev(l,i) represents the last position of the ith first virtual object on the lth track, and β is a preset parameter.
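Reading the formula above as the standard interpolation implied by the surrounding text (an assumption on our part: each step moves from the last position toward the target by a fraction β), a sketch would be:

```python
def intermediate_position(prev, target, beta=0.2):
    """One interpolation step from the previous position toward the target.
    beta is the preset parameter; values in (0, 1] give a smooth easing."""
    return tuple(p + beta * (t - p) for p, t in zip(prev, target))

def path_to_target(start, target, beta=0.2, steps=5):
    """Successive intermediate positions used to animate the rotation."""
    positions, current = [], start
    for _ in range(steps):
        current = intermediate_position(current, target, beta)
        positions.append(current)
    return positions
```

Each step closes a fixed fraction of the remaining gap, so the first virtual objects decelerate as they approach their target positions rather than jumping there in one frame.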
In some examples, the second rotation unit 908 is to:
determining the target position of each first virtual object on each track according to the sliding distance, the sliding direction and the current position of each first virtual object on each track comprises:
determining the rotation angle of each first virtual object according to the sliding distance;
and determining the target position of each first virtual object according to the current position and the rotation angle of each first virtual object.
In some examples, the rotation angle is determined according to the following formula:

θ = S * MAX(S, S_m) * α

wherein S represents the sliding distance, S_m is a preset minimum value of the sliding distance, α is a preset parameter, and MAX() represents the operation of taking the maximum value.
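A literal transcription of the formula above; the numeric values used below are arbitrary, chosen only to illustrate the formula:

```python
def rotation_angle_from_slide(s: float, s_min: float, alpha: float) -> float:
    """theta = S * MAX(S, S_m) * alpha, as in the formula above.

    MAX(S, S_m) clamps the second factor from below by the preset minimum
    sliding distance S_m."""
    return s * max(s, s_min) * alpha
```

For a short slide (S below S_m) the second factor stays at S_m, while for longer slides the angle grows with the square of the distance.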
In some examples, the display unit 905 is configured to:
generate an image corresponding to the virtual space through three-dimensional projection according to a preset viewpoint position, a preset view angle, and the position of each first virtual object in the virtual space, and display the image.
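One way to realize this projection step is sketched below. This is a hedged illustration: the camera is assumed to look along the −z axis, and the viewpoint, view angle and screen size are assumed parameters; the description does not specify the projection convention:

```python
import math

def project_point(point, viewpoint, fov_deg, screen_w, screen_h):
    """Perspective-project a 3-D point in the virtual space onto the 2-D image.

    Returns screen coordinates, or None if the point is behind the camera."""
    x = point[0] - viewpoint[0]
    y = point[1] - viewpoint[1]
    z = point[2] - viewpoint[2]
    if z >= 0:
        return None  # behind (or at) the camera plane
    # Focal length derived from the preset vertical view angle.
    f = (screen_h / 2) / math.tan(math.radians(fov_deg) / 2)
    sx = screen_w / 2 + f * x / -z
    sy = screen_h / 2 - f * y / -z
    return (sx, sy)
```

Applying this to the position of each first virtual object yields the frame of the virtual space that the display unit shows.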
The present example further provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the user-generated content presentation method described above.
Fig. 10 shows a block diagram of the components of a computing device on which the user-generated content presentation apparatus 900 is located. As shown in fig. 10, the computing device includes one or more processors (CPUs) 1002, a communications module 1004, a memory 1006, a user interface 1010, and a communications bus 1008 for interconnecting these components.
The processor 1002 can receive and transmit data via the communication module 1004 to enable network communications and/or local communications.
The user interface 1010 includes one or more output devices 1012 including one or more speakers and/or one or more visual displays. The user interface 1010 also includes one or more input devices 1014, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch screen display, a touch sensitive tablet, a gesture capture camera or other input buttons or controls, and the like.
The memory 1006 may be a high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 1006 stores a set of instructions executable by the processor 1002, including:
an operating system 1016 including programs for handling various basic system services and for performing hardware related tasks;
the applications 1018 include various application programs capable of implementing the processing flows in each of the above examples, and may include, for example, the data processing apparatus in the examples of the present application. It should be noted that not all of the steps and modules in the above flows and structures are necessary; some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and may be adjusted as required. The division into modules is merely a functional division adopted for convenience of description; in actual implementation, one module may be realized as multiple modules, the functions of multiple modules may be implemented by a single module, and these modules may be located in the same device or in different devices.
The hardware modules in the examples may be implemented in hardware or a hardware platform plus software. The software includes machine-readable instructions stored on a non-volatile storage medium. Thus, the examples may also be embodied as software products.
In the examples of this application, the hardware may be implemented by specialized hardware or hardware executing machine-readable instructions. For example, the hardware may be specially designed permanent circuits or logic devices (e.g., special purpose processors, such as FPGAs or ASICs) for performing the specified operations. Hardware may also include programmable logic devices or circuits temporarily configured by software (e.g., including a general purpose processor or other programmable processor) to perform certain operations.
In addition, each example of the present application can be realized by a data processing program executed by a data processing apparatus such as a computer; clearly, such a data processing program constitutes the present application. Further, a data processing program is generally stored in a storage medium and is executed either by reading the program directly out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or memory) of the data processing apparatus. Such a storage medium therefore also constitutes the present application. The present application also provides a non-volatile storage medium storing a data processing program that can be used to carry out any one of the above method examples of the present application.
The non-volatile computer-readable storage medium may be a memory provided in an expansion board inserted into the computer, or a memory provided in an expansion unit connected to the computer, into which the program is written. A CPU or the like mounted on the expansion board or expansion unit may perform part or all of the actual operations according to the instructions.
The nonvolatile computer readable storage medium includes a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
The foregoing is merely illustrative of the present application and is not intended to limit it; any modifications, equivalent replacements, improvements and the like made within the spirit and scope of the present application are intended to fall within its scope of protection.

Claims (15)

1. A method for displaying user-generated content, comprising:
generating and displaying a frame of image of the virtual space at preset time intervals so as to display the dynamic effect in the virtual space; the virtual space comprises a second virtual object corresponding to the current user and a plurality of first virtual objects corresponding to the user generated content publisher; the plurality of first virtual objects surround the second virtual object, and for any first virtual object in the plurality of first virtual objects, the closer the first virtual object is to the second virtual object, the more intimate the relationship between the publisher of the user-generated content associated with the first virtual object and the current user is or the higher the popularity of the user-generated content is;
and displaying the corresponding keywords of the user-generated content on each first virtual object.
2. The method of claim 1, further comprising:
establishing a corresponding relation between the identification of each user generated content and each first virtual object;
responding to the click operation of any first virtual object, and determining the identifier of the user-generated content corresponding to the first virtual object according to the corresponding relation;
sending a user generated content acquisition request to a server, wherein the user generated content acquisition request carries the determined identifier of the user generated content;
and receiving the user generated content which is sent by the server and corresponds to the determined identifier of the user generated content, and displaying the user generated content.
3. The method of claim 1, wherein each first virtual object is located on at least one track around the second virtual object; wherein, the closer the track is to the second virtual object, the more intimate the relationship between the publisher of the user-generated content associated with the first virtual object on the track and the current user is or the higher the popularity of the user-generated content is.
4. The method of claim 3, wherein presenting keywords of the corresponding user-generated content on each first virtual object comprises:
aiming at each user generated content, determining a track corresponding to the user generated content according to the corresponding relation between each user generated content and at least one track; keywords for the user-generated content are rendered to a first virtual object on the track.
5. The method according to claim 3, wherein the central angles corresponding to the track arc lengths between every two adjacent first virtual objects on each track are the same; the method further comprises:
for each track, determining a central angle corresponding to each first virtual object on the track according to the number of the first virtual objects on the track; and determining the position of each first virtual object on the track according to the central angle corresponding to each first virtual object on the track.
6. The method of claim 3, wherein the plurality of first virtual objects rotate around the second virtual object; the method further comprises:
at predetermined time intervals, the following operations are performed:
for each track, determining the rotation angle of each first virtual object on the track around the second virtual object in the preset time interval according to the radius of the track; determining the rotated position of the first virtual object on the track according to the position of each first virtual object on the track and the rotation angle of each first virtual object;
and displaying the rotated virtual space according to the rotated position of the first virtual object on each track.
7. The method of claim 3, further comprising:
receiving related data of at least one user-generated content, and determining the number of first virtual objects in a virtual space according to the related data;
wherein, when the related data contains a correspondence between an identifier of each user-generated content and the at least one track, the determining the number of first virtual objects in the virtual space according to the related data includes:
determining the number of user generated contents corresponding to each track according to the corresponding relation;
determining the number of first virtual objects on each track according to the number of user generated contents corresponding to each track;
when the related data contains the number of user-generated content, the determining the number of first virtual objects in the virtual space according to the related data comprises:
taking the number of the user-generated contents in the related data as the number of the first virtual objects in the virtual space;
when the related data includes keywords of user-generated content, the determining the number of first virtual objects in the virtual space according to the related data includes:
determining the number of keywords according to the keywords of the user-generated content contained in the related data;
and taking the determined number of the keywords as the number of the first virtual objects in the virtual space.
8. The method of claim 1, further comprising:
responding to the sliding operation on the virtual space, and acquiring the sliding distance and the sliding direction corresponding to the sliding operation;
determining the target position of each first virtual object on each track according to the sliding distance, the sliding direction and the current position of each first virtual object on each track;
and displaying the rotating virtual space according to the target position of each first virtual object on each track.
9. The method of claim 8, wherein the exposing the rotated virtual space according to the target position of the first virtual object on each track comprises:
for each first virtual object on each track, determining at least one intermediate position of the first virtual object from the current position to the target position according to the target position of the first virtual object;
presenting the rotated virtual space according to the at least one intermediate position and the target position of each first virtual object on each trajectory.
10. The method of claim 9, wherein each of the at least one intermediate position is determined by the formula:

P_mid(l,i) = P_prev(l,i) + β · (P_tgt(l,i) − P_prev(l,i))

wherein P_mid(l,i) represents said intermediate position of the ith first virtual object on the lth track, P_tgt(l,i) represents the target position of the ith first virtual object on the lth track, P_prev(l,i) represents the position, preceding the intermediate position, of the ith first virtual object on the lth track, and β is a preset parameter.
11. The method of claim 8,
determining the target position of each first virtual object on each track according to the sliding distance, the sliding direction and the current position of each first virtual object on each track comprises:
determining the rotation angle of each first virtual object according to the sliding distance;
and determining the target position of each first virtual object according to the current position and the rotation angle of each first virtual object.
12. The method of claim 11, wherein the rotation angle is determined according to the following formula:

θ = S * MAX(S, S_m) * α

wherein S represents the sliding distance, S_m is a preset minimum value of the sliding distance, α is a preset parameter, and MAX() represents the operation of taking the maximum value.
13. The method of claim 1, wherein the exposing the virtual space containing the first virtual objects according to the positions of the first virtual objects in the virtual space comprises:
and generating an image corresponding to the virtual space through three-dimensional projection according to a preset viewpoint position and a preset view angle and the position of each first virtual object in the virtual space, and displaying the image.
14. A non-transitory computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the method of any one of claims 1 to 13.
15. A computing device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, implement the method of any of claims 1 to 13.
CN202110770199.6A 2018-12-21 2018-12-21 User generated content display method, device and storage medium Active CN113434237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110770199.6A CN113434237B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110770199.6A CN113434237B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium
CN201811569635.8A CN111352679B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811569635.8A Division CN111352679B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113434237A true CN113434237A (en) 2021-09-24
CN113434237B CN113434237B (en) 2022-09-30

Family

ID=71195318

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811569635.8A Active CN111352679B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium
CN202110770199.6A Active CN113434237B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811569635.8A Active CN111352679B (en) 2018-12-21 2018-12-21 User generated content display method, device and storage medium

Country Status (1)

Country Link
CN (2) CN111352679B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030028595A1 (en) * 2001-02-20 2003-02-06 Vogt Eric E. System for supporting a virtual community
CN102129426A (en) * 2010-01-13 2011-07-20 腾讯科技(深圳)有限公司 Method and device for showing character relations
US20130120371A1 (en) * 2011-11-15 2013-05-16 Arthur Petit Interactive Communication Virtual Space
CN103268191A (en) * 2013-06-06 2013-08-28 百度在线网络技术(北京)有限公司 Unlocking method and device of mobile terminal, and mobile terminal
CN103838814A (en) * 2013-11-22 2014-06-04 南京欣网视讯信息技术有限公司 Method for dynamically displaying contacts diagram relationship
CN109040445A (en) * 2018-07-27 2018-12-18 努比亚技术有限公司 Information display method, dual-screen mobile terminal and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387937A (en) * 2007-09-14 2009-03-18 英业达股份有限公司 Three-dimensional dynamic diagram display interface and display method thereof
CN103530018B (en) * 2013-09-27 2017-07-28 深圳天珑无线科技有限公司 The method for building up and mobile terminal at widget interface in Android operation system
US20170003851A1 (en) * 2015-07-01 2017-01-05 Boomcloud, Inc Interactive three-dimensional cube on a display and methods of use
CN107728886B (en) * 2017-10-25 2019-10-15 维沃移动通信有限公司 A kind of one-handed performance method and apparatus

Also Published As

Publication number Publication date
CN111352679A (en) 2020-06-30
CN111352679B (en) 2021-07-16
CN113434237B (en) 2022-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40051410

Country of ref document: HK

GR01 Patent grant