CN112800360B - Object control method and device - Google Patents


Info

Publication number: CN112800360B
Application number: CN202110044173.3A
Authority: CN (China)
Prior art keywords: target object, control, control area, display, request
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN112800360A
Inventor: 常青
Current and original assignee: Shanghai Bilibili Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Shanghai Bilibili Technology Co Ltd; priority to CN202110044173.3A
Publication of application CN112800360A; application granted and published as CN112800360B

Classifications

    • G06F16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G06Q30/0643: Graphical representation of items or shoppers (electronic shopping interfaces)
    • H04L67/131: Protocols for games, networked simulations or virtual reality
    • H04N21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/440263: Reformatting operations of video signals by altering the spatial resolution, e.g. for displaying on a connected PDA
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Mathematical Physics (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides an object control method and device. The method includes: receiving a control request for a target object, the control request including a target object identifier; determining a display position of a control area based on the control request; sending a display request based on the control area to a cloud server, the display request carrying the display position identifier of the target object and the target object identifier; receiving display data for the target object that the cloud server constructs in response to the display request; and setting the control area based on the display position and displaying the display data in the control area.

Description

Object control method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an object control method. The application also relates to an object control apparatus, a computing device, and a computer-readable storage medium.
Background
On an e-commerce website, a commodity is presented mainly through pictures: a user has to browse images to learn all the parts and details of the commodity, which often fails to meet consumers' needs. When there are many pictures, they occupy a large number of pages and browsing becomes tedious; when there are few, the user cannot see the commodity in full, resulting in a poor experience.
Disclosure of Invention
In view of this, the present application provides an object control method. It also provides an object control apparatus, a computing device, and a computer-readable storage medium, to address the prior-art problem that the full appearance of a commodity cannot be viewed comprehensively, and to improve user experience.
According to a first aspect of embodiments of the present application, there is provided an object control method, including:
receiving a control request for a target object, wherein the control request comprises a target object identification;
determining a display position of a control area based on the control request of the target object;
sending a display request based on the control area to a cloud server, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, wherein the display request carries the display position identifier of the target object and the target object identifier;
and displaying the display data in the control area.
According to a second aspect of embodiments of the present application, there is provided an object control apparatus including:
a first receiving module configured to receive a control request for a target object, wherein the control request includes a target object identification;
a determination module configured to determine a presentation position of a control area based on a control request of the target object;
a second receiving module, configured to send a display request based on the control area to a cloud server, and receive display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, where the display request carries a display position identifier of the target object and the target object identifier;
and the display module is configured to set a control area based on the display position and display the display data in the control area.
According to a third aspect of the embodiments of the present application, there is provided an object control method applied to a cloud server, including:
receiving a display request aiming at a target object based on a control area, wherein the display request carries size information of the control area and an object identifier of the target object;
acquiring image information of the target object based on the object identification of the target object, and processing the image information of the target object based on the size information of the control area to obtain a display object for the control area;
and sending the display object to a client, wherein the display object is the image information of the target object displayed in the control area after processing the image information of the target object based on the size information of the control area.
According to a fourth aspect of embodiments of the present application, there is provided an object control apparatus comprising:
a receiving display request module configured to receive a display request based on a control area for a target object, wherein the display request carries size information of the control area and an object identifier of the target object;
an obtaining module configured to obtain image information of the target object based on an object identifier of the target object and process the image information of the target object based on size information of the control area to obtain a display object for the control area;
the sending module is configured to send the display object to a client, wherein the display object is the image information of the target object displayed in the control area after the image information of the target object is processed based on the size information of the control area.
According to a fifth aspect of embodiments of the present application, there is provided a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the object control method when executing the instructions.
According to a sixth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the object control method.
The object control method provided by the application receives a control request for a target object, the control request including a target object identifier; determines a display position of a control area based on the control request; sends a display request based on the control area to a cloud server, the display request carrying the display position identifier of the target object and the target object identifier; receives display data for the target object that the cloud server constructs in response to the display request; and sets the control area based on the display position and displays the display data in it.
In this way, the display position of the control area for the target object is determined from the user's control request, a display request is sent to the cloud server, and the video data stream the cloud server constructs for the target object is received and displayed in the control area. Because the video data stream can be updated in real time by the cloud server, the target object is presented in a richer form than a static picture, improving the user's experience.
Drawings
Fig. 1 is a system architecture diagram of a method for controlling an object according to an embodiment of the present application;
fig. 2 is a flowchart of an object control method according to an embodiment of the present application;
fig. 3 is a processing flow chart of an object control method applied to a cloud server according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating an object control method applied to a merchandise card according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating an object control method applied to a 3D handheld touch control interaction according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an object control apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an object control apparatus according to another embodiment of the present application;
fig. 8 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit and scope; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, the information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
First, the noun terms referred to in one or more embodiments of the present application are explained.
Cloud game: the cloud game is a game mode based on cloud computing, all games run at a server side in a running mode of the cloud game, and a rendered game picture is compressed and then transmitted to a user through a network. At the client, the user's gaming device does not require any high-end processor and graphics card, but only basic video decompression capability.
Cloud computing (cloud computing): is an internet-based computing means by which shared software and hardware resources and information can be provided to computers and other devices as needed. The network that provides the resources is referred to as the "cloud".
DOM (Document Object Model in English): on a web page, objects that organize a page (or document) are organized in a tree structure to represent a standard model of the objects in the document.
3D manual: a three-dimensional model kit assembled without coloring needs a series of complex processes of polishing, assembling, coloring and the like by a player, the difficulty is far greater than that of common model manufacturing, and the main material is resin.
In the present application, an object control method is provided, and the present application relates to an object control apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
The object control method provided in this embodiment takes displaying a commodity in its commodity card area as an example. It explains how the cloud game technology of a cloud server is used to construct a real-time video data stream for the commodity in the commodity card, enriching how the commodity is displayed; the constructed stream is sent to the client and displayed in the control area of the commodity card, so that interaction between the user and the commodity is realized on the basis of cloud game technology, improving the user's experience.
Commodities in a commodity card on a website are generally displayed as a carousel of static pictures or as a pre-stored local video; the user only passively receives this information, with no interaction. To improve the interactivity of existing product elements such as the commodity card, this embodiment proposes using cloud game technology to display the commodity in the card, so that its presentation is richer than a static picture or a pre-stored local video, the user can interact with it, and control of the commodity content becomes more engaging.
It should be noted that a cloud game is essentially an interactive online video stream: the game runs on the cloud server, which compresses the rendered game frames or instructions and transmits them to the user over the network. Both the game and the user data are stored on the server, so no game files need to be installed and no user data stored on the local terminal, and high-quality video streams can be shown to the user without the limits of local storage. In the present application, this cloud game mode of operation is used to display the commodities of a commodity card on a website in real time, letting the user learn more about the commodity and improving the experience.
Referring to fig. 1, fig. 1 is a system architecture diagram of an object control method according to an embodiment of the present application.
In fig. 1, user A browses commodity A through client A. Based on user A's control request for commodity A, client A determines control area A in the commodity card of commodity A and the display position of that area. Note that control area A is both the display area in which commodity A is shown and the control area through which the user interacts with commodity A. Once the display position of control area A is determined, a display request based on that position is sent to the cloud server. After receiving the display request for commodity A, the cloud server constructs display data A for the display position and returns it to control area A for display, so that user A can view the data of commodity A in control area A through client A; this data may be a video data stream with high resolution or rich rendering. Because the video data stream of commodity A is generated on the cloud server, client A only displays it, and can present the user with a video data stream that is updated in real time.
It should be noted that the object control method provided in this embodiment adopts a cloud game approach, so that the user can both view the commodities in a commodity card and interact with them; displaying commodities in the operating mode of a cloud game makes an e-commerce application more interactive and engaging.
In the object control method provided in this embodiment, the user's control request for the target object triggers a display request to the cloud server; the video data stream the cloud server constructs for the target object is received and displayed in the target object's control area. The target object is thus displayed in the manner of a cloud game, its presentation as a video data stream is rich, and the user's experience is improved.
Referring to fig. 2, fig. 2 shows a flowchart of an object control method provided in an embodiment of the present application, which specifically includes the following steps:
step 202: receiving a control request for a target object, wherein the control request includes a target object identification.
The control request for the target object may be understood as a display request for it, and includes a target object identifier. For example, when a control request for a 3D handheld is received, the request carries the identification information of the 3D handheld, so that data matching this identifier can subsequently be determined.
In a specific implementation, the client that receives the control request for the target object includes, but is not limited to, display terminals such as a video playing device, a desktop computer, a smart phone, a tablet computer, or a laptop computer.
In practical application, a user can trigger a control request for a target object by touching the display control for that object in the client interface.
Step 204: and determining the display position of the control area based on the control request of the target object.
The control area may be understood as the area in which the target object is displayed in the client; the target object can be controlled through this area, for example by moving or rotating it within the control area.
The display position may be understood as a position coordinate of the control area of the target object, for example, after the control area of the target object is determined, a coordinate of an upper left corner of the control area may be used as a display position coordinate of the control area.
In specific implementation, the display position of the control area is determined in the client according to the user's control request for the target object; following the example above, the display position of the control area for the 3D handheld is determined from the user's control request for the 3D handheld. In practical application, the position at which the target object is displayed in the client page may be taken as the control area for displaying it, and the display position coordinate of that control area then determined.
The client creates a webpage element based on the user's control request for the target object and then determines the display position of the control area. Specifically, determining the display position of the control area based on the control request of the target object includes:
creating a DOM node tree, and determining a target node aiming at the target object in the DOM node tree based on the control request of the target object;
and taking the target node as a display position of a control area of the target object.
The DOM node tree can be understood as the tree structure formed by the document objects in a web page, a standard model for representing the objects in a document. Note that once a DOM node tree is created in a web page, all nodes in the tree are related to each other; every node can be accessed through the tree, node contents can be deleted or modified, new elements can be created, and the whole page is composed from the document nodes in the tree.
In specific implementation, the client creates a DOM node tree for the page, determines the target node for the target object in that tree based on the user's control request, and takes the target node as the display position of the target object's control area. In practical application, the DOM node tree contains many nodes, and the control area of a target object can occupy one of them: the client determines the specific target node from the position at which the target object is displayed on the page, uses that node as the control area, and derives the display position from the control area's location.
In the object control method provided in this embodiment, the target node for the target object is determined in the created DOM node tree, and from it the display position of the control area on the page. This allows the display position to be determined quickly, and the DOM node tree makes it convenient to modify or delete node content.
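As a rough illustration of the lookup described above, the sketch below models the page as a simplified node tree and searches it for the node associated with the target object, whose coordinates serve as the control area's display position. The node shape, field names, and helper functions are assumptions for illustration, not part of the patent text.

```typescript
// Illustrative sketch: locating the target node for an object in a
// DOM-like tree. Node shape and helper names are assumptions.
interface UINode {
  id: string;
  x: number;          // position of the node in the page, in px
  y: number;
  children: UINode[];
}

// Depth-first search for the node associated with the target object.
function findTargetNode(root: UINode, targetId: string): UINode | null {
  if (root.id === targetId) return root;
  for (const child of root.children) {
    const hit = findTargetNode(child, targetId);
    if (hit) return hit;
  }
  return null;
}

// The target node's coordinates serve as the control area's display position.
function displayPosition(
  root: UINode,
  targetId: string
): { x: number; y: number } | null {
  const node = findTargetNode(root, targetId);
  return node ? { x: node.x, y: node.y } : null;
}
```

In a real client the tree would be the browser's DOM and the lookup something like `document.getElementById`; the plain-object version here just makes the position-derivation step explicit.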
Step 206: sending a display request based on the control area to a cloud server, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, wherein the display request carries a display position identifier of the target object and the target object identifier.
The display request can be understood as a request, sent by the client, to display the target object; the display data is the video data stream the cloud server constructs for the target object based on the display request, and may also include display data such as an audio data stream.
In specific implementation, the client sends the cloud server a display request based on the display position of the target object's control area, the request carrying the display position identifier of the control area and the target object identifier, and receives the display data for the target object that the cloud server constructs in response. For example, the client sends a display request for the 3D handheld to the cloud server and receives the video data stream the cloud server constructs for that request.
Step 208: and setting a control area based on the display position and displaying the display data in the control area.
In specific implementation, the client may set the control area according to the display position determined from the user's control request for the target object, and display the received display data in it. For example, if the display position coordinate of the 3D handheld's control area in a webpage is determined to be (1,1), then the control area of the 3D handheld is set in the page according to that coordinate and the width and height of the 3D handheld's commodity card, and the control area serves as the display area for the 3D handheld.
It should be noted that the manner of setting the control area according to the display position determined for the target object is not limited to this. In practical applications, the control area set by a client may be used to display data such as the target object's video data stream, and the user may also interact with the target object through the control area, for example controlling it through interactions such as clicking or sliding.
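As a rough sketch of how a sliding interaction inside the control area might be translated into a control command for the target object (e.g. rotating the 3D handheld), the snippet below maps pointer movement to a rotation command that could then be forwarded to the cloud server. The command format and sensitivity constant are illustrative assumptions, not part of the patent.

```typescript
// Illustrative sketch: turning a drag gesture in the control area into
// a rotation command for the target object. All names are assumptions.
interface DragEventLike { dx: number; dy: number } // pointer movement, in px

interface RotateCommand { type: "rotate"; yawDeg: number; pitchDeg: number }

const DEG_PER_PIXEL = 0.5; // assumed sensitivity

function dragToRotate(e: DragEventLike): RotateCommand {
  return {
    type: "rotate",
    yawDeg: e.dx * DEG_PER_PIXEL,   // horizontal drag spins the object
    pitchDeg: e.dy * DEG_PER_PIXEL, // vertical drag tilts it
  };
}
```

In a cloud-game setup the client would send such commands upstream and the server would render the rotated object into the next frames of the video stream.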
Furthermore, the control area in which the target object is located can be set according to its size and display position, and the rendered display data displayed in it. Specifically, setting the control area based on the display position and displaying the display data includes:
acquiring the size of a control area of the target object, and setting the control area based on the size of the control area and the display position;
rendering the display data of the target object, and loading the rendered display data in the control area.
For example, when a user clicks a control of a target object through a client, the client may obtain the pixel values of the width and the height of the control area of the target object.
In specific implementation, the client acquires the size of the control area for the target object according to the click control operation of the user, and can set the control area of the target object based on the size of the control area and the determined display position, render the received display data, and load the rendered display data in the control area.
Along the above example, the client determines that the display position coordinate of the control area of the 3D handheld is (1,1), acquires the size of the control area of the target object with length L of 3 cm and height H of 3 cm, and sets the control area of the 3D handheld based on the display position coordinate (1,1) and the L and H of the control area; the client renders the received display data for the 3D handheld, and loads the rendered video data stream in the control area.
It should be noted that the size information of the control area may be the same as or different from the size information of the target object in the commodity card area in the webpage, and the size information of the control area is not limited in this specification.
In the object control method provided in the embodiment of the present specification, the control area of the target object is set, and the received display data of the target object is rendered to obtain display data with high resolution and rich colors to be displayed in the control area, thereby presenting high-quality, visually rich display data and improving the user experience.
In addition, under the condition that the size information of the received display data is not matched with the set size information of the control area, the client needs to process the received display data so that the size information of the display data is matched with the size information of the control area, and the processed display data can be displayed in the control area to improve the use experience of a user; specifically, the control request further includes a control area size of the target object;
correspondingly, the displaying the display data in the control area includes:
and zooming and/or cutting the display data based on the size of the control area of the target object, and displaying the zoomed and/or cut display data in the control area.
When the display data of the target object sent by the cloud server is received and does not conform to the size of the control area, the display data is zoomed and/or cut based on the size of the control area of the target object, so that the processed display data matches the size of the control area and can be displayed directly in the control area.
Along with the above example, if the length L of the control area of the target object is 3 cm and the height H is 3 cm, it can be determined from the ratio information that the resolution of the video data stream in the control area should be 800 × 480; if the resolution of the presentation data received by the client is 720 × 480, the presentation data is enlarged so that its resolution reaches 800 × 480 and matches the control area, whereby a better video data stream can be presented to the user and the use experience of the user is improved.
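One possible scale-then-crop strategy for matching the received stream to the control area's resolution (an assumption for illustration; the method itself does not mandate a particular algorithm):

```typescript
interface Size {
  width: number;
  height: number;
}

// Scale the source uniformly until it covers the target resolution,
// then crop whatever exceeds the target after scaling.
function fitToControlArea(source: Size, target: Size): { scaled: Size; cropped: Size } {
  // uniform scale factor that makes the source cover the target
  const scale = Math.max(target.width / source.width, target.height / source.height);
  const scaled = {
    width: Math.round(source.width * scale),
    height: Math.round(source.height * scale),
  };
  const cropped = {
    width: Math.min(scaled.width, target.width),
    height: Math.min(scaled.height, target.height),
  };
  return { scaled, cropped };
}

// e.g. a 720 x 480 stream fitted to an 800 x 480 control area
const fitted = fitToControlArea({ width: 720, height: 480 }, { width: 800, height: 480 });
```

Covering then cropping avoids letterboxing inside the control area; a fit-then-pad strategy would be an equally valid design choice.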
The object control method provided in the embodiment of the present specification scales and/or clips the display data according to the size of the control area of the target object, so as to match information such as the display scale of the video data with the size of the control area, and improve the viewing experience of the user on the video data.
In addition, in order to make the commodity display modes in the commodity card more diversified and richer, the commodity card not only can be displayed to a user as a static picture or a locally uploaded video, but also can generate various interactive behaviors with the user, so that the commodity display in the e-commerce platform is more interactive and offers richer forms of interaction; specifically, the object control method provided in the embodiment of the present specification further includes:
receiving a touch control instruction for a target object in the control area;
sending a calling request based on the touch control instruction to a cloud server based on the touch control instruction, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
The touch control instruction can be understood as an instruction for triggering touch control on a target object from a client by a user, and the call request can be understood as a call request for display data of the target object based on the touch control instruction.
During specific implementation, the client receives a touch control instruction for a target object in the control area, sends a calling request based on the touch control instruction to the cloud server based on the touch control instruction, receives display data of the target object, which is constructed by the cloud server corresponding to the calling request, and then displays the received display data in the control area.
In practical application, a user can trigger a touch control instruction through a touch control of a target object in a display page of a client; the specific triggering mode of the touch control instruction is not limited in any way in the embodiment of the specification. The client sends a call request for display data to the cloud server based on the touch control instruction. It should be noted that the object control method provided by the embodiment of the specification adopts a cloud game technology: when the client receives a touch control instruction for a target object, the client acquires data from the cloud server based on the touch control instruction, so that the client does not need to perform complex processing on the display data and only needs to display the video data stream constructed by the cloud server in its display area. This not only reduces complex operations for the client, but also increases the interactive experience for the user by calling the display data constructed by the cloud server.
For example, a user triggers a touch control instruction for a target object in a control area through a client, where the control area is a commodity card area in a webpage, the target object is a 3D handheld in a commodity card, the client receives the touch control instruction for the 3D handheld in the commodity card area, and the touch control instruction can be triggered by clicking a screen control or by sliding the screen control, and the triggering mode of the instruction is not limited in this specification; after receiving a touch control instruction for a 3D (three-dimensional) handheld, a client sends a calling request based on the touch control instruction to a cloud server, wherein the touch control instruction can be touch operations of enabling the 3D handheld to rotate leftwards, rotate rightwards, turn upwards, turn downwards and the like; and the client receives the 3D handheld video data stream of the commodity card area constructed by the cloud server in response to the calling request, and displays the video data stream in the commodity card area.
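The call request triggered by such a touch control instruction might carry the gesture and the target object identifier; the payload shape below is a hypothetical illustration, not the actual protocol:

```typescript
// Hypothetical payload for a call request based on a touch control instruction.
type TouchGesture = "rotate-left" | "rotate-right" | "flip-up" | "flip-down";

interface TouchCallRequest {
  type: "touch";
  objectId: string;
  gesture: TouchGesture;
}

function buildTouchCallRequest(objectId: string, gesture: TouchGesture): TouchCallRequest {
  return { type: "touch", objectId, gesture };
}

// e.g. the user slides to rotate the 3D handheld to the right
const touchReq = buildTouchCallRequest("handheld-3d-001", "rotate-right");
```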
According to the object control method provided by the embodiment of the specification, the call request based on the touch control instruction for the target object in the control area is sent to the cloud server, and the display data of the target object constructed by the cloud server based on the call request is received, thereby increasing the interaction behavior between the user and the target object and improving the user experience.
In the object control method provided in another embodiment of the present specification, a user may perform a touch interaction with a target object, perform a dialogue interaction with the target object, and so on, thereby further improving the interaction experience of the user; specifically, the method further comprises the following steps:
receiving a dialog control instruction for a target object in the control region;
sending a calling request based on the conversation control instruction to a cloud server based on the conversation control instruction, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
The dialog control instruction may be understood as an instruction for triggering dialog control for the target object from the client, and the call request may be understood as a call request for the presentation data of the target object based on the dialog control instruction.
During specific implementation, the client receives a conversation control instruction for a target object in the control area, sends a call request based on the conversation control instruction to the cloud server based on the conversation control instruction, receives display data of the target object constructed by the cloud server, and displays the received display data in the control area.
In practical application, a user can trigger a dialog control instruction through a touch control of a target object in a display page of a client, and the specific triggering mode is not limited in any way in the embodiment of the specification; the client may send a call request for presentation data to the cloud server through a dialog input box in the display page, for example, by entering "What is the material of the handheld?" in the dialog input box. The client sends the dialog input content to the cloud server, and receives display data for the target object constructed by the cloud server based on the data call request, wherein the display data carries dialog content constructed by the cloud server according to the dialog control instruction, and the display data with the dialog content is displayed in the control area.
Following the above example, the user triggers a dialog control instruction for the 3D handheld in the commodity card area through the dialog control of the client and enters "What is the material of the handheld?"; the client sends the call request with the input content to the cloud server, and receives a video data stream for the 3D handheld constructed by the cloud server based on the call request, where the video data stream carries the dialog control content. For example, if the dialog content is "the material is ceramic", the video data stream is a video of the 3D handheld with the dialog content "the material is ceramic".
According to the object control method provided by the embodiment of the specification, the dialog control instruction for the target object in the control area is sent to the cloud server, and the display data of the target object constructed based on the dialog control instruction is received, so that the dialog interaction behavior between the user and the target object is increased, user requirements can be met, and the user's interactive experience with the target object is further improved.
To sum up, the object control method provided in the embodiment of the present specification may determine the display position of the control area according to the control request of the user for the target object, so as to customize the display position, the width, and the height of the control area; the client then calls a Software Development Kit (SDK) developed in advance for the cloud server, sends a display request based on the display position to the cloud server, receives the display data of the target object constructed based on the display request, and displays the display data in the control area.
Referring to fig. 3, fig. 3 shows a processing flow chart of an object control method applied to a cloud server according to another embodiment of the present application, which specifically includes the following steps:
step 302: receiving a display request aiming at a target object based on a control area, wherein the display request carries size information of the control area and an object identifier of the target object.
In specific implementation, a cloud server receives a display request based on a control area, which is sent by a client for a target object, wherein the display request carries size information of the control area and an object identifier of the target object; in practical application, the control area of the target object can be set according to the display position coordinates of the control area and the size of the control area.
It should be noted that before receiving a display request based on a control area for a target object, the cloud server may also perform scene modeling and application scene construction on the target object at the web page end through program development, and may also perform program development on a rotation interaction model and a conversation interaction model, pack and deploy developed data in the cloud server, so that a client may call the display object of the target object in real time from the cloud server.
Step 304: and acquiring image information of the target object based on the object identification of the target object, and processing the image information of the target object based on the size information of the control area to obtain a display object for the control area.
The image information may be understood as image information for the target object obtained based on the object identification of the target object, and the presentation object may be understood as a video data stream for the target object obtained based on the size information of the control area, or the like.
In practical application, the cloud server obtains the image information based on the object identifier of the target object carried in the received display request, and processes the image information of the target object based on the size information of the control area to obtain the display object of the control area for the target object. For example, the cloud server receives a display request for the 3D handheld in the commodity card area, obtains the image information of the 3D handheld according to the 3D handheld identifier carried in the display request, and processes the image information of the 3D handheld based on the size information of the control area carried in the display request to obtain a video data stream of the 3D handheld.
Step 306: and sending the display object to a client, wherein the display object is the image information of the target object displayed in the control area after processing the image information of the target object based on the size information of the control area.
In specific implementation, the cloud server sends the obtained display object of the target object to the control area of the client for display, for example, sends a video data stream for the 3D handheld to the control area of the client.
The object control method provided by the embodiment of the specification is applied to a cloud server, the cloud server constructs a video data stream of a target object in a cloud computing mode, the cloud server is matched with a client, and the target object is displayed in the client in a cloud game representation mode, so that the interactivity between a user and the target object is improved, and the user experience is enhanced.
Further, the object control method provided in the embodiment of the present specification is applied to a cloud server, and further includes:
receiving a touch interaction request aiming at the target object, wherein the touch interaction request carries a touch control instruction;
analyzing the touch control instruction to obtain touch control data;
adjusting the target object based on the touch control data, and constructing a touch picture for the target object according to the adjusted target object;
rendering the touch picture, obtaining display data of the target object and sending the display data to the control area.
The touch interaction request may be understood as the touch interaction request of the client for the target object received by the cloud server, for example, a rotation interaction request sent by the client for the 3D handheld; the touch control data may be understood as control data such as a rotation angle, a rotation direction, and a rotation speed for the 3D handheld, and this specification does not limit the specific touch interaction request or the type of touch control data.
In specific implementation, the cloud server receives a touch interaction request for a target object, wherein the touch interaction request carries a touch control instruction; the touch control instruction is analyzed to obtain the touch control data of the user for the target object, the target object is adjusted based on the touch control data, a touch picture of the target object is constructed according to the adjusted target object, and after the touch picture is rendered, the display data of the target object is obtained and sent to the control area.
For example, the cloud server receives a touch interaction request for the 3D handheld, where the touch interaction request carries a touch control instruction; the cloud server analyzes the touch control instruction to obtain control data indicating that the user rotates the 3D handheld 30 degrees to the right, adjusts the 3D handheld accordingly to construct a touch picture of the 3D handheld rotated 30 degrees to the right from the front, and after rendering the touch picture, obtains the rotated video data stream of the 3D handheld and sends the video data stream to the control area of the client.
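Under the assumption that the pose of the target object can be reduced to a single yaw angle (a simplification for illustration, not the actual server model), the adjustment step in this example might look like:

```typescript
// Hypothetical pose model: a single yaw angle in degrees.
interface ObjectPose {
  yawDegrees: number;
}

interface TouchControlData {
  direction: "left" | "right";
  angleDegrees: number;
}

// Apply the parsed touch control data to the object's pose before
// constructing the touch picture from the adjusted object.
function adjustPose(pose: ObjectPose, control: TouchControlData): ObjectPose {
  const delta = control.direction === "right" ? control.angleDegrees : -control.angleDegrees;
  // normalize the resulting angle into [0, 360)
  const yawDegrees = ((pose.yawDegrees + delta) % 360 + 360) % 360;
  return { yawDegrees };
}

// front-facing object rotated 30 degrees to the right
const adjusted = adjustPose({ yawDegrees: 0 }, { direction: "right", angleDegrees: 30 });
```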
In the object control method provided in the embodiment of the present specification, the cloud server dynamically computes the touch interaction request of the user in a cloud computing manner, so as to return a corresponding interactive video data stream to the client based on the touch interaction request of the user and improve the interactive experience of the user.
In addition, the object control method provided in the embodiment of the present specification is applied to a cloud server, and further includes:
receiving a dialogue interaction request aiming at the target object, wherein the dialogue interaction request carries a dialogue control instruction;
analyzing the conversation control instruction to obtain conversation control data, and determining conversation data matched with the conversation control data in a database based on the conversation control data;
constructing a dialog picture aiming at the target object based on the dialog data, and rendering the dialog picture to obtain display data of the target object;
and sending the display data of the target object to the control area.
The conversation control data can be understood as conversation content input by a user through a client; the dialogue data may be understood as dialogue data determined by the cloud server in the database to match with the dialogue content input by the user.
In practical application, the cloud server analyzes a conversation control instruction carried in a conversation interaction request for a target object to obtain conversation control data, determines conversation data matched with the conversation control data in a database based on the conversation control data, constructs a conversation picture for the target object, takes the rendered conversation picture as display data of the target object, and sends the display data to a control area of the target object of a client.
Along the above example, the cloud server receives a dialogue interaction request for the 3D handheld, where the dialogue interaction request carries a dialogue control instruction; the cloud server analyzes the dialogue control instruction to obtain the input content "What is the material of the handheld?". The cloud server determines, based on the input content, the dialogue data matched with the input content in the database; for example, the database is searched according to the identifier of the target object, and the data corresponding to the material label found is ceramic, so the cloud server can construct a dialogue picture for the target object according to the dialogue data, and send the rendered dialogue picture as the display data of the target object to the control area of the target object in the client for display.
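The label-based lookup in this example can be sketched as follows; the in-memory `dialogDatabase` and its label keys are stand-ins for the real database, not part of the described system:

```typescript
// Hypothetical label-keyed dialogue data, indexed by target object identifier.
const dialogDatabase: Record<string, Record<string, string>> = {
  "handheld-3d-001": { material: "ceramic", height: "12 cm" },
};

// Look up the dialogue data matched with the parsed dialogue control data.
function matchDialogData(objectId: string, label: string): string | undefined {
  return dialogDatabase[objectId]?.[label];
}

// the parsed control data asks about the "material" label
const answer = matchDialogData("handheld-3d-001", "material");
```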
In the object control method provided in the embodiment of the present specification, the cloud server dynamically processes the dialog interaction request of the user in a cloud computing manner, so as to return a corresponding interactive video data stream to the client based on the dialog interaction request of the user, and improve the interaction experience of the user.
Further, the presentation request includes a control region size of the target object;
correspondingly, after the display data for the control area is constructed based on the display request, the method further includes:
and scaling and/or cutting the display data of the control area according to the size of the control area of the target object.
During specific implementation, according to the size of the target object control area carried in the display request, the cloud server scales and/or cuts the created display data, so that the size of the display data is more suitable for the size of the target object control area of the client, the video quality of the video data stream output by the cloud server is improved, and the user experience is improved.
In the object control method provided in the embodiment of the present specification, after the cloud server constructs the display data from the target object, the cloud server adjusts the proportion of the display data according to the size of the control area of the target object, so as to adapt to the proportion of the control area in the client; the server can also directly send the display data to the client, and the client can adjust the proportion of the display data according to the size of the control area of the target object to adapt to the proportion of the control area in the client.
In summary, in the object control method provided in the embodiment of the present specification, the cloud server constructs the video data stream for the target object in a cloud computing manner, and displays the video data stream in the control area of the client in a cloud game display form, so that not only can the interactive operation be performed on the target object in the control area to improve the interactive experience of the user, but also the display form of the display data becomes more diverse.
Referring to fig. 4, the object control method is further described by taking the application of the object control method provided in the embodiment of the present application to a commodity card as an example; fig. 4 shows a process flow chart of applying the object control method provided in the embodiment of the present application to a commodity card, which specifically includes the following steps:
step 402: the client receives a control request of a user for a target object.
Step 404: the client creates a control area in the commodity card area based on the control request as a display area of the target object.
Specifically, the control area may be a DOM node in the created DOM node tree, and the display position of the control area of the target object is set as the DOM node.
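A simplified, DOM-free sketch of locating the target node in such a node tree (a real client would use browser DOM APIs; the plain tree below is an assumption for illustration):

```typescript
// Simplified node tree standing in for the browser's DOM node tree.
interface DomNode {
  id: string;
  children: DomNode[];
}

// Depth-first search for the target node that will host the control area.
function findTargetNode(root: DomNode, targetId: string): DomNode | undefined {
  if (root.id === targetId) return root;
  for (const child of root.children) {
    const found = findTargetNode(child, targetId);
    if (found !== undefined) return found;
  }
  return undefined;
}

const tree: DomNode = {
  id: "page",
  children: [{ id: "commodity-card", children: [{ id: "control-area", children: [] }] }],
};
const target = findTargetNode(tree, "control-area");
```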
Step 406: and sending a display request aiming at the target object to the cloud server, wherein the display request comprises a display position identifier, a target object identifier and a display area size.
Specifically, the display request may call an initialization function, such as an init function, in a software development kit provided by the cloud server for the client to perform the data call, and send the identifier of the DOM element, the identifier of the target object, and the pixel values of the width and the height of the DOM element.
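The parameters passed to such an init function might be bundled as below; the names (`buildInitParams`, `domElementId`) are hypothetical illustrations, not the actual SDK API:

```typescript
// Hypothetical bundle of init parameters: the DOM element identifier, the
// target object identifier, and the element's pixel width and height.
interface InitParams {
  domElementId: string;
  objectId: string;
  widthPx: number;
  heightPx: number;
}

function buildInitParams(domElementId: string, objectId: string, widthPx: number, heightPx: number): InitParams {
  return { domElementId, objectId, widthPx, heightPx };
}

// e.g. initializing the display for the commodity card's control area
const params = buildInitParams("commodity-card-1", "handheld-3d-001", 300, 300);
```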
Step 408: and receiving presentation data of the target object created by the cloud server and responding to the presentation request.
Specifically, the client receives a video data stream constructed by the cloud server according to the received DOM element identifier, the target object identifier and the width and height of the DOM element.
Step 410: and displaying the display data of the target object in the control area of the client.
To sum up, the object control method provided in this specification may determine the display position of the control area according to the control request of the user for the target object, so as to customize the display position, the width, and the height of the control area, thereby sending the display request based on the display position to the cloud server, receiving the display data of the target object constructed based on the display request, and displaying the display data in the control area, so that not only the display form of the target object in the control area is richer than that of a static picture or a local video, but also the user and the target object generate an interactive behavior through the object control method provided in this specification, which has better interactivity and richer display form.
Referring to fig. 5, the object control method is further described by taking the touch control of a 3D handheld by the object control method provided in the embodiment of the present application as an example; fig. 5 shows a touch control interaction flowchart of applying the object control method provided in the embodiment of the present specification to a 3D handheld, which specifically includes the following steps:
step 502: the client receives a control request of a user for the 3D handheld.
Step 504: the client determines a presentation position of the control area based on the control request.
Step 506: the client receives a rotation control instruction of a user for the 3D handheld.
Step 508: and the client sends the call request of the rotation control instruction to the cloud server.
Step 510: and the cloud server analyzes the touch control instruction in the calling request to obtain touch control data.
Step 512: the cloud server adjusts the 3D handheld according to the touch control data, and constructs a touch picture of the 3D handheld based on the adjusted 3D handheld.
Step 514: and the cloud server renders the touch picture to obtain a 3D handheld video data stream.
Step 516: and the cloud server sends the rendered 3D handheld video data stream to the control area of the client for display.
In the object control method provided in the embodiment of the present specification, the cloud server constructs a video data stream for a target object in a cloud computing manner, and displays the video data stream in a control area of the client in a display form of a cloud game, so that not only can interactive operation be performed on the target object in the control area to improve the interactive experience of a user, but also the display form of display data becomes more various.
Corresponding to the above method embodiment, the present application further provides an embodiment of an object control apparatus, and fig. 6 shows a schematic structural diagram of an object control apparatus provided in an embodiment of the present application. As shown in fig. 6, the apparatus includes:
a first receiving module 602 configured to receive a control request for a target object, wherein the control request includes a target object identification;
a determination module 604 configured to determine a presentation position of a control area based on a control request of the target object;
a second receiving module 606, configured to send a display request based on the display position to a cloud server, and receive display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, where the display request carries a display position identifier of the target object and the target object identifier;
a presentation module 608 configured to set a control area based on the presentation position and present the presentation data in the control area.
Optionally, the apparatus further comprises:
receiving a touch control instruction for a target object in the control area;
sending a calling request based on the touch control instruction to a cloud server based on the touch control instruction, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
Optionally, the apparatus further comprises:
receiving a dialog control instruction for a target object in the control region;
sending a calling request based on the conversation control instruction to a cloud server based on the conversation control instruction, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
Optionally, the determining module 604 is further configured to:
creating a DOM node tree, and determining a target node aiming at the target object in the DOM node tree based on the control request of the target object;
and taking the target node as the display position of the control area of the target object.
Optionally, the presentation module 608 is further configured to:
and zooming and/or cutting the display data based on the size of the control area of the target object, and displaying the zoomed and/or cut display data in the control area.
Optionally, the presentation module 608 is further configured to:
acquiring the size of a control area of the target object, and setting the control area based on the size of the control area and the display position;
rendering the display data of the target object, and loading the rendered display data in the control area.
The object control device provided in this embodiment of the specification determines the display position of the control area according to the control request of the user for the target object, so as to customize the display position, the width, and the height of the control area; it thereby sends the display request based on the display position to the cloud server, receives the display data of the target object constructed based on the display request, and displays the display data in the control area, so that not only is the display form of the target object in the control area richer than that of a static picture or a local video, but the user and the target object also generate interactive behaviors through the object control method provided in this embodiment of the specification, which provides better interactivity and richer display forms.
The foregoing is a schematic arrangement of an object control apparatus of the present embodiment. It should be noted that the technical solution of the object control device and the technical solution of the object control method belong to the same concept, and for details that are not described in detail in the technical solution of the object control device, reference may be made to the description of the technical solution of the object control method.
Corresponding to the other method embodiment above, the present application further provides another object control apparatus embodiment. Fig. 7 shows a schematic structural diagram of an object control apparatus provided in an embodiment of the present application. As shown in fig. 7, the apparatus includes:
a display request receiving module 702, configured to receive a display request for a target object based on a control area, where the display request carries size information of the control area and an object identifier of the target object;
an obtaining module 704 configured to obtain image information of the target object based on the object identifier of the target object, and process the image information of the target object based on the size information of the control area to obtain a display object for the control area;
a sending module 706, configured to send the display object to a client, where the display object is the image information of the target object, processed based on the size information of the control area, that is displayed in the control area.
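The server-side path — look up the object's image information by identifier, process it to the requested control-area size, and return the display object — might be sketched as follows. The in-memory `imageStore`, the URL, and all names here are illustrative assumptions standing in for the cloud server's real object store and image pipeline.

```typescript
interface DisplayRequest {
  objectId: string;
  area: { width: number; height: number };
}

interface ImageInfo { url: string; width: number; height: number; }

// Assumed in-memory stand-in for the server's object store.
const imageStore: Record<string, ImageInfo> = {
  "obj-1": { url: "https://example.com/obj-1.png", width: 1024, height: 768 },
};

// Build the display object: fetch the image information for the object
// identifier and size it to the control area carried by the request.
function buildDisplayObject(
  req: DisplayRequest
): { url: string; width: number; height: number } | null {
  const info = imageStore[req.objectId];
  if (!info) return null;
  return { url: info.url, width: req.area.width, height: req.area.height };
}
```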
Optionally, the apparatus further comprises:
receiving a touch interaction request for the target object, wherein the touch interaction request carries a touch control instruction;
analyzing the touch control instruction to obtain touch control data;
adjusting the target object based on the touch control data, and constructing a touch control picture for the target object according to the adjusted target object;
rendering the touch control picture, obtaining display data of the target object, and sending the display data to the control area.
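The touch-interaction flow above (parse instruction → touch data → adjust object → build and render the touch picture) can be illustrated with a small pipeline. The instruction format `"drag:dx,dy"`, the rotation mapping, and the frame string are all assumptions made for the sketch.

```typescript
interface TargetObject { id: string; rotationDeg: number; }

// Parse an assumed "drag:dx,dy" touch control instruction into touch data.
function parseTouchInstruction(instruction: string): { dx: number; dy: number } {
  const [, payload] = instruction.split(":");      // e.g. "drag:30,0" -> "30,0"
  const [dx, dy] = payload.split(",").map(Number);
  return { dx, dy };
}

// Adjust the target object from the touch data; here a horizontal drag is
// mapped onto a rotation of the displayed object.
function adjustTarget(
  obj: TargetObject,
  touch: { dx: number; dy: number }
): TargetObject {
  return { ...obj, rotationDeg: (obj.rotationDeg + touch.dx) % 360 };
}

// Stand-in for rendering the touch control picture into display data.
function renderTouchFrame(obj: TargetObject): string {
  return `frame(${obj.id}, rot=${obj.rotationDeg})`;
}
```

The rendered frame is what would then be streamed back to the client's control area.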
Optionally, the apparatus further comprises:
receiving a dialogue interaction request for the target object, wherein the dialogue interaction request carries a dialogue control instruction;
analyzing the dialogue control instruction to obtain dialogue control data, and determining, in a database, dialogue data matched with the dialogue control data;
constructing a dialogue picture for the target object based on the dialogue data, and rendering the dialogue picture to obtain display data of the target object;
and sending the display data of the target object to the control area.
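The dialogue-interaction flow above can be sketched in the same style: match the dialogue control data against stored dialogue data, then build a dialogue frame for the target object. The in-memory `dialogueDb` table and both function names are assumptions; a real system would query an actual database.

```typescript
// Assumed in-memory stand-in for the dialogue database.
const dialogueDb: Record<string, string> = {
  greeting: "Hello! Nice to meet you.",
  farewell: "Goodbye, see you next time!",
};

// Determine the dialogue data matched with the dialogue control data,
// with a fallback reply when nothing matches.
function matchDialogue(controlData: string): string {
  return dialogueDb[controlData] ?? "Sorry, I didn't understand that.";
}

// Stand-in for constructing and rendering the dialogue picture.
function buildDialogueFrame(objectId: string, dialogue: string): string {
  return `dialogueFrame(${objectId}: "${dialogue}")`;
}
```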
Optionally, the apparatus further comprises:
and scaling and/or cropping the display data of the control area according to the control area size of the target object.
In the object control device provided in this embodiment of the specification, the cloud server constructs the video data stream for the target object by means of cloud computing and displays it in the control area of the client in the display form of a cloud game. As a result, interactive operations can be performed on the target object in the control area, improving the user's interactive experience, and the display forms of the display data become more varied.
The above is a schematic description of an object control apparatus of this embodiment. It should be noted that the technical solution of the object control apparatus and the technical solution of the object control method are based on the same concept; for details not described in full in the technical solution of the apparatus, reference may be made to the description of the object control method.
Fig. 8 illustrates a block diagram of a computing device 800 according to an embodiment of the application. The components of the computing device 800 include, but are not limited to, a memory 810 and a processor 820. The processor 820 is coupled to the memory 810 via a bus 830, and the database 850 is used to store data.
Computing device 800 also includes access device 840, which enables computing device 800 to communicate via one or more networks 860. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. Access device 840 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of the computing device 800 and other components not shown in fig. 8 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 8 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 800 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 800 may also be a mobile or stationary server.
The processor 820 implements the steps of the object control method when executing the computer instructions.
The foregoing is a schematic description of the computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the object control method are based on the same concept; for details not described in full in the technical solution of the computing device, reference may be made to the description of the object control method.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the object control method described above.
The above is an illustrative description of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the object control method are based on the same concept; for details not described in full in the technical solution of the storage medium, reference may be made to the description of the object control method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals or telecommunications signals.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combinations of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art will appreciate that the embodiments described in this specification are preferred embodiments, and the acts and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in explaining the application. The alternative embodiments are not described exhaustively, and the application is not limited to the precise embodiments described. Obviously, many modifications and variations are possible in light of the teaching of this application. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, thereby enabling others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (14)

1. An object control method is applied to a client, and is characterized by comprising the following steps:
receiving a control request for a target object, wherein the control request comprises a target object identification;
determining a display position of a control area based on the control request of the target object, wherein the control area is an area where the target object is displayed on the client and is used for controlling the target object in the control area, and the display position is a position coordinate of the control area;
sending a display request based on the control area to a cloud server, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, wherein the display request carries a display position identifier of the target object and the target object identifier;
and setting a control area based on the display position and displaying the display data in the control area.
2. The object control method according to claim 1, characterized by further comprising:
receiving a touch control instruction for a target object in the control area;
sending a calling request based on the touch control instruction to a cloud server, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
3. The object control method according to claim 1, characterized by further comprising:
receiving a dialogue control instruction for a target object in the control area;
sending a calling request based on the dialogue control instruction to a cloud server, and receiving display data of the target object based on the control area, which is constructed by the cloud server in response to the calling request;
and displaying the display data in the control area.
4. The object control method according to any one of claims 1 to 3, wherein the determining the display position of the control area based on the control request of the target object includes:
creating a DOM node tree, and determining, in the DOM node tree, a target node for the target object based on the control request of the target object;
and taking the target node as the display position of the control area of the target object.
5. The object control method according to claim 1, wherein the control request further includes a control area size of the target object;
correspondingly, the displaying the display data in the control area includes:
and scaling and/or cropping the display data based on the control area size of the target object, and displaying the scaled and/or cropped display data in the control area.
6. The object control method according to claim 4, wherein the setting of a control area based on the display position and the displaying of the display data in the control area comprises:
acquiring the size of a control area of the target object, and setting the control area based on the size of the control area and the display position;
rendering the display data of the target object, and loading the rendered display data in the control area.
7. An object control method is applied to a cloud server, and is characterized by comprising the following steps:
receiving a display request for a target object based on a control area, wherein the display request carries size information of the control area and an object identifier of the target object, the control area is an area where the target object is displayed on a client and is used for controlling the target object in the control area, and the display position is a position coordinate of the control area;
acquiring image information of the target object based on the object identification of the target object, and processing the image information of the target object based on the size information of the control area to obtain a display object for the control area;
and sending the display object to a client, wherein the display object is the image information of the target object displayed in the control area after processing the image information of the target object based on the size information of the control area.
8. The object control method according to claim 7, applied to a cloud server, further comprising:
receiving a touch interaction request for the target object, wherein the touch interaction request carries a touch control instruction;
analyzing the touch control instruction to obtain touch control data;
adjusting the target object based on the touch control data, and constructing a touch control picture for the target object according to the adjusted target object;
rendering the touch control picture, obtaining display data of the target object, and sending the display data to the control area.
9. The object control method according to claim 7, applied to a cloud server, further comprising:
receiving a dialogue interaction request for the target object, wherein the dialogue interaction request carries a dialogue control instruction;
analyzing the dialogue control instruction to obtain dialogue control data, and determining, in a database, dialogue data matched with the dialogue control data;
constructing a dialogue picture for the target object based on the dialogue data, and rendering the dialogue picture to obtain display data of the target object;
and sending the display data of the target object to the control area.
10. The object control method according to claim 7, wherein the presentation request includes a control area size of the target object;
correspondingly, after the display data for the control area is constructed based on the display request, the method further comprises the following steps:
and scaling and/or cropping the display data of the control area according to the control area size of the target object.
11. An object control apparatus, characterized by comprising:
a first receiving module, configured to receive a control request for a target object, wherein the control request comprises a target object identification;
a determining module, configured to determine a display position of a control area based on a control request of the target object, where the control area is an area where the target object is displayed on the client, and is used for controlling the target object in the control area, and the display position is a position coordinate of the control area;
a second receiving module, configured to send a display request based on the control area to a cloud server, and receive display data of the target object based on the control area, which is constructed by the cloud server in response to the display request, where the display request carries a display position identifier of the target object and the target object identifier;
and the display module is configured to set a control area based on the display position and display the display data in the control area.
12. An object control apparatus, characterized by comprising:
a display request receiving module, configured to receive a display request for a target object based on a control area, wherein the display request carries size information of the control area and an object identifier of the target object, the control area is an area where the target object is displayed on a client and is used for controlling the target object in the control area, and the display position is a position coordinate of the control area;
an obtaining module, configured to obtain image information of the target object based on the object identifier of the target object, and process the image information of the target object based on the size information of the control area to obtain a display object for the control area; and
a sending module, configured to send the display object to a client, wherein the display object is the image information of the target object, processed based on the size information of the control area, that is displayed in the control area.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-6 or 7-10 when executing the computer instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1-6 or 7-10.
CN202110044173.3A 2021-01-13 2021-01-13 Object control method and device Active CN112800360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110044173.3A CN112800360B (en) 2021-01-13 2021-01-13 Object control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110044173.3A CN112800360B (en) 2021-01-13 2021-01-13 Object control method and device

Publications (2)

Publication Number Publication Date
CN112800360A CN112800360A (en) 2021-05-14
CN112800360B true CN112800360B (en) 2023-03-31

Family

ID=75810490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110044173.3A Active CN112800360B (en) 2021-01-13 2021-01-13 Object control method and device

Country Status (1)

Country Link
CN (1) CN112800360B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558018A (en) * 2017-09-27 2019-04-02 腾讯科技(深圳)有限公司 A kind of content displaying method, device and storage medium
CN111494952A (en) * 2020-04-16 2020-08-07 腾讯科技(深圳)有限公司 Webpage end object display method and device and readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN106330814A (en) * 2015-06-16 2017-01-11 阿里巴巴集团控股有限公司 Method and device for displaying detail information of target object
CN110368689B (en) * 2019-07-19 2021-08-06 腾讯科技(深圳)有限公司 Game interface display method, system, electronic equipment and storage medium
CN111282273B (en) * 2020-02-05 2024-02-06 网易(杭州)网络有限公司 Virtual object display method, device and storage medium


Also Published As

Publication number Publication date
CN112800360A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN112738627B (en) Play control method and device
WO2023273789A1 (en) Method and apparatus for displaying game picture, storage medium, and electronic device
CN113012280B (en) VR virtual exhibition hall construction method and device
CN113099285A (en) Display method and device
CN114650434A (en) Cloud service-based rendering method and related equipment thereof
CN114463470A (en) Virtual space browsing method and device, electronic equipment and readable storage medium
KR20190104821A (en) Server, device and method for providing avatar communication
CN113127126B (en) Object display method and device
CN112132599A (en) Image processing method and device, computer readable storage medium and electronic device
CN113705156A (en) Character processing method and device
CN112604279A (en) Special effect display method and device
CN112907700A (en) Color filling method and device
CN112800360B (en) Object control method and device
CN114268626A (en) Window processing system, method and device
CN115116295B (en) Correlation interaction training display method, system, equipment and storage medium
CN115908654A (en) Interaction method, device and equipment based on virtual image and storage medium
CN113395565A (en) Display method of virtual gift and related device and equipment
CN112988310A (en) Online experiment method based on multi-split-screen browser
CN112614049A (en) Image processing method, image processing device, storage medium and terminal
CN113676765B (en) Interactive animation display method and device
CN117437342B (en) Three-dimensional scene rendering method and storage medium
CN115858069A (en) Page animation display method and device
CN113230657B (en) Role interaction method and device
CN117135367A (en) Live broadcast room virtual gift display method and device
CN113409431B (en) Content generation method and device based on movement data redirection and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant