CN114935973A - Interactive processing method, device, equipment and storage medium - Google Patents

Interactive processing method, device, equipment and storage medium

Info

Publication number
CN114935973A
CN114935973A (application CN202210374813.1A)
Authority
CN
China
Prior art keywords
interactive
information
target
environment
virtual element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210374813.1A
Other languages
Chinese (zh)
Inventor
徐世鑫 (Xu Shixin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210374813.1A
Publication of CN114935973A
Legal status: Pending

Classifications

    All of the following classifications fall under G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements › G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer:
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI] › G06F3/0481 based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/048 › G06F3/0484 for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range › G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/048 › G06F3/0487 using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/048 › G06F3/0487 › G06F3/0488 using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to an interactive processing method, apparatus, device, and storage medium. The method obtains first environment information in response to a trigger operation for a target interaction mode, where the first environment information includes acquisition direction information of an environment acquisition component and first image information indicating the scene picture captured by the environment acquisition component; obtains a first interactive virtual element pointing to a first media resource, where the first media resource is matched with the acquisition direction information; and displays an interactive picture corresponding to the target interaction mode, the interactive picture including the first interactive virtual element and the scene picture corresponding to the first image information. Interaction is thereby realized through the environment acquisition component, reducing the number of times a user manually operates the screen of the user interface and improving the information transmission effect of the interactive content.

Description

Interactive processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an interactive processing method, an interactive processing device, an interactive processing apparatus, and a storage medium.
Background
With the development of internet technology, interaction between users and applications is more and more frequent.
In the related art, a user interacts with an application program by, for example, sliding the screen, tapping an icon, moving a mouse, or pressing keys. However, such operations mainly target the application's user interface; they are cumbersome, and the information transmission effect of the interactive content is poor.
Disclosure of Invention
The present disclosure provides an interactive processing method, an interactive processing apparatus, an interactive processing device, and a storage medium, so as to solve at least one of the problems in the related art, such as cumbersome interactive operation, poor information transmission effect of interactive content, and the like. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an interactive processing method, including:
responding to a trigger operation aiming at a target interaction mode, and acquiring first environment information; the first environment information comprises acquisition direction information and first image information of an environment acquisition assembly, and the first image information is used for indicating a scene picture acquired by the environment acquisition assembly;
acquiring a first interactive virtual element for pointing to a first media resource; the first media resource is matched with the acquisition direction information;
and displaying an interactive picture corresponding to the target interactive mode, wherein the interactive picture comprises the first interactive virtual element and a scene picture corresponding to the first image information.
As an optional embodiment, the obtaining the first environment information in response to the triggering operation for the target interaction mode includes:
responding to a trigger operation aiming at a target interaction mode, and starting an environment acquisition component;
acquiring first image information acquired by the environment acquisition assembly;
and acquiring first environment information based on the acquisition direction information of the environment acquisition assembly and the first image information.
As an optional implementation, the obtaining a first interactive virtual element for pointing to a first media resource includes:
acquiring terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
determining a first media resource based on the acquisition direction information and the terminal position information;
and acquiring a first interactive virtual element pointing to the first media resource based on the resource display information corresponding to the first media resource.
As an optional implementation manner, the first interactive virtual element includes at least one of a content sub-element used for representing media content information, a distance sub-element used for representing a distance relationship between a resource position corresponding to the first media resource and a position of the environment acquisition component, and an interactive sub-element used for representing interactive information of the first media resource.
As an optional implementation manner, before the presenting the interactive picture corresponding to the target interaction mode, the method further includes:
acquiring resource position information corresponding to each first media resource corresponding to the first interactive virtual element;
determining the display attribute of the first interactive virtual element corresponding to each first media resource based on the position distance information between the resource position information of the first media resource and the terminal position information of the target terminal; the target terminal is a terminal executing the target interaction mode;
and generating an interactive picture corresponding to the target interactive mode according to each first interactive virtual element, the first image information and the display attribute.
As an optional embodiment, the display attribute comprises a display position and/or a display size; the display size is inversely related to the position distance information;
the generating of the interactive picture corresponding to the target interaction mode according to each first interactive virtual element, the first image information and the display attribute comprises:
sequentially combining each first interactive virtual element with the scene picture corresponding to the first image information according to the display position to generate an interactive picture corresponding to the target interactive mode; and/or
And sequentially combining each first interactive virtual element with the scene picture corresponding to the first image information according to the display size to generate an interactive picture corresponding to the target interactive mode.
As an optional implementation, the method further comprises:
responding to the position movement information of the environment acquisition assembly, and acquiring second environment information in real time; the second environment information comprises updated acquisition direction information and second image information of the environment acquisition assembly, and the second image information is used for indicating the scene picture acquired by the environment acquisition assembly;
acquiring a second interactive virtual element for pointing to a second media resource; the second media resource is associated with the updated acquisition direction information of the environment acquisition component and/or the terminal position information of the target terminal;
and dynamically updating the interactive picture based on the second interactive virtual element and the scene picture corresponding to the second image information.
As an optional embodiment, in the case that the interactive screen further includes a direction presentation element for characterizing the orientation of the resource, the method further includes:
and in the moving process of the environment acquisition assembly, the direction pointing icon in the direction display element moves synchronously with the acquisition direction corresponding to the acquisition direction information corresponding to the environment acquisition assembly.
As an optional embodiment, the method further comprises:
responding to the triggering operation of a target interactive virtual element in the interactive picture, and displaying media resource information corresponding to the target interactive virtual element; the target interactive virtual element belongs to the element set corresponding to the first interactive virtual element.
According to a second aspect of the embodiments of the present disclosure, there is provided an interactive processing apparatus, including:
the first acquisition module is configured to acquire first environment information in response to a trigger operation for the target interaction mode; the first environment information comprises acquisition direction information and first image information of an environment acquisition component, and the first image information is used for indicating a scene picture acquired by the environment acquisition component;
a second acquisition module configured to perform acquisition of a first interactive virtual element for pointing to a first media asset; the first media resource is matched with the acquisition direction information;
and the display module is configured to display an interactive picture corresponding to the target interactive mode, wherein the interactive picture comprises the first interactive virtual element and a scene picture corresponding to the first image information.
As an optional implementation manner, the first obtaining module is specifically configured to perform:
responding to the trigger operation aiming at the target interaction mode, and starting an environment acquisition component;
acquiring first image information acquired by the environment acquisition assembly;
and acquiring first environment information based on the acquisition direction information of the environment acquisition assembly and the first image information.
As an optional implementation, the second obtaining module is specifically configured to perform:
acquiring terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
determining a first media resource based on the acquisition direction information and the terminal position information;
and acquiring a first interactive virtual element pointing to the first media resource based on the resource display information corresponding to the first media resource.
As an optional implementation manner, the first interactive virtual element includes at least one of a content sub-element used for representing media content information, a distance sub-element used for representing a distance relationship between a resource position corresponding to the first media resource and a position of the environment acquisition component, and an interactive sub-element used for representing interactive information of the first media resource.
As an optional embodiment, the apparatus further comprises:
a position obtaining module configured to perform obtaining of resource position information corresponding to each of the first media resources corresponding to the first interactive virtual element;
the attribute determining module is configured to determine a display attribute of the first interactive virtual element corresponding to each first media resource based on position distance information between resource position information of the first media resource and terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
and the processing module is configured to execute the generation of an interactive picture corresponding to the target interactive mode according to each first interactive virtual element, the first image information and the display attribute.
As an optional embodiment, the display attribute comprises a display position and/or a display size; the presentation size is inversely related to the position distance information. The processing module comprises:
the first processing unit is configured to sequentially combine each first interactive virtual element with the scene picture corresponding to the first image information according to the display position to generate an interactive picture corresponding to the target interactive mode; and/or
And the second processing unit is configured to sequentially combine each first interactive virtual element with the scene picture corresponding to the first image information according to the display size to generate an interactive picture corresponding to the target interactive mode.
As an optional embodiment, the apparatus further comprises:
a third obtaining module configured to perform obtaining second environmental information in real-time in response to position movement information of the environment acquisition component; the second environment information comprises updated acquisition direction information and second image information of the environment acquisition assembly, and the second image information is used for indicating the scene picture acquired by the environment acquisition assembly;
a fourth obtaining module configured to perform obtaining a second interactive virtual element for pointing to a second media resource; the second media resource is associated with the updated acquisition direction information of the environment acquisition component and/or the terminal position information of the target terminal;
and the display updating module is configured to dynamically update the interactive picture based on the second interactive virtual element and the scene picture corresponding to the second image information.
As an optional embodiment, in a case that the interactive screen further includes a direction display element for characterizing the orientation of the resource, the apparatus further includes:
and the synchronous adjustment module is configured to execute synchronous movement of the direction pointing icon in the direction display element and the acquisition direction corresponding to the acquisition direction information corresponding to the environment acquisition assembly in the movement process of the environment acquisition assembly.
As an optional embodiment, the apparatus further comprises:
the interaction module is configured to display, in response to a trigger operation on a target interactive virtual element in the interactive picture, media resource information corresponding to the target interactive virtual element; the target interactive virtual element belongs to the element set corresponding to the first interactive virtual element.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform an interactive processing method as described in any one of the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the interactive processing method according to any of the above embodiments.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, the computer program product comprising a computer program, the computer program, when executed by a processor, implementing the interactive processing method provided in any one of the above-mentioned embodiments.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the embodiment of the disclosure obtains first environment information by responding to a trigger operation for a target interaction mode; the first environment information comprises acquisition direction information and first image information of an environment acquisition assembly, and the first image information is used for indicating a scene picture acquired by the environment acquisition assembly; acquiring a first interactive virtual element for pointing to a first media resource; the first media resource is matched with the acquisition direction information; and displaying an interactive picture corresponding to the target interactive mode, wherein the interactive picture comprises the first interactive virtual element and a scene picture corresponding to the first image information. Therefore, the interaction is realized through the environment acquisition assembly by introducing the environment acquisition assembly and establishing the association between the environment acquisition assembly and the target interaction mode, so that the times of manually operating the screen of the user interface by a user are reduced. In addition, because the displayed interactive pictures comprise the scene pictures acquired by the environment acquisition assembly and the interactive virtual elements corresponding to the media resources matched with the acquisition direction information, more interactive resource information is displayed, and the information transmission effect of the interactive content is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is an architecture diagram illustrating a system applying an interactive processing method in accordance with one illustrative embodiment.
FIG. 2 is a flow diagram illustrating an interactive processing method according to an example embodiment.
FIG. 3 is a flow diagram illustrating another method of interactive processing according to an example embodiment.
FIG. 4 is a flow diagram illustrating another method of interactive processing according to an example embodiment.
FIG. 5 is a flow diagram illustrating another method of interactive processing according to an example embodiment.
FIG. 6 is a flow diagram illustrating another method of interactive processing according to an example embodiment.
FIG. 7 is a flow diagram illustrating another method of interactive processing in accordance with an illustrative embodiment.
FIG. 8 is an interface diagram illustrating a method of interactive processing according to an example embodiment.
Fig. 9 is a block diagram illustrating an interactive processing device according to an example embodiment.
FIG. 10 is a block diagram illustrating another interactive processing device according to an example embodiment.
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Fig. 1 is an architecture diagram illustrating a system applying an interactive processing method according to an exemplary embodiment, and referring to fig. 1, the architecture diagram may include a terminal 110, a terminal 120, and a server 130.
The terminal 110 or the terminal 120 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart wearable device, a digital assistant, an augmented reality device, a virtual reality device, and the like. It should be understood that terminal 110 may refer to one of a plurality of terminals and terminal 120 may refer to another of a plurality of terminals, and the embodiment is illustrated only by terminal 110 and terminal 120.
The server 130 may provide an interactive data processing service for the terminal 110 and the terminal 120. For example only, the server 130 may be, but is not limited to, an independent server, a server cluster or a distributed system formed by a plurality of physical servers, and one or more cloud servers that provide basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and big data and artificial intelligence platforms.
The terminal 110 and the terminal 120 may be directly or indirectly connected to the server 130 through wired or wireless communication, and the embodiments of the present disclosure are not limited thereto.
The terminal 110 and the terminal 120 may be respectively installed with clients capable of implementing a target interaction function, which may include video interaction, live interaction, game interaction, shopping interaction, and the like. The client may be a pre-installed client, a web page version client, an applet, etc. The client of the terminal 110 may be in data communication with the environment acquisition component, and interaction may be achieved through the environment acquisition component. The environment acquisition component may be mounted on the terminal 110, or may be in contactless communication with the terminal 110 by a wireless or wired method.
Taking video interaction as an example, the terminal 120 may be a terminal used by a video publisher, which uploads video resources through the server; the terminal 110 may be a terminal used by a video viewer. In response to a trigger operation for a target interaction mode, a client on the terminal 110 may obtain first environment information by invoking an environment acquisition component, where the first environment information includes acquisition direction information and first image information of the environment acquisition component, and may obtain a first interactive virtual element pointing to a first media resource, where the first media resource may be a video resource published by the terminal 120; the client then displays an interactive picture corresponding to the target interaction mode, the interactive picture including the first interactive virtual element and the scene picture corresponding to the first image information, so as to realize interaction.
It should be noted that the architecture diagram of the system applying the interactive processing method of the present disclosure is not limited thereto, and may also include more or less devices than the number of devices in fig. 1, and the embodiments of the present disclosure are not limited thereto.
The interactive processing method provided by the embodiment of the present disclosure may be implemented by a client on the terminal 110 alone, or may be implemented by the client on the terminal and a server cooperatively.
FIG. 2 is a flow diagram illustrating an interactive processing method according to an example embodiment. As shown in fig. 2, the interactive processing method can be applied to an electronic device; it is described here taking as an example the case where the electronic device is a client on the terminal 110 in the above implementation environment diagram. The method includes the following steps.
In step S201, in response to a trigger operation for a target interaction mode, obtaining first environment information; the first environment information comprises acquisition direction information and first image information of the environment acquisition assembly.
Optionally, the target interaction mode may be an interaction mode that the user wants to activate. For example, the target interaction mode may be bound to a target control, and performing a trigger operation, such as a click, on the target control starts the target interaction mode and obtains the first environment information. The first environment information may be environment information characterizing the current scene requiring interaction. The current scene may be a real scene, such as a mall, an office, or a tourist attraction.
The first environment information comprises acquisition direction information and first image information of the environment acquisition assembly.
The environment capture component can be configured to capture a scene view of a current scene. By way of example only, the environment acquisition component may include a camera component, such as a camera, mounted in the terminal. As another example, the environment acquisition component may include an external camera component that is not provided in the terminal, and the terminal may perform direct or indirect data communication with the external camera component in a wireless or wired manner to acquire the first environment information corresponding to the external camera component. For example, the external camera component may include a camera, a video camera, etc., and may also include a smart device with camera function, which may include, but is not limited to, a smart wearable device, a smart home device, etc.
The acquisition direction information of the environment acquisition assembly can be used for representing the acquisition direction of the environment acquisition assembly when the environment acquisition assembly acquires the current real scene. The acquisition orientations may include, but are not limited to, east, south, west, north, southwest, northwest, northeast, and the like. The acquisition orientation may be determined by an orientation sensor of the environmental acquisition component. By way of example only, the orientation sensor may include, but is not limited to, a gyroscope or the like.
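By way of illustration only, the discretization of a sensor azimuth into the named acquisition orientations above can be sketched in Python as follows; the function and constant names are hypothetical and not part of the disclosure.

```python
# Hypothetical sketch: discretize a compass azimuth (in degrees, 0 = north,
# increasing clockwise) into one of the eight named acquisition orientations.
ORIENTATIONS = ["north", "northeast", "east", "southeast",
                "south", "southwest", "west", "northwest"]

def acquisition_orientation(azimuth_deg: float) -> str:
    # Each named orientation covers the 45-degree sector centred on its bearing.
    sector = round((azimuth_deg % 360) / 45) % 8
    return ORIENTATIONS[sector]

print(acquisition_orientation(10))   # north
print(acquisition_orientation(50))   # northeast
```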
The first image information is used for indicating the scene pictures collected by the environment collection component. For example, taking the environment acquisition component as the camera component, the first image information may be scene picture information currently captured by the camera component.
In an optional embodiment, the obtaining the first environment information in response to the triggering operation for the target interaction mode includes:
responding to the trigger operation aiming at the target interaction mode, and starting an environment acquisition component;
acquiring first image information acquired by an environment acquisition assembly;
and acquiring first environment information based on the acquisition direction information and the first image information of the environment acquisition assembly.
Optionally, when the target interaction mode needs to be started, it may be started by triggering the target control. When the client is determined to have entered the target interaction mode, if it is detected that the environment acquisition component is not started and the client has permission to start it, the environment acquisition component is started automatically so as to capture the picture of the current scene. First image information collected by the environment acquisition component is obtained in real time, and the first environment information is generated by combining it with the acquisition direction information of the environment acquisition component. If it is detected that the environment acquisition component is already started, the first image information collected in real time is obtained directly, and the first environment information is likewise generated by combining it with the acquisition direction information of the environment acquisition component.
According to this embodiment, the environment acquisition component is started automatically in response to the trigger operation for the target interaction mode, establishing the association between the environment acquisition component and the target interaction mode, reducing manual screen operations on the user interface, and enriching the forms of interaction. In addition, because the first environment information is determined based on the image information and acquisition direction information of the started environment acquisition component, the time consumed transmitting intermediate interaction data among different devices is reduced and interaction efficiency is improved.
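A minimal Python sketch of this flow is given below. The camera object and its methods (is_running, start, capture_frame, read_azimuth) are placeholders assumed for whatever camera and orientation-sensor interfaces the terminal actually exposes; they are not APIs defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class EnvironmentInfo:
    # Serves as the first (and, later, second) environment information:
    # the capture component's compass azimuth plus the captured scene frame.
    acquisition_direction_deg: float
    image: Any

def on_target_mode_triggered(camera) -> EnvironmentInfo:
    # If the environment acquisition component is not running and the client
    # holds the start permission, start it; otherwise reuse the live feed.
    if not camera.is_running():
        camera.start()
    return EnvironmentInfo(
        acquisition_direction_deg=camera.read_azimuth(),  # acquisition direction information
        image=camera.capture_frame(),                     # first image information
    )
```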
In step S203, a first interactive virtual element pointing to a first media resource is obtained; the first media resource is matched with the acquisition direction information.
Optionally, the first media resource refers to resource content that needs to be interactively displayed. The first media resource can be a resource such as a short video, a live broadcast, an article, or a commodity. By way of example only, a live broadcast may be content that is currently being streamed, and a short video may be a video published within the last few hours or days. The number of first media resources may be one or more.
The first media resource is matched with the acquisition direction information; that is, different acquisition direction information corresponds to different first media resources. For example, if the acquisition orientation corresponding to the acquisition direction information is a first acquisition orientation, the matching first media resources include {A1, A2, …, An}; if it is a second acquisition orientation, the matching first media resources include {B1, B2, …, Bm}, where n and m are positive integers.
In an alternative embodiment, as shown in fig. 3, the obtaining a first interactive virtual element for pointing to a first media resource includes:
in step S301, terminal position information of the target terminal is acquired.
The target terminal is a terminal executing the target interaction mode, and the target terminal may be the terminal 110 shown in fig. 1, where the client is installed. The terminal position information is used for representing the geographical position of the target terminal.
In step S303, a first media resource is determined based on the acquisition direction information and the terminal location information.
Optionally, after the client acquires the terminal position information of the target terminal, target media resource positions meeting the conditions may be determined based on the acquisition direction information and the terminal position information. A target media resource position may fall within a projection area M formed by taking the position of the target terminal as a vertex A and the acquisition orientation corresponding to the acquisition direction information as the projection direction; the projection area M may be, for example, a sector-shaped or triangular area. The target media resource position may be the publishing location or the located position of a media resource covered by the projection area. Then, based on the determined target media resource positions, the matching first media resources may be retrieved from a storage location such as a media resource library.
Optionally, in a target application scenario, the distance between the boundary of the projection area M and the vertex A may be less than or equal to a preset distance threshold, which may be, but is not limited to, 100 m, 500 m, 3 km, or 5 km. The target application scenario may include, but is not limited to, features such as "people nearby" and "same city". The preset distance threshold may be determined according to the number of screened first media resources and may be inversely related to that number: the more first media resources are screened out, the smaller the preset distance threshold; conversely, the fewer, the larger.
Alternatively, an azimuth included angle, for example 30°, may be set for each acquisition orientation; the media resources within the area projected through that angle are the corresponding first media resources. If the azimuth included angles of a first acquisition orientation and a second acquisition orientation do not overlap, the first media resources corresponding to the two orientations may be entirely different. If the two angles overlap, the first media resources corresponding to the two orientations may overlap, that is, the two sets share some of the same media resources.
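The sector-based matching described above can be sketched as follows, assuming a flat-earth (equirectangular) approximation that is adequate for the few-kilometre radii mentioned; MediaResource and all parameter defaults are illustrative assumptions, not values fixed by the disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class MediaResource:
    resource_id: str
    lat: float   # publishing location of the media resource
    lon: float

def bearing_and_distance(lat1, lon1, lat2, lon2):
    # Equirectangular approximation: returns (bearing in degrees from north,
    # clockwise; distance in metres) from point 1 to point 2.
    r = 6_371_000.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    distance = r * math.hypot(dlat, dlon)
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360
    return bearing, distance

def match_first_media(resources, term_lat, term_lon, azimuth_deg,
                      spread_deg=30.0, max_dist_m=3_000.0):
    # Keep resources whose publishing point falls inside the sector projected
    # from the terminal (vertex A) along the acquisition orientation.
    matched = []
    for res in resources:
        bearing, dist = bearing_and_distance(term_lat, term_lon, res.lat, res.lon)
        angle_off = abs((bearing - azimuth_deg + 180) % 360 - 180)
        if angle_off <= spread_deg / 2 and dist <= max_dist_m:
            matched.append((res, bearing, dist))
    return matched
```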
In step S305, a first interactive virtual element pointing to a first media resource is obtained based on resource display information corresponding to the first media resource.
Optionally, the resource display information is used to characterize resource-related information that needs to be displayed on the interactive screen. Taking the first media resource as an example of a video resource, the resource display information may include, but is not limited to, at least one of a content cover page, interaction information, content description information, distance, relationship chain, resource status, and the like of the media resource.
Wherein, the content cover of the media resource can refer to a cover picture of the media resource; the interactive information may include the forwarding number, the praise number, the play number, the online watching number, and the like of the first media resource; the content description information may include a title of the media asset, etc.; the distance may be a distance between a location point corresponding to the first media resource and a location of the target terminal; the relationship chain may refer to a relationship between an object account corresponding to the first media resource and an object account corresponding to the target terminal, such as "paid attention to", "my friend", and the like; the resource status may refer to the current status of the media resource, such as "live in," "live end," "nearly XX hours," "hot," and so forth.
After the resource display information corresponding to the first media resource is acquired, at least one resource display information corresponding to the first media resource may be combined to generate a first interactive virtual element pointing to the first media resource. Correspondingly, the generated first interactive virtual element may include at least one of a content sub-element used for representing media content information, a distance sub-element used for representing a distance relationship between a resource position corresponding to the first media resource and a position where the environment acquisition component is located, and an interactive sub-element used for representing interactive information of the first media resource. By way of example only, the content sub-element corresponds to at least one of a content cover, content description information, and resource status in the resource presentation information, the distance sub-element corresponds to a distance in the resource presentation information, and the interaction sub-element corresponds to at least one of interaction information, a relationship chain in the resource presentation information.
The number of first interactive virtual elements matches the number of first media resources; each first interactive virtual element is bound to its corresponding first media resource, and the bound first media resource can be reached through the first interactive virtual element. The first interactive virtual element may take the form of a card, a popup window, a floating frame, and the like.
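To make the element composition concrete, the following is a hedged sketch of a first interactive virtual element assembled from its sub-elements; every field name here is an illustrative assumption rather than a structure defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentSubElement:        # media content information
    cover_url: str
    title: str
    resource_status: Optional[str] = None   # e.g. "live", "posted 2 hours ago"

@dataclass
class DistanceSubElement:       # resource position vs. capture component position
    distance_m: float

@dataclass
class InteractionSubElement:    # interaction information of the media resource
    likes: int = 0
    viewers: int = 0
    relation: Optional[str] = None          # e.g. "followed", "my friend"

@dataclass
class FirstInteractiveVirtualElement:
    resource_id: str            # binds the element to its first media resource
    content: Optional[ContentSubElement] = None
    distance: Optional[DistanceSubElement] = None
    interaction: Optional[InteractionSubElement] = None
```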
According to the embodiment, the first media resource is determined by acquiring the direction information and the terminal position information, the first media resource is associated with the acquisition direction information and the terminal position information, and the first media resource can be changed by changing at least one of the acquisition direction information and the terminal position information, so that the subsequent first interactive virtual element displayed in the interactive picture is adjusted, the operation steps of a screen in a user interface are reduced, and the interactive flexibility and the interactive effect are improved. In addition, the first interactive virtual element used for pointing to the first media resource is obtained based on the resource display information corresponding to the first media resource, so that the first interactive virtual element carries richer resource information in an interactive picture, and then resource content and position conditions of the corresponding media resource are visually displayed, operation steps on a screen in a user interface are further reduced, and the transmission effect of the interactive content is improved.
In another alternative embodiment, the obtaining a first interactive virtual element for pointing to a first media resource may include: according to the acquisition orientation corresponding to the acquisition direction information alone, the client searches for a preset number of first media resources whose publishing locations match that orientation, and obtains the first interactive virtual elements pointing to those first media resources according to their corresponding resource display information. Here, too, the first media resources are matched with the acquisition direction information; that is, different acquisition direction information corresponds to different first media resources.
In another alternative embodiment, the obtaining a first interactive virtual element for pointing to a first media asset may include: a first interactive virtual element for pointing to a first media asset is obtained from a server. Specifically, the server may first obtain terminal position information and acquisition direction information sent by the target terminal, and then determine the first media resource based on the acquisition direction information and the terminal position information; and determining a first interactive virtual element used for pointing to the first media resource based on the resource display information corresponding to the first media resource, and sending the first interactive virtual element to a client corresponding to the terminal, so that the client can acquire the first interactive virtual element used for pointing to the first media resource.
It should be noted that, for determining the specific content of the first media resource and determining the specific content of the first interactive virtual element, reference may be made to the foregoing embodiments, and details are not described herein again.
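A rough sketch of the server-side variant is shown below, reusing match_first_media from the earlier sketch; the request format, resource_store interface, and response payload are all hypothetical.

```python
def handle_element_request(request, resource_store):
    # The request carries the terminal position and acquisition direction
    # information sent by the target terminal.
    term_lat = request["lat"]
    term_lon = request["lon"]
    azimuth = request["azimuth_deg"]

    # Determine the first media resources from direction + position, then
    # build the pointing elements from their resource display information.
    matched = match_first_media(resource_store.all(), term_lat, term_lon, azimuth)
    elements = []
    for res, bearing, dist in matched:
        info = resource_store.display_info(res.resource_id)
        elements.append({
            "resource_id": res.resource_id,
            "cover_url": info["cover_url"],
            "title": info["title"],
            "distance_m": round(dist),
        })
    return {"elements": elements}   # sent back to the client for display
```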
In step S205, an interactive picture corresponding to the target interactive mode is displayed, where the interactive picture includes a first interactive virtual element and a scene picture corresponding to the first image information.
In an optional implementation manner, the client may combine the first interactive virtual element and the scene picture corresponding to the first image information, generate an interactive picture corresponding to the target interactive mode, and render and display the interactive picture. Scene pictures corresponding to the first interactive virtual elements and the first image information can be combined and displayed in the interactive pictures. The scene picture may be used to represent a shot picture of the current scene, for example, the shot picture may include, but is not limited to, a mall picture, an office picture, a tourist attraction picture, and the like. The first interactive virtual element can be combined with the scene picture in embedding, fusion, stacking, suspension and other modes. The first interactive virtual element may include at least one of a content sub-element for representing media content information, a distance sub-element for representing a distance relationship between a resource position corresponding to the first media resource and a position of the environment acquisition component, and an interactive sub-element for representing interactive information of the first media resource. The sub-elements in the first interactive virtual element can be displayed in a mutual nesting, fusion, interval distribution and other manners.
In another alternative embodiment, as shown in fig. 4, before displaying the interactive picture corresponding to the target interactive mode, the method further includes:
in step S401, resource location information corresponding to each first media resource corresponding to the first interactive virtual element is obtained.
The resource position information is used for representing the geographic position of the target place corresponding to the media resource. By way of example only, the target location may be a publishing location of the media asset, a locating location of the media asset, and/or the like. The number of the first interactive virtual elements is the same as that of the first media resources, and the resource position information corresponding to different first media resources may be the same or different.
In step S403, a display attribute of a first interactive virtual element corresponding to each first media resource is determined based on location distance information between the resource location information of the first media resource and the terminal location information of the target terminal.
The target terminal is a terminal executing the target interaction mode, and the target terminal may be the terminal 110 shown in fig. 1, where the client is installed.
Optionally, position distance information between the position information of each first media resource and the terminal position information of the target terminal may be calculated, and the display attribute of the first interactive virtual element corresponding to each first media resource may be determined based on the position distance information. The location distance information may be used to characterize relative location information between the place of publication of the media asset and the current location of the target terminal. The display attribute reflects the display condition of the first interactive virtual element on the interactive picture.
In step S405, an interactive screen corresponding to the target interactive mode is generated according to each first interactive virtual element, the first image information, and the display attribute.
The display attribute may include a display style, which may be used to represent how the first interactive virtual element is presented, for example the display position and arrangement of each sub-element within the first interactive virtual element corresponding to each media resource.
Optionally, the presentation attribute may further include a presentation position and/or a presentation size. The display position is used for reflecting the orientation of the first interactive virtual element on the interactive picture, such as the upper left corner, the central point, the lower right corner and the like of the interactive picture. The display size is used for reflecting the visual size of the first interactive virtual element on the interactive picture.
According to the embodiment, the interactive picture corresponding to the target interactive mode is generated according to each first interactive virtual element, the first image information and the display attribute, the display effect of the interactive picture is improved by introducing the display attribute, and the interactive interestingness is increased.
Correspondingly, as shown in fig. 5, in a case that the display attribute includes a display position and/or a display size, the generating an interactive screen corresponding to the target interaction mode according to each first interactive virtual element, the first image information, and the display attribute includes:
in step S501, according to the display position, each first interactive virtual element is sequentially combined with the scene picture corresponding to the first image information, so as to generate an interactive picture corresponding to the target interactive mode.
Optionally, the first interactive virtual elements and the scene pictures corresponding to the first image information may be sequentially combined according to the position in the display position corresponding to each first interactive virtual element, so as to generate interactive pictures corresponding to the target interactive mode. Such combinations include, but are not limited to, embedding, fusing, stacking, suspending, and the like.
Optionally, the display position of each first interactive virtual element on the interactive screen may match the relative position between the publishing point of the corresponding first media resource and the target terminal, where the relative position may consist of a position offset azimuth and a position offset distance. For example, if the publishing point of first media resource p1 is 100 meters from the target terminal at 5° east of north, and the publishing point of first media resource p2 is 300 meters away at 30° east of north, the element corresponding to p1 may be displayed at offset s1 along the 5° direction on the interactive screen and the element corresponding to p2 at offset s2 along the 30° direction, where s1 may be smaller than s2.
In step S503, according to the display size, the first interactive virtual elements and the scene pictures corresponding to the first image information are sequentially combined to generate interactive pictures corresponding to the target interactive mode.
Optionally, the first interactive virtual elements and the scene pictures corresponding to the first image information may be sequentially combined according to the display size corresponding to each first interactive virtual element, so as to generate interactive pictures corresponding to the target interactive mode. Such combinations include, but are not limited to, embedding, fusing, stacking, suspending, and the like.
In an alternative embodiment, the display size may be inversely related to the distance corresponding to the position distance information. That is, the larger the distance corresponding to the position distance information, the smaller the display size of the first interactive virtual element in the interactive picture; conversely, the smaller the distance, the larger the display size. Differentiating each interactive virtual element by display size in this way distinguishes the media resources visually and improves the interactive display effect.
In addition, step S501 and step S503 may be executed alternatively or together; the present disclosure does not particularly limit their order of execution.
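For illustration, the following sketch combines both display attributes: the display position is derived from the bearing offset between the resource and the acquisition orientation, and the display size is inversely related to the position distance. The field-of-view mapping and the size bounds are assumptions for the sketch, not values from the disclosure.

```python
def layout_element(bearing_deg, azimuth_deg, dist_m, screen_w, screen_h,
                   fov_deg=60.0, min_size=80, max_size=240, max_dist_m=3_000.0):
    # Display position: project the bearing offset between the resource and
    # the acquisition orientation onto the horizontal axis of the picture.
    offset = (bearing_deg - azimuth_deg + 180) % 360 - 180   # -180..180 degrees
    x = screen_w / 2 + (offset / (fov_deg / 2)) * (screen_w / 2)

    # Display size: inversely related to the position distance, so nearer
    # resources get larger cards (cf. steps S403 and S503).
    closeness = max(0.0, 1.0 - min(dist_m, max_dist_m) / max_dist_m)
    size = min_size + closeness * (max_size - min_size)

    # Nearer (larger) cards are drawn lower on the screen as a simple depth cue.
    y = screen_h * (0.3 + 0.4 * closeness)
    return x, y, size
```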
In practical applications, as shown in fig. 8 and by way of example only, the user interface on the left may show a map interactive screen in a map mode, while the interface on the right shows an interactive screen in the target interaction mode. The map interactive screen includes a target control 810; clicking the target control 810 issues an interaction mode switching instruction, which starts the environment acquisition component (such as a rear camera) so that the target interaction mode takes effect. In the target interaction mode, the camera shoots toward a certain direction a of the real scene to obtain a picture of the real scene. According to the orientation of the camera and the position of the terminal, media resources (e.g. videos, live content, etc.) within an azimuth included angle (e.g. 30°) around direction a and within a preset nearby distance (e.g. 3 km) can be recalled, and based on the resource display information of the recalled media resources, the first interactive virtual elements pointing to the first media resources are determined so as to display the interactive picture.
As shown in fig. 8, the interactive screen includes a live-action picture 820 and a plurality of first interactive virtual elements 830 displayed in the form of content cards, where each first interactive virtual element 830 includes sub-elements such as a media cover, a content description, and a distance. As shown in fig. 8, these sub-elements are displayed spaced apart from one another: the "content description" sub-element is displayed below the "media cover" sub-element, the "distance" sub-element is displayed below the "content description" sub-element, and the "distance" and "content description" sub-elements may further be joined by a connection line. In addition, sub-elements such as "live" and "32,000 viewers watching" can also be displayed within the "media cover" sub-element in fig. 8. The cover size of the content card corresponding to a first interactive virtual element 830 is related to its distance from the terminal: as shown in fig. 8, the closer the publishing position corresponding to the first interactive virtual element is to the terminal, the larger the cover of the corresponding content card; conversely, the farther, the smaller.
In this embodiment, the first environment information is obtained in response to the trigger operation for the target interaction mode, where the first environment information includes acquisition direction information of an environment acquisition component and first image information, the first image information indicating the scene picture acquired by the environment acquisition component; a first interactive virtual element pointing to a first media resource is acquired, the first media resource being matched with the acquisition direction information; and an interactive picture corresponding to the target interaction mode is displayed, the interactive picture including the first interactive virtual element and the scene picture corresponding to the first image information. By introducing the environment acquisition component and associating it with the target interaction mode, interaction is realized through the environment acquisition component, which reduces the number of times the user must manually operate the screen of the user interface. Because the displayed interactive picture includes both the scene picture acquired by the environment acquisition component and the interactive virtual elements corresponding to the media resources matched with the acquisition direction information, more interactive resource information is displayed and the information transmission effect of the interactive content is improved.
In an alternative embodiment, as shown in fig. 6, the method further comprises:
in step S601, in response to the position movement information of the environment acquisition component, obtaining second environment information in real time; the second environment information comprises updated acquisition direction information of the environment acquisition assembly and second image information, and the second image information is used for indicating the scene pictures acquired by the environment acquisition assembly.
Wherein the position movement information characterizes a change in the position of the environment acquisition component. When the environment acquisition component is detected to move back and forth, move left and right, rotate, or the like, it is determined that the position of the environment acquisition component has changed, and position movement information of the environment acquisition component is generated.
For example, suppose that at time t1 the collection orientation of the environment acquisition component is a first collection orientation (e.g., due north), and the user only moves the environment acquisition component back and forth or left and right. At time t2 the collection orientation is still the first collection orientation (e.g., due north), but the terminal position information of the terminal carrying the environment acquisition component has changed, for example from position a to position b. It is then necessary to search for the corresponding second media resource based on position b and the first collection orientation, and to obtain the corresponding second interactive virtual element based on the retrieved second media resource.
Alternatively, suppose that at time t1 the collection orientation of the environment acquisition component is the first collection orientation (e.g., due north), and the user only rotates the environment acquisition component 90° clockwise. At time t3 the terminal position information of the terminal carrying the environment acquisition component is unchanged, for example still position a, but the collection orientation has been updated to a second collection orientation (e.g., due east). The corresponding second media resource then needs to be searched for again based on position a and the second collection orientation.
Since the position of the environment acquisition component has changed, the environment information acquired by the environment acquisition component also changes; accordingly, the retrieved second media resource changes, and the second interactive virtual element pointing to the second media resource needs to be acquired again. The second media resource is associated with the updated acquisition direction information of the environment acquisition component and/or the terminal position information of the target terminal.
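Both cases above amount to re-running the search whenever either the terminal position or the collection orientation changes. A minimal sketch, where search_fn stands in for whatever backend query an implementation actually uses:

    def on_environment_change(prev, curr, search_fn):
        # Covers both cases above: translation (position a -> b at time t2)
        # and rotation (due north -> due east at time t3).
        moved = curr["location"] != prev["location"]
        rotated = curr["heading"] != prev["heading"]
        if moved or rotated:
            # Re-retrieve the second media resources for the new position and
            # orientation; the caller then rebuilds the second interactive
            # virtual elements from the results.
            return search_fn(curr["location"], curr["heading"])
        return None  # nothing changed, so the current interactive picture stands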
In step S603, a second interactive virtual element for pointing to a second media resource is acquired.
If the position of the environment acquisition component changes, the environment information acquired by it also changes; that is, the acquisition space of the environment acquisition component changes, the second media resources within that acquisition space change, and the second media resources need to be retrieved again, so that the second interactive virtual element is determined based on the newly retrieved second media resources.
It should be noted that the obtaining manner of the second interactive virtual element is similar to the obtaining manner of the first interactive virtual element, and is not described herein again.
In step S605, the interactive picture is dynamically updated based on the second interactive virtual element and the scene picture corresponding to the second image information.
In an optional implementation, the client may combine the second interactive virtual element with the collected scene picture corresponding to the second image information acquired by the environment acquisition component, generate a new interactive picture in real time, replace the previous interactive picture with the new one, and display the dynamically updated interactive picture in the user interface. The updated interactive picture thus combines the second interactive virtual element with the collected scene picture corresponding to the second image information. The collected scene picture represents the captured picture of the current scene, which may include, but is not limited to, a shopping mall picture, an office picture, a tourist attraction picture, and the like. During the update of the interactive picture, the interactive virtual elements in it are dynamically updated; for example, the content sub-elements, distance sub-elements, interaction sub-elements, and the like corresponding to each interactive virtual element change dynamically with the state of the media resources.
In another alternative embodiment, updating the interactive picture based on the second interactive virtual element and the second image information acquired by the environment acquisition component may include: acquiring resource position information corresponding to each second media resource corresponding to the second interactive virtual element; determining the display attribute of the second interactive virtual element corresponding to each second media resource based on the position distance information between the resource position information of that second media resource and the terminal position information of the target terminal, the target terminal being the terminal executing the target interaction mode; combining each second interactive virtual element with the second image information according to the display attributes to generate an updated interactive picture; and displaying the updated interactive picture.
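Putting the pieces together, the per-update pass might look like the sketch below, reusing the inverse size mapping sketched earlier; distance_to and render are hypothetical helpers standing in for geometry and compositing code, not functions from the disclosure.

    def build_updated_picture(second_elements, scene_frame, terminal_pos):
        for elem in second_elements:
            # Recompute the display attribute from the resource's publishing
            # position and the terminal's current position.
            dist = distance_to(elem.resource_position, terminal_pos)  # hypothetical helper
            elem.size = display_size(dist)  # inverse distance-to-size mapping from earlier
        # Overlay the refreshed elements on the newly captured scene frame.
        return render(scene_frame, second_elements)  # hypothetical compositor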
It should be noted that, for specific details of the steps S601 and S603, reference may be made to the foregoing embodiments, and details are not described herein again.
According to this embodiment, moving the environment acquisition component changes the search position and/or the search direction of the environment acquisition component for content resources, so that media resources in different directions or at different search positions can be retrieved quickly. This reduces how often the user must operate the screen of the user interface, improves interaction efficiency, enriches the interaction modes, and helps increase user stickiness.
In addition, when the user, for example by walking, moves the environment acquisition component carried on the terminal through a real scene, the media resources displayed in the interactive picture are updated in step with the user's walking track. Because the layout of each first interactive virtual element in the interactive picture matches the actual geographic distribution of the corresponding first media resources, the position distribution of the interactive virtual elements displayed in the interactive picture can help the user quickly find the actual geographic location where the corresponding media resource was published. This links the online content with the offline scene, improves the interactive effect and the information transmission effect of the interactive content, and also enhances the interest and immersion of the interaction.
In an optional embodiment, in the case that the interactive screen further includes a direction presentation element for characterizing the orientation of the resource, the method further includes:
during the movement of the environment acquisition component, the direction pointing icon in the direction display element moves synchronously with the acquisition direction indicated by the acquisition direction information of the environment acquisition component.
Optionally, as shown in fig. 8, the interactive screen may further include a direction display element 840 for characterizing the orientation of resources. The direction display element 840 may be a thumbnail of the resource positions that includes a direction pointing icon 841. During the movement of the environment acquisition component, the direction pointing icon moves synchronously with the acquisition direction indicated by the acquisition direction information of the environment acquisition component. For example, if the acquisition orientation of the environment acquisition component changes from a first acquisition orientation to a second acquisition orientation, the pointing head of the direction pointing icon 841 turns from the first acquisition orientation to the second acquisition orientation. Optionally, the direction display element 840 may further depict the resource distribution; for example, the white dots distributed within the direction display element 840 shown in fig. 8 may represent the resource distribution, where more dots in a direction indicate more media resources in that direction, and fewer dots indicate fewer. It should be understood that fig. 8 is merely exemplary, and the design shape, display position, and the like of the direction display element are not limited to those shown in fig. 8.
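As an illustration only, the sketch below keeps the pointing icon's rotation in step with the camera heading and buckets resource bearings into sectors for the dot display; the Icon class and the 45° sector width are assumptions introduced for the example.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Icon:
        rotation_deg: float = 0.0  # hypothetical stand-in for a UI icon object

    def sync_direction_icon(icon, heading_deg):
        # The pointing head turns with the collection orientation (e.g. north -> east).
        icon.rotation_deg = heading_deg % 360

    def sector_dot_counts(resource_bearings, sector_deg=45):
        # More media resources in a direction means more dots drawn in that sector.
        return Counter(int(b % 360) // sector_deg for b in resource_bearings)

    icon = Icon()
    sync_direction_icon(icon, 90)                    # camera rotated to due east
    print(icon.rotation_deg)                         # 90.0
    print(sector_dot_counts([5, 10, 30, 95, 200]))   # Counter({0: 3, 2: 1, 4: 1})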
By arranging, in the interactive picture, a direction display element for characterizing the orientation of resources, and by having the direction pointing icon in that element move synchronously with the acquisition direction indicated by the acquisition direction information of the environment acquisition component as the component moves, the user can quickly learn both the actual movement of the environment acquisition component and the orientation of the media resources, which improves the interactive effect and the interaction efficiency.
In an alternative embodiment, as shown in fig. 7, the method further comprises:
in step S701, in response to a trigger operation on a target interactive virtual element in the interactive picture, media resource information corresponding to the target interactive virtual element is displayed; the target interactive virtual element belongs to the element set corresponding to the first interactive virtual element.
Optionally, after the client displays the interactive picture, the user may perform a trigger operation, such as a click, on a target interactive virtual element in the interactive picture, and thereby enter a media resource interface corresponding to that element to display the corresponding media resource information. The media resource information may include the playing content information of the media resource, the user information of the media resource, and the like; the target interactive virtual element belongs to the element set corresponding to the first interactive virtual element.
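A trivial sketch of this trigger handling, where open_media_page is a hypothetical stand-in for the client's navigation call:

    def on_element_tapped(element, open_media_page):
        # Entering the media resource interface shows the playing content
        # information, the publisher's user information, and so on.
        open_media_page(element.resource_id)  # hypothetical navigation helper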
According to the embodiment, the media resource information corresponding to the target interactive virtual element is displayed by responding to the trigger operation of the target interactive virtual element in the interactive picture, so that the interactive effect is further improved.
With regard to the methods in the above-described embodiments, the specific manner in which each step is performed has been described in detail in the embodiments of the foregoing methods, and will not be described in detail herein.
Fig. 9 is a block diagram illustrating an interactive processing device according to an example embodiment. Referring to fig. 9, the apparatus includes:
a first obtaining module 910 configured to obtain first environment information in response to a trigger operation for a target interaction mode; the first environment information comprises acquisition direction information and first image information of an environment acquisition assembly, and the first image information is used for indicating a scene picture acquired by the environment acquisition assembly;
a second obtaining module 920 configured to perform obtaining a first interactive virtual element for pointing to a first media resource; the first media resource is matched with the acquisition direction information;
a display module 930 configured to perform displaying an interactive picture corresponding to the target interaction mode, where the interactive picture includes the first interactive virtual element and a scene picture corresponding to the first image information.
As an optional implementation manner, the first obtaining module is specifically configured to perform:
responding to the trigger operation aiming at the target interaction mode, and starting an environment acquisition component;
acquiring first image information acquired by the environment acquisition assembly;
and acquiring first environment information based on the acquisition direction information of the environment acquisition assembly and the first image information.
As an optional implementation, the second obtaining module is specifically configured to perform:
acquiring terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
determining a first media resource based on the acquisition direction information and the terminal position information;
and acquiring a first interactive virtual element pointing to the first media resource based on the resource display information corresponding to the first media resource.
As an optional implementation manner, the first interactive virtual element includes at least one of a content sub-element used for representing media content information, a distance sub-element used for representing a distance relationship between a resource position corresponding to the first media resource and a position of the environment acquisition component, and an interactive sub-element used for representing interactive information of the first media resource.
As an optional embodiment, the apparatus further comprises:
a position obtaining module configured to perform obtaining of resource position information corresponding to each of the first media resources corresponding to the first interactive virtual element;
the attribute determining module is configured to determine a display attribute of the first interactive virtual element corresponding to each first media resource based on position distance information between resource position information of the first media resource and terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
and the processing module is configured to execute the generation of an interactive picture corresponding to the target interactive mode according to each first interactive virtual element, the first image information and the display attribute.
As an optional embodiment, the display attribute comprises a display position and/or a display size; the presentation size is inversely related to the position distance information. The processing module comprises:
the first processing unit is configured to sequentially combine each first interactive virtual element with the scene picture corresponding to the first image information according to the display position to generate an interactive picture corresponding to the target interactive mode; and/or
And the second processing unit is configured to sequentially combine each first interactive virtual element with the scene picture corresponding to the first image information according to the display size to generate an interactive picture corresponding to the target interactive mode.
As an optional embodiment, the apparatus further comprises:
a third obtaining module configured to perform obtaining second environmental information in real-time in response to position movement information of the environment acquisition component; the second environment information comprises updated acquisition direction information and second image information of the environment acquisition assembly, and the second image information is used for indicating a scene picture acquired by the environment acquisition assembly;
a fourth acquisition module configured to perform acquisition of a second interactive virtual element for pointing to a second media resource; the second media resource is associated with the updated acquisition direction information of the environment acquisition component and/or the terminal position information of the target terminal;
and a display updating module configured to dynamically update the interactive picture based on the second interactive virtual element and the scene picture corresponding to the second image information.
As an optional embodiment, in a case that the interactive screen further includes a direction display element for characterizing the orientation of the resource, the apparatus further includes:
and a synchronous adjustment module configured to, during the movement of the environment acquisition component, move the direction pointing icon in the direction display element synchronously with the acquisition direction indicated by the acquisition direction information of the environment acquisition component.
As an alternative embodiment, continuing with fig. 10, the apparatus further comprises:
the interaction module 940 is configured to display, in response to a trigger operation on a target interactive virtual element in the interactive picture, the media resource information corresponding to the target interactive virtual element; the target interactive virtual element belongs to the element set corresponding to the first interactive virtual element.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment. Referring to fig. 11, an electronic device includes a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of any of the interactive processing methods described in the above embodiments when executing the instructions stored in the memory.
The electronic device may be a terminal, a server, or a similar computing device. Taking the electronic device as a terminal as an example, fig. 11 is a block diagram of an electronic device for interactive processing according to an exemplary embodiment. Specifically:
the terminal may include RF (Radio Frequency) circuitry 1110, memory 1120 including one or more computer-readable storage media, input unit 1130, display unit 1140, sensors 1150, audio circuitry 1160, WiFi (wireless fidelity) module 1170, processor 1180 including one or more processing cores, and power 1190. Those skilled in the art will appreciate that the terminal structure shown in fig. 11 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
RF circuit 1110 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink messages from a base station and then processing the received downlink messages by one or more processors 1180; in addition, data relating to uplink is transmitted to the base station. In general, RF circuitry 1110 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 1110 can also communicate with a network and other terminals through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 executes various functional applications and data processing by operating the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 1120 may also include a memory controller to provide the processor 1180 and the input unit 1130 access to the memory 1120.
The input unit 1130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, input unit 1130 may include a touch-sensitive surface 1131 as well as other input devices 1132. Touch-sensitive surface 1131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 1131 (e.g., operations by a user on or near the touch-sensitive surface 1131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a preset program. Alternatively, touch-sensitive surface 1131 may include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1180, and receives and executes commands sent by the processor 1180. Additionally, touch-sensitive surface 1131 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 1130 may include other input devices 1132 in addition to the touch-sensitive surface 1131. In particular, other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 1140 may include a display panel 1141; optionally, the display panel 1141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 1131 may cover display panel 1141; when a touch operation is detected on or near touch-sensitive surface 1131, it is transmitted to processor 1180 to determine the type of touch event, and processor 1180 then provides a corresponding visual output on display panel 1141 according to the type of touch event. Although touch-sensitive surface 1131 and display panel 1141 may implement the input and output functions as two separate components, in some embodiments touch-sensitive surface 1131 and display panel 1141 may also be integrated to implement the input and output functions.
The terminal may also include at least one sensor 1150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1141 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 1141 and/or the backlight when the terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when the mobile terminal is stationary, and can be used for applications of recognizing terminal gestures (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be further configured for the terminal, the description thereof is omitted here.
The audio circuit 1160, speaker 1161, and microphone 1162 may provide an audio interface between the user and the terminal. The audio circuit 1160 can transmit the electrical signal converted from received audio data to the speaker 1161, which converts it into a sound signal for output; conversely, the microphone 1162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1160 and converted into audio data; after being processed by the processor 1180, the audio data is transmitted, for example, to another terminal via the RF circuit 1110, or output to the memory 1120 for further processing. The audio circuit 1160 may also include an earphone jack to provide communication between peripheral headphones and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 11 shows the WiFi module 1170, it is understood that it is not an essential part of the terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 1180 is a control center of the terminal, connects various parts of the entire terminal by using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the terminal. Optionally, processor 1180 may include one or more processing cores; preferably, the processor 1180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The terminal further includes a power supply 1190 (e.g., a battery) for supplying power to the various components, and preferably, the power supply may be logically connected to the processor 1180 through a power management system, so that the power management system may manage charging, discharging, and power consumption management functions. Power supply 1190 may also include one or more dc or ac power supplies, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, or any other component.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described herein again. In this embodiment, the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing the interactive processing methods provided by the method embodiments described above.
In an exemplary embodiment, a computer storage medium is also provided, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the method provided in any one of the above-described embodiments.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method provided in any of the above embodiments. Optionally, the computer program is stored in a computer readable storage medium. The processor of the electronic device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the electronic device executes the method provided in any one of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An interactive processing method, comprising:
responding to a trigger operation aiming at a target interaction mode, and acquiring first environment information; the first environment information comprises acquisition direction information and first image information of an environment acquisition assembly, and the first image information is used for indicating a scene picture acquired by the environment acquisition assembly;
acquiring a first interactive virtual element for pointing to a first media resource; the first media resource is matched with the acquisition direction information;
and displaying an interactive picture corresponding to the target interactive mode, wherein the interactive picture comprises the first interactive virtual element and a scene picture corresponding to the first image information.
2. The method of claim 1, wherein obtaining the first environment information in response to the triggering operation for the target interaction mode comprises:
responding to the trigger operation aiming at the target interaction mode, and starting an environment acquisition component;
acquiring first image information acquired by the environment acquisition assembly;
and acquiring first environmental information based on the acquisition direction information of the environment acquisition assembly and the first image information.
3. The method of claim 1, wherein obtaining the first interactive virtual element for pointing to the first media asset comprises:
acquiring terminal position information of a target terminal; the target terminal is a terminal executing the target interaction mode;
determining a first media resource based on the acquisition direction information and the terminal position information;
and acquiring a first interactive virtual element pointing to the first media resource based on the resource display information corresponding to the first media resource.
4. The method of claim 1, wherein the first interactive virtual element includes at least one of a content sub-element for representing media content information, a distance sub-element for representing a distance relationship between a resource location corresponding to the first media resource and a location of the environment acquisition component, and an interactive sub-element for representing interactive information of the first media resource.
5. The method according to any one of claims 1-4, wherein before said presenting the interactive picture corresponding to the target interaction mode, the method further comprises:
acquiring resource position information corresponding to each first media resource corresponding to the first interactive virtual element;
determining the display attribute of the first interactive virtual element corresponding to each first media resource based on the position distance information between the resource position information of the first media resource and the terminal position information of the target terminal; the target terminal is a terminal executing the target interaction mode;
and generating an interactive picture corresponding to the target interactive mode according to each first interactive virtual element, the first image information and the display attribute.
6. The method of claim 5, wherein the presentation attribute comprises a presentation position and/or a presentation size; the display size and the distance size corresponding to the position distance information are in negative correlation;
the generating of the interactive picture corresponding to the target interaction mode according to each first interactive virtual element, the first image information and the display attribute comprises:
sequentially combining each first interactive virtual element with the scene picture corresponding to the first image information according to the display position to generate an interactive picture corresponding to the target interactive mode; and/or
And sequentially combining each first interactive virtual element with the scene picture corresponding to the first image information according to the display size to generate an interactive picture corresponding to the target interactive mode.
7. An interactive processing device, comprising:
the first acquisition module is configured to acquire first environment information in response to a trigger operation for a target interaction mode; the first environment information comprises acquisition direction information and first image information of an environment acquisition assembly, and the first image information is used for indicating a scene picture acquired by the environment acquisition assembly;
a second obtaining module configured to perform obtaining a first interactive virtual element for pointing to a first media resource; the first media resource is matched with the acquisition direction information;
and the display module is configured to display an interactive picture corresponding to the target interactive mode, wherein the interactive picture comprises the first interactive virtual element and a scene picture corresponding to the first image information.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the interactive processing method of any of claims 1 to 6.
9. A computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the interactive processing method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the interactive processing method of any one of claims 1 to 6 when executed by a processor.
CN202210374813.1A 2022-04-11 2022-04-11 Interactive processing method, device, equipment and storage medium Pending CN114935973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210374813.1A CN114935973A (en) 2022-04-11 2022-04-11 Interactive processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210374813.1A CN114935973A (en) 2022-04-11 2022-04-11 Interactive processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114935973A true CN114935973A (en) 2022-08-23

Family

ID=82862587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210374813.1A Pending CN114935973A (en) 2022-04-11 2022-04-11 Interactive processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114935973A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115997385A (en) * 2022-10-12 2023-04-21 广州酷狗计算机科技有限公司 Interface display method, device, equipment, medium and product based on augmented reality
WO2024077518A1 (en) * 2022-10-12 2024-04-18 广州酷狗计算机科技有限公司 Interface display method and apparatus based on augmented reality, and device, medium and product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination