CN117008757A - Interaction method, interaction device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117008757A
CN117008757A (application CN202211398960.9A)
Authority
CN
China
Prior art keywords
image
scene
target
virtual interaction
playing interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211398960.9A
Other languages
Chinese (zh)
Inventor
王群
刘里
于子元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211398960.9A priority Critical patent/CN117008757A/en
Publication of CN117008757A publication Critical patent/CN117008757A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images

Abstract

The application relates to an interaction method, an interaction apparatus, a computer device, and a computer-readable storage medium. The method includes: displaying an image playing interface, and displaying a target interaction element on the image playing interface, the target interaction element being used to trigger a recognition mode for visual scene elements associated with virtual interaction scenes; in response to a trigger operation on the target interaction element, displaying the image playing interface in the recognition mode; and, when an image is played in the image playing interface in the recognition mode and the played image contains a target visual scene element associated with a target virtual interaction scene, jumping from the image playing interface to that target virtual interaction scene. The method makes the application of image recognition more flexible.

Description

Interaction method, interaction device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to an interaction method, apparatus, computer device, computer readable storage medium, and computer program product.
Background
With the development of computer technology, image recognition technology has emerged. Image recognition, an important field of artificial intelligence, refers to technology in which a computer processes, analyzes, and understands images in order to recognize targets and objects of various patterns.
For example, an image related to a virtual interaction scene is recognized in order to identify the virtual interaction scene corresponding to its content. When the user wants to enter that virtual interaction scene, the user must first exit the image recognition and then enter the scene through a separately downloaded application. In other words, even after the virtual interaction scene corresponding to the image content has been identified, experiencing the scene still requires leaving the recognition flow and switching applications, which is inflexible.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an interaction method, apparatus, computer device, computer-readable storage medium, and computer program product that allow more flexible operation.
The application provides an interaction method, which comprises the following steps:
displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering an identification mode of a visual scene element of the associated virtual interaction scene;
Responding to the triggering operation of the target interaction element, and displaying the image playing interface in the identification mode;
and under the condition that an image is played in the image playing interface in the identification mode, when a target visual field scene element associated with a target virtual interaction scene exists in the played image, jumping to the target virtual interaction scene associated with the target visual field scene element from the image playing interface.
The application also provides an interaction device, which comprises:
the interface display module is used for displaying an image playing interface and displaying target interaction elements on the image playing interface; the target interaction element is used for triggering an identification mode of a visual scene element of the associated virtual interaction scene;
the state display module is used for responding to the triggering operation of the target interaction element and displaying the image playing interface in the identification mode;
and the jump module is used for, in the case that an image is played in the image playing interface in the identification mode, jumping from the image playing interface to the target virtual interaction scene associated with a target visual scene element when such a target visual scene element associated with a target virtual interaction scene exists in the played image.
The application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering an identification mode of a visual scene element of the associated virtual interaction scene; responding to the triggering operation of the target interaction element, and displaying the image playing interface in the identification mode; and under the condition that an image is played in the image playing interface in the identification mode, when a target visual field scene element associated with a target virtual interaction scene exists in the played image, jumping to the target virtual interaction scene associated with the target visual field scene element from the image playing interface.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering an identification mode of a visual scene element of the associated virtual interaction scene; responding to the triggering operation of the target interaction element, and displaying the image playing interface in the identification mode; and under the condition that an image is played in the image playing interface in the identification mode, when a target visual field scene element associated with a target virtual interaction scene exists in the played image, jumping to the target virtual interaction scene associated with the target visual field scene element from the image playing interface.
The application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering an identification mode of a visual scene element of the associated virtual interaction scene; responding to the triggering operation of the target interaction element, and displaying the image playing interface in the identification mode; and under the condition that an image is played in the image playing interface in the identification mode, when a target visual field scene element associated with a target virtual interaction scene exists in the played image, jumping to the target virtual interaction scene associated with the target visual field scene element from the image playing interface.
The interaction method, apparatus, computer device, storage medium, and computer program product display an image playing interface carrying a target interaction element that triggers a recognition mode for visual scene elements associated with virtual interaction scenes; in response to a trigger operation on the target interaction element, the recognition mode is entered and the image playing interface is displayed in that mode. While an image is played in the image playing interface in the recognition mode, the played image is checked for a target visual scene element associated with a target virtual interaction scene, so that the associated target virtual interaction scene can be accurately identified. When such an element exists in the played image, the interface jumps from the image playing interface directly to the associated target virtual interaction scene. Because the jump is driven by recognizing visual scene elements in the image itself, the application of image recognition becomes more flexible.
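The terminal-side flow summarized above can be sketched in Python. This is a minimal illustrative sketch, not the claimed implementation; the class, method names, and the string-based "jump" are all assumptions introduced here.

```python
# Hypothetical sketch of the terminal-side flow; all names are illustrative
# assumptions, not part of the claims.
class PlaybackInterface:
    def __init__(self, recognizer):
        # recognizer: maps a played frame to its associated scene, or None
        self.recognizer = recognizer
        self.recognition_mode = False

    def on_target_element_triggered(self):
        # In response to the trigger operation, enter the recognition mode.
        self.recognition_mode = True

    def on_frame_played(self, frame):
        # While an image plays in recognition mode, look for a target visual
        # scene element associated with a target virtual interaction scene.
        if not self.recognition_mode:
            return None
        scene = self.recognizer(frame)
        if scene is not None:
            # Leave the playback interface for the associated scene.
            return f"jumped:{scene}"
        return None
```

For example, before the target interaction element is triggered, played frames are ignored; afterwards, a frame containing a recognized scene element causes the jump.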
The application provides an interaction method, which comprises the following steps:
receiving an image played in an image playing interface of an identification mode of a visual scene element on a terminal; the visual field scenery elements are associated with corresponding virtual interaction scenes;
performing image recognition on the image to obtain a recognition text corresponding to a target visual scene element in the image;
when the identification text points to a target virtual interaction scene, acquiring a link address which points to the target virtual interaction scene and corresponds to the identification text;
and returning the link address to the terminal, wherein the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
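The server-side steps above (recognize text in the received image, then resolve a link address when the text points at a known scene) can be sketched as follows. The mapping table, the stand-in OCR function, and all names are illustrative assumptions; a real implementation would use an actual text-recognition model.

```python
# Hypothetical server-side sketch: recognize text in the received frame, then
# map it to a link address pointing at the target virtual interaction scene.
SCENE_LINKS = {
    # assumed mapping: recognized text -> deep link to the scene
    "Brave Canyon": "app://scene/brave-canyon",
}

def recognize_text(image_bytes):
    # Stand-in for a real OCR model: here the "frame" carries its own text.
    return image_bytes.decode("utf-8", errors="ignore").strip()

def handle_frame(image_bytes):
    text = recognize_text(image_bytes)
    link = SCENE_LINKS.get(text)  # None when the text points at no known scene
    return {"text": text, "link": link}
```

The returned link address is what the terminal would use to jump from the image playing interface to the target virtual interaction scene; a `None` link corresponds to the case where no associated scene exists.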
The application also provides an interaction device, which comprises:
the receiving module is used for receiving the image played in the image playing interface of the identification mode of the visual scene element on the terminal; the visual field scenery elements are associated with corresponding virtual interaction scenes;
the identification module is used for carrying out image identification on the image to obtain an identification text corresponding to the target visual scene element in the image;
the acquisition module is used for acquiring a link address which points to the target virtual interaction scene and corresponds to the identification text when the identification text points to the target virtual interaction scene;
And the return module is used for returning the link address to the terminal, and the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
The application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
receiving an image played in an image playing interface of an identification mode of a visual scene element on a terminal; the visual field scenery elements are associated with corresponding virtual interaction scenes; performing image recognition on the image to obtain a recognition text corresponding to a target visual scene element in the image; when the identification text points to a target virtual interaction scene, acquiring a link address which points to the target virtual interaction scene and corresponds to the identification text; and returning the link address to the terminal, wherein the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Receiving an image played in an image playing interface of an identification mode of a visual scene element on a terminal; the visual field scenery elements are associated with corresponding virtual interaction scenes; performing image recognition on the image to obtain a recognition text corresponding to a target visual scene element in the image; when the identification text points to a target virtual interaction scene, acquiring a link address which points to the target virtual interaction scene and corresponds to the identification text; and returning the link address to the terminal, wherein the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
The application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
receiving an image played in an image playing interface of an identification mode of a visual scene element on a terminal; the visual field scenery elements are associated with corresponding virtual interaction scenes; performing image recognition on the image to obtain a recognition text corresponding to a target visual scene element in the image; when the identification text points to a target virtual interaction scene, acquiring a link address which points to the target virtual interaction scene and corresponds to the identification text; and returning the link address to the terminal, wherein the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
According to the interaction method, apparatus, computer device, storage medium, and computer program product, an image played in the image playing interface in the recognition mode of visual scene elements is received from the terminal, and image recognition on that image yields the recognition text corresponding to the target visual scene element in the image. When the recognition text points to a target virtual interaction scene, indicating that the target visual scene element in the image has an associated target virtual interaction scene, the link address corresponding to the recognition text and pointing to that scene is obtained and returned to the terminal, so that the terminal can jump directly from the image playing interface to the target virtual interaction scene via the link address. This makes the application of image recognition more flexible and the jump operation simpler.
Drawings
FIG. 1 is an application environment diagram of an interaction method in one embodiment;
FIG. 2 is a flow diagram of an interaction method in one embodiment;
FIG. 3 is an interface diagram of a jump from an image playback interface to a target virtual interactive scene associated with a target visual scene element in one embodiment;
FIG. 4 is an interface schematic diagram of jumping from an image playback interface to a target virtual interactive scene when there is a target visual scene element belonging to a target element category and there is an associated target virtual interactive scene for the target visual scene element in an image in one embodiment;
FIG. 5A is a schematic diagram of an interface for jumping from an image playback interface to a corresponding target virtual interactive scene via a target jumping portal in response to a selection event for a target jumping portal of a plurality of jumping portals, according to an embodiment;
FIG. 5B is a schematic diagram of an interface shown when the recognition result indicates whether a target visual scene element with an associated target virtual interaction scene exists in the image, in one embodiment;
FIG. 6 is an interface diagram of a video presentation interface of a video playback application in one embodiment;
FIG. 7A is a schematic diagram of an interface of an interactive method applied to a video playing scene in one embodiment;
FIG. 7B is a schematic diagram of an interface of an interactive method applied to a video playing scene in one embodiment;
FIG. 8 is a flow chart of an interaction method in another embodiment;
FIG. 9 is a flow chart illustrating an interaction method applied to a terminal in one embodiment;
FIG. 10 is a flow chart illustrating an interaction method applied to a server in one embodiment;
FIG. 11 is a flow diagram of training and reasoning of classification models in one embodiment;
FIG. 12 is a block diagram of an interaction device in one embodiment;
FIG. 13 is a block diagram of an interactive device in another embodiment;
Fig. 14 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The interaction method provided by embodiments of the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other server. The terminal 102 displays an image playing interface and displays a target interaction element on it; the target interaction element is used to trigger a recognition mode for visual scene elements associated with virtual interaction scenes. In response to a trigger operation on the target interaction element, the terminal 102 displays the image playing interface in the recognition mode. When an image is played in the image playing interface in the recognition mode, the terminal 102 sends the played image to the server 104; the server 104 performs recognition on the image to determine whether it contains a target visual scene element associated with a target virtual interaction scene, and returns the recognition result to the terminal 102. When such an element exists in the played image, the terminal 102 jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, intelligent voice-interaction device, smart home appliance, vehicle-mounted terminal, aircraft, or portable wearable device. The terminal 102 may run an application or a client of an installed application, such as an image playing application, a virtual interactive application, or a Web application. The server 104 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
In one embodiment, as shown in FIG. 2, an interaction method is provided. The method is described, by way of illustration, as applied to the terminal in FIG. 1, and includes the following steps:
step S202, displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used to trigger a recognition mode for visual scene elements associated with virtual interaction scenes.
The image playing interface is an interface for playing images, and the image playing interface can be a popup window, a floating layer or an independent interface.
The image may be an image of any scene; it may be a complete image, a partial image cropped from another image, or an image in a video. A video is a series of captured images (called image frames) displayed at a given frequency; the image frame is the smallest unit that makes up a video.
The interactive element is a visual element which is displayed on the image playing interface and can be operated by a user. Visual elements refer to elements that can be displayed to be visible to the human eye to convey information. The interactive elements may be represented in various forms, for example, but not limited to, at least one of a control, an image, a text, a logo, a link, or an animation file. The control may be a button, a filled box, a radio box, or a set of options.
In one embodiment, the image playback interface may include a plurality of interactive elements, each capable of triggering a particular event. The target interaction element refers to an interaction element capable of triggering an identification pattern of a visual scene element associated with the virtual interaction scene.
A plurality of visual elements may be included in the image. A visual scene element is an element, among the visual elements included in the image, that belongs to a virtual interaction scene or reflects the characteristics of a virtual interaction scene. The recognition mode of visual scene elements is a state in which the virtual interaction scene associated with a visual scene element can be recognized; that is, in the recognition mode it can be determined which virtual interaction scene each visual scene element is associated with.
A virtual interaction scene is a scene in which virtual characters move about or perform interactive actions while a virtual interactive application is running. The virtual interaction scene may be a simulation of the real world or a purely imaginary virtual environment. It may be, but is not limited to, a mobile-game scene, a PC client game scene, or a cloud-game scene. The user may control a virtual character to move or perform interactive actions in the virtual interaction scene.
A virtual character is a movable object in the virtual interaction scene. The movable object may specifically be an avatar representing the user, which may be, but is not limited to, a virtual person, a virtual animal, or a cartoon character.
The virtual interactive application may run on the terminal or in the cloud. A virtual interactive application running on the terminal may be a client installed on the terminal, i.e., a program installed and running there. An application may also be an installation-free application, one that can be used without download and installation, often called an applet; an applet typically runs as a sub-program inside a client, in which case the client is called the parent application and the sub-program running in it the child application. An application may also be a web application opened through a browser.
A virtual interactive application running in the cloud is called a cloud application. A cloud application is one in which the terminal interacts with the cloud: relying on the strong computing power of the cloud, the running process is encoded into an audio/video stream and transmitted to the terminal over the network, thereby realizing interaction with the user.
The cloud here is a cloud server. Cloud servers are based on large-scale distributed computing systems that integrate computer resources through virtualization to provide Internet-infrastructure services; the network providing the resources is called the "cloud". From the user's perspective, resources in the cloud are infinitely expandable and can be acquired at any time, used on demand, expanded at any time, and paid for by use. Cloud computing is a computing model that distributes computing tasks across a large pool of computers, enabling application systems to acquire computing power, storage space, and information services as needed.
The virtual interaction scene may be a cloud game, and the cloud application may be a cloud-game application running in the cloud. Cloud gaming, also called gaming on demand, is an online gaming technology based on cloud computing. It enables lightweight devices (thin clients) with relatively limited graphics-processing and data-computing capability to run high-quality games. In a cloud-game scenario, the game runs in the cloud rather than on the terminal: the cloud renders the game scene into an audio/video stream and transmits it to the terminal over the network. The terminal needs no strong graphics or data-processing capability, only basic streaming-media playback and the ability to capture player input instructions and send them to the cloud.
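The cloud-gaming model described above reduces to a simple loop: the terminal forwards player input, the cloud renders and encodes each frame, and the encoded stream is sent back. The sketch below is purely illustrative; the function names and the stand-in render/encode callables are assumptions introduced here.

```python
# Minimal illustrative loop for the cloud-gaming model: heavy rendering and
# encoding happen in the cloud, the thin client only supplies player input
# and plays back the stream. All names are assumptions.
def cloud_session(render_frame, encode, player_inputs):
    stream = []
    for user_input in player_inputs:
        frame = render_frame(user_input)  # rendering happens in the cloud
        stream.append(encode(frame))      # encoded chunk sent to the terminal
    return stream
```

In this model the terminal's workload is independent of scene complexity, which is why a thin client with basic streaming playback suffices.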
Specifically, an image playing application with an image playing function can be run on the terminal, an image playing interface is entered, and a target interaction element is displayed on the image playing interface. The user may trigger an identification pattern of visual scene elements associated with the virtual interaction scene by triggering the target interaction element.
The image playing application refers to an application with an image playing function, and the image playing application can be presented to a user in an application program mode, and the user can play images through the application program. The application may refer to a client installed in a terminal, an installation-free application, a web application opened through a browser, or the like. The image playing application may be specifically, but not limited to, a long video application, a short video application, a picture application, a live broadcast application, and the like. The image playing application may also be a cloud application, which refers to an application running in the cloud.
Step S204, in response to a trigger operation on the target interaction element, displaying the image playing interface in the recognition mode.
The triggering operation is a preset operation acting on the target interaction element, and the detection of the triggering operation triggers the entry of the recognition mode. The triggering operation may specifically be a touch operation, a cursor operation, a key operation, or a voice operation. The touch operation may be a touch click operation, a touch press operation, or a touch slide operation, and the touch operation may be a single-point touch operation or a multi-point touch operation; the cursor operation may be an operation of controlling the cursor to click or an operation of controlling the cursor to press; the key operation may be a virtual key operation or a physical key operation, etc.
Specifically, the user can trigger the target interaction element; in response to the trigger operation, the terminal enters the recognition mode and displays the image playing interface in that mode. In the recognition mode, the terminal can recognize each visual element in the image to determine whether visual scene elements associated with virtual interaction scenes exist in the image, and which virtual interaction scene each such element is associated with.
Step S206: when an image is played in the image playing interface in the recognition mode and a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
The target visual scene element refers to a visual scene element in the played image that is associated with a target virtual interaction scene.
Specifically, in the case of playing an image in the image playing interface in the recognition mode, the terminal may determine whether a target visual scene element associated with a virtual interaction scene exists among the visual elements included in the played image, and identify the target virtual interaction scene associated with that element. When a target visual scene element associated with a target virtual interaction scene exists in the played image, the terminal jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
Further, when the target visual scene element associated with the target virtual interaction scene exists in the played image, the terminal jumps from the image playing interface in the recognition mode to the target virtual interaction scene associated with the target visual scene element.
As shown in fig. 3, a target interaction element 302 is displayed on the image playing interface 300, and the image playing interface 300 in the recognition mode is displayed in response to a triggering operation on the target interaction element 302. In the case of playing the image 304 in the image playing interface 300 in the recognition mode, the target visual scene element 306 in the image 304 is recognized, and when an associated target virtual interaction scene 308 exists for the target visual scene element 306, a jump is made from the image playing interface 300 to the target virtual interaction scene 308.
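The recognition-and-jump flow of steps S204 and S206 can be sketched as follows. This sketch is not part of the disclosed implementation; the lookup table ELEMENT_TO_SCENE, the function names, and the element labels are all illustrative assumptions:

```python
# Hypothetical lookup from recognized visual scene elements to their
# associated virtual interaction scenes.
ELEMENT_TO_SCENE = {
    "game_character_a": "game_a_scene",
    "game_logo_b": "game_b_scene",
}

def recognize_elements(image_elements):
    """Return the visual scene elements that have an associated virtual interaction scene."""
    return [e for e in image_elements if e in ELEMENT_TO_SCENE]

def handle_played_image(image_elements, recognition_mode):
    """In recognition mode, jump to the scene of the first recognized target element."""
    if not recognition_mode:
        return ("stay", None)  # outside the recognition mode nothing is identified
    targets = recognize_elements(image_elements)
    if targets:
        return ("jump", ELEMENT_TO_SCENE[targets[0]])
    return ("stay", None)

print(handle_played_image(["background", "game_character_a"], True))
# ("jump", "game_a_scene") when a target visual scene element is present
```

The sketch shows the two states the method distinguishes: outside the recognition mode nothing happens, and inside it a recognized element immediately determines the jump target.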
In this embodiment, when there are multiple target visual scene elements associated with the same target virtual interaction scene in the played image, the terminal jumps from the image playing interface to the target virtual interaction scene.
In this embodiment, when multiple target visual scene elements associated with different target virtual interaction scenes exist in the played image, a jump is made from the image playing interface to the target virtual interaction scene associated with the target visual scene element with the highest priority, according to the priority order of the multiple target visual scene elements.
In this embodiment, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, a respective jump entry of each of the plurality of target virtual interaction scenes is displayed on the image playing interface; and in response to a selection event for a target jump entry among the plurality of jump entries, a jump is made from the image playing interface to the corresponding target virtual interaction scene through the target jump entry.
In this embodiment, the terminal may send the played image to a server; the server determines whether a target visual scene element exists in the image, identifies the target virtual interaction scene associated with the target visual scene element, and returns a recognition result to the terminal.
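The client/server split described above can be sketched as a round trip in which the terminal forwards the played frame and acts on the returned result. The function names and the known-element table are hypothetical stand-ins, and the direct call stands in for a network request:

```python
def server_identify(frame):
    """Server side: report whether a target visual scene element exists and its scene."""
    known = {"hero_sprite": "adventure_scene"}  # assumed server-side knowledge base
    for element in frame["elements"]:
        if element in known:
            return {"found": True, "element": element, "scene": known[element]}
    return {"found": False}

def terminal_on_frame(frame):
    """Terminal side: forward the frame, then jump or show a 'no scene' prompt."""
    result = server_identify(frame)  # stands in for a network request
    if result["found"]:
        return f"jump:{result['scene']}"
    return "prompt:no associated virtual interaction scene"

print(terminal_on_frame({"elements": ["sky", "hero_sprite"]}))
```

The negative branch corresponds to the embodiment below in which prompt information indicating the absence of an associated virtual interaction scene is displayed.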
In one embodiment, the method further comprises:
in the case of playing an image in the image playing interface in the recognition mode, when no target visual scene element associated with a target virtual interaction scene exists in the played image, displaying, on the image playing interface, prompt information indicating that no associated virtual interaction scene exists.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element includes:
in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, displaying at least one jump channel option for the target virtual interaction scene on the image playing interface; and in response to a jump trigger event for a target jump channel option among the at least one jump channel option, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element through the target jump channel indicated by the target jump channel option.
In one embodiment, in response to an exit event triggered in the target virtual interaction scene, a jump is made from the target virtual interaction scene back to the image playing interface.
In the above interaction method, an image playing interface is displayed, and a target interaction element for triggering a recognition mode for visual scene elements associated with virtual interaction scenes is displayed on the image playing interface, so that the recognition mode can be entered in response to a triggering operation on the target interaction element and the image playing interface in the recognition mode can be displayed. In the case of playing an image in the image playing interface in the recognition mode, whether a target visual scene element associated with a target virtual interaction scene exists in the played image is determined, and the target virtual interaction scene associated with the target visual scene element is accurately identified. When such a target visual scene element exists in the played image, jumping from the image playing interface to the associated target virtual interaction scene allows a direct jump to the corresponding virtual interaction scene based on recognition of visual scene elements in the image, which makes the application of image recognition more flexible. Moreover, because the method jumps directly to the corresponding virtual interaction scene based on the target visual scene element in the image, the image can serve as a carrier for the propagation and conversion of virtual interaction scenes, thereby realizing image-based recommendation of virtual interaction scenes and effectively improving the image-based conversion rate of virtual interaction scenes.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element includes:
displaying at least one element category on the image playing interface in the case of playing an image in the image playing interface in the recognition mode; and in response to a trigger event for a target element category among the at least one element category, in the case that a target visual scene element that is associated with a target virtual interaction scene and belongs to the target element category exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
The element category refers to the category of a visual scene element in the image, and may specifically include, but is not limited to, a text category, a person category, an animal category, a plant category, a consumed resource category, and the like.
Specifically, in the case of playing an image in the image playing interface in the recognition mode, at least one element category is displayed on the image playing interface. Further, the at least one element category is displayed on the image playing interface in the recognition mode.
The user can select at least one of the displayed element categories. In response to a trigger event for a target element category among the at least one element category, the terminal identifies whether a target visual scene element belonging to the target element category exists in the played image, and if so, identifies the target virtual interaction scene associated with each such target visual scene element.
In the case that the played image contains a target visual scene element that belongs to the target element category and has an associated target virtual interaction scene, the terminal jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element. Further, the terminal jumps from the image playing interface in the recognition mode to the target virtual interaction scene associated with the target visual scene element.
For example, if the target element category selected by the user is the text category, the terminal may identify a target visual scene element representing text in the played image, recognize the text corresponding to that target visual scene element, and identify the target virtual interaction scene associated with the recognized text, thereby obtaining the target virtual interaction scene associated with the target visual scene element.
As shown in fig. 4, a plurality of element categories 402, including a text category, a person category, and a consumed resource category, are displayed on the image playing interface 400 in the recognition mode. The user clicks to select the person category as the target element category. In the case of playing the image 404 in the image playing interface 400 in the recognition mode, the target visual scene element 406 belonging to the person category in the image 404 is recognized, and the target virtual interaction scene associated with the target visual scene element 406 is obtained.
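Category-filtered recognition as described above can be sketched as follows; only elements whose category matches the user-selected target element category are considered. The element records, category names, and scene identifiers are illustrative assumptions:

```python
# Hypothetical recognized elements of the played image, each with a category
# and an optionally associated virtual interaction scene.
ELEMENTS = [
    {"name": "title_text", "category": "text", "scene": "word_game"},
    {"name": "npc_face", "category": "person", "scene": "rpg_scene"},
    {"name": "potion", "category": "consumed_resource", "scene": None},
]

def scenes_for_category(elements, target_category):
    """Return scenes of elements that belong to the target category and have one."""
    return [e["scene"] for e in elements
            if e["category"] == target_category and e["scene"] is not None]

print(scenes_for_category(ELEMENTS, "person"))  # ['rpg_scene']
```

Restricting the search to one category reduces the candidate set, which is the accuracy benefit this embodiment claims.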
In this embodiment, when an image is played in the image playing interface in the recognition mode, at least one element category is displayed on the image playing interface, providing a plurality of recognition modes distinguished by element category and diversifying the available selections. In response to a trigger event for a target element category among the at least one element category, in the case that a target visual scene element that is associated with a target virtual interaction scene and belongs to the target element category exists in the played image, a jump is made from the image playing interface to the target virtual interaction scene associated with the target visual scene element. Thus, selective recognition can be performed according to the category of elements in the image, and restricting recognition to elements of the same category improves the accuracy of identifying the target virtual interaction scene.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element includes:
in the case of playing an image in the image playing interface in the recognition mode, in response to a selection event for a visual element in the played image, when the selected visual element includes a target visual scene element associated with a target virtual interaction scene, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
Here, the selection event refers to an event of selecting a visual element in the image. The selection event may be implemented through a selection operation, which may specifically be a touch operation, a cursor operation, a key operation, a voice operation, or the like; for example, clicking or intercepting a visual element in the image to select it. The intercepting operation may be a screenshot of visual elements in the image taken through a rectangular box, resulting in the selected visual elements.
Specifically, in the case of playing an image in the image playing interface in the recognition mode, the visual elements in the played image are in a selectable state, and the user can select the visual element to be recognized. In response to a selection event for a visual element in the played image, the terminal recognizes the selected visual element to determine whether it includes a target visual scene element associated with a target virtual interaction scene.
When the selected visual element includes a target visual scene element associated with the target virtual interaction scene, the terminal jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element. As shown in fig. 4, the user may click on the target visual scene element 406, and the selected target visual scene element 406 is recognized to obtain the associated target virtual interaction scene.
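Resolving a selection event to a visual scene element can be sketched as a hit test of the click position against each element's display area. The bounding boxes, element names, and scene identifiers here are illustrative assumptions:

```python
# Hypothetical elements with display areas given as (x1, y1, x2, y2) boxes.
ELEMENT_BOXES = [
    {"name": "logo", "box": (10, 10, 60, 40), "scene": "puzzle_scene"},
    {"name": "tree", "box": (80, 20, 140, 90), "scene": None},
]

def element_at(x, y, elements):
    """Return the first element whose bounding box contains the point."""
    for e in elements:
        x1, y1, x2, y2 = e["box"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return e
    return None

def on_select(x, y):
    """Jump only if the selected element has an associated virtual interaction scene."""
    e = element_at(x, y, ELEMENT_BOXES)
    if e and e["scene"]:
        return f"jump:{e['scene']}"
    return "no-op"

print(on_select(20, 20))  # jump:puzzle_scene
```

A rectangular-box interception operation would work the same way, intersecting the drawn box with the element boxes instead of testing a single point.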
In this embodiment, when the selected visual element includes a target visual scene element associated with the target virtual interaction scene, a jump entry of the corresponding target virtual interaction scene is displayed in the display area of the target visual scene element. In response to a trigger event for the jump entry of the target virtual interaction scene, a jump is made from the image playing interface to the target virtual interaction scene through the jump entry.
The jump entry is an entry for jumping to the virtual interaction scene, and may be presented in different forms. For example, the jump entry may be presented as scene information of the virtual interaction scene, which may include at least one of a thumbnail, descriptive content, a graphic code, or a link address of the virtual interaction scene.
In this embodiment, more autonomy is provided to the user in the case of playing an image in the image playing interface in the recognition mode. In response to a selection event for a visual element in the played image, when the selected visual element includes a target visual scene element associated with a target virtual interaction scene, a jump is made from the image playing interface to the target virtual interaction scene associated with the target visual scene element. Thus, accurate recognition can be performed on the target visual scene element selected by the user, followed by an automatic jump to the corresponding virtual interaction scene for the virtual interaction experience, making the operation simpler and more flexible.
In one embodiment, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element includes:
displaying at least one jump channel option for the target virtual interaction scene on the image playing interface; and in response to a jump trigger event for a target jump channel option among the at least one jump channel option, jumping from the image playing interface to the target virtual interaction scene through the target jump channel indicated by the target jump channel option.
The jump channel refers to a channel through which the jump from the image playing interface to the target virtual interaction scene is made. One target virtual interaction scene may correspond to at least one jump channel, which may be a channel jumping to a virtual interaction application local to the terminal, to a sub-application for displaying the target virtual interaction scene, to a web application opened through a browser for displaying the target virtual interaction scene, or to a download address of the virtual interaction application, so as to enter the target virtual interaction scene after the virtual interaction application is downloaded.
Specifically, in response to the triggering operation on the target interaction element, the terminal displays the image playing interface in the recognition mode and displays at least one jump channel option for the target virtual interaction scene on the image playing interface in the recognition mode. The user can select a target jump channel option from the displayed jump channel options, and in response to a jump trigger event for the target jump channel option, the terminal jumps from the image playing interface in the recognition mode to the target virtual interaction scene through the target jump channel indicated by the target jump channel option.
For example, the image playing interface in the recognition mode displays four jump channel options: 'jump to the local virtual interactive application', 'jump to a sub-application of the target virtual interaction scene', 'open the virtual interactive application through a browser', and 'download the virtual interactive application'. When the user selects 'jump to the local virtual interactive application', the terminal jumps from the image playing interface in the recognition mode to the virtual interactive application corresponding to the target virtual interaction scene and runs the virtual interactive application to display the target virtual interaction scene. If the virtual interactive application is already installed on the user's terminal, the user can choose to jump to the local application, which saves the user from launching the local application manually. By choosing to jump directly to the sub-application or to open the web application through the browser, the user can enter and experience the target virtual interaction scene without installing the virtual interactive application. The user may also choose to download the virtual interactive application and then open the local application, so that the conversion rate of the virtual interactive application can be improved based on the recognition of visual scene elements and the subsequent jump.
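The dispatch over the four example jump channels can be sketched as follows. The channel keys, action strings, and fallback behavior (downloading when the local application is absent) are illustrative assumptions, not part of the claims:

```python
def jump_via_channel(channel, scene, app_installed):
    """Pick a jump action for the selected channel; fall back when the local app is absent."""
    if channel == "local_app" and not app_installed:
        channel = "download_app"  # cannot open an application that is not installed
    actions = {
        "local_app": f"open local virtual interactive application at {scene}",
        "sub_application": f"open sub-application showing {scene}",
        "browser": f"open web application for {scene} in browser",
        "download_app": "open download address of the virtual interactive application",
    }
    return actions[channel]

print(jump_via_channel("local_app", "game_a", app_installed=False))
```

The fallback branch reflects the rationale in the example: a user without the installed application is routed toward a channel that still leads to the target virtual interaction scene.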
In this embodiment, at least one jump channel option for the target virtual interaction scene is displayed on the image playing interface, providing the user with at least one option for jumping to the target virtual interaction scene. In response to a jump trigger event for a target jump channel option among the at least one jump channel option, a jump is made from the image playing interface to the target virtual interaction scene through the target jump channel indicated by the target jump channel option, so that the user can select a suitable channel for the jump, making the jump mode more flexible.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a plurality of target virtual interaction scenes exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element includes:
in the case of playing an image in the image playing interface in the recognition mode, when a target visual scene element associated with a plurality of target virtual interaction scenes exists in the played image, displaying a respective jump entry of each of the plurality of target virtual interaction scenes on the image playing interface; and in response to a selection event for a target jump entry among the plurality of jump entries, jumping from the image playing interface to the corresponding target virtual interaction scene through the target jump entry.
Specifically, in the case of playing an image in the image playing interface in the recognition mode, the terminal recognizes the visual elements in the played image, and when a target visual scene element exists in the played image, the terminal can identify the target virtual interaction scene associated with the target visual scene element.
When a target visual scene element is associated with one target virtual interaction scene, the terminal can jump directly from the image playing interface in the recognition mode to the target virtual interaction scene; alternatively, the terminal displays a jump entry of the target virtual interaction scene and, in response to a trigger event for the jump entry, jumps from the image playing interface in the recognition mode to the target virtual interaction scene through the jump entry.
When a target visual scene element is associated with a plurality of target virtual interaction scenes, the terminal can determine the jump entry corresponding to each target virtual interaction scene and display the respective jump entries of the plurality of target virtual interaction scenes on the image playing interface in the recognition mode. The user can select any jump entry as the target jump entry, and in response to a selection event for the target jump entry among the plurality of jump entries, a jump is made from the image playing interface in the recognition mode to the corresponding target virtual interaction scene through the target jump entry.
As shown in fig. 5A, in the case of playing an image 502 in the image playing interface 500 in the recognition mode, when a target visual scene element 504 associated with target virtual interaction scene 1 and target virtual interaction scene 2 exists in the played image 502, a jump entry 1 of target virtual interaction scene 1 and a jump entry 2 of target virtual interaction scene 2 are displayed on the image playing interface 500 in the recognition mode. Jump entry 1 is presented in the form of the name 'game a' of target virtual interaction scene 1, and jump entry 2 may be presented in the form of the name 'game b' of target virtual interaction scene 2.
When the user selects jump entry 1, a jump is made from the image playing interface 500 in the recognition mode to target virtual interaction scene 1 through jump entry 1.
In this embodiment, jump entry 1 may also be presented in the form of a thumbnail identifier, a profile, a link address, or the like of target virtual interaction scene 1, and jump entry 2 may also be presented in the form of a thumbnail identifier, a profile, a link address, or the like of target virtual interaction scene 2.
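The multi-scene case, in which one target visual scene element yields one jump entry per associated virtual interaction scene, can be sketched as follows. The record shapes, scene names ("game a", "game b"), and function names are illustrative assumptions:

```python
def build_jump_entries(scenes):
    """Build one jump entry per associated scene, presented by scene name."""
    return [{"label": s["name"], "scene": s["id"]} for s in scenes]

def on_entry_selected(entries, index):
    """Jump through the selected entry to its target virtual interaction scene."""
    return f"jump:{entries[index]['scene']}"

scenes = [{"id": "scene_1", "name": "game a"}, {"id": "scene_2", "name": "game b"}]
entries = build_jump_entries(scenes)
print([e["label"] for e in entries])  # ['game a', 'game b']
print(on_entry_selected(entries, 0))  # jump:scene_1
```

The label field corresponds to the presentation forms mentioned above; a thumbnail identifier, profile, or link address would simply replace or accompany the name.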
In this embodiment, in response to a selection event for a target jump entry among the plurality of jump entries, jumping from the image playing interface to the corresponding target virtual interaction scene through the target jump entry includes:
in response to a selection event for a target jump entry among the plurality of jump entries, displaying, on the image playing interface, at least one jump channel option for the target virtual interaction scene to which the target jump entry points; and in response to a jump trigger event for a target jump channel option among the at least one jump channel option, jumping from the image playing interface to the target virtual interaction scene through the target jump channel indicated by the target jump channel option.
In this embodiment, in the case of playing an image in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, the respective jump entries of the plurality of target virtual interaction scenes are displayed on the image playing interface for selection by the user. In response to a selection event for a target jump entry among the plurality of jump entries, an accurate jump can be made from the image playing interface to the corresponding target virtual interaction scene through the target jump entry, which accommodates the user's possible need to further learn about or experience a virtual interaction scene after it has been identified.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, displaying the respective jump entries of the plurality of target virtual interaction scenes on the image playing interface includes:
in the case of playing an image in the image playing interface in the recognition mode, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, displaying, in the display area of each target visual scene element on the image playing interface, the jump entry of the corresponding target virtual interaction scene.
Specifically, the visual scene elements may be divided by element category. In the case of playing an image in the image playing interface in the recognition mode, the terminal recognizes the visual elements in the played image, and when the visual elements in the played image include multiple target visual scene elements of different element categories, the terminal can identify the target virtual interaction scene associated with each target visual scene element respectively. When the multiple target visual scene elements are associated with different target virtual interaction scenes, the jump entry corresponding to each target virtual interaction scene can be obtained, and the jump entry of the corresponding target virtual interaction scene is displayed in the display area of each target visual scene element on the image playing interface in the recognition mode. The user can select any jump entry as the target jump entry, and in response to a selection event for the target jump entry among the jump entries, the terminal jumps from the image playing interface in the recognition mode to the corresponding target virtual interaction scene through the target jump entry.
When the terminal identifies that multiple target visual scene elements are associated with the same target virtual interaction scene, the jump entry of that target virtual interaction scene can be displayed in the display area of any one of the target visual scene elements, or a jump entry can be displayed in the display area of each target visual scene element.
In this embodiment, in the case of playing an image in the image playing interface in the recognition mode, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, the jump entry of the corresponding target virtual interaction scene is displayed in the display area of each target visual scene element on the image playing interface, so that the virtual interaction scene associated with each visual scene element can be displayed intuitively, which facilitates user selection.
In one embodiment, in the case of playing an image in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, displaying the respective jump entries of the plurality of target virtual interaction scenes on the image playing interface includes:
in the case of playing an image in the image playing interface in the recognition mode, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, displaying, on the image playing interface, the jump entry of the target virtual interaction scene associated with each target visual scene element according to the priority order of the plurality of target visual scene elements.
Specifically, the terminal may have preset a priority order for visual scene elements of different element categories; for example, the priority of the text category may be higher than that of the person category, and the priority of the person category higher than that of the consumed resource category, but the order is not limited thereto.
In the case of playing an image in the image playing interface in the recognition mode, the terminal recognizes the visual elements in the played image, and when the visual elements in the played image include a plurality of target visual scene elements of different element categories, the terminal can identify the target virtual interaction scene associated with each target visual scene element respectively. When the multiple target visual scene elements are associated with different target virtual interaction scenes, the jump entry corresponding to each target virtual interaction scene can be obtained, and the priority order among the multiple target visual scene elements is determined. The terminal then displays, on the image playing interface in the recognition mode, the jump entry of the target virtual interaction scene associated with each target visual scene element according to the priority order of the multiple target visual scene elements.
Further, the terminal displays the jump entries of the target virtual interaction scenes associated with the target visual scene elements in the form of a scene list on the image playing interface in the recognition mode, where the jump entries in the scene list are arranged according to the priority order among the corresponding target visual scene elements.
In this embodiment, in the case of playing an image in the image playing interface in the recognition mode, when multiple target visual scene elements associated with different target virtual interaction scenes exist in the played image, the jump entry of the target virtual interaction scene associated with each target visual scene element is displayed on the image playing interface according to the priority order of the multiple target visual scene elements. The higher the priority of a target visual scene element, the greater the possibility that its jump entry is selected by the user; displaying the jump entries by priority thus improves the convenience of user selection.
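The priority-ordered scene list described above can be sketched as a sort over element categories. The priority table (text over person over consumed resource) follows the example in this section, while the function name and element records are hypothetical:

```python
# Lower number means higher priority, following the example ordering above.
CATEGORY_PRIORITY = {"text": 0, "person": 1, "consumed_resource": 2}

def ordered_entries(elements):
    """Sort target visual scene elements by category priority and emit their scenes."""
    ranked = sorted(elements, key=lambda e: CATEGORY_PRIORITY[e["category"]])
    return [e["scene"] for e in ranked]

elements = [
    {"category": "person", "scene": "rpg_scene"},
    {"category": "text", "scene": "word_game"},
]
print(ordered_entries(elements))  # ['word_game', 'rpg_scene']
```

The sorted list maps directly onto the scene list displayed on the image playing interface, with the highest-priority entry first.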
In one embodiment, in the case of playing an image in an image playing interface in an identification mode, when there is a target visual field scene element associated with a target virtual interaction scene in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual field scene element, including:
displaying an identification progress generated by the identification of the image to be played in the image playing interface under the condition that the image is played in the image playing interface in the identification mode;
when the image is identified to generate a corresponding identification result, and the identification result represents that the target visual field scene element associated with the target virtual interaction scene exists in the image, the image is jumped from the image playing interface to the target virtual interaction scene associated with the target visual field scene element.
Specifically, under the condition that the image is played in the image playing interface in the identification mode, the terminal identifies the visual element in the played image in the identification mode, and when the visual element is identified, the identification progress generated by the identification of the image played in the following mode is displayed on the image playing interface in the identification mode. The recognition schedule may specifically be at least one of a schedule bar or a dynamic recognition animation.
And when the terminal finishes identifying the visual elements in the image, canceling the display of the identification progress. When the terminal finishes identifying the visual elements in the image, a corresponding identification result is generated. When the identification result represents that the target visual field scene element associated with the target virtual interaction scene exists in the image, the terminal jumps to the target virtual interaction scene associated with the target visual field scene element from the image playing interface in the identification mode.
Further, when the recognition result represents that a target visual scene element associated with a target virtual interaction scene exists in the image, the terminal may display the recognition result on the image playing interface in the recognition mode, where the recognition result is presented as a jump entry of the target virtual interaction scene or as at least one jump channel option.
In this embodiment, when the image is identified to generate a corresponding identification result, and the identification result characterizes that a target visual scene element associated with a target virtual interaction scene exists in the image, the identification mode can be exited; jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
In this embodiment, in the case of playing an image in the image playing interface in the recognition mode, a recognition progress generated following recognition of the played image is displayed around the target interactive element in the image playing interface.
In this embodiment, when an image is played in the image playing interface in the recognition mode, a recognition progress that follows the recognition of the played image is displayed on the image playing interface, so as to intuitively prompt the user about the recognition progress. When the recognition of the image generates a corresponding recognition result, and the recognition result represents that a target visual scene element associated with a target virtual interaction scene exists in the image, the interface jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element, so that the jump to the identified target virtual interaction scene can be realized effectively and quickly.
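The flow described in this embodiment — show a progress indicator while the played image is being recognized, cancel it when a result arrives, and jump only when an associated scene is found — can be sketched as follows. This is a minimal illustration in Python; `RecognitionUI`, `recognize_image`, and the scene names are hypothetical stand-ins, not part of the disclosed embodiment.

```python
def recognize_image(image):
    """Stand-in recognizer: returns the associated target virtual
    interaction scene for the image, or None when no target visual
    scene element is found."""
    known = {"race_car.png": "racing_game_scene"}
    return known.get(image)

class RecognitionUI:
    def __init__(self):
        self.progress_visible = False                  # progress bar / animation
        self.current_view = "image_playing_interface"

    def play_in_recognition_mode(self, image):
        self.progress_visible = True                   # show identification progress
        result = recognize_image(image)                # recognize the played image
        self.progress_visible = False                  # cancel progress on result
        if result is not None:                         # associated scene exists
            self.current_view = result                 # jump to the target scene
        return result
```

The progress flag models the on-screen indicator: it is visible only between the start of recognition and the arrival of the result, matching the cancel-on-result behavior described above.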
In one embodiment, the method further comprises:
when the identification of the image generates a corresponding identification result, and the identification result represents that no target visual scene element associated with a target virtual interaction scene exists in the image, displaying, on the image playing interface, prompt information representing that no associated target virtual interaction scene exists.
Specifically, the terminal may display a corresponding recognition result on the image playing interface in the recognition mode, and when the recognition result characterizes that no target visual field element associated with the target virtual interaction scene exists in the image, display prompt information characterizing that no associated target virtual interaction scene exists on the image playing interface. For example, the hint information is "no target virtual interaction scenario identified".
Further, when the recognition result characterizes that no target visual scene element associated with a target virtual interaction scene exists in the image, the terminal may display the recognition result on the image playing interface in the recognition mode, where the recognition result is presented as prompt information characterizing that no associated target virtual interaction scene exists.
In this embodiment, when the identification of the image generates a corresponding identification result, and the identification result characterizes that no target visual scene element associated with a target virtual interaction scene exists in the image, prompt information characterizing that no associated target virtual interaction scene exists is displayed on the image playing interface in the identification mode, so as to provide an explicit identification result. Moreover, the identification results for the cases with and without an associated target virtual interaction scene are presented in different forms, which can make the identification more engaging.
As shown in fig. 5B, in the case where the image 502 is played in the image playing interface 500 in the recognition mode, a recognition progress that follows the recognition of the played image 502 is displayed in the image playing interface 500 in the recognition mode, presented in the form of a dynamic recognition animation 506. When the recognition of the image 502 produces a corresponding recognition result, the display of the dynamic recognition animation 506 is canceled. When the recognition result represents that the image 502 contains a target visual scene element 504 associated with target virtual interaction scene 1 and target virtual interaction scene 2, a jump entry 508 is displayed on the image playing interface 500 in the recognition mode. The jump entry 508 includes jump entry 1 of target virtual interaction scene 1 and jump entry 2 of target virtual interaction scene 2. Jump entry 1 is presented as the name "game a" of target virtual interaction scene 1, and jump entry 2 is presented as the name "game b" of target virtual interaction scene 2.
When the recognition result indicates that only the target visual scene element 504 associated with target virtual interaction scene 1 exists in the image 502, jump entry 1 of target virtual interaction scene 1 may be displayed for the user to select; alternatively, jump entry 1 is not displayed, and the interface jumps directly from the image playing interface 500 to target virtual interaction scene 1.
When the recognition result indicates that no target visual scene element associated with a target virtual interaction scene exists in the image 502, a prompt message 510, "no virtual interaction scene is recognized", is displayed on the image playing interface 500 in the recognition mode.
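The two branches of the recognition result — jump entries when associated scenes exist, an explicit prompt when none do — can be sketched as a small dispatch function. This is a hedged illustration; `present_recognition_result`, the prompt text, and the returned dictionary shape are assumptions, not the disclosed interface.

```python
def present_recognition_result(scenes):
    """Given the list of target virtual interaction scenes associated with
    elements recognized in the played image, decide what the image playing
    interface shows: jump entries, or a no-scene prompt."""
    if not scenes:
        # recognition succeeded but found no associated scene: explicit prompt
        return {"type": "prompt",
                "text": "no target virtual interaction scene identified"}
    # one jump entry per associated scene, presented e.g. by scene name
    return {"type": "jump_entries", "entries": [{"scene": s} for s in scenes]}
```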
In one embodiment, the method further comprises:
and under the condition that the image playing interface is jumped to the target virtual interaction scene associated with the target visual scene element, pausing the playing of the image at the image playing interface, and exiting the recognition mode.
Specifically, when the terminal jumps from the image playing interface in the identification mode to the target virtual interaction scene associated with the target visual scene element, the playing of the image is paused at the image playing interface in the identification mode, the image playing interface in the paused state is displayed, and the identification mode of the visual scene element is exited.
Further, when a jump entry of the target virtual interaction scene is displayed in the image playing interface in the identification mode, the terminal cancels the display of the jump entry upon jumping from the image playing interface in the identification mode to the target virtual interaction scene associated with the target visual scene element.
In this embodiment, when the interface jumps from the image playing interface to the target virtual interaction scene associated with the target visual scene element, the playing of the image is paused at the image playing interface so as to stay at the playing position before the jump, allowing the user to continue playing from that position upon returning to the interface. Moreover, automatically exiting the recognition mode after the jump avoids wasting computing resources on image recognition after the pause.
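The pause-and-exit behavior above — keep the pre-jump playing position, stop advancing playback, and drop out of recognition mode — can be sketched with a tiny player state object. `Player`, `tick`, and `jump_to_scene` are hypothetical names used only for illustration.

```python
class Player:
    def __init__(self):
        self.playing = True
        self.recognition_mode = True
        self.position = 0                    # playing position in seconds
        self.view = "image_playing_interface"

    def tick(self, seconds):
        """Advance playback only while the image is playing."""
        if self.playing:
            self.position += seconds

    def jump_to_scene(self, scene):
        self.playing = False                 # pause; position stays where it was
        self.recognition_mode = False        # exit recognition mode on jump
        self.view = scene
```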
In one embodiment, displaying an image playback interface includes:
displaying an image playing interface through an image playing application;
under the condition that the image is played in the image playing interface in the identification mode, when the target visual field scene element associated with the target virtual interaction scene exists in the played image, jumping to the target virtual interaction scene associated with the target visual field scene element from the image playing interface comprises the following steps:
under the condition that an image is played in the image playing interface in the identification mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing application to a virtual interaction application matched with the target virtual interaction scene associated with the target visual scene element; and displaying the target virtual interaction scene through the virtual interaction application.
Specifically, the terminal runs an image playing application with an image playing function, and an image playing interface is displayed through the image playing application. And displaying the target interaction element on the image playing interface. The user may trigger an identification pattern of visual scene elements associated with the virtual interaction scene by triggering the target interaction element.
Under the condition that the image is played in the image playing interface in the identification mode, the terminal can judge whether a target visual scene element exists in a plurality of visual elements included in the played image, and under the condition that the target visual scene element exists, the terminal identifies a target virtual interaction scene associated with the target visual scene element. When the target visual field scene element associated with the target virtual interaction scene exists in the played image, the terminal determines the virtual interaction application matched with the target virtual interaction scene associated with the target visual field scene element. The terminal jumps to the virtual interactive application from the image playing interface in the identification mode, and displays the target virtual interactive scene through the virtual interactive application.
In this embodiment, when the played image contains a target visual scene element associated with multiple target virtual interaction scenes, or multiple target visual scene elements associated with different target virtual interaction scenes, the respective jump entries of the multiple target virtual interaction scenes are displayed on the image playing interface. In response to a selection event on a target jump entry among the jump entries, the target jump entry is used to jump from the image playing application to the corresponding virtual interaction application, and the target virtual interaction scene is displayed through the virtual interaction application.
In one embodiment, the method further comprises: and sending the played image to a server for identification through a development kit integrated in the image playing application, and obtaining an identification result corresponding to the image fed back by the server through the development kit.
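The development-kit round trip just described — the played image is sent through an integrated kit to a server for identification, and the result is fed back through the same kit — might look like the following sketch. The network call is replaced by an injected `transport` callable so the example is self-contained; `RecognitionSDK`, `fake_server`, and the request/response shapes are all assumptions.

```python
class RecognitionSDK:
    """Hypothetical development kit integrated in the image playing
    application; `transport` stands in for the call to the server."""
    def __init__(self, transport):
        self.transport = transport

    def recognize(self, image_bytes):
        # send the played image for identification, return the fed-back result
        response = self.transport({"image": image_bytes})
        return response.get("result")

def fake_server(request):
    """Pretend recognition server that knows a single image."""
    if request["image"] == b"\x89PNG-race-car":
        return {"result": {"scene": "racing_game", "link": "game://racing"}}
    return {"result": None}
```

In a real integration the transport would be an HTTP or RPC call; injecting it keeps the kit testable without a network.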
In this embodiment, an image playing interface is displayed through an image playing application. When an image is played in the image playing interface in the identification mode and a target visual scene element associated with a target virtual interaction scene exists in the played image, the image playing application jumps to the virtual interaction application matched with the target virtual interaction scene associated with the target visual scene element, and the target virtual interaction scene is displayed through the virtual interaction application. The current application can thus quickly jump to the virtual interaction application for a virtual interaction experience based on image identification, which further improves the user's understanding of the virtual interaction application and effectively improves its click-through rate and conversion rate.
In one embodiment, the virtual interactive application is a cloud application; displaying the target virtual interaction scene through the virtual interaction application comprises:
and receiving and displaying the video picture which is generated after cloud processing and aims at the target virtual interaction scene through the cloud application.
Specifically, when the virtual interactive application is a cloud application, the cloud encodes the running process of the target virtual interaction scene into an audio-video stream in real time, and feeds back the video picture corresponding to the audio-video stream to the cloud application in real time. The terminal loads the audio-video stream transmitted after cloud processing through the cloud application, so that the video picture for the target virtual interaction scene, generated from the audio-video stream, is displayed in the cloud application.
In this embodiment, receiving, by a cloud application, a video frame for a target virtual interaction scene generated after cloud processing, and displaying the video frame includes:
receiving and displaying, through a connection channel between the cloud application and the cloud, a video picture for the target virtual interaction scene generated after cloud processing; the connection channel connects to the cloud application simulator through a cloud application video stream parsing unit.
In this embodiment, the cloud processes the running of the target virtual interaction scene of the cloud application into an audio-video stream through cloud computing, and the corresponding video picture is displayed through the cloud application, realizing the interaction between the cloud and the user. The target virtual interaction scene runs in the cloud rather than on the terminal, effectively saving the terminal's computing and storage resources.
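Receiving the cloud-encoded stream and rendering only its video pictures can be sketched as a simple consumer loop over the connection channel. The chunk format and `display_cloud_stream` are assumptions for illustration, not the actual cloud-application protocol.

```python
def display_cloud_stream(chunks):
    """Consume audio-video chunks fed back by the cloud in real time and
    collect the video frames that the cloud application would display;
    `chunks` stands in for the connection channel to the cloud."""
    displayed = []
    for chunk in chunks:
        if chunk["type"] == "video":
            displayed.append(chunk["frame"])   # render this frame in the app
        # audio chunks would be routed to audio playback instead
    return displayed
```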
In one embodiment, the method further comprises:
and responding to an exit event triggered in the target virtual interaction scene, and jumping to the image playing interface from the target virtual interaction scene.
Specifically, the terminal displays the target virtual interaction scene through the virtual interaction application, and the user can trigger an exit operation on the target virtual interaction scene to exit the target virtual interaction scene. And the terminal responds to an exit event triggered in the target virtual interaction scene, jumps back to the image playing application from the virtual interaction application, and displays an image playing interface before the jump in the image playing application.
In this embodiment, the terminal, in response to an exit event triggered at the target virtual interaction scene, jumps from the target virtual interaction scene to the image playing interface in the recognition mode.
In this embodiment, when the virtual interactive application is a cloud application, the user may trigger an exit operation while the video picture for the target virtual interaction scene is displayed, so as to exit the cloud application that displays the video picture. The terminal, in response to the exit event triggered in the target virtual interaction scene, jumps back from the cloud application to the image playing application, and displays the image playing interface as it was before the jump in the image playing application.
In this embodiment, in response to an exit event triggered at the target virtual interaction scene, the image playing interface is skipped from the target virtual interaction scene, so that the image playing interface can be directly returned when the target virtual interaction scene exits, and the user can conveniently continue playing the image.
In one embodiment, a virtual interaction method is provided, applied to a terminal, including:
displaying an image playing interface through an image playing application, and displaying a target interaction element on the image playing interface; the target interaction element is used to trigger an identification pattern of a visual scene element associated with the virtual interaction scene.
Then, in response to a triggering operation on the target interactive element, an image playing interface in an identification mode is displayed.
Further, in the case of playing an image in the image playing interface in the recognition mode, at least one element category is displayed in the image playing interface in the recognition mode.
And, in response to a trigger event on a target element category among the at least one element category, displaying on the image playing interface in the identification mode an identification progress that follows the identification of the played image, and canceling the display of the identification progress when the identification of the image generates a corresponding identification result.
Then, under the condition that the identification result represents that the target visual field elements belonging to the target element category exist in the image, and at least one associated target virtual interaction scene exists in the target visual field elements, displaying a jump entrance of each target virtual interaction scene on the image playing interface in the identification mode.
Optionally, when the identification result characterizes that multiple target visual scene elements associated with different target virtual interaction scenes exist in the image, a jump entry of the corresponding target virtual interaction scene is displayed in the display area of each target visual scene element in the image playing interface in the identification mode.
Optionally, when the identification result characterizes that multiple target visual scene elements associated with different target virtual interaction scenes exist in the image, jump entries of the target virtual interaction scenes associated with the target visual scene elements are displayed on the image playing interface in the identification mode according to the priority order of the multiple target visual scene elements.
Further, in response to a selection event of a target jump portal in the plurality of jump portals, jumping to a virtual interactive application matched with a corresponding target virtual interactive scene from an image playing interface in an identification mode through the target jump portal, and displaying the target virtual interactive scene through the virtual interactive application.
Then, under the condition that the target virtual interaction scene is jumped from the image playing interface in the identification mode, the playing of the image is paused at the image playing interface in the identification mode, and the identification mode is exited.
Further, when the identification result represents that no target visual scene element associated with a target virtual interaction scene exists in the image, prompt information representing that no associated target virtual interaction scene exists is displayed on the image playing interface in the identification mode.
Further, in response to an exit event triggered at the target virtual interaction scenario, a jump is made from the target virtual interaction scenario to the image playback interface.
In this embodiment, an image playing interface is displayed through an image playing application, and a target interaction element for triggering an identification mode of visual scene elements associated with virtual interaction scenes is displayed on the image playing interface, so that the identification mode can be entered in response to a triggering operation on the target interaction element, and the image playing interface in the identification mode is displayed. Under the condition that an image is played in the image playing interface in the identification mode, whether a target visual scene element associated with a target virtual interaction scene exists in the played image is determined, and the target virtual interaction scene associated with the target visual scene element included in the image is accurately identified. When the played image contains a target visual scene element associated with multiple target virtual interaction scenes, or multiple target visual scene elements associated with different target virtual interaction scenes, the respective jump entries of the multiple target virtual interaction scenes are displayed on the image playing interface for the user to select. In response to a selection event on a target jump entry among the jump entries, the target jump entry is used to jump from the image playing application to the corresponding virtual interaction application, and the target virtual interaction scene is displayed through the virtual interaction application. The method can thus jump directly to the corresponding virtual interaction scene based on the identification of visual scene elements in the image, without the user having to exit the image playing application and then enter the virtual interaction application, so that the application of image identification is more flexible and the operation is simpler and faster.
And the image can be used as a carrier for propagation and conversion of the virtual interaction scene based on the jump of the target visual scene element in the image to the corresponding virtual interaction scene, so that the recommendation of the virtual interaction scene is realized based on the image, and the conversion rate of the virtual interaction scene based on the image is effectively improved.
In one embodiment, an application scenario of the interaction method is provided, which is specifically applied to a video playing scenario. The virtual interaction scene may be a game, in particular a cloud game. As shown in fig. 6, a user opens a video playing application, and videos under various categories are displayed on a video presentation interface 600 of the video playing application. The categories of video include, but are not limited to, featured, recommended, TV series, variety shows, children, games, and the like, and each category may have more detailed subcategories. For example, the games category may be further divided into game highlights, game peripherals, game events, and the like.
The video playing interface may be accessed by clicking on a particular video. The operator of the video playing application sets the identifiable mode on some videos in the background, so that a "game crossing" button is displayed on the playing interface of those videos; for videos without the identifiable switch, the "game crossing" button is not displayed on the video playing interface.
As shown in fig. 7A, when the user clicks on the game video in which the recognizable mode is set, playing of the video is started at the video playing interface 700, and a "game crossing" button 702 is displayed, which is the target interactive element. The video playback interface 700 also displays recommended videos related to the video being played.
During video playback, the user activates the "game crossing" button, and the video playing interface 700 in the recognition mode is displayed. When video is played in the video playing interface 700 in the recognition mode, recognition of the image 704 being played in the video is triggered. Specifically, the relevant server performs the image recognition through text recognition and a classification model based on a CNN neural network, so as to recognize the game features that best match the image, searches the game library for the corresponding cloud games according to the game features, and returns a recognition result.
In the process of recognizing the image 704, as in fig. 7B, a game recognition animation 706 is presented on the video playback interface 700 in the recognition mode. When the recognition ends, the display of the game recognition animation 706 is canceled.
When the identification result indicates that a corresponding cloud game exists and there is only one, a new window opens and the interface jumps directly from the video playing interface 700 in the identification mode to the associated cloud game, where the user can log in with an account to experience it.
When the identification result indicates that corresponding cloud games exist and there are multiple, a game list 708 is displayed on the video playing interface 700 in the identification mode, and the game name of each cloud game is displayed in the game list 708. The game list may also display a thumbnail, a brief introduction, a link address, and the like for each cloud game, and the user can jump from the video playing interface to a new window and enter a cloud game by clicking on it. For example, clicking on "game 2" jumps to a new window and logs into "game 2", displaying the game interface 712 of "game 2".
When the identification result indicates that the corresponding cloud game does not exist, a prompt message 710 with an explicit prompt meaning such as "no game is identified" is returned and displayed in the video playing interface 700 in the identification mode.
When the cloud game is skipped, the video played in the video playing interface 700 in the identification mode is paused, the video playing interface in the paused state is displayed, the "game crossing" button of the video playing interface is restored to the state before identification, and the game list disappears.
The user may trigger an exit button in the cloud game to exit the cloud game and return to the video playback interface or to the video playback interface in the recognition mode.
In this embodiment, the interaction method is applied to the video playing scenario to realize cloud game traversal based on recognition of the video scene. The image content of the game video is recognized dynamically, and related cloud games are recommended according to the recognition result, providing a function of jumping directly to a cloud game. This makes the application of image recognition more flexible, turns images in videos into a carrier for secondary stimulation of conversion and consumption, effectively improves the guidance and conversion from videos to games, and provides more timely user response. It can also benefit the experience and conversion of cloud games.
In one embodiment, an application scenario of the interaction method is further provided, which is specifically applied to a live broadcast scenario. Live broadcast refers to a mode of publishing information over a network with a bidirectional flow, in which the information is produced and released synchronously as an event occurs and unfolds on site.
In the live broadcast scene, the image playing interface is the live broadcast playing interface, and the played image is the real-time image displayed on the live broadcast playing interface.
In one embodiment, as shown in fig. 8, an interaction method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
Step S802, receiving an image played in an image playing interface on a terminal in an identification mode for visual scene elements; the visual scene elements are associated with corresponding virtual interaction scenes.
Specifically, the terminal displays an image playing interface, and displays a target interaction element on the image playing interface; the target interaction element is used to trigger an identification pattern of a visual scene element associated with the virtual interaction scene. And the terminal responds to the triggering operation of the target interaction element and displays the image playing interface in the identification mode. In the case of playing an image in the image playing interface in the recognition mode, the terminal transmits the played image to the server. The server receives the image transmitted by the terminal.
Step S804, image recognition is carried out on the image, and recognition text corresponding to the target visual scene element in the image is obtained.
Specifically, the image comprises a plurality of visual elements, the server identifies each visual element, and at least one identification text corresponding to each visual element is obtained. When the identification text points to the target virtual interaction scene, the visual element corresponding to the identification text is the target visual scene element.
In step S806, when the identification text points to the target virtual interaction scene, a link address corresponding to the identification text and pointing to the target virtual interaction scene is obtained.
Specifically, the server performs matching processing on the identification text and scene information of each virtual interaction scene, and when the identification text is successfully matched with at least one piece of scene information, the virtual interaction scene corresponding to the successfully matched scene information is used as a target virtual interaction scene pointed by the identification text. And under the condition that the matching is successful, the server acquires the link address corresponding to the target virtual interaction scene.
When the matching of the target virtual interaction scene fails, the identification of this image ends, and the next image sent by the terminal is identified.
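The matching step of S806 — compare the identification text against the scene information of each virtual interaction scene, return the link address on success, and return nothing on failure so the server moves on to the next image — can be sketched as follows. The scene table, the substring-matching rule, and the link addresses are placeholders, not the disclosed matching process.

```python
# Hypothetical scene information table: scene name -> link address
SCENE_INFO = {
    "racing game": "https://example.invalid/racing",
    "fantasy adventure": "https://example.invalid/fantasy",
}

def match_link_address(recognition_text):
    """Match the identification text against each scene's information;
    return the link address pointing to the target virtual interaction
    scene on success, or None when matching fails."""
    text = recognition_text.lower()
    for scene_name, link in SCENE_INFO.items():
        if text in scene_name or scene_name in text:
            return link
    return None  # matching failed: finish this image, await the next one
```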
Step S808, a link address is returned to the terminal, wherein the link address is used for indicating the terminal to jump from the image playing interface to the target virtual interaction scene.
Specifically, the server returns a link address of the target virtual interaction scene to the terminal, and the terminal receives the link address and jumps to the target virtual interaction scene from the image playing interface in the identification mode through the link address.
In this embodiment, the terminal receives the link address, and presents the link address in the form of a jump entry on the image playing interface.
In this embodiment, an image played in an image playing interface in an identification mode of a visual scene element on a terminal is received, an identification text corresponding to a target visual scene element in the image is obtained by performing image identification on the image, when the identification text points to a target virtual interaction scene, which indicates that the target visual scene element in the image has an associated target virtual interaction scene, a link address corresponding to the identification text and pointing to the target virtual interaction scene is obtained and returned to the terminal, so that the terminal can directly jump from the image playing interface to the target virtual interaction scene through the link address, the application of image identification is more flexible, and the jump operation is simpler and more flexible. And the method directly jumps to the corresponding virtual interaction scene based on the target visual scene element in the image, and can take the image as a carrier for propagation and conversion of the virtual interaction scene, thereby realizing recommendation of the virtual interaction scene based on the image and effectively improving the conversion rate of the virtual interaction scene based on the image.
In one embodiment, performing image recognition on an image to obtain a recognition text corresponding to a target visual scene element in the image, including:
determining a data type corresponding to the image, and carrying out image recognition on the image according to a recognition mode corresponding to the data type to obtain a recognition text corresponding to the target visual scene element in the image.
The data types corresponding to the images can comprise image array types and character types. The character type indicates the presence of text in the image. Data belonging to the image data type is used to form an image, and data belonging to the character type forms text in the image.
Specifically, the server may acquire image data corresponding to the image and determine the data type corresponding to the image data. The data types include the image array type, where data belonging to the image array type is used to form the image. The data types may also include the character type, where data belonging to the character type forms the text in the image.
When the data type of the image includes the character type, the server may perform image recognition on the image according to the recognition mode corresponding to the character type, obtaining the recognition text corresponding to the target visual scene element in the image.
When the data type of the image includes only the image array type, the server may perform image recognition on the image according to the recognition mode corresponding to the image array type, obtaining the recognition text corresponding to the target visual scene element in the image.
In this embodiment, the recognition mode corresponding to the character type may be text recognition, and the recognition mode corresponding to the image array type may be recognition through a classification model. The classification model is a model which is trained in advance and used for classifying the visual scene elements in the image.
In this embodiment, the data type corresponding to the image is determined, and the image is identified according to the identification mode corresponding to the data type, so that the image can be identified in a targeted manner according to different data types by using different identification modes, so that the identification of the visual scene element is more accurate, and the identification text corresponding to the target visual scene element in the image is accurately obtained.
In one embodiment, according to an identification mode corresponding to a data type, performing image identification on an image to obtain an identification text corresponding to a target visual scene element in the image, including:
when the data type of the image includes the character type, performing character recognition on the target visual scene element including characters in the image to obtain the recognition text corresponding to the target visual scene element; and when the data type of the image includes the image array type, classifying the target visual scene elements in the image through a classification model to obtain the recognition text corresponding to the target visual scene elements.
Specifically, when the data type of the image includes the character type, the server may determine the target visual scene element including characters in the image and perform text recognition (OCR, Optical Character Recognition) on it, obtaining the recognition text corresponding to the target visual scene element. When the data type of the image includes only the image array type, the image can be input into a classification model; the classification model determines the target visual scene elements in the image and classifies them to obtain the recognition texts corresponding to the target visual scene elements.
In this embodiment, when the data type of the image includes an image array type and a character type, the server may classify the target visual scene elements in the image through the classification model, and perform text recognition on the target visual scene elements including characters in the image, to obtain recognition texts corresponding to the target visual scene elements.
In this embodiment, when the data type of the image includes a character type, text recognition is performed on the target visual scene element including the character in the image to obtain a recognition text corresponding to the target visual scene element, so that the associated target virtual interaction scene can be accurately determined according to the character existing in the image. When the data types of the images comprise image array types, classifying the target visual scene elements in the images through the classifying model, and accurately obtaining the identification text corresponding to the target visual scene elements in the images, so that the target virtual interaction scene associated with the target visual scene elements in the images can be accurately obtained.
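As a minimal sketch of the dispatch described above, the following Python fragment routes an image to text recognition, to a classification model, or to both, depending on which data types are present. The `ocr_recognize` and `classify_elements` functions are hypothetical stand-ins for a real OCR engine and a pre-trained classifier, not actual library calls:

```python
def ocr_recognize(text_elements):
    # Stand-in for OCR on target visual scene elements that contain characters.
    return [e.strip() for e in text_elements]

def classify_elements(pixel_array):
    # Stand-in for the pre-trained classification model on the image array.
    return ["game-character"] if pixel_array else []

def recognize(image):
    """image: dict with an optional 'text_elements' list (character type)
    and a 'pixels' array (image array type)."""
    texts = []
    if image.get("text_elements"):   # character type present: run text recognition
        texts += ocr_recognize(image["text_elements"])
    if image.get("pixels"):          # image array type: classify image content
        texts += classify_elements(image["pixels"])
    return texts
```

When both data types are present, both recognition modes run and their recognition texts are merged, mirroring the combined case described above.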
In one embodiment, the method further comprises: matching the recognition text with a preset word list to form a target recognition text with a preset format;
when the identification text points to the target virtual interaction scene, acquiring a link address which corresponds to the identification text and points to the target virtual interaction scene, wherein the link address comprises the following components: and when the target identification text points to the target virtual interaction scene, acquiring a link address which corresponds to the target identification text and points to the target virtual interaction scene.
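The word-list matching step can be sketched as follows, under the assumption that the preset word list holds canonical names in the preset format; the function and the sample list are illustrative, not part of the patented method's concrete implementation:

```python
PRESET_WORD_LIST = ["Game AA", "Game BB"]  # illustrative preset word list

def to_target_text(recognition_text, word_list=PRESET_WORD_LIST):
    """Match raw recognition text against the preset word list and return
    the canonical entry (the target recognition text in the preset format),
    or None when no entry from the list occurs in the text."""
    cleaned = recognition_text.strip().lower()
    for entry in word_list:
        if entry.lower() in cleaned:
            return entry
    return None
```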
In one embodiment, when the identification text points to the target virtual interaction scene, obtaining the link address corresponding to the identification text and pointing to the target virtual interaction scene includes:
performing matching processing in a scene information base based on the identification text, wherein the scene information base includes scene information of a plurality of virtual interaction scenes; and, in the case that the identification text matches at least one piece of target scene information in the scene information base, acquiring a link address which corresponds to the at least one piece of target scene information and points to a target virtual interaction scene.
Specifically, the scene information base includes scene information corresponding to each of the plurality of virtual interaction scenes. The server may match the identification text with each of the scene information in the scene information base to determine target scene information that matches the identification text.
In the case that the recognition text matches at least one piece of target scene information in the scene information base, a link address corresponding to the at least one piece of target scene information is acquired, where the link address points to the corresponding target virtual interaction scene.
In this embodiment, when the identification text matches at least one piece of target scene information in the scene information base, obtaining the link address pointing to the target virtual interaction scene corresponding to the at least one piece of target scene information includes:
when the identification text matches at least one piece of target scene information in the scene information base, acquiring, at the cloud, the link address corresponding to the target virtual interaction scene pointed to by the at least one piece of target scene information. The target virtual interaction scene runs in the cloud.
In this embodiment, matching processing is performed, based on the identification text, in a scene information base that includes scene information of multiple virtual interaction scenes. When the identification text matches at least one piece of target scene information in the scene information base, the link address pointing to the corresponding target virtual interaction scene is obtained, so that the link address of the target virtual interaction scene can be returned to the terminal, and the terminal can quickly jump from the image playing interface to the target virtual interaction scene.
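A sketch of the scene-information-base matching: the base maps scene information (here simply scene names) to cloud link addresses, and every scene whose information matches the recognition text contributes a link. The data and URLs below are illustrative placeholders:

```python
SCENE_INFO_BASE = {
    # scene information: scene name -> link address (illustrative data)
    "Game AA": "https://cloud.example.com/play/game-aa",
    "Game BB": "https://cloud.example.com/play/game-bb",
}

def match_link_addresses(recognition_text):
    """Return the link addresses of every target virtual interaction scene
    whose scene information matches the recognition text."""
    lowered = recognition_text.lower()
    return [url for name, url in SCENE_INFO_BASE.items()
            if name.lower() in lowered]
```

Zero, one, or several link addresses may come back, which is exactly the case distinction the information processor handles on the terminal side.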
In one embodiment, an interaction method is provided, which is applied to a terminal and a server. Through an embedded software development kit (Software Development Kit, SDK), the terminal dynamically recognizes game-related content in the images of a video and provides a function for jumping directly into the cloud game.
The logic flow of the terminal is shown in fig. 9, and the SDK includes a video image acquirer 4, an interaction trigger 2, an information processor 7, and a network request transceiver 5.
The interaction button 1 is the target interaction element. It provides the interaction trigger button, its UI style and interaction states, and the search-effect animation played after the button is clicked. The search-effect animation represents the recognition progress.
The interaction trigger 2 is used to execute the video image acquisition action according to the event detected in real time after the interaction button 1 is triggered. That is, the video image acquirer 4 projects the video image currently played by the video player 3 into a virtual container and converts it, pixel by pixel, into a binary image array, while controlling the video player 3 to pause playback during acquisition.
The video player 3 is used for normal display of video images of the video playing interface and real-time detection and response of the interaction trigger 2 to dispatch the video playing event and the video pause event.
The video image acquirer 4 acquires the image played by the video player 3 at the event dispatch time into a virtual container according to the action of the interaction trigger 2, and then converts the image into a binary image array using a W3C (World Wide Web Consortium) binary conversion method.
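In the browser this conversion goes through the W3C canvas APIs; the following stdlib-only Python sketch mirrors the idea of packing a frame, pixel by pixel, into a flat binary array suitable for network transmission. The frame representation here is an assumption for illustration:

```python
import struct

def frame_to_binary_array(frame_rows):
    """Pack an RGB frame (a list of rows of (r, g, b) tuples) into a flat
    binary byte array, pixel by pixel — an illustrative stand-in for the
    canvas-based binary conversion performed on the terminal."""
    out = bytearray()
    for row in frame_rows:
        for r, g, b in row:
            out += struct.pack("BBB", r, g, b)  # one byte per channel
    return bytes(out)
```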
The network request transceiver 5 transmits the binary image array over a network protocol to a server, namely the game identification server 6, which may be an AI (Artificial Intelligence) game identification server.
The AI game recognition server is used for image search and game matching. If a match is found, the recognition result is returned to the information processor in a format similar to { state: 0, data: [ { name: "XX game", url: "cloud game link", img: icon } ] }, where the length of data indicates how many results there are, state equal to 0 indicates a normal response, and the URL (Uniform Resource Locator) is the link address corresponding to the cloud game in the cloud, i.e. on the cloud game server 9.
The information processor 7 is configured to act on the recognition result returned by the game recognition server 6: if there is exactly 1 recognition result, the link address of the cloud game is triggered to directly open a new window and enter the cloud game; if there are no results, a prompt indicating that no result was found is displayed; and if there is more than 1 recognition result, the scene list 8 is triggered to display the matched cloud game results.
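The information processor's three-way branch on the result format shown above can be sketched as follows; the returned action labels are illustrative names, not real API calls:

```python
def handle_recognition_result(result):
    """Mirror the information processor's branches on the recognition
    result {state, data}: error, no result, one result, many results."""
    if result.get("state") != 0:
        return ("error", None)                 # abnormal response
    data = result.get("data", [])
    if not data:
        return ("show_not_found_prompt", None) # no matched cloud game
    if len(data) == 1:
        return ("open_cloud_game", data[0]["url"])  # jump directly
    return ("show_scene_list", data)           # let the user choose
```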
And the scene list 8 is used for displaying the matched identification result on the image playing interface according to the processing signal of the information processor 7.
The cloud game server 9 is configured to enter a corresponding cloud game when a cloud game result in the scene list 8 is triggered.
FIG. 10 is a schematic logic flow diagram of an AI game recognition server in one embodiment.
The input data processor checks the image data received over the network and judges, according to its data type, whether the image data is an image array or carries text input. If the data type is an image array or carries text input, the data enters the type splitter. The data type with text input is the character type.
The type splitter is used to split the image data according to the data type. Image data with text input can be divided into visual elements that include characters in the image and character strings representing subtitles in the image. Visual elements that include characters undergo OCR (Optical Character Recognition), and the recognition text obtained from character recognition then undergoes text matching to produce a recognition text in a preset format. Character strings representing subtitles enter the text matching process directly to obtain a recognition text in the preset format; for example, text matching on "the game is Game AA" yields the recognition text "Game AA". If the text matching succeeds, subsequent game retrieval is performed. Image array data without text input undergoes image content recognition through the AI classification model.
An OCR recognizer is used for recognizing characters in the image into character strings.
Character matching is used to map a recognized character string to a game name, or a game element character string to its corresponding game name, and to perform game information processing when a matching game name string is found.
The game library search is used to retrieve game information from the game library according to the matched game name and return the cloud game information to the requester for data output; if no cloud game information is found, a missing-data state is returned, i.e. data: [].
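A minimal sketch of the game library lookup and its missing-data return; the library contents and URL are illustrative placeholders, and the response shape follows the { state, data } format described earlier:

```python
GAME_LIBRARY = {
    # illustrative game library entry: matched game name -> cloud game info
    "Game AA": {"name": "Game AA", "url": "https://cloud.example.com/play/game-aa"},
}

def search_game_library(game_name):
    """Look up cloud game information by the matched game name; when
    nothing is found, return the missing-data shape with data: []."""
    info = GAME_LIBRARY.get(game_name)
    return {"state": 0, "data": [info] if info else []}
```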
In this embodiment, the classification model does not classify directly into specific game names, but into specific sub-elements in a game, such as game characters, game arenas, game equipment, and the like, which are then mapped to the corresponding games according to the matched sub-elements to establish the association. The training and inference process of the classification model is shown in fig. 11.
The training of the classification model mainly comprises loading game content with an AI neural network inference engine to perform matching calculation, performing layer-by-layer neural network computation on sample visual elements in sample images, and outputting the sub-element results matched with those sample visual elements. Specifically, sample images (image-formatted data) are obtained through manual annotation, and supervised machine learning with a neural network, i.e. annotation training, is carried out on them, so that after training, any input picture yields the corresponding game content classification from the classification word list as output. That is, a sample visual element X in a sample image is input, and a classification result Y (the recognition result) corresponding to X is output through a mapping F. For example, a picture X is mapped under F to a character Y in game A; the picture X is the sample and the character Y is the label, so F1 can be derived from the pair (X, Y). F1 is then called a model, and repeatedly fitting more samples yields Fn; the process from deriving F1 to deriving Fn is called model training. Finally, given an image, predicting its classification result with the trained Fn is called model inference.
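The sub-element-to-game mapping that follows classification can be sketched as a simple lookup; the table entries are illustrative stand-ins for the relationships established during training:

```python
SUB_ELEMENT_TO_GAME = {
    # illustrative mapping from classified sub-elements to their games
    "character Y": "Game A",
    "arena Z": "Game A",
    "sword W": "Game B",
}

def infer_games(predicted_sub_elements):
    """Map classifier outputs (game sub-elements) to the games they belong
    to, preserving first-seen order and skipping unknown elements."""
    games = []
    for elem in predicted_sub_elements:
        game = SUB_ELEMENT_TO_GAME.get(elem)
        if game and game not in games:
            games.append(game)
    return games
```

Classifying into sub-elements rather than game names lets one model recognize a game from any of its distinctive elements, and new games can be supported by extending the mapping instead of retraining for every title.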
In this embodiment, the cloud game direct-access technology based on video scene recognition can dynamically recognize image content in a game video, recommend related cloud games according to the recognition results, and provide a function of jumping directly to the cloud game, so that the video and the images in it become carriers for secondary excitation, conversion, and consumption. This can effectively improve the guidance and conversion from video to game, provide more timely user response, and also benefit cloud streaming experience and conversion.
It should be understood that, although the steps in the flowcharts related to the above embodiments are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an interaction device for realizing the above-mentioned interaction method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the interaction device provided below may be referred to the limitation of the interaction method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 12, an interaction apparatus 1200 is provided, which may employ software modules or hardware modules, or a combination of both, as part of a computer device, the apparatus specifically comprising:
the interface display module 1202 is configured to display an image playing interface, and display a target interaction element on the image playing interface; the target interaction element is used to trigger an identification pattern of a visual scene element associated with the virtual interaction scene.
The state display module 1204 is configured to display an image playing interface in the recognition mode in response to a triggering operation on the target interaction element.
A skip module 1206, configured to jump from the image playing interface to the target virtual interaction scene associated with a target visual scene element when the target visual scene element associated with the target virtual interaction scene exists in the played image in the image playing interface in the recognition mode.
In this embodiment, an image playing interface is displayed, and a target interaction element for triggering the identification mode of visual scene elements associated with virtual interaction scenes is displayed on the image playing interface, so that the identification mode can be entered in response to a triggering operation on the target interaction element and the image playing interface in the identification mode is displayed. When an image is played in the image playing interface in the identification mode, whether the played image contains a target visual scene element associated with a target virtual interaction scene is judged, so that the target virtual interaction scene associated with the target visual scene element can be accurately identified. When such a target visual scene element exists in the played image, jumping from the image playing interface to its associated target virtual interaction scene means the corresponding virtual interaction scene can be reached directly based on identification of the visual scene elements in the image, making the application of image identification more flexible. Moreover, jumping directly to the corresponding virtual interaction scene based on the target visual scene element in the image allows the image to serve as a carrier for propagation and conversion of the virtual interaction scene, thereby realizing image-based recommendation of virtual interaction scenes and effectively improving their conversion rate.
In one embodiment, the skip module 1206 is further configured to display at least one element category on the image playing interface in the case of playing the image on the image playing interface in the recognition mode; and, in response to a trigger event for a target element category in the at least one element category, jump from the image playing interface to the target virtual interaction scene associated with a target visual scene element in the event that the target visual scene element, belonging to the target element category and associated with the target virtual interaction scene, exists in the played image.
In this embodiment, when an image is played in the image playing interface in the recognition mode, at least one element category is displayed on the image playing interface to provide multiple recognition modes distinguished by element category, diversifying the selection modes. In response to a trigger event for a target element category in the at least one element category, under the condition that a target visual scene element belonging to the target element category exists in the played image and has an associated target virtual interaction scene, the image playing interface jumps to the target virtual interaction scene associated with the target visual scene element, so that selective identification can be carried out by element category, and the accuracy of identifying the target virtual interaction scene is improved through elements of the same kind.
In one embodiment, the skip module 1206 is further configured to, while the image is played in the image playing interface in the recognition mode, respond to a selection event of a visual element in the played image and, when the selected visual element includes a target visual scene element associated with a target virtual interaction scene, jump from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
In this embodiment, more independent options are provided to the user in the case of playing an image in the image playing interface in the recognition mode. And in response to a selection event of a visual element in the played image, when the selected visual element comprises a target visual scene element associated with a target virtual interaction scene, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element, so that accurate identification can be performed on the target visual scene element selected by the user, and automatically jumping to a corresponding virtual interaction scene to perform virtual interaction experience, and the operation is simpler and more flexible.
In one embodiment, the skip module 1206 is further configured to display at least one skip channel option for the target virtual interactive scene at the image playback interface; in response to a jump trigger event for a target jump channel option of the at least one jump channel options, jumping from the image playback interface to the target virtual interactive scene through the target jump channel indicated by the target jump channel option.
In this embodiment, at least one jump channel option for the target virtual interactive scene is displayed on the image playing interface, and at least one option capable of jumping to the target virtual interactive scene is provided for the user. And in response to a jump trigger event aiming at a target jump channel option in at least one jump channel option, jumping to a target virtual interaction scene from an image playing interface through the target jump channel indicated by the target jump channel option, so that a user can select a channel suitable for the user to jump, and the jump mode is more flexible.
In one embodiment, the skip module 1206 is further configured to, in a case where an image is played in the image playing interface in the recognition mode, display, in the image playing interface, respective jump entries of a plurality of target virtual interaction scenes when there are target visual scene elements associated with the plurality of target virtual interaction scenes in the played image; and, in response to a selection event of a target jump entry among the plurality of jump entries, jump from the image playing interface to the corresponding target virtual interaction scene through the target jump entry.
In this embodiment, when an image is played in the image playing interface in the recognition mode and there are target visual scene elements associated with a plurality of target virtual interaction scenes in the played image, the respective jump entries of the plurality of target virtual interaction scenes are displayed on the image playing interface for the user to select. In response to a selection event of a target jump entry among the plurality of jump entries, the image playing interface can accurately jump, through the target jump entry, to the corresponding target virtual interaction scene, accommodating the possibility that the user may wish to further explore or experience a virtual interaction scene after it is identified.
In one embodiment, the skip module 1206 is further configured to, in a case where an image is played in the image playing interface in the recognition mode, display a skip entry of the corresponding target virtual interaction scene for a display area of each target visual scene element in the image playing interface when there are multiple target visual scene elements associated with different target virtual interaction scenes in the played image.
In this embodiment, when an image is played in an image playing interface in an identification mode, when a plurality of target visual scene elements associated with virtual interaction scenes of different targets exist in the played image, a jump entry of the corresponding target virtual interaction scene is displayed in the image playing interface for a display area of each target visual scene element, so that the virtual interaction scene associated with each visual scene element can be intuitively displayed, and user selection is facilitated.
In one embodiment, the skip module 1206 is further configured to, in a case where an image is played in the image playing interface in the recognition mode, display, in the image playing interface, the jump entry of the target virtual interaction scene associated with each target visual scene element according to the priority order of the plurality of target visual scene elements, when the plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image.
In this embodiment, when an image is played in the image playing interface in the recognition mode and multiple target visual scene elements associated with different target virtual interaction scenes exist in the played image, the jump entry of the target virtual interaction scene associated with each target visual scene element is displayed in the image playing interface according to the priority order of the multiple target visual scene elements. The higher an entry's priority, the greater the likelihood it will be selected by the user, and displaying entries by priority improves the convenience of user selection.
In one embodiment, the skip module 1206 is further configured to display, in a case that the image is played in the image playing interface in the recognition mode, the recognition progress generated by recognizing the played image; and, when recognizing the image generates a corresponding recognition result which represents that a target visual scene element associated with a target virtual interaction scene exists in the image, jump from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
In this embodiment, when an image is played in the image playing interface in the recognition mode, the recognition progress generated by recognizing the played image is displayed on the image playing interface, so as to intuitively prompt the user with the recognition progress. When recognizing the image generates a corresponding recognition result which represents that a target visual scene element associated with a target virtual interaction scene exists in the image, the image playing interface jumps to the target virtual interaction scene associated with the target visual scene element, so that the jump can be realized effectively and quickly based on the identified target virtual interaction scene.
In one embodiment, the apparatus further comprises a prompt module; the prompt module is used for displaying, on the image playing interface, prompt information indicating that there is no associated target virtual interaction scene, when recognizing the image generates a corresponding recognition result which represents that no target visual scene element associated with a target virtual interaction scene exists in the image.
In this embodiment, when recognizing the image generates a corresponding recognition result which represents that no target visual scene element associated with a target virtual interaction scene exists in the image, prompt information indicating that there is no associated target virtual interaction scene is displayed on the image playing interface, providing an explicit recognition result. Moreover, presenting the with-association and without-association recognition results in different forms can make the recognition more engaging.
In one embodiment, the apparatus further comprises a pause module; the pause module is used for pausing the playing of the image on the image playing interface and exiting the identification mode under the condition that the image playing interface is jumped to the target virtual interaction scene associated with the target visual scene element.
In this embodiment, when the user jumps from the image playing interface to the target virtual interactive scene associated with the target visual scene element, the playing of the image is paused at the image playing interface to stay at the playing position before the jump, so that the user can continue playing from the position before the jump when the user returns to the interface. And moreover, the automatic exit of the recognition mode after the jump can avoid the waste of operation resources caused by image recognition after the pause.
In one embodiment, interface display module 1202 is further configured to display an image playback interface via an image playback application;
the skip module 1206 is further configured to jump from the image playing application to the virtual interactive application matched with the target virtual interaction scene associated with a target visual scene element when the target visual scene element associated with the target virtual interaction scene exists in the played image, in the case that the image is played in the image playing interface in the recognition mode;
the device also comprises a scene display module, wherein the scene display module is used for displaying the target virtual interaction scene through the virtual interaction application.
In this embodiment, an image playing interface is displayed through an image playing application. When an image is played in the image playing interface in the identification mode and a target visual scene element associated with a target virtual interaction scene exists in the played image, the image playing application jumps to the virtual interactive application matched with the target virtual interaction scene associated with the target visual scene element, and the target virtual interaction scene is displayed through the virtual interactive application. Thus the current application can quickly jump, based on image identification, to the virtual interactive application for a virtual interaction experience, further improving the user's understanding of the virtual interactive application and thereby effectively improving its click rate and conversion rate.
In one embodiment, the virtual interaction application is a cloud application; the scene display module is further configured to receive and display, through the cloud application, the video picture for the target virtual interaction scene generated after cloud processing.
In this embodiment, through cloud computing, the cloud processes the running process of the cloud application's target virtual interaction scene into an audio and video stream, and the corresponding video picture is displayed through the cloud application, realizing interaction between the cloud and the user. The target virtual interaction scene runs in the cloud rather than on the terminal, effectively saving the terminal's computing and storage resources.
In one embodiment, the skip module 1206 is further configured to jump from the target virtual interaction scene to the image playing interface in response to an exit event triggered at the target virtual interaction scene.
In this embodiment, in response to an exit event triggered at the target virtual interaction scene, the method jumps from the target virtual interaction scene back to the image playing interface, so that the image playing interface is returned to directly when the target virtual interaction scene is exited, making it convenient for the user to continue playing the image.
In one embodiment, as shown in fig. 13, an interaction device 1300 is provided, which may employ software modules or hardware modules, or a combination of the two, as part of a computer device; the device specifically comprises:
The receiving module 1302 is configured to receive an image played in an image playing interface of the terminal in a recognition mode for visual scene elements; the visual scene elements are associated with corresponding virtual interaction scenes.
The recognition module 1304 is configured to perform image recognition on the image to obtain a recognition text corresponding to the target visual scene element in the image.
The obtaining module 1306 is configured to obtain, when the recognition text points to a target virtual interaction scene, a link address corresponding to the recognition text and pointing to the target virtual interaction scene.
The return module 1308 is configured to return the link address to the terminal, where the link address is used to instruct the terminal to jump from the image playing interface to the target virtual interaction scene.
In this embodiment, an image played in an image playing interface on a terminal in a recognition mode for visual scene elements is received, and image recognition is performed on the image to obtain a recognition text corresponding to the target visual scene element in the image. When the recognition text points to a target virtual interaction scene, indicating that the target visual scene element in the image has an associated target virtual interaction scene, a link address corresponding to the recognition text and pointing to the target virtual interaction scene is obtained and returned to the terminal, so that the terminal can jump directly from the image playing interface to the target virtual interaction scene through the link address. This makes the application of image recognition more flexible and the jump operation simpler. Moreover, jumping directly to the corresponding virtual interaction scene based on the target visual scene element in the image allows the image to serve as a carrier for propagation and conversion of virtual interaction scenes, thereby realizing image-based recommendation of virtual interaction scenes and effectively improving their image-based conversion rate.
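For illustration only, and not as part of the claimed implementation, the server-side flow of this embodiment — receive the played image, recognize the target visual scene element as text, look the text up in a scene information library, and return a link address pointing to the target virtual interaction scene — can be sketched as follows. The recognizer stub, the library contents, and the `app://` address format are hypothetical placeholders.

```python
# Hypothetical sketch of the server-side interaction flow described above.
# recognize_text() stands in for any OCR/classification backend; the scene
# information library is modeled as a simple dict.

SCENE_INFO_LIBRARY = {
    # recognition text -> link address pointing to the virtual interaction scene
    "adventure island": "app://virtual-scene/adventure-island",
    "racing arena": "app://virtual-scene/racing-arena",
}

def recognize_text(image_bytes: bytes) -> str:
    """Placeholder for image recognition of the target visual scene element."""
    # A real system would run OCR or a classification model here.
    return image_bytes.decode("utf-8", errors="ignore").strip().lower()

def handle_image(image_bytes: bytes):
    """Return the link address for the recognized scene, or None if no match."""
    text = recognize_text(image_bytes)
    return SCENE_INFO_LIBRARY.get(text)

# The terminal would use the returned address to jump from the image
# playing interface to the target virtual interaction scene.
print(handle_image(b"Adventure Island"))   # -> app://virtual-scene/adventure-island
print(handle_image(b"unknown banner"))     # -> None
```

In a deployed system the dict lookup would be replaced by the matching process against the scene information library described below; the structure of the flow is unchanged.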
In one embodiment, the recognition module 1304 is further configured to determine the data type corresponding to the image, and to perform image recognition on the image according to the recognition manner corresponding to that data type, so as to obtain the recognition text corresponding to the target visual scene element in the image.
In this embodiment, the data type corresponding to the image is determined, and the image is recognized according to the recognition manner corresponding to that data type. Different recognition manners can thus be applied in a targeted way to different data types, making recognition of visual scene elements more accurate and allowing the recognition text corresponding to the target visual scene element in the image to be obtained precisely.
In one embodiment, the recognition module 1304 is further configured to: when the data type of the image includes a character type, perform character recognition on a target visual scene element in the image that contains characters, to obtain the recognition text corresponding to the target visual scene element; and when the data type of the image includes an image array type, classify the target visual scene element in the image through a classification model, to obtain the recognition text corresponding to the target visual scene element.
In this embodiment, when the data type of the image includes a character type, character recognition is performed on the target visual scene element in the image that contains characters to obtain the corresponding recognition text, so that the associated target virtual interaction scene can be determined accurately from the characters present in the image. When the data type of the image includes an image array type, the target visual scene element in the image is classified through a classification model to accurately obtain the corresponding recognition text, so that the target virtual interaction scene associated with the target visual scene element in the image can be obtained accurately.
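As a hedged sketch of the type-dependent dispatch this embodiment describes — character recognition for character-type data, a classification model for image-array-type data — the selection logic might be organized as below. Both recognizer bodies are placeholder stubs, and the type names `"character"` and `"image_array"` are assumptions for illustration only.

```python
# Illustrative dispatch of the recognition manner by data type. A real system
# would call an OCR engine and a trained classification model respectively.

def ocr_recognize(data) -> str:
    """Stub: character recognition on a visual scene element containing text."""
    return str(data).strip()

def classify(data) -> str:
    """Stub: classification model mapping an image array to a label text."""
    # e.g. argmax over model logits in a real implementation
    return "racing arena"

def recognize_by_type(data, data_type: str) -> str:
    """Pick the recognition manner that matches the image's data type."""
    if data_type == "character":
        return ocr_recognize(data)
    if data_type == "image_array":
        return classify(data)
    raise ValueError(f"unsupported data type: {data_type}")

print(recognize_by_type("  Adventure Island ", "character"))  # -> Adventure Island
```

The point of the dispatch is only that each data type gets its own recognition path; the two stubs are interchangeable with any concrete OCR or classifier backend.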
In one embodiment, the obtaining module 1306 is further configured to perform matching processing in a scene information library based on the recognition text, where the scene information library includes scene information of a plurality of virtual interaction scenes; and, in the case that the recognition text matches at least one piece of target scene information in the scene information library, obtain a link address pointing to the target virtual interaction scene and corresponding to the at least one piece of target scene information.
In this embodiment, matching processing is performed, based on the recognition text, in a scene information library that includes scene information of multiple virtual interaction scenes. In the case that the recognition text matches at least one piece of target scene information in the scene information library, a link address pointing to the target virtual interaction scene and corresponding to the at least one piece of target scene information is obtained, so that the link address of the target virtual interaction scene can be returned to the terminal and the terminal can quickly jump from the image playing interface to the target virtual interaction scene.
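As one hypothetical realization of this matching step, the scene information library could be scanned for any entry whose scene name occurs in the recognition text, collecting the link address of each matched piece of target scene information; the library schema and addresses below are invented for illustration.

```python
# Hypothetical matching of recognition text against a scene information
# library; an entry matches if its scene name occurs in the recognized text.

SCENE_LIBRARY = [
    {"name": "adventure island", "link": "app://virtual-scene/adventure-island"},
    {"name": "racing arena", "link": "app://virtual-scene/racing-arena"},
]

def match_links(recognition_text: str):
    """Return link addresses for every matched piece of target scene information."""
    text = recognition_text.lower()
    return [entry["link"] for entry in SCENE_LIBRARY if entry["name"] in text]

print(match_links("Welcome to Adventure Island!"))
# -> ['app://virtual-scene/adventure-island']
```

A production library would more plausibly use an inverted index or fuzzy matching, but the contract is the same: recognition text in, zero or more link addresses out.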
The various modules in the interaction device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal or a server. Taking the terminal as an example, its internal structure may be as shown in fig. 14. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an interaction method.
The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of part of the structure related to the present solution and does not limit the computer device to which the present solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is also provided, comprising a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. Where a video or image is pushed, the user may refuse, or conveniently opt out of, the pushed video or image information.
Those skilled in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the above method embodiments. Any reference to memory, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to fall within the scope of this specification.
The foregoing embodiments merely illustrate several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (20)

1. A method of interaction, the method comprising:
displaying an image playing interface, and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering a recognition mode for visual scene elements associated with virtual interaction scenes;
responding to a triggering operation on the target interaction element, and displaying the image playing interface in the recognition mode;
and in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
2. The method according to claim 1, wherein, in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
displaying at least one element category on the image playing interface in the case that the image is played in the image playing interface in the recognition mode;
and responding to a trigger event for a target element category in the at least one element category, and, in the case that the played image contains a target visual scene element that belongs to the target element category and is associated with a target virtual interaction scene, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
3. The method according to claim 1, wherein, in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
and under the condition that an image is played in the image playing interface in the identification mode, responding to a selection event of a visual element in the played image, and when the selected visual element comprises a target visual scene element associated with a target virtual interaction scene, jumping to the target virtual interaction scene associated with the target visual scene element from the image playing interface.
4. The method according to any one of claims 1 to 3, wherein the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
displaying at least one jump channel option aiming at the target virtual interaction scene on the image playing interface;
and in response to a jump trigger event for a target jump channel option in the at least one jump channel option, jumping from the image playing interface to the target virtual interactive scene through the target jump channel indicated by the target jump channel option.
5. The method according to claim 1, wherein, in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
in the case that an image is played in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, displaying respective jump entrances of the plurality of target virtual interaction scenes on the image playing interface;
and responding to a selection event of a target jump portal in the jump portals, and jumping to a corresponding target virtual interaction scene from the image playing interface through the target jump portal.
6. The method according to claim 5, wherein, in the case that an image is played in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, the displaying respective jump entrances of the plurality of target virtual interaction scenes on the image playing interface comprises:
and in the case that an image is played in the image playing interface in the recognition mode, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, displaying, in the display area of each target visual scene element on the image playing interface, a jump entrance of the corresponding target virtual interaction scene.
7. The method according to claim 5, wherein, in the case that an image is played in the image playing interface in the recognition mode, when target visual scene elements associated with a plurality of target virtual interaction scenes exist in the played image, the displaying respective jump entrances of the plurality of target virtual interaction scenes on the image playing interface comprises:
and in the case that an image is played in the image playing interface in the recognition mode, when a plurality of target visual scene elements associated with different target virtual interaction scenes exist in the played image, displaying, on the image playing interface, a jump entrance of the target virtual interaction scene associated with each target visual scene element according to the priority order of the plurality of target visual scene elements.
8. The method according to claim 1, wherein, in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
in the case that the image is played in the image playing interface in the recognition mode, displaying, on the image playing interface, a recognition progress generated by recognizing the image as it is played;
and when the recognition of the image generates a corresponding recognition result, and the recognition result indicates that a target visual scene element associated with a target virtual interaction scene exists in the image, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element.
9. The method of claim 8, wherein the method further comprises:
when the recognition of the image generates a corresponding recognition result, and the recognition result indicates that no target visual scene element associated with a target virtual interaction scene exists in the image, displaying, on the image playing interface, prompt information indicating that no associated target virtual interaction scene exists.
10. The method according to claim 1, wherein the method further comprises:
and under the condition that the image playing interface is jumped to a target virtual interaction scene associated with the target visual scene element, pausing playing of the image on the image playing interface, and exiting the identification mode.
11. The method of claim 1, wherein the displaying an image playing interface comprises:
displaying an image playing interface through an image playing application;
and in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, the jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element comprises:
in the case that an image is played in the image playing interface in the recognition mode, when a target visual scene element associated with a target virtual interaction scene exists in the played image, jumping from the image playing application to a virtual interaction application matched with the target virtual interaction scene associated with the target visual scene element;
and displaying the target virtual interaction scene through the virtual interaction application.
12. The method of claim 11, wherein the virtual interaction application is a cloud application; and the displaying the target virtual interaction scene through the virtual interaction application comprises:
and receiving and displaying the video picture which is generated after cloud processing and aims at the target virtual interaction scene through the cloud application.
13. The method of claim 11, wherein the method further comprises:
and responding to an exit event triggered at the target virtual interaction scene, and jumping to the image playing interface from the target virtual interaction scene.
14. A method of interaction, the method comprising:
receiving an image played in an image playing interface on a terminal in a recognition mode for visual scene elements; the visual scene elements are associated with corresponding virtual interaction scenes;
performing image recognition on the image to obtain a recognition text corresponding to a target visual scene element in the image;
when the recognition text points to a target virtual interaction scene, acquiring a link address which points to the target virtual interaction scene and corresponds to the recognition text;
and returning the link address to the terminal, where the link address is used for instructing the terminal to jump from the image playing interface to the target virtual interaction scene.
15. The method of claim 14, wherein the performing image recognition on the image to obtain the recognition text corresponding to the target visual scene element in the image includes:
determining a data type corresponding to the image, and carrying out image recognition on the image according to a recognition mode corresponding to the data type to obtain a recognition text corresponding to a target visual scene element in the image.
16. The method of claim 15, wherein the performing image recognition on the image according to the recognition mode corresponding to the data type to obtain the recognition text corresponding to the target visual scene element in the image includes:
when the data type of the image comprises a character type, performing character recognition on a target visual scene element comprising characters in the image to obtain a recognition text corresponding to the target visual scene element;
when the data type of the image comprises an image array type, classifying the target visual scene element in the image through a classification model to obtain an identification text corresponding to the target visual scene element.
17. The method of claim 14, wherein when the recognition text points to a target virtual interaction scene, obtaining a link address corresponding to the recognition text and pointing to the target virtual interaction scene comprises:
performing matching processing in a scene information library based on the recognition text, wherein the scene information library comprises scene information of a plurality of virtual interaction scenes;
and in the case that the recognition text matches at least one piece of target scene information in the scene information library, acquiring a link address which points to a target virtual interaction scene and corresponds to the at least one piece of target scene information.
18. An interactive apparatus, the apparatus comprising:
the interface display module is used for displaying an image playing interface and displaying a target interaction element on the image playing interface; the target interaction element is used for triggering a recognition mode for visual scene elements associated with virtual interaction scenes;
the state display module is used for responding to a triggering operation on the target interaction element and displaying the image playing interface in the recognition mode;
and the jump module is used for, in the case that an image is played in the image playing interface in the recognition mode, jumping from the image playing interface to the target virtual interaction scene associated with the target visual scene element when a target visual scene element associated with a target virtual interaction scene exists in the played image.
19. An interactive apparatus, the apparatus comprising:
the receiving module is used for receiving an image played in an image playing interface on the terminal in a recognition mode for visual scene elements; the visual scene elements are associated with corresponding virtual interaction scenes;
the recognition module is used for performing image recognition on the image to obtain a recognition text corresponding to the target visual scene element in the image;
the acquisition module is used for acquiring, when the recognition text points to a target virtual interaction scene, a link address which points to the target virtual interaction scene and corresponds to the recognition text;
and the return module is used for returning the link address to the terminal, where the link address is used for instructing the terminal to jump from the image playing interface to the target virtual interaction scene.
20. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 17 when the computer program is executed.
CN202211398960.9A 2022-11-09 2022-11-09 Interaction method, interaction device, computer equipment and computer readable storage medium Pending CN117008757A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211398960.9A CN117008757A (en) 2022-11-09 2022-11-09 Interaction method, interaction device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211398960.9A CN117008757A (en) 2022-11-09 2022-11-09 Interaction method, interaction device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117008757A true CN117008757A (en) 2023-11-07

Family

ID=88560713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211398960.9A Pending CN117008757A (en) 2022-11-09 2022-11-09 Interaction method, interaction device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117008757A (en)

Similar Documents

Publication Publication Date Title
KR102436734B1 (en) method for confirming a position of video playback node, apparatus, electronic equipment, computer readable storage medium and computer program
CN110784759B (en) Bullet screen information processing method and device, electronic equipment and storage medium
US20190373322A1 (en) Interactive Video Content Delivery
US20170289619A1 (en) Method for positioning video, terminal apparatus and cloud server
CN108847214B (en) Voice processing method, client, device, terminal, server and storage medium
CN111294663A (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
US11087072B2 (en) Internet browsing
CN109154943A (en) Conversion based on server of the automatic broadcasting content to click play content
CN111708948A (en) Content item recommendation method, device, server and computer readable storage medium
CN112068920A (en) Content display method and device, electronic equipment and readable storage medium
CN111949908A (en) Media information processing method and device, electronic equipment and storage medium
CN113438492B (en) Method, system, computer device and storage medium for generating title in live broadcast
CN113271251B (en) Virtual resource activity control method and device, electronic equipment and storage medium
CN112399230A (en) Playing method for mobile terminal application
CN112286617B (en) Operation guidance method and device and electronic equipment
CN111597361B (en) Multimedia data processing method, device, storage medium and equipment
CN116049490A (en) Material searching method and device and electronic equipment
CN117008757A (en) Interaction method, interaction device, computer equipment and computer readable storage medium
CN114745558B (en) Live broadcast monitoring method, device, system, equipment and medium
CN111885139B (en) Content sharing method, device and system, mobile terminal and server
CN111225250B (en) Video extended information processing method and device
CN113515701A (en) Information recommendation method and device
CN113821677A (en) Method, device and equipment for generating cover image and storage medium
CN112165626A (en) Image processing method, resource acquisition method, related device and medium
KR20220053021A (en) video game overlay

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40097794

Country of ref document: HK