WO2023056849A1 - Interaction method, apparatus, device, and storage medium - Google Patents

Interaction method, apparatus, device, and storage medium

Info

Publication number
WO2023056849A1
WO2023056849A1 (PCT/CN2022/121211)
Authority
WO
WIPO (PCT)
Prior art keywords
task
shooting
information
user
tasks
Prior art date
Application number
PCT/CN2022/121211
Other languages
English (en)
French (fr)
Inventor
王一同
高彧
吴俊塔
颜建波
何国劲
邢林杰
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Publication of WO2023056849A1 publication Critical patent/WO2023056849A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • The present disclosure relates to the field of computer network technology, for example, to an interaction method, apparatus, device, and storage medium.
  • Smart terminals have become an indispensable tool in people's lives. Users can interact with smart terminals to carry out social activities, for example, playing multiplayer online games through a smart terminal.
  • In the related art, when a user plays a multiplayer online game, the interaction between the user and the terminal device can only be realized by controlling a game character, and most of the user's attention is focused on the game character, so the user cannot pay attention to the current scene. That is, only the interaction between the user and the terminal device is realized, while the interaction between the terminal device and the scene is not, so the mode of interacting with the terminal device is monotonous.
  • The present disclosure provides an interaction method, apparatus, device, and storage medium, which can realize interaction between a terminal device and an offline scene and can increase the diversity of modes of interacting with the terminal device.
  • The present disclosure provides an interaction method, including:
  • when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
  • for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task;
  • performing feature extraction on the image to obtain feature information; and
  • comparing the feature information with the task information, and if the feature information matches the task information, determining that the shooting task is completed.
  • The present disclosure further provides an interaction apparatus, including:
  • a task list display module, configured to display a shooting task list in a virtual room when it is detected that a first user enters the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
  • an image acquisition module, configured to, for the shooting task, acquire an image taken by the first user according to the task information carried in the shooting task;
  • a feature information acquisition module, configured to perform feature extraction on the image to obtain feature information; and
  • an information comparison module, configured to compare the feature information with the task information, and if the feature information matches the task information, determine that the shooting task is completed.
  • The present disclosure further provides an electronic device, including:
  • one or more processing devices; and
  • a storage device, configured to store one or more programs,
  • wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the above interaction method.
  • The present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processing device, implements the above interaction method.
  • The present disclosure further provides a computer program product, including a computer program carried on a computer-readable medium, where the computer program includes program code for executing the above interaction method.
  • FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure
  • Fig. 2 is an example diagram of a task interface provided by an embodiment of the present disclosure
  • Fig. 3 is an example diagram of another task interface provided by an embodiment of the present disclosure.
  • Fig. 4 is a process diagram of jumping from a task list to a task interface provided by an embodiment of the present disclosure
  • Fig. 5 is a schematic diagram of an interaction provided by an embodiment of the present disclosure.
  • Fig. 6 is a schematic structural diagram of an interaction device provided by an embodiment of the present disclosure.
  • Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The term “comprise” and its variations as used herein are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • In the related art, when a user plays a multiplayer online game, the interaction between the user and the terminal device can only be realized by controlling a game character, and most of the user's attention is focused on the game character, so the user cannot pay attention to the current scene.
  • In order to realize interaction between the terminal device and the scene, and to bring game participants closer together in the real world through an online game, the embodiments of the present disclosure propose an interaction method.
  • FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure. This embodiment is applicable to interaction between a terminal device and a scene.
  • The method can be executed by an interaction apparatus, which can be implemented in hardware and/or software and can generally be integrated into a device with an interaction function, where the device can be an electronic device such as a server, a mobile terminal, or a server cluster. As shown in FIG. 1, the method includes:
  • the shooting task list includes multiple shooting tasks, and each shooting task carries task information.
  • the task information can be associated with an offline scene (for example: a shopping mall, a school, a tourist attraction, etc.).
  • the first user may be a player entering the game in the virtual room, and the first user may be one person or multiple people.
  • the virtual room can be a game-exclusive room created by the organizer, and the game room can be customized by the organizer. For example, the interests of game participants can be collected, and a game-exclusive room can be created according to the interests.
  • the first user can enter the virtual room in various ways, such as inputting a virtual room number, clicking an invitation link, and the like.
  • For example, if organizer A creates virtual room A and organizer B creates virtual room B, the people participating in the team-building activity organized by organizer A can enter virtual room A to perform the shooting tasks, and the people participating in the team-building activity organized by organizer B can enter virtual room B to perform the shooting tasks.
  • The user can log in to a game application (APP) installed on the terminal device. The user opens the mobile phone APP; if the user logs in to the APP for the first time, the user enters a registration interface and logs in again after registering.
  • If the user already has an account for the APP, the user logs in, enters the game home page, and selects a game identity. There are two identities to choose from: organizer or participant.
  • When it is detected that the first user has entered the virtual room, the terminal device displays the shooting task list set in the virtual room, and the first user can perform shooting operations on the target scene according to the task information carried by each shooting task in the shooting task list.
  • FIG. 2 is an example diagram of a task interface provided by an embodiment of the present disclosure. As shown in Figure 2, four images of items are displayed on the interface, and the task information stipulated in this shooting task is: "Find these items and complete their qualification certification!”.
  • For each shooting task, an image taken by the first user according to the task information is acquired.
  • Shooting tasks can be set by the game organizer, and can correspond to a variety of photo types, thus forming a variety of gameplay and increasing fun.
  • Exemplarily, a shooting task can be a photo-type task (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); a video-type task (for example, the user's body being located at a designated point on the screen within a specified time); or a multi-player cooperative task (for example, a multi-player heart-gesture task).
  • For each shooting task, the terminal device displays the task information of the shooting task and a shoot button to the first user, and when the user presses the shoot button, the terminal device captures the picture the camera is aimed at.
  • Task information can be displayed in at least one of the following ways: text, pictures and videos.
  • FIG. 3 is an example diagram of another task interface provided by an embodiment of the present disclosure.
  • As shown in FIG. 3, the task information is displayed in the form of a picture plus text. The picture shows a group photo of one man and two women, and the text gives the task rule: "Refer to the picture above, find two members of the opposite sex, and take a group photo with them!". "Take photo now" in the figure is the shoot button.
  • The process of obtaining the image taken by the first user according to the task information may be: when it is detected that the first user clicks any shooting task in the shooting task list, controlling the current interface to jump to the task interface where that shooting task is located; and receiving the first user's trigger instruction for the shoot button, and capturing, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
  • The task interface is used to display the task information and the shoot button corresponding to the shooting task. The current interface is used to display the shooting task list. The trigger instruction can be issued through an action operation.
  • In addition to the image and the task information, a shoot button displayed as "Take photo now" is also set in the interface, and the trigger instruction for taking a photo can be sent by pressing this button.
  • When the first user clicks any shooting task in the shooting task list, the terminal device can detect the click operation and controls the current interface to jump to the task interface where the shooting task is located, where the task interface displays the task information and the shoot button corresponding to the shooting task; the first user triggers the shoot button, and the terminal device receives the first user's trigger instruction for the shoot button and, according to the trigger instruction, captures the picture the camera is aimed at to obtain the captured image.
  • FIG. 4 is a process diagram of jumping from a task list to a task interface provided by an embodiment of the present disclosure.
  • As shown in FIG. 4, the task list displays three tasks: a group-photo challenge, an action challenge, and a landmark check-in. When the user clicks the landmark check-in task option, the terminal device controls the current interface to jump to the task interface where that shooting task is located.
  • As can be seen from FIG. 4, the task information displayed on the task interface jumped to is: "Find the most shining landmark of the Hackathon project, and complete a co-shot with TA!".
  • Feature extraction is performed on the image taken by the first user according to the task information by image recognition technology to obtain feature information.
  • The manner of performing feature extraction on the image to obtain the feature information may be: inputting the image into a preset neural network model to obtain the feature information corresponding to the image.
  • the feature information includes at least one of the following: portrait information, object information and action information.
  • the neural network model can be constructed based on any neural network, and is used to extract features such as portraits, objects, and actions.
  • The feature information is compared with the task information for similarity, and if the similarity between the feature information and the task information satisfies a similarity condition, the feature information matches the task information and the shooting task is completed.
  • Exemplarily, if the task information requires the participant to take a photo with a specified fruit, the feature information is compared with the task information; if it is determined that the participant has taken a photo with the specified fruit, the feature information matches the task information and the shooting task is completed.
  • After the shooting task is completed, the method further includes:
  • if all the shooting tasks in the shooting task list are completed, obtaining the duration the first user needed to complete all the shooting tasks, and determining the score of the first user according to the duration.
  • If the first user completes all the shooting tasks in the shooting task list, the duration the first user needed to complete all the shooting tasks is obtained; this duration may be recorded by the terminal device itself.
  • When the terminal device detects that the task list is displayed on the interface for the first time, it starts a timing module; when the first user completes all the shooting tasks, the timing ends, so that the duration the first user needed to complete all the shooting tasks is obtained.
  • In this embodiment, the duration the first user needed to complete all the shooting tasks is determined as the score of the first user. Exemplarily, if the duration a user needed to complete all the shooting tasks is a, the user's score is a. The shorter the duration needed to complete all the shooting tasks, the lower the first user's score and the higher the game ranking.
  • the game organizer may or may not limit the time for completing the task.
  • If the game creator limits the duration for completing the tasks, then after the shooting task is completed, the method further includes:
  • if the first user completes only part of the shooting tasks within a first set duration, closing all unfinished shooting tasks; obtaining a second set duration corresponding to each unfinished shooting task, and determining the duration needed by the first user according to the second set durations and the first set duration; and determining the score of the first user according to that duration.
  • the first set duration may be the maximum time required to complete all tasks set by the game organizer; the second set duration may be the required duration set by the game organizer for each shooting task.
  • Closing the unfinished shooting task may be closing the shooting channel of the unfinished shooting task and stopping the timing of the user.
  • If the first user completes only part of the shooting tasks within the first set duration, that is, does not complete all the shooting tasks, all unfinished shooting tasks are closed.
  • In this embodiment, when the first user completes only part of the shooting tasks within the first set duration, the second set duration corresponding to each unfinished shooting task is obtained, and the first set duration and the second set durations are added together to obtain the duration needed by the first user, which is determined as the score of the first user.
  • Exemplarily, if the first set duration is b, the second set duration of unfinished shooting task a is c1, and the second set duration of unfinished shooting task b is c2, the duration needed by the first user to complete all the shooting tasks is b+c1+c2, and the first user's score is determined to be b+c1+c2.
  • the method further includes: for multiple first users entering the virtual room; sorting the multiple first users according to the score; and displaying the sorting results.
  • first users in the virtual room can be sorted according to the score, and the sorting results can be displayed, and related rewards and punishments can be carried out according to the sorting results. For example, the shorter the duration and the lower the score, the higher the ranking.
  • FIG. 5 is a schematic diagram of an interaction provided by an embodiment of the present disclosure.
  • As shown in FIG. 5, the game organizer or a participant opens the interactive interface to register or log in; the game organizer can create a game, and a game participant can join and take part in the game. If the user is a game participant, the user enters the virtual room by searching for room number 123 and selects the group-photo challenge; the challenge task is to find the pictured item and take a group photo with it; when the participant finds the pictured item, the participant takes a photo and submits it; after completing the challenge, the participant can check the ranking.
  • Before it is detected that the first user enters the virtual room, the method further includes: receiving a creation instruction triggered by a second user, and creating the virtual room according to the creation instruction; receiving, in the virtual room, a task type selected by the second user and task information input by the second user; and establishing the shooting task list according to the task type and the task information.
  • Task information is displayed in at least one of the following ways: text, pictures and videos.
  • the second user can be a game organizer, and can create a virtual room by triggering a creation instruction, and add corresponding tasks.
  • The task types here can include a variety of gameplays, which can include photo-type tasks (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); video-type tasks (for example, the user's body being located at a designated point on the screen within a specified time); and multi-player cooperative gameplays (for example, a multi-player heart-gesture task).
  • the organizer completes the creation of the room by adding corresponding game tasks in the room.
  • Game organizers can scout locations in advance at some large shopping malls, scenic spots, or parks to obtain images or video information of real scenes, set the task rules, and enter the shooting tasks by uploading images, videos, voice, or text.
  • the same second user can create multiple virtual rooms, and different second users can respectively create different virtual rooms, and different virtual rooms are independent of each other.
  • Exemplarily, if the employees of one company want to use the game for team building, they can create multiple virtual rooms, divide the employees into several groups, and have the groups enter different virtual rooms.
  • Alternatively, if the employees of multiple companies want to use the game for team building, multiple virtual rooms can be created, and each company's employees enter the virtual room established by their own organizer to perform the shooting tasks.
  • the process of using the interactive game as a team building project and users participating in the team building process through terminal devices is described below as an example.
  • the game is presented in the form of a mobile APP:
  • The task types here can include a variety of gameplays, which can include photo-type tasks (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); video-type tasks (for example, the user's body being located at a designated point on the screen within a specified time); and multi-player cooperative gameplays (for example, a multi-player heart-gesture task).
  • the organizer completes the creation of the room by adding corresponding team building tasks in the room.
  • the participant enters the room created by the organizer and challenges according to the tasks set in the room.
  • the terminal device will use the neural network model to judge the challenge of the participants. Take the indoor treasure hunting task connecting people and objects as an example. The task requires users to find a specified item within a given time, and take a photo and upload it. After the participant uploads the photo result, the terminal device will use the image recognition ability of the neural network model to perform image recognition. If the user finds the corresponding item, it will determine that the user has completed the task and record the corresponding completion time.
  • The participants complete the tasks in turn. According to each participant's completion status, the duration each participant needed to complete all the shooting tasks is determined, so that each participant's score is determined and the participants are ranked. The organizer gives certain rewards and punishments according to the ranking results.
  • the embodiment of the present disclosure discloses an interaction method, device, equipment and storage medium.
  • the shooting task list in the virtual room is displayed; wherein, the shooting task list includes multiple shooting tasks, each shooting task carries task information, and the task information is associated with the target scene;
  • For each shooting task, the image taken by the first user according to the task information is acquired; feature extraction is performed on the image to obtain feature information; and the feature information is compared with the task information, and if the feature information matches the task information, the shooting task is completed.
  • the interaction method provided by the embodiments of the present disclosure associates the task information with the offline scene, compares the feature information of the offline captured image with the online task information, completes the shooting task, and realizes the interaction between the terminal device and the offline scene. The diversity of interaction modes with the terminal device can be increased.
  • Fig. 6 is a schematic structural diagram of an interaction device provided by an embodiment of the present disclosure. As shown in Figure 6, the device includes:
  • the task list display module 210 is configured to display the shooting task list in the virtual room when it is detected that the first user enters the virtual room; wherein, the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
  • the image acquisition module 220 is configured to obtain the image taken by the first user according to the task information for each shooting task; the feature information acquisition module 230 is configured to perform feature extraction on the image to obtain feature information; the information comparison module 240 is configured to The feature information is compared with the task information, and if the feature information matches the task information, it is determined that the shooting task is completed.
  • The image acquisition module 220 is configured to: when it is detected that the first user clicks any shooting task in the shooting task list, control the current interface to jump to the task interface where the shooting task is located, where the task interface is used to display the task information and the shoot button corresponding to the shooting task; and receive the first user's trigger instruction for the shoot button, and capture, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
  • The feature information acquisition module 230 is configured to input the image into a preset neural network model to obtain the feature information corresponding to the image, where the feature information includes at least one of the following: portrait information, object information, and action information.
  • the device also includes:
  • the first score determination module is configured to obtain the time required for the first user to complete all the shooting tasks if all the shooting tasks in the shooting task list are completed; determine the score of the first user according to the time length.
  • the device also includes:
  • The second score determination module is configured to: if the first user completes only part of the shooting tasks within the first set duration, close all unfinished shooting tasks; obtain the second set duration corresponding to each unfinished shooting task, and determine the duration needed by the first user according to the second set durations and the first set duration; and determine the score of the first user according to that duration.
  • the device also includes:
  • the sorting result display module is configured to sort the multiple first users according to the scores for the multiple first users entering the virtual room, and display the sorting results.
  • the device also includes:
  • The creation module is configured to: receive a creation instruction triggered by the second user, and create the virtual room according to the creation instruction; receive, in the virtual room, the task type selected by the second user and the task information input by the second user, where the task information is displayed in at least one of the following ways: text, pictures, and videos; and establish the shooting task list according to the task type and the task information.
  • The above apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure, and has corresponding functional modules and effects for executing the above methods. For technical details not described in detail in this embodiment, reference may be made to the methods provided by all the foregoing embodiments of the present disclosure.
  • Referring to FIG. 7, it shows a schematic structural diagram of an electronic device 300 suitable for implementing the embodiments of the present disclosure.
  • The electronic device 300 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (PMP), and vehicle-mounted terminals (such as vehicle navigation terminals), fixed terminals such as digital televisions (TV) and desktop computers, or servers in various forms, such as an independent server or a server cluster.
  • the electronic device 300 shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • As shown in FIG. 7, the electronic device 300 may include a processing device (such as a central processing unit or a graphics processing unit) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from the storage device 308 into a random access memory (RAM) 303.
  • In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored.
  • the processing device 301, ROM 302, and RAM 303 are connected to each other through a bus 304.
  • An input/output (Input/Output, I/O) interface 305 is also connected to the bus 304 .
  • Generally, the following devices can be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 308 including, for example, a magnetic tape and a hard disk; and a communication device 309.
  • the communication means 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 7 shows the electronic device 300 having various devices, it is not required to implement or possess all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program comprising program code for performing an interactive method.
  • the computer program may be downloaded and installed from a network via communication means 309, or from storage means 308, or from ROM 302.
  • the processing device 301 When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • the program code contained on the computer readable medium can be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: when detecting that a first user enters a virtual room, display a shooting task list in the virtual room, wherein the shooting task list includes multiple shooting tasks, each shooting task carries task information, and the task information is associated with a target scene; for each shooting task, acquire an image taken by the first user according to the task information; perform feature extraction on the image to obtain feature information; and compare the feature information with the task information, and if the feature information matches the task information, determine that the shooting task is completed.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user computer through any kind of network, including a LAN or WAN, or it can be connected to an external computer (eg via the Internet using an Internet Service Provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified functions or operations , or may be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of the unit does not constitute a limitation of the unit itself in one case.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (Field Programmable Gate Arrays, FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (Application Specific Standard Parts, ASSP), System on Chip (System on Chip, SOC), Complex Programmable Logic Device (Complex Programming Logic Device, CPLD) and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing. Examples of machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard drives, RAM, ROM, EPROM or flash memory, optical fibers, CD-ROMs, optical storage devices, magnetic storage devices, or Any suitable combination of the above.
  • According to one or more embodiments of the present disclosure, the embodiments of the present disclosure disclose an interaction method, including:
  • when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
  • for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task;
  • performing feature extraction on the image to obtain feature information; and
  • comparing the feature information with the task information, and if the feature information matches the task information, determining that the shooting task is completed.
  • In an embodiment, acquiring the image taken by the first user according to the task information carried in the shooting task includes: when it is detected that the first user clicks any shooting task in the shooting task list, controlling the current interface to jump to the task interface where that shooting task is located, wherein the task interface is used to display the task information and the shoot button corresponding to that shooting task; and receiving the first user's trigger instruction for the shoot button, and capturing, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
  • In an embodiment, performing feature extraction on the image to obtain feature information includes: inputting the image into a preset neural network model to obtain the feature information corresponding to the image, wherein the feature information includes at least one of the following: portrait information, object information, and action information.
  • In an embodiment, after the shooting task is completed, the method further includes: if all the shooting tasks in the shooting task list are completed, obtaining the duration the first user needed to complete all the shooting tasks, and determining the score of the first user according to the duration.
  • In an embodiment, after the shooting task is completed, the method further includes: if the first user completes only part of the shooting tasks within a first set duration, closing all unfinished shooting tasks; obtaining a second set duration corresponding to each unfinished shooting task, and determining the duration needed by the first user according to the second set durations and the first set duration; and determining the score of the first user according to that duration.
  • In an embodiment, after determining the score of the first user according to the duration, the method further includes: for a plurality of first users entering the virtual room, sorting the plurality of first users according to their scores, and displaying the sorting result.
  • In an embodiment, before it is detected that the first user enters the virtual room, the method further includes: receiving a creation instruction triggered by a second user, and creating the virtual room according to the creation instruction; receiving, in the virtual room, a task type selected by the second user and task information input by the second user, wherein the task information is displayed in at least one of the following ways: text, pictures, and videos; and establishing the shooting task list according to the task type and the task information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Disclosed herein are an interaction method, apparatus, device, and storage medium. The interaction method includes: when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list includes a plurality of shooting tasks and each shooting task carries task information; for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task; performing feature extraction on the image to obtain feature information; and comparing the feature information with the task information, and if the feature information matches the task information, determining that the shooting task is completed.

Description

Interaction method, apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 202111176740.7, filed with the Chinese Patent Office on October 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer network technology, for example, to an interaction method, apparatus, device, and storage medium.
Background
Smart terminals have become an indispensable tool in people's lives. Users can interact with smart terminals to carry out social and other activities, for example, playing multiplayer online games through a smart terminal.
In the related art, when a user plays a multiplayer online game, the interaction between the user and the terminal device can only be realized by controlling a game character, and most of the user's attention is focused on the game character, so the user cannot pay attention to the current scene. That is, only the interaction between the user and the terminal device is realized, while the interaction between the terminal device and the scene is not, so the mode of interacting with the terminal device is monotonous.
Summary
The present disclosure provides an interaction method, apparatus, device, and storage medium, which can realize interaction between a terminal device and an offline scene and can increase the diversity of modes of interacting with the terminal device.
The present disclosure provides an interaction method, including:
when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task;
performing feature extraction on the image to obtain feature information; and
comparing the feature information with the task information, and if the feature information matches the task information, determining that the shooting task is completed.
The present disclosure further provides an interaction apparatus, including:
a task list display module, configured to display a shooting task list in a virtual room when it is detected that a first user enters the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
an image acquisition module, configured to, for the shooting task, acquire an image taken by the first user according to the task information carried in the shooting task;
a feature information acquisition module, configured to perform feature extraction on the image to obtain feature information; and
an information comparison module, configured to compare the feature information with the task information, and if the feature information matches the task information, determine that the shooting task is completed.
The present disclosure further provides an electronic device, including:
one or more processing devices; and
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the above interaction method.
The present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processing device, implements the above interaction method.
The present disclosure further provides a computer program product, including a computer program carried on a computer-readable medium, where the computer program includes program code for executing the above interaction method.
Brief Description of the Drawings
FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure;
FIG. 2 is an example diagram of a task interface provided by an embodiment of the present disclosure;
FIG. 3 is an example diagram of another task interface provided by an embodiment of the present disclosure;
FIG. 4 is a process diagram of jumping from a task list to a task interface provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an interaction provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an interaction apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure can be implemented in many forms, and these embodiments are provided for an understanding of the present disclosure. The drawings and embodiments of the present disclosure are for illustrative purposes only.
The multiple steps described in the method embodiments of the present disclosure can be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "comprise" and its variations as used herein are open-ended, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
The modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless otherwise indicated by the context, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
In the related art, when a user plays a multiplayer online game, the interaction between the user and the terminal device can only be realized by controlling a game character, and most of the user's attention is focused on the game character, so the user cannot pay attention to the current scene. In order to realize interaction between the terminal device and the scene, and to bring game participants closer together in the real world through an online game, the embodiments of the present disclosure propose an interaction method.
FIG. 1 is a flowchart of an interaction method provided by an embodiment of the present disclosure. This embodiment is applicable to interaction between a terminal device and a scene. The method can be executed by an interaction apparatus, which can be implemented in hardware and/or software and can generally be integrated into a device with an interaction function, where the device can be an electronic device such as a server, a mobile terminal, or a server cluster. As shown in FIG. 1, the method includes:
110: When it is detected that a first user enters a virtual room, a shooting task list in the virtual room is displayed.
The shooting task list includes multiple shooting tasks, and each shooting task carries task information. The task information can be associated with an offline scene (for example, a shopping mall, a school, or a tourist attraction).
In this embodiment, the first user may be a participant of the game who enters the virtual room; the first user may be one person or multiple people. The virtual room may be a game-exclusive room created by an organizer, and the game room can be customized by the organizer; for example, the interests of the game participants can be collected, and a game-exclusive room can be created according to those interests. The first user can enter the virtual room in various ways, such as entering a virtual room number or clicking an invitation link.
Exemplarily, if organizer A creates virtual room A and organizer B creates virtual room B, the people participating in the team-building activity organized by organizer A can enter virtual room A to perform the shooting tasks, and the people participating in the team-building activity organized by organizer B can enter virtual room B to perform the shooting tasks.
In this embodiment, the user can log in to a game application (APP) installed on the terminal device. The user opens the mobile phone APP; if the user logs in to the APP for the first time, the user enters a registration interface and logs in again after registering. If the user already has an account for the APP, the user logs in, enters the game home page, and selects a game identity. There are two identities to choose from: organizer or participant. Here, the participant identity is selected, the user enters the virtual room, and the user takes on the challenges according to the tasks set in the room.
When it is detected that the first user has entered the virtual room, the terminal device displays the shooting task list set in the virtual room, and the first user can perform shooting operations on the target scene according to the task information carried by each shooting task in the shooting task list.
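As a minimal illustrative sketch only, the virtual room and its shooting task list could be modelled as follows; the patent does not prescribe any data structure, and the names VirtualRoom, ShootingTask, and display_task_list are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ShootingTask:
    """One shooting task carried in the room's shooting task list."""
    task_id: str
    task_type: str                            # e.g. "photo", "video", "multi-player"
    task_info: str                            # rule text shown to the first user
    reference_media: Optional[str] = None     # optional picture/video illustrating the rule
    second_set_duration: float = 0.0          # per-task duration used as a penalty (seconds)

@dataclass
class VirtualRoom:
    """A game-exclusive room created by the organizer (the second user)."""
    room_number: str
    organizer_id: str
    first_set_duration: float = 0.0           # overall time limit in seconds, 0 = unlimited
    task_list: List[ShootingTask] = field(default_factory=list)

def display_task_list(room: VirtualRoom) -> None:
    """Called when the terminal detects that the first user has entered the room."""
    for task in room.task_list:
        print(f"[{task.task_type}] {task.task_info}")
```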
Exemplarily, FIG. 2 is an example diagram of a task interface provided by an embodiment of the present disclosure. As shown in FIG. 2, four item images are displayed on the interface, and the task information specified by this shooting task is: "Find these items and complete their qualification certification!".
120: For each shooting task, an image taken by the first user according to the task information is acquired.
The shooting tasks can be set by the game organizer and can correspond to a variety of photo types, thereby forming a variety of gameplays and increasing the fun. Exemplarily, a shooting task can be a photo-type task (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); a video-type task (for example, the user's body being located at a designated point on the screen within a specified time); or a multi-player cooperative task (for example, a multi-player heart-gesture task).
For each shooting task, the terminal device displays the task information of the shooting task and a shoot button to the first user, and when the user presses the shoot button, the terminal device captures the picture the camera is aimed at.
The task information can be displayed in at least one of the following ways: text, pictures, and videos. Exemplarily, FIG. 3 is an example diagram of another task interface provided by an embodiment of the present disclosure. As shown in FIG. 3, the task information is displayed in the form of a picture plus text; the picture shows a group photo of one man and two women, and the text gives the task rule: "Refer to the picture above, find two members of the opposite sex, and take a group photo with them!". "Take photo now" in the figure is the shoot button.
For each shooting task, the process of acquiring the image taken by the first user according to the task information may be: when it is detected that the first user clicks any shooting task in the shooting task list, controlling the current interface to jump to the task interface where that shooting task is located; and receiving the first user's trigger instruction for the shoot button, and capturing, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
The task interface is used to display the task information and the shoot button corresponding to the shooting task. The current interface is used to display the shooting task list. The trigger instruction can be issued through an action operation.
Exemplarily, still referring to FIG. 3, in addition to the image and the task information, a shoot button displayed as "Take photo now" is also set in the interface, and the trigger instruction for taking a photo can be sent by pressing this button.
When the first user clicks any shooting task in the shooting task list, the terminal device can detect the first user's click operation through a detection technique, and the terminal device then controls the current interface to jump to the task interface where the shooting task is located, where the task interface displays the task information and the shoot button corresponding to the shooting task; the first user triggers the shoot button, and the terminal device receives the first user's trigger instruction for the shoot button and, according to the trigger instruction, captures the picture the camera is aimed at to obtain the captured image.
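A sketch of this tap-to-jump-and-shoot flow, under the assumption of placeholder UI and camera objects (show_task_interface, Camera, and the room/task types from the earlier sketch are illustrative names, not part of the patent):

```python
class Camera:
    """Placeholder for the platform camera; capture() returns the frame the camera is aimed at."""
    def capture(self) -> bytes:
        return b"<jpeg bytes of the aimed frame>"

def show_task_interface(task) -> None:
    # Jump from the task-list interface to the task interface:
    # display the task information and a "Take photo now" shoot button.
    print(task.task_info)
    print("[ Take photo now ]")

def on_task_clicked(room, task_id: str):
    """Detected click on a shooting task in the list: jump to its task interface."""
    task = next(t for t in room.task_list if t.task_id == task_id)
    show_task_interface(task)
    return task

def on_shoot_button_triggered(camera: Camera) -> bytes:
    """Trigger instruction for the shoot button: capture the picture the camera is aimed at."""
    return camera.capture()
```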
Exemplarily, FIG. 4 is a process diagram of jumping from a task list to a task interface provided by an embodiment of the present disclosure. As shown in FIG. 4, the task list displays three tasks: a group-photo challenge, an action challenge, and a landmark check-in. When the user clicks the landmark check-in task option, the terminal device controls the current interface to jump to the task interface where that shooting task is located. As can be seen from FIG. 4, the task information displayed on the task interface jumped to is: "Find the most shining landmark of the Hackathon project, and complete a co-shot with TA!".
130: Feature extraction is performed on the image to obtain feature information.
Feature extraction is performed, by means of image recognition technology, on the image taken by the first user according to the task information, to obtain the feature information.
The manner of performing feature extraction on the image to obtain the feature information may be: inputting the image into a preset neural network model to obtain the feature information corresponding to the image.
The feature information includes at least one of the following: portrait information, object information, and action information. The preset neural network model can be constructed based on any neural network and is used to extract features such as portraits, objects, and actions.
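The patent does not name a concrete model, so the sketch below stands in an off-the-shelf object detector from torchvision (assumed installed, version 0.13 or later) for the "preset neural network model", and uses the detected labels as a simple form of feature information:

```python
import torch
from PIL import Image
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor

# Stand-in for the "preset neural network model"; the patent names no specific model.
weights = detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]        # COCO label names such as "person" or "apple"

def extract_feature_info(image_path: str, score_threshold: float = 0.7) -> set:
    """Extract feature information (here: detected person/object labels) from the image."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]         # dict with "boxes", "labels", "scores"
    return {
        categories[label]
        for label, score in zip(prediction["labels"].tolist(),
                                prediction["scores"].tolist())
        if score >= score_threshold
    }
```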
140: The feature information is compared with the task information, and if the feature information matches the task information, the shooting task is completed.
The feature information is compared with the task information for similarity, and if the similarity between the feature information and the task information satisfies a similarity condition, the feature information matches the task information and the shooting task is completed.
Exemplarily, if the task information requires the participant to take a photo with a specified fruit, the feature information is compared with the task information; if it is determined that the participant has taken a photo with the specified fruit, the feature information matches the task information and the shooting task is completed.
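A minimal sketch of this comparison step, assuming the task information has already been reduced to a set of required labels (required_labels is a hypothetical structured form of the task rule; the patent only requires that some similarity condition be satisfied):

```python
def is_task_completed(feature_info: set, required_labels: set,
                      similarity_threshold: float = 1.0) -> bool:
    """Compare the feature information with the task information.

    required_labels might be {"person", "apple"} for "take a photo with a specified fruit".
    The similarity used here is the fraction of required labels found in the image.
    """
    if not required_labels:
        return False
    similarity = len(feature_info & required_labels) / len(required_labels)
    return similarity >= similarity_threshold

# Example: is_task_completed({"person", "apple", "chair"}, {"person", "apple"})  ->  True
```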
After the shooting task is completed, the method further includes:
if all the shooting tasks in the shooting task list are completed, obtaining the duration the first user needed to complete all the shooting tasks, and determining the score of the first user according to the duration.
If the first user completes all the shooting tasks in the shooting task list, the duration the first user needed to complete all the shooting tasks is obtained; this duration may be recorded by the terminal device itself. When the terminal device detects that the task list is displayed on the interface for the first time, it starts a timing module; when the first user completes all the shooting tasks, the timing ends, so that the duration the first user needed to complete all the shooting tasks is obtained.
In this embodiment, the duration the first user needed to complete all the shooting tasks is determined as the score of the first user. Exemplarily, if the duration a user needed to complete all the shooting tasks is a, the user's score is a. The shorter the duration needed to complete all the shooting tasks, the lower the first user's score and the higher the game ranking.
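A sketch of the timing and scoring just described, assuming a simple monotonic timer started when the task list is first displayed (TimingModule and score_all_completed are illustrative names):

```python
import time

class TimingModule:
    """Started when the task list is first displayed; stopped when all tasks are completed."""
    def __init__(self) -> None:
        self._start = 0.0

    def start(self) -> None:
        self._start = time.monotonic()

    def stop(self) -> float:
        """Return the elapsed duration in seconds."""
        return time.monotonic() - self._start

def score_all_completed(elapsed_seconds: float) -> float:
    # When every shooting task is completed, the score is simply the elapsed duration:
    # a shorter duration means a lower score and a higher ranking.
    return elapsed_seconds
```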
In this embodiment, the game organizer may or may not limit the duration for completing the tasks.
If the game creator limits the duration for completing the tasks, then after the shooting task is completed, the method further includes:
if the first user completes only part of the shooting tasks within a first set duration, closing all unfinished shooting tasks; obtaining a second set duration corresponding to each unfinished shooting task, and determining the duration needed by the first user according to the second set durations and the first set duration; and determining the score of the first user according to that duration.
The first set duration may be the maximum time, set by the game organizer, for completing all the tasks; the second set duration may be the required duration set by the game organizer for each shooting task. Closing an unfinished shooting task may mean closing the shooting channel of the unfinished shooting task and stopping the timing for the user.
If the first user completes only part of the shooting tasks within the first set duration, that is, does not complete all the shooting tasks, all unfinished shooting tasks are closed. In this embodiment, when the first user completes only part of the shooting tasks within the first set duration, the second set duration corresponding to each unfinished shooting task is obtained, and the first set duration and the second set durations are added together to obtain the duration needed by the first user, which is determined as the score of the first user. Exemplarily, if the first set duration is b, the second set duration of unfinished shooting task a is c1, and the second set duration of unfinished shooting task b is c2, the duration needed by the first user to complete all the shooting tasks is b+c1+c2, and the first user's score is determined to be b+c1+c2.
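A sketch of the score when a time limit is set, following the b + c1 + c2 example above (function and parameter names are illustrative):

```python
def score_with_time_limit(first_set_duration: float,
                          unfinished_second_set_durations: list) -> float:
    """Score when only part of the tasks are completed within the first set duration.

    The unfinished tasks are closed, and the score is the first set duration plus the
    second set duration of every unfinished task (b + c1 + c2 in the example above).
    """
    return first_set_duration + sum(unfinished_second_set_durations)

# Example from the description: score_with_time_limit(b, [c1, c2]) == b + c1 + c2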
After determining the score of the first user according to the duration, the method further includes: for multiple first users entering the virtual room, sorting the multiple first users according to their scores, and displaying the sorting result.
After the score of the first user is determined according to the duration, the multiple first users in the virtual room can be sorted according to their scores, the sorting result can be displayed, and related rewards and punishments can be given according to the sorting result. For example, the shorter the duration and the lower the score, the higher the ranking.
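A sketch of the ranking step: lower scores (shorter durations) rank higher (rank_users is an illustrative name):

```python
def rank_users(scores: dict) -> list:
    """Sort the first users in the room by score; a lower score ranks higher."""
    return sorted(scores.items(), key=lambda item: item[1])

# e.g. rank_users({"user_a": 380.0, "user_b": 305.5})
#   -> [("user_b", 305.5), ("user_a", 380.0)]
```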
Exemplarily, FIG. 5 is a schematic diagram of an interaction provided by an embodiment of the present disclosure. As shown in FIG. 5, the game organizer or a participant opens the interactive interface to register or log in; the game organizer can create a game, and a game participant can join and take part in the game. If the user is a game participant, the user enters the virtual room by searching for room number 123 and selects the group-photo challenge; the challenge task is to find the pictured item and take a group photo with it; when the participant finds the pictured item, the participant takes a photo and submits it; after completing the challenge, the participant can check the ranking.
Before it is detected that the first user enters the virtual room, the method further includes: receiving a creation instruction triggered by a second user, and creating the virtual room according to the creation instruction; receiving, in the virtual room, a task type selected by the second user and task information input by the second user; and establishing the shooting task list according to the task type and the task information.
The task information is displayed in at least one of the following ways: text, pictures, and videos.
The second user may be the game organizer, who can create the virtual room by triggering a creation instruction and add the corresponding tasks. The task types here can include a variety of gameplays, which can include photo-type tasks (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); video-type tasks (for example, the user's body being located at a designated point on the screen within a specified time); and multi-player cooperative gameplays (for example, a multi-player heart-gesture task). The organizer completes the creation of the room by adding the corresponding game tasks to the room. Game organizers can scout locations in advance at some large shopping malls, scenic spots, or parks to obtain images or video information of real scenes, set the task rules, and enter the shooting tasks by uploading images, videos, voice, or text.
The same second user can create multiple virtual rooms, different second users can create different virtual rooms, and different virtual rooms are independent of one another. Exemplarily, if the employees of one company want to use the game for team building, they can create multiple virtual rooms, divide the employees into several groups, and have the groups enter different virtual rooms. Alternatively, if the employees of multiple companies want to use the game for team building, multiple virtual rooms can be created, and each company's employees enter the virtual room established by their own organizer to perform the shooting tasks.
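A sketch of room creation by the organizer (the second user), reusing the VirtualRoom and ShootingTask types from the earlier sketch; task_specs is a hypothetical input format, since the patent allows task information to be entered as text, pictures, video, or voice:

```python
def create_room(organizer_id: str, room_number: str,
                task_specs: list, first_set_duration: float = 0.0) -> "VirtualRoom":
    """Create a virtual room from a creation instruction triggered by the second user.

    task_specs is a list of (task_type, task_info, second_set_duration) tuples.
    """
    room = VirtualRoom(room_number=room_number, organizer_id=organizer_id,
                       first_set_duration=first_set_duration)
    for index, (task_type, task_info, second_set_duration) in enumerate(task_specs):
        room.task_list.append(ShootingTask(task_id=f"task-{index}",
                                           task_type=task_type,
                                           task_info=task_info,
                                           second_set_duration=second_set_duration))
    return room

# Example:
# create_room("organizer_a", "123",
#             [("photo", "Find these items and complete their qualification certification!", 300.0)])
```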
To illustrate the embodiments of the present disclosure, the following describes, as an example, a process in which the interactive game is used as a team-building project and users participate in the team building through terminal devices. Exemplarily, the game is presented in the form of a mobile APP:
1) The user opens the mobile phone APP. If the user logs in to the APP for the first time, the user enters the registration interface and logs in to the APP again after registering.
2) If the user already has an account for the APP, the user logs in, enters the game home page, and selects a team-building identity. There are two identities to choose from: organizer or participant.
3) If the user chooses to be the organizer (that is, the above second user), the user creates a corresponding team-building room (that is, a virtual room) and adds the corresponding tasks. The task types here can include a variety of gameplays, which can include photo-type tasks (for example, an indoor treasure-hunt task connecting people and objects, or a check-in challenge task connecting people and scenery); video-type tasks (for example, the user's body being located at a designated point on the screen within a specified time); and multi-player cooperative gameplays (for example, a multi-player heart-gesture task). The organizer completes the creation of the room by adding the corresponding team-building tasks to the room.
4) If the user chooses to be a participant (that is, the above first user), the participant enters the room created by the organizer and takes on the challenges according to the tasks set in the room. The terminal device uses a neural network model to judge the participant's challenge. Taking the indoor treasure-hunt task connecting people and objects as an example, the task requires the user to find a specified item within a given time, take a photo of it, and upload the photo. After the participant uploads the photo result, the terminal device uses the image recognition capability of the neural network model to perform image recognition; if the user has found the corresponding item, it determines that the user has completed the task and records the corresponding completion time.
5) The participants complete the tasks in turn. According to each participant's completion status, the duration each participant needed to complete all the shooting tasks is determined, so that each participant's score is determined and the participants are ranked; the organizer gives certain rewards and punishments according to the ranking results.
The embodiments of the present disclosure disclose an interaction method, apparatus, device, and storage medium. When it is detected that a first user enters a virtual room, a shooting task list in the virtual room is displayed, wherein the shooting task list includes multiple shooting tasks, each shooting task carries task information, and the task information is associated with a target scene; for each shooting task, an image taken by the first user according to the task information is acquired; feature extraction is performed on the image to obtain feature information; and the feature information is compared with the task information, and if the feature information matches the task information, the shooting task is completed. The interaction method provided by the embodiments of the present disclosure associates the task information with an offline scene and compares the feature information of the image shot offline with the online task information to complete the shooting task, so that interaction between the terminal device and the offline scene can be realized and the diversity of modes of interacting with the terminal device can be increased.
FIG. 6 is a schematic structural diagram of an interaction apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes:
a task list display module 210, configured to display a shooting task list in a virtual room when it is detected that a first user enters the virtual room, wherein the shooting task list includes multiple shooting tasks and each shooting task carries task information; an image acquisition module 220, configured to, for each shooting task, acquire an image taken by the first user according to the task information; a feature information acquisition module 230, configured to perform feature extraction on the image to obtain feature information; and an information comparison module 240, configured to compare the feature information with the task information, and if the feature information matches the task information, determine that the shooting task is completed.
In an embodiment, the image acquisition module 220 is configured to:
when it is detected that the first user clicks any shooting task in the shooting task list, control the current interface to jump to the task interface where that shooting task is located, wherein the task interface is used to display the task information and the shoot button corresponding to the shooting task; and receive the first user's trigger instruction for the shoot button, and capture, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
In an embodiment, the feature information acquisition module 230 is configured to:
input the image into a preset neural network model to obtain the feature information corresponding to the image, wherein the feature information includes at least one of the following: portrait information, object information, and action information.
In an embodiment, the apparatus further includes:
a first score determination module, configured to, if all the shooting tasks in the shooting task list are completed, obtain the duration the first user needed to complete all the shooting tasks, and determine the score of the first user according to the duration.
In an embodiment, the apparatus further includes:
a second score determination module, configured to, if the first user completes only part of the shooting tasks within a first set duration, close all unfinished shooting tasks; obtain a second set duration corresponding to each unfinished shooting task, and determine the duration needed by the first user according to the second set durations and the first set duration; and determine the score of the first user according to that duration.
In an embodiment, the apparatus further includes:
a sorting result display module, configured to, for multiple first users entering the virtual room, sort the multiple first users according to their scores and display the sorting result.
In an embodiment, the apparatus further includes:
a creation module, configured to receive a creation instruction triggered by a second user and create the virtual room according to the creation instruction; receive, in the virtual room, a task type selected by the second user and task information input by the second user, wherein the task information is displayed in at least one of the following ways: text, pictures, and videos; and establish the shooting task list according to the task type and the task information.
The above apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure, and has corresponding functional modules and effects for executing the above methods. For technical details not described in detail in this embodiment, reference may be made to the methods provided by all the foregoing embodiments of the present disclosure.
Referring now to FIG. 7, it shows a schematic structural diagram of an electronic device 300 suitable for implementing the embodiments of the present disclosure. The electronic device 300 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (PMP), and vehicle-mounted terminals (such as vehicle navigation terminals), fixed terminals such as digital televisions (TV) and desktop computers, or servers in various forms, such as an independent server or a server cluster. The electronic device 300 shown in FIG. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 300 may include a processing device (such as a central processing unit or a graphics processing unit) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the electronic device 300. The processing device 301, the ROM 302, and the RAM 303 are connected to one another through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices can be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 308 including, for example, a magnetic tape and a hard disk; and a communication device 309. The communication device 309 can allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data. Although FIG. 7 shows the electronic device 300 having various devices, it is not required to implement or possess all of the devices shown; more or fewer devices may alternatively be implemented or provided.
According to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the interaction method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 309, or installed from the storage device 308, or installed from the ROM 302. When the computer program is executed by the processing device 301, the above functions defined in the methods of the embodiments of the present disclosure are performed.
The computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, radio frequency (RF), or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as the Hypertext Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: when it is detected that a first user enters a virtual room, display a shooting task list in the virtual room, wherein the shooting task list includes multiple shooting tasks, each shooting task carries task information, and the task information is associated with a target scene; for each shooting task, acquire an image taken by the first user according to the task information; perform feature extraction on the image to obtain feature information; and compare the feature information with the task information, and if the feature information matches the task information, determine that the shooting task is completed.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a LAN or a WAN, or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a unit does not, in one case, constitute a limitation on the unit itself.
The functions described herein above may be executed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, the embodiments of the present disclosure disclose an interaction method, including:
when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list includes a plurality of shooting tasks, and each shooting task carries task information;
for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task;
performing feature extraction on the image to obtain feature information; and
comparing the feature information with the task information, and if the feature information matches the task information, determining that the shooting task is completed.
In an embodiment, for the shooting task, acquiring the image taken by the first user according to the task information carried in the shooting task includes:
when it is detected that the first user clicks any shooting task in the shooting task list, controlling the current interface to jump to the task interface where that shooting task is located, wherein the task interface is used to display the task information and the shoot button corresponding to that shooting task; and
receiving the first user's trigger instruction for the shoot button, and capturing, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
In an embodiment, performing feature extraction on the image to obtain feature information includes:
inputting the image into a preset neural network model to obtain the feature information corresponding to the image, wherein the feature information includes at least one of the following: portrait information, object information, and action information.
In an embodiment, after the shooting task is completed, the method further includes:
if all the shooting tasks in the shooting task list are completed, obtaining the duration the first user needed to complete all the shooting tasks; and
determining the score of the first user according to the duration.
In an embodiment, after the shooting task is completed, the method further includes:
if the first user completes only part of the shooting tasks within a first set duration, closing all unfinished shooting tasks;
obtaining a second set duration corresponding to each unfinished shooting task, and determining the duration needed by the first user according to the second set durations and the first set duration; and
determining the score of the first user according to that duration.
In an embodiment, after determining the score of the first user according to the duration, the method further includes:
for a plurality of first users entering the virtual room,
sorting the plurality of first users according to their scores; and
displaying the sorting result.
In an embodiment, before it is detected that the first user enters the virtual room, the method further includes:
receiving a creation instruction triggered by a second user, and creating the virtual room according to the creation instruction;
receiving, in the virtual room, a task type selected by the second user and task information input by the second user, wherein the task information is displayed in at least one of the following ways: text, pictures, and videos; and
establishing the shooting task list according to the task type and the task information.

Claims (11)

  1. An interaction method, comprising:
    when it is detected that a first user enters a virtual room, displaying a shooting task list in the virtual room, wherein the shooting task list comprises a plurality of shooting tasks, and each shooting task carries task information;
    for the shooting task, acquiring an image taken by the first user according to the task information carried in the shooting task;
    performing feature extraction on the image to obtain feature information; and
    comparing the feature information with the task information, and in a case where the feature information matches the task information, determining that the shooting task is completed.
  2. The method according to claim 1, wherein, for the shooting task, acquiring the image taken by the first user according to the task information carried in the shooting task comprises:
    when it is detected that the first user clicks one shooting task in the shooting task list, controlling the current interface to jump to the task interface where the one shooting task is located, wherein the task interface is used to display the task information and a shoot button corresponding to the one shooting task; and
    receiving the first user's trigger instruction for the shoot button, and capturing, according to the trigger instruction, the picture the camera is aimed at to obtain the captured image.
  3. The method according to claim 1, wherein performing feature extraction on the image to obtain feature information comprises:
    inputting the image into a preset neural network model to obtain the feature information corresponding to the image, wherein the feature information comprises at least one of the following: portrait information, object information, and action information.
  4. The method according to claim 1, further comprising, after the shooting task is completed:
    in a case where all shooting tasks in the shooting task list are completed, obtaining a duration the first user needed to complete all the shooting tasks; and
    determining a score of the first user according to the duration.
  5. The method according to claim 1, further comprising, after the shooting task is completed:
    in a case where the first user completes part of the shooting tasks within a first set duration, closing all unfinished shooting tasks;
    obtaining a second set duration corresponding to each unfinished shooting task, and determining a duration needed by the first user according to the second set duration and the first set duration; and
    determining a score of the first user according to the duration.
  6. The method according to claim 4 or 5, further comprising, after determining the score of the first user according to the duration:
    for a plurality of first users entering the virtual room,
    sorting the plurality of first users according to the scores; and
    displaying a sorting result.
  7. The method according to claim 1, further comprising, before it is detected that the first user enters the virtual room:
    receiving a creation instruction triggered by a second user, and creating the virtual room according to the creation instruction;
    receiving, in the virtual room, a task type selected by the second user and task information input by the second user, wherein the task information is displayed in at least one of the following ways: text, pictures, and videos; and
    establishing the shooting task list according to the task type and the task information.
  8. An interaction apparatus, comprising:
    a task list display module, configured to display a shooting task list in a virtual room when it is detected that a first user enters the virtual room, wherein the shooting task list comprises a plurality of shooting tasks, and each shooting task carries task information;
    an image acquisition module, configured to, for the shooting task, acquire an image taken by the first user according to the task information carried in the shooting task;
    a feature information acquisition module, configured to perform feature extraction on the image to obtain feature information; and
    an information comparison module, configured to compare the feature information with the task information, and in a case where the feature information matches the task information, determine that the shooting task is completed.
  9. An electronic device, comprising:
    at least one processing device; and
    a storage device, configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processing device, the at least one processing device implements the interaction method according to any one of claims 1 to 7.
  10. A computer-readable medium storing a computer program, wherein, when the program is executed by a processing device, the interaction method according to any one of claims 1 to 7 is implemented.
  11. A computer program product, comprising a computer program carried on a computer-readable medium, wherein the computer program comprises program code for executing the interaction method according to any one of claims 1 to 7.
PCT/CN2022/121211 2021-10-09 2022-09-26 Interaction method, apparatus, device and storage medium WO2023056849A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111176740.7A CN115981452A (zh) 2021-10-09 2021-10-09 Interaction method, apparatus, device and storage medium
CN202111176740.7 2021-10-09

Publications (1)

Publication Number Publication Date
WO2023056849A1 true WO2023056849A1 (zh) 2023-04-13

Family

ID=85803881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121211 WO2023056849A1 (zh) 2021-10-09 2022-09-26 Interaction method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN115981452A (zh)
WO (1) WO2023056849A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389155A (zh) * 2018-09-11 2019-02-26 广东智媒云图科技股份有限公司 Language learning method, electronic device and storage medium
CN109446891A (zh) * 2018-09-11 2019-03-08 广东智媒云图科技股份有限公司 Image-recognition-based language learning method, electronic device and storage medium
CN109636464A (zh) * 2018-12-11 2019-04-16 深圳市房多多网络科技有限公司 AR-technology-based intelligent house-finding method and system
US20190356788A1 (en) * 2018-05-21 2019-11-21 Taeoa Co., Ltd. Apparatus and method of providing photo printing camera application, and photo printing service providing system using shared film
CN111371993A (zh) * 2020-03-13 2020-07-03 腾讯科技(深圳)有限公司 Image capturing method and apparatus, computer device and storage medium
CN112915526A (zh) * 2021-03-19 2021-06-08 北京橘拍科技有限公司 Game simulation method, system and storage medium


Also Published As

Publication number Publication date
CN115981452A (zh) 2023-04-18

Similar Documents

Publication Publication Date Title
US10039988B2 (en) Persistent customized social media environment
US20240169221A1 (en) Actionable suggestions for activities
CN111556278B (zh) Video processing method, video display method, apparatus and storage medium
US20230007058A1 (en) Method, system, and non-transitory computer-readable record medium for displaying reaction during voip-based call
US20190139157A1 (en) Acceleration of social interactions
CN110809175B (zh) Video recommendation method and apparatus
US20220318306A1 (en) Video-based interaction implementation method and apparatus, device and medium
JP2022551660A (ja) Scene interaction method and apparatus, electronic device, and computer program
TW201104644A (en) Interactive information system, interactive information method, and computer readable medium thereof
CN110366023B (zh) Live-streaming interaction method, apparatus, medium and electronic device
EP4096223A1 (en) Live broadcast interaction method and apparatus, electronic device, and storage medium
CN105824799A (zh) Information processing method, device and terminal device
CN112188223B (zh) Live video playing method, apparatus, device and medium
WO2023138425A1 (zh) Virtual resource acquisition method, apparatus, device and storage medium
US10740388B2 (en) Linked capture session for automatic image sharing
CN111126980A (zh) Virtual item sending method, processing method, apparatus, device and medium
US20230005206A1 (en) Method and system for representing avatar following motion of user in virtual space
CN110384929B (zh) Game interaction method, apparatus, medium and electronic device
WO2022184030A1 (zh) Interaction method and apparatus for wearable device
CN110336957A (zh) Video production method, apparatus, medium and electronic device
CN110417728B (zh) Online interaction method, apparatus, medium and electronic device
WO2023056849A1 (zh) Interaction method, apparatus, device and storage medium
CN110384930A (zh) Interactive game group construction method, apparatus, medium and electronic device
KR102184396B1 (ko) System and method for operating relay broadcasting of a world entertainment olympic competition
KR101772436B1 (ko) Online group creation method and program using an offline identification code

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22877868

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18699843

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE