CN111773658B - Game interaction method and device based on computer vision library - Google Patents


Info

Publication number
CN111773658B
CN111773658B (application CN202010630855.8A)
Authority
CN
China
Prior art keywords
interaction
information
target
external environment
game
Prior art date
Legal status
Active
Application number
CN202010630855.8A
Other languages
Chinese (zh)
Other versions
CN111773658A (en)
Inventor
赵博强
杨林
温佩贤
Current Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date: 2020-07-03
Filing date: 2020-07-03
Publication date: 2024-02-23
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN202010630855.8A
Publication of CN111773658A (2020-10-16)
Application granted
Publication of CN111773658B (2024-02-23)
Status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25: Output arrangements for video game devices
    • A63F13/28: Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features of games characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/302: Features of games specially adapted for receiving control signals not targeted to a display device or game input means, e.g. vibrating driver's seat, scent dispenser

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a game interaction method and device based on a computer vision library. The game interaction method based on a computer vision library comprises the following steps: creating a game character, and calling an image acquisition device to acquire an external environment image; analyzing the external environment image through a computer vision library and acquiring interaction information corresponding to the external environment image; searching a preset interaction database, according to the interaction information, for an interaction instruction corresponding to the interaction information; and executing the interaction instruction. In this way, the behavior of the game character is brought closer to the real world, the fun and playability of the game are increased, and user stickiness is improved.

Description

Game interaction method and device based on computer vision library
Technical Field
The present application relates to the field of computer technologies, and in particular, to a game interaction method and apparatus based on a computer vision library, a computing device, and a computer readable storage medium.
Background
With the development of computer technology, augmented reality (AR) technology has also advanced rapidly. Applied to the game field, AR technology can display virtual game characters in a real environment through media such as mobile phones and game consoles. AR games combine games with AR technology in three aspects: location services, image recognition, and data processing; the breakthroughs of AR games in gameplay and form bring players a brand-new gaming experience.
In existing AR games, the behavior of the virtual character is derived entirely from the game itself and cannot involve the real world: the AR game character cannot interact with people, objects, or environments in the actual scene, and cannot perceive and react in real time to changes in the external environment.
Therefore, how to enable an AR game character to interact with the real world and react in real time to real-world scenes has become a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a game interaction method and apparatus based on a computer vision library, a computing device and a computer readable storage medium, so as to solve the technical drawbacks in the prior art.
According to a first aspect of embodiments of the present application, there is provided a game interaction method based on a computer vision library, including:
creating a game character, and calling an image acquisition device to acquire an external environment image;
analyzing the external environment image through a computer vision library and acquiring interaction information corresponding to the external environment image;
searching a preset interaction database, according to the interaction information, for an interaction instruction corresponding to the interaction information;
and executing the interaction instruction.
Optionally, acquiring the interaction information corresponding to the external environment image includes:
determining a target interaction object in the external environment image;
and acquiring the interaction information of the target interaction object.
Optionally, determining the target interaction object in the external environment image includes:
determining at least one interactive object in the external environment image according to a preset interactive object table, wherein the interactive object table stores interactive objects and the priority corresponding to each interactive object;
and determining the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object.
Optionally, determining at least one interactive object in the external environment image according to a preset interactive object table includes:
determining at least one of a user, an article, and a natural environment in the external environment image according to the preset interactive object table.
Optionally, determining the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object includes:
extracting the feature information of each interactive object in the case that at least two interactive objects exist at the same priority;
and searching a preset feature library according to the feature information of each interactive object, and determining the interactive object with the highest association value with the game character as the target interactive object, wherein the preset feature library comprises the feature information of interactive objects and the association value of each interactive object with the game character.
Optionally, the method further comprises:
recording the interaction duration between the target interaction object and the game character;
and updating the association value between the target interaction object and the game character according to the interaction duration.
Optionally, obtaining the interaction information of the target interaction object includes:
extracting interaction characteristics of the target interaction object;
and obtaining the interaction information of the target interaction object according to the interaction characteristics.
Optionally, the target interaction object includes a user;
extracting the interaction characteristics of the target interaction object comprises the following steps:
extracting facial expression features of the user.
Optionally, obtaining the interaction information of the target interaction object according to the interaction characteristics includes:
and acquiring facial expression information of the user by identifying the facial expression characteristics, and taking the facial expression information as interaction information of the user.
According to a second aspect of embodiments of the present application, there is provided a game interaction device based on a computer vision library, comprising:
the creation module is configured to create a game character and call the image acquisition device to acquire an external environment image;
the acquisition module is configured to analyze the external environment image through a computer vision library and acquire interaction information corresponding to the external environment image;
the searching module is configured to search an interaction instruction corresponding to the interaction information in a preset interaction database according to the interaction information;
and the execution module is configured to execute the interaction instruction.
Optionally, the acquisition module is further configured to determine a target interaction object in the external environment image and acquire the interaction information of the target interaction object.
Optionally, the acquisition module is further configured to determine at least one interactive object in the external environment image according to a preset interactive object table, where the interactive object table stores interactive objects and the priority corresponding to each interactive object, and to determine the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object.
Optionally, the acquisition module is further configured to determine at least one of a user, an article, and a natural environment in the external environment image according to the preset interactive object table.
Optionally, the acquisition module is further configured to extract feature information of each interactive object in the case that at least two interactive objects exist at the same priority, and to search a preset feature library according to the feature information of each interactive object and determine the interactive object with the highest association value with the game character as the target interactive object, where the preset feature library comprises the feature information of interactive objects and the association value of each interactive object with the game character.
Optionally, the acquisition module is further configured to record the interaction duration between the target interaction object and the game character, and to update the association value between them according to the interaction duration.
Optionally, the acquisition module is further configured to extract interaction characteristics of the target interaction object, and to obtain the interaction information of the target interaction object according to the interaction characteristics.
Optionally, the target interaction object includes a user;
the acquisition module is further configured to extract facial expression features of the user.
Optionally, the acquisition module is further configured to acquire facial expression information of the user by recognizing the facial expression features, and to take the facial expression information as the interaction information of the user.
According to a third aspect of embodiments of the present application, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor executing the instructions to implement the steps of the computer vision library-based game interaction method.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the computer vision library-based game interaction method.
In the embodiments of the present application, a game character is created, an image acquisition device is called to acquire an external environment image, the external environment image is analyzed through a computer vision library, and the interaction information corresponding to the external environment image is acquired. This is equivalent to installing, for the game character, a pair of eyes that perceive the real world in real time, so that the game character can obtain interaction information from the real world, find the corresponding interaction instruction in a preset interaction database according to that information, and execute the interaction instruction. The game character can thus make corresponding interactive actions according to interaction information from the real world, which brings the behavior of the AR game character closer to the real world, increases the fun and playability of the game, and improves user stickiness.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flow chart of a computer vision library-based game interaction method provided by an embodiment of the present application;
FIG. 3 is a flow chart of a computer vision library-based game interaction method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a game interface provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a game interaction device based on a computer vision library according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is, however, susceptible of embodiment in many other ways than those herein described and similar generalizations can be made by those skilled in the art without departing from the spirit of the application and the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of one or more embodiments of the application. As used in this application in one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of one or more embodiments of the present application, "first" may also be referred to as "second", and similarly, "second" may also be referred to as "first". Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In the present application, a game interaction method and apparatus based on a computer vision library, a computing device and a computer readable storage medium are provided, and are described in detail in the following embodiments.
FIG. 1 illustrates a block diagram of a computing device 100, according to an embodiment of the present application. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. Processor 120 is coupled to memory 110 via bus 130 and database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, or a Near Field Communication (NFC) interface.
In one embodiment of the present application, the above-described components of computing device 100, as well as other components not shown in FIG. 1, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 1 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the computer vision library-based game interaction method shown in fig. 2. FIG. 2 shows a flowchart of a computer vision library based game interaction method, including steps 202 through 208, according to an embodiment of the present application.
Step 202: and creating a game role, and calling the image acquisition device to acquire an external environment image.
A game interface is opened through the terminal, a corresponding game character is created according to the character information selected when entering the game interface, and at the same time the image acquisition device of the terminal is opened to acquire external environment images in real time.
The terminal can be an intelligent terminal with an image acquisition function, such as a mobile phone, a tablet computer, or a notebook computer; the type of terminal is not specifically limited in this application.
A corresponding game character is created according to the character information selected when entering the game interface: if the selected character is a cat, a virtual cat is created; if it is a rabbit, a virtual rabbit is created; if it is a dog, a virtual dog is created.
The game character is placed in a corresponding game scene: for example, a virtual cat in a scene with a sofa, a virtual rabbit on the floor of a home, and a virtual dog in a scene with a kennel.
Surrounding external environment images are acquired through image acquisition devices such as the front camera or the rear camera; for example, if the front camera of the mobile phone is used for shooting, the external environment image captured by the front camera is acquired.
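For illustration only (the patent discloses no code), this capture step might look as follows in Python with OpenCV, the library named in the embodiments below; the device index and error handling are assumptions:

```python
import cv2

# Assumed device index: 0 is commonly the default (often front) camera.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("image acquisition device unavailable")

ok, frame = cap.read()  # one external environment image (BGR ndarray)
if not ok:
    raise RuntimeError("failed to read a frame")
print("captured frame with shape", frame.shape)
cap.release()
```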
Step 204: analyzing the external environment image through a computer vision library and acquiring interaction information corresponding to the external environment image.
A computer vision library provides basic building blocks for image processing and computer vision operations, such as building graphical user interfaces, video analysis, feature extraction, object detection, machine learning, and face recognition. Common computer vision libraries include OpenCV, CCV, and OpenBR; OpenCV is an open-source, cross-platform computer vision library that runs on multiple platforms and is therefore widely used.
The external environment image is analyzed through the computer vision library to acquire the interaction information in the image for interacting with the game character, such as the player's expression, trees, the sky, and the weather.
Optionally, obtaining the interaction information corresponding to the external environment image includes: determining a target interaction object in the external environment image, and acquiring the interaction information of the target interaction object.
In practical application, after the external environment image is analyzed through the computer vision library, the target interaction object that will interact with the game character is determined in the image, and its interaction information is acquired. For example, if the external environment image contains interaction objects such as a house, a tree, the weather, and illumination, the tree is determined as the target interaction object and its interaction information is acquired as a tree; if the image contains interaction objects such as a person, a tree, and the weather, the person is determined as the target interaction object, the person's facial expression is recognized as happy, and the interaction information is a happy expression.
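As an illustrative sketch of this detection step (the patent prescribes no specific detector), face detection with OpenCV's bundled Haar cascade can stand in for the person branch; detectors for trees, sky, or weather would be separate models and are only indicated by a comment:

```python
import cv2

# OpenCV ships this Haar cascade with the library; it stands in for the
# person-recognition branch only.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_interaction_objects(frame):
    """Return candidate interaction objects found in the external environment image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    objects = [("person", tuple(box)) for box in faces]
    # objects += tree/sky/weather detections from other models (unspecified)
    return objects
```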
Step 206: searching an interaction instruction corresponding to the interaction information in a preset interaction database according to the interaction information.
The interaction instructions corresponding to the interaction information are stored in a preset interaction database, and the interaction instructions are closely tied to the game characters. For example, when the interaction information is a happy expression of the player, the corresponding interaction instruction for a virtual dog is to stick out its tongue and wag its tail, while for a virtual cat it is to gently swish its tail; when the interaction information is a tree, the corresponding instruction for a virtual dog is to jump left and right, while for a virtual cat it is to gaze at the tree.
The preset interaction database also stores in advance the attributes of each game character: for example, a virtual rabbit has a timid personality and a low threshold for interacting with the player; a virtual dog has an enthusiastic personality and a high threshold for interacting with the player; a virtual cat has an aloof personality; and so on. This is only briefly described here; the specifics depend on the practical application.
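A minimal sketch of such a preset interaction database, assuming an in-memory mapping keyed by game character and interaction information; the key names and action strings are illustrative, not the patent's schema:

```python
# Hypothetical stand-in for the preset interaction database:
# (game character, interaction information) -> interaction instruction.
INTERACTION_DB = {
    ("virtual_dog", "happy_expression"): ["stick_out_tongue", "wag_tail"],
    ("virtual_cat", "happy_expression"): ["swish_tail_gently"],
    ("virtual_dog", "tree"): ["jump_left_right"],
    ("virtual_cat", "tree"): ["gaze_at_tree"],
}

def find_interaction_instruction(character: str, interaction_info: str) -> list:
    """Search the preset interaction database for the matching instruction."""
    return INTERACTION_DB.get((character, interaction_info), [])

print(find_interaction_instruction("virtual_dog", "happy_expression"))
# -> ['stick_out_tongue', 'wag_tail']
```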
Step 208: and executing the interaction instruction.
After the interaction instruction is acquired, it is executed, and the game character is controlled to perform the action specified in the instruction. If the game character is a virtual dog and the instruction is to stick out its tongue and wag its tail, the virtual dog is controlled to do so; if the character is a virtual rabbit and the instruction is to dodge, the virtual rabbit is controlled to dodge; if the character is a virtual cat and the instruction is to turn around, the virtual cat is controlled to turn around.
According to the game interaction method based on a computer vision library, while the game character is created, the image acquisition device of the terminal is called to collect an external environment image; the external environment image is analyzed through the computer vision library to obtain the corresponding interaction information, and the interaction instruction corresponding to that information is searched in the interaction database, so that the game character can interact with the real world. Analyzing the image from the terminal's image acquisition device through the computer vision library to obtain interaction information in effect installs a pair of eyes for the game character, so that it can react in real time according to the interaction information and a rich preset interaction database. This brings the game character closer to the real world, improves the playability and fun of the game, and strengthens user stickiness.
FIG. 3 illustrates a computer vision library-based game interaction method according to an embodiment of the present application, which is described by taking interaction of a virtual dog with a player as an example, and includes steps 302 to 312.
Step 302: and creating a game role, and calling the image acquisition device to acquire an external environment image.
Step 302 corresponds to the method of step 202 described above, and for the specific explanation of step 302, reference is made to the details of step 202 in the foregoing embodiment, which will not be repeated here.
In the embodiment provided by the application, the game interface is opened through the mobile phone, a virtual dog is selected as the character before entering the game, the game character of the virtual dog is created, and the front camera of the mobile phone is called to collect the external environment image.
Step 304: analyzing the external environment image through a computer vision library and determining at least one interactive object in the external environment image according to a preset interactive object table.
The external environment image is analyzed through OpenCV, and at least one interactive object in the image is determined according to a preset interactive object table. The preset interactive object table stores interactive objects and the priority corresponding to each interactive object; by analyzing the external environment image, objects pre-stored in the table are recognized as candidate interactive objects.
Optionally, at least one of a user, an object, and a natural environment in the external environment image is determined according to the preset interactive object table.
In the embodiment provided by the application, the interactive objects in the preset interactive object table are person, tree, sky, and weather, with priorities arranged from high to low in that order; that is, the person has the highest priority and the weather the lowest. In the picture captured by the front camera, after OpenCV analysis, the user, tall trees, the sky, and sunny weather can be determined; the interactive objects at this moment are therefore the person, the trees, the sky, and the sunny day.
In the embodiment provided by the application, when the player moves the mobile phone so that the front camera points at the sky, OpenCV analysis determines the sky and sunny weather; the interactive objects at this moment are the sky and the sunny day.
Step 306: and determining the interactive object with high priority as a target interactive object according to the priority corresponding to each interactive object.
The preset interactive object table also comprises the priority of each interactive object, and in practical application, according to the priority of each interactive object, the interactive object with the highest priority is selected from the current interactive objects as the target interactive object.
In the embodiment provided by the application, when the interactive objects are a person, a tree, the sky, and a sunny day, the preset interactive object table shows that the person has the highest priority, so the person is taken as the target interactive object.
In the embodiment provided by the application, when the interactive objects are the sky and a sunny day, the sky has the highest priority, so the sky is taken as the target interactive object.
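For illustration, assuming the interactive object table is held as a simple name-to-priority mapping (the names and numeric priorities below are assumptions, not the patent's data), the selection step could look like this:

```python
# Illustrative preset interactive object table; larger number = higher priority.
INTERACTIVE_OBJECT_TABLE = {"person": 4, "tree": 3, "sky": 2, "weather": 1}

def select_target_interaction_object(detected):
    """Pick the detected interactive object with the highest priority."""
    candidates = [o for o in detected if o in INTERACTIVE_OBJECT_TABLE]
    if not candidates:
        return None
    return max(candidates, key=INTERACTIVE_OBJECT_TABLE.__getitem__)

print(select_target_interaction_object(["sky", "tree", "person"]))  # -> person
print(select_target_interaction_object(["sky", "weather"]))         # -> sky
```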
Optionally, when at least two interactive objects exist at the same priority, the feature information of each interactive object is extracted, a preset feature library is searched according to the feature information of each interactive object, and the interactive object with the highest association value with the game character is determined as the target interactive object, where the preset feature library comprises the feature information of interactive objects and the association value of each interactive object with the game character.
In practical application, at least two interactive objects may exist at the same priority, for example two people among the interactive objects. In this case, the feature information of each interactive object at that priority is extracted and searched in the preset feature library, the association value of each object with the game character is determined, and the object with the highest association value is selected as the target interactive object; if the association values are equal, one of them is selected as the target interactive object.
In the embodiment provided by the application, after the external environment image is analyzed, two people with the highest priority are recognized as interactive objects, namely Zhang San and Li Si. The facial features of each person are extracted and compared against the features in the feature library; the association value of Zhang San with the virtual dog is determined to be 90 and that of Li Si to be 10, so Zhang San is determined to be the target interaction object.
In the embodiment provided by the application, when the user rotates the mobile phone and shoots a forest, analysis of the external environment image recognizes the tree as the highest-priority interactive object. The features of these trees are not recorded in the feature library, so every tree is treated as having the same association value with the virtual dog, and one tree is randomly selected as the target interactive object.
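A sketch of this same-priority tie-break, under the assumption that the preset feature library maps a feature key to an association value; the feature matching itself is simplified to a dictionary lookup, and unknown objects score equally, which reproduces the arbitrary fallback described above:

```python
# Hypothetical preset feature library: feature key -> association value
# between the interactive object and the game character.
FEATURE_LIBRARY = {"zhang_san_face": 90, "li_si_face": 10}

def break_priority_tie(objects_with_features):
    """Among same-priority objects, return the one whose features have the
    highest association value; unknown features score 0, so equally unknown
    objects fall back to an arbitrary (first) choice."""
    scored = {
        name: FEATURE_LIBRARY.get(key, 0)
        for name, key in objects_with_features.items()
    }
    return max(scored, key=scored.get)

print(break_priority_tie({"Zhang San": "zhang_san_face",
                          "Li Si": "li_si_face"}))  # -> Zhang San
```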
In practical application, the interaction duration between the target interaction object and the game character can also be recorded, and the association value between them updated according to that duration. For example, in a pet-raising game, everyone is a stranger to a newly arrived pet; as the user interacts with the pet over time, the intimacy value between the pet and the player increases, that is, the pet comes to regard the player as its owner, and when people other than the user appear, the pet still regards its owner as the target interaction object.
In the embodiment provided by the application, when the target interaction object is Zhang San, the interaction duration between Zhang San and the virtual dog is recorded and converted into an association value according to a preset rule, and the association value between Zhang San and the virtual dog stored in the database is updated accordingly.
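A sketch of the update step; the conversion rule used here (one association point per minute of interaction, capped at 100) is purely an assumption, since the text only refers to a preset rule:

```python
ASSOCIATION = {"zhang_san_face": 90}  # persisted association values

def update_association(feature_key: str, interaction_seconds: float) -> int:
    """Convert recorded interaction duration into association value gain."""
    gained = int(interaction_seconds // 60)  # assumed rule: 1 point per minute
    ASSOCIATION[feature_key] = min(100, ASSOCIATION.get(feature_key, 0) + gained)
    return ASSOCIATION[feature_key]

print(update_association("zhang_san_face", 300))  # 5 minutes -> 95
```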
Step 308: and extracting the interaction characteristics of the target interaction object, and obtaining the interaction information of the target interaction object according to the interaction characteristics.
The interaction characteristics of the target interaction object are extracted. Taking a tree as an example, the interaction characteristic is the tree itself; taking the weather as an example, the interaction characteristics are sunny, rainy, snowy, windy, and so on.
In the embodiment provided by the application, when the user shoots the sky through the mobile phone camera, the sky serves as the target interaction object, and the interaction information can be the daytime sky.
Optionally, when the target interaction object is a user, extracting facial expression features of the user, acquiring facial expression information of the user by identifying the facial expression features, and taking the facial expression information as interaction information of the user.
When the target interaction object is a person, i.e., the user, the user's facial expression features are extracted, recognized, and analyzed by the expression analysis system to obtain the corresponding facial expression information, such as happy, sad, or angry, and the facial expression information is used as the interaction information.
In the embodiment provided by the application, when the target interaction object is the user, the user's facial expression is recognized as happy, and this expression is taken as the user's interaction information.
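A sketch of the user branch of this step: face detection uses OpenCV's bundled cascade, while the expression analysis system is represented by a placeholder function, since OpenCV ships no expression classifier and the patent does not name one:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_expression(face_roi) -> str:
    """Placeholder for the expression analysis system; a trained
    expression model would be plugged in here."""
    return "happy"  # fixed value for illustration only

def extract_user_interaction_info(frame):
    """Detect the user's face and return the recognized facial expression
    as the interaction information, or None when no face is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return classify_expression(gray[y:y + h, x:x + w])
```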
Step 310: searching an interaction instruction corresponding to the interaction information in a preset interaction database according to the interaction information.
Step 312: and executing the interaction instruction.
Steps 310 to 312 correspond to the methods of steps 206 to 208 described above, and the detailed explanation of steps 310 to 312 is referred to the details of steps 206 to 208 in the foregoing embodiments, and will not be repeated here.
In the embodiment provided by the application, when the target interaction object is the sky, the interaction information can be the daytime sky. The interaction instruction corresponding to the daytime sky is searched in the preset interaction database, namely: the virtual dog lies on the ground with its limbs toward the sky. The interaction instruction is executed to control the virtual dog to make the corresponding action.
In the embodiment provided by the application, when the target interaction object is the user and the interaction information is a happy facial expression, the interaction instruction corresponding to a happy expression is found for the virtual dog in the preset interaction database, namely: stick out the tongue, wag the tail, and jump left and right. The interaction instruction is executed to control the virtual dog to make the corresponding actions.
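Combining the lookup of step 310 with the execution of step 312, a self-contained sketch for the virtual dog follows; the database contents and action names repeat the illustrative assumptions used earlier, and print() stands in for the unspecified animation layer:

```python
INTERACTION_DB = {
    ("virtual_dog", "happy_expression"): ["stick_out_tongue", "wag_tail",
                                          "jump_left_right"],
    ("virtual_dog", "daytime_sky"): ["lie_on_ground_limbs_to_sky"],
}

def run_interaction(character: str, interaction_info: str) -> None:
    """Step 310: look up the instruction; step 312: execute each action."""
    for action in INTERACTION_DB.get((character, interaction_info), []):
        print(f"{character} performs: {action}")  # animation call in a real client

run_interaction("virtual_dog", "happy_expression")
run_interaction("virtual_dog", "daytime_sky")
```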
According to the game interaction method based on a computer vision library, while the game character is created, the image acquisition device of the terminal is called to collect an external environment image; the external environment image is analyzed through the computer vision library to obtain the corresponding interaction information, and the interaction instruction corresponding to that information is searched in the interaction database, so that the game character can interact with the real world. Analyzing the image from the terminal's image acquisition device through the computer vision library to obtain interaction information in effect installs a pair of eyes for the game character, so that it can react in real time according to the interaction information and a rich preset interaction database. This brings the game character closer to the real world, improves the playability and fun of the game, and strengthens user stickiness.
Secondly, by recording the interaction duration between the target interaction object and the game character, the association value between them can be increased, making the game character fit the real world more closely, so that the player's sense of participation is stronger and the game experience is better.
The game interaction method based on a computer vision library is further explained below with reference to FIG. 4, which shows a schematic diagram of a game interface according to an embodiment of the present application. The user opens the game interface through the mobile phone to enter the game and selects a pet dog as the game character, so a pet dog is created according to the selection. At the same time, the front camera of the mobile phone is opened; it captures the user's head, which is taken as the target interaction object. Facial expression features of the user are extracted, the expression analyzer determines that the user's current expression is happy, and the pet dog's interaction instruction is found in the preset interaction database according to the happy-expression interaction information: stick out the tongue and wag the tail. The interaction instruction is executed, and the pet dog is controlled to stick out its tongue and wag its tail.
When the user's facial expression then turns to panic, the facial expression features are extracted again; after analysis by the expression analyzer, the obtained interaction information is a panicked expression, and the pet dog's interaction instruction is found in the preset interaction database according to this interaction information: jump up quickly and bark three times. The interaction is executed, and the pet dog is controlled to jump and bark three times, "woof, woof, woof".
Then the user points the camera at the sky; the target interaction object is the sky, and the weather at this moment is clear. With the clear sky as the interaction information, the interaction instruction found in the interaction database is: lie on the ground with limbs tucked in, basking in the sun. The interaction instruction is executed, and the pet dog is controlled to lie on the ground, tuck in its limbs, and take a sunbathing posture.
In the embodiments of the present application, a game character is created, an image acquisition device is called to acquire an external environment image, the external environment image is analyzed through a computer vision library, and the interaction information corresponding to the external environment image is acquired. This is equivalent to installing, for the game character, a pair of eyes that perceive the real world in real time, so that the game character can obtain interaction information from the real world, find the corresponding interaction instruction in a preset interaction database according to that information, and execute the interaction instruction. The game character can thus make corresponding interactive actions according to interaction information from the real world, which brings the behavior of the AR game character closer to the real world, increases the fun and playability of the game, and improves user stickiness.
Corresponding to the above method embodiments, the present application further provides an embodiment of a game interaction device based on a computer vision library, and fig. 5 shows a schematic structural diagram of the game interaction device based on the computer vision library according to one embodiment of the present application. As shown in fig. 5, the apparatus includes:
a creation module 502 configured to create a game character and call an image collection device to collect an external environment image;
an obtaining module 504, configured to parse the external environment image through a computer vision library and obtain interaction information corresponding to the external environment image;
the searching module 506 is configured to search an interaction instruction corresponding to the interaction information in a preset interaction database according to the interaction information;
an execution module 508 configured to execute the interaction instructions.
Optionally, the obtaining module 504 is further configured to determine a target interaction object in the external environment image and acquire the interaction information of the target interaction object.
Optionally, the obtaining module 504 is further configured to determine at least one interactive object in the external environment image according to a preset interactive object table, where the interactive object table stores interactive objects and the priority corresponding to each interactive object, and to determine the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object.
Optionally, the obtaining module 504 is further configured to determine at least one of a user, an article, and a natural environment in the external environment image according to the preset interactive object table.
Optionally, the obtaining module 504 is further configured to extract feature information of each interactive object in the case that at least two interactive objects exist at the same priority, and to search a preset feature library according to the feature information of each interactive object and determine the interactive object with the highest association value with the game character as the target interactive object, where the preset feature library comprises the feature information of interactive objects and the association value of each interactive object with the game character.
Optionally, the obtaining module 504 is further configured to record the interaction duration between the target interaction object and the game character, and to update the association value between them according to the interaction duration.
Optionally, the obtaining module 504 is further configured to extract interaction characteristics of the target interaction object, and to obtain the interaction information of the target interaction object according to the interaction characteristics.
Optionally, the target interaction object includes a user;
the obtaining module 504 is further configured to extract facial expression features of the user.
Optionally, the obtaining module 504 is further configured to acquire facial expression information of the user by recognizing the facial expression features, and to take the facial expression information as the interaction information of the user.
The above is an exemplary scheme of the game interaction device based on a computer vision library in this embodiment. It should be noted that the technical solution of the game interaction device based on a computer vision library and the technical solution of the game interaction method based on a computer vision library belong to the same concept; for details of the device solution not described here, refer to the description of the method solution.
According to the game interaction device based on a computer vision library, while the game character is created, the image acquisition device of the terminal is called to collect an external environment image; the external environment image is analyzed through the computer vision library to obtain the corresponding interaction information, and the interaction instruction corresponding to that information is searched in the interaction database, so that the game character can interact with the real world. Analyzing the image from the terminal's image acquisition device through the computer vision library to obtain interaction information in effect installs a pair of eyes for the game character, so that it can react in real time according to the interaction information and a rich preset interaction database. This brings the game character closer to the real world, improves the playability and fun of the game, and strengthens user stickiness.
Secondly, by recording the interaction duration between the target interaction object and the game character, the association value between them can be increased, making the game character fit the real world more closely, so that the player's sense of participation is stronger and the game experience is better.
An embodiment of the present application further provides a computing device, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, where the processor executes the instructions to implement the steps of the computer vision library-based game interaction method.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of a computer vision library-based game interaction method as described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the game interaction method based on the computer vision library belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the game interaction method based on the computer vision library.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be adjusted as required by legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all necessary for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of this application. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (12)

1. A computer vision library-based game interaction method, comprising:
creating a game character, and calling an image acquisition device to acquire an external environment image;
analyzing the external environment image through a computer vision library and acquiring interaction information corresponding to the external environment image, which specifically comprises: analyzing the external environment image to determine at least one interaction object, taking the interaction object with the highest priority as a target interaction object, and acquiring the interaction information of the target interaction object, wherein, in the case that at least two interaction objects exist at the same priority, the interaction object with the highest association value is selected as the target interaction object;
searching a preset interaction database, according to the interaction information, for an interaction instruction corresponding to the interaction information, wherein the interaction information comprises at least one of tree, sky and weather information;
and executing the interaction instruction.
2. The computer vision library-based game interaction method of claim 1, wherein obtaining interaction information corresponding to the external environment image comprises:
determining a target interaction object in the external environment image;
and acquiring the interaction information of the target interaction object.
3. The computer vision library-based game interaction method of claim 2, wherein determining a target interaction object in the external environment image comprises:
determining at least one interactive object in the external environment image according to a preset interactive object table, wherein the interactive object table stores interactive objects and the priority corresponding to each interactive object;
and determining the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object.
4. The computer vision library-based game interaction method of claim 3, wherein determining at least one interaction object in the external environment image according to a preset interaction object table comprises:
and determining at least one of users, articles and natural environments in the external environment image according to a preset interactive object table.
5. The computer vision library-based game interaction method of claim 3, wherein determining the interactive object with the highest priority as the target interactive object according to the priority corresponding to each interactive object comprises:
extracting feature information of each interactive object in the case that at least two interactive objects exist at the same priority;
and searching a preset feature library according to the feature information of each interactive object, and determining the interactive object with the highest association value with the game character as the target interactive object, wherein the preset feature library comprises the feature information of interactive objects and the association value of each interactive object with the game character.
6. The computer vision library-based game interaction method of claim 5, further comprising:
recording the interaction duration between the target interaction object and the game character;
and updating the association value between the target interaction object and the game character according to the interaction duration.
7. The computer vision library-based game interaction method of claim 2, wherein obtaining interaction information of the target interaction object comprises:
extracting interaction characteristics of the target interaction object;
and obtaining the interaction information of the target interaction object according to the interaction characteristics.
8. The computer vision library-based game interaction method of claim 7, wherein the target interaction object comprises a user;
extracting the interaction characteristics of the target interaction object comprises the following steps:
extracting facial expression features of the user.
9. The computer vision library-based game interaction method of claim 8, wherein obtaining interaction information of the target interaction object according to the interaction characteristics comprises:
and acquiring facial expression information of the user by identifying the facial expression characteristics, and taking the facial expression information as interaction information of the user.
10. A computer vision library-based game interaction device, comprising:
the creation module is configured to create a game character and call the image acquisition device to acquire an external environment image;
the acquisition module is configured to analyze the external environment image through a computer vision library and acquire interaction information corresponding to the external environment image, which specifically comprises: analyzing the external environment image to determine at least one interaction object, taking the interaction object with the highest priority as the target interaction object, and acquiring the interaction information of the target interaction object, wherein, in the case that at least two interaction objects exist at the same priority, the interaction object with the highest association value is selected as the target interaction object;
the searching module is configured to search a preset interaction database, according to the interaction information, for an interaction instruction corresponding to the interaction information, wherein the interaction information comprises at least one of tree, sky and weather information;
and the execution module is configured to execute the interaction instruction.
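
[Editor's note] To make the claim-10 structure concrete, here is a minimal Python sketch of the four modules as one device object. The class name, method names, and collaborator interfaces (camera, vision library wrapper, interaction database) are hypothetical; only the module responsibilities come from the claim.

    class GameInteractionDevice:
        """Creation, acquisition, searching and execution modules in one object."""

        def __init__(self, camera, vision_library, interaction_database):
            self.camera = camera                    # image acquisition device
            self.vision_library = vision_library    # computer vision library wrapper
            self.interaction_database = interaction_database  # info -> instruction

        def step(self, game_character):
            # Creation module: the character already exists; capture the environment.
            image = self.camera.capture()
            # Acquisition module: analyze the image and derive interaction information.
            interaction_info = self.vision_library.analyze(image)
            # Searching module: look up the matching interaction instruction.
            instruction = self.interaction_database.get(interaction_info)
            # Execution module: run the instruction against the game character.
            if instruction is not None:
                instruction(game_character)
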
11. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the method of any one of claims 1 to 9.
12. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 9.
CN202010630855.8A 2020-07-03 2020-07-03 Game interaction method and device based on computer vision library Active CN111773658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010630855.8A CN111773658B (en) 2020-07-03 2020-07-03 Game interaction method and device based on computer vision library

Publications (2)

Publication Number Publication Date
CN111773658A (en) 2020-10-16
CN111773658B (en) 2024-02-23

Family

ID=72758322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010630855.8A Active CN111773658B (en) 2020-07-03 2020-07-03 Game interaction method and device based on computer vision library

Country Status (1)

Country Link
CN (1) CN111773658B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112587934B (en) * 2020-12-25 2022-09-02 Zhuhai Kingsoft Digital Network Technology Co., Ltd. Information processing method and device
CN113332726B (en) * 2021-06-11 2024-07-02 NetEase (Hangzhou) Network Co., Ltd. Virtual character processing method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070265089A1 (en) * 2002-05-13 2007-11-15 Consolidated Global Fun Unlimited Simulated phenomena interaction game
US8400548B2 (en) * 2010-01-05 2013-03-19 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US9678617B2 (en) * 2013-01-14 2017-06-13 Patrick Soon-Shiong Shared real-time content editing activated by an image
US10101803B2 (en) * 2015-08-26 2018-10-16 Google Llc Dynamic switching and merging of head, gesture and touch input in virtual reality

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007027738A2 (en) * 2005-08-29 2007-03-08 Evryx Technologies, Inc. Interactivity via mobile image recognition
CN102999160A (en) * 2011-10-14 2013-03-27 Microsoft Corporation User controlled real object disappearance in a mixed reality display
CN107251103A (en) * 2014-12-23 2017-10-13 M. D. Fuchs Augmented reality system and its operating method
WO2017069396A1 (en) * 2015-10-23 2017-04-27 오철환 Data processing method for reactive augmented reality card game and reactive augmented reality card game play device, by checking collision between virtual objects
WO2018032970A1 (en) * 2016-08-19 2018-02-22 Tencent Technology (Shenzhen) Co., Ltd. Authentication method based on virtual reality scene, virtual reality device, and storage medium
CN108391445A (en) * 2016-12-24 2018-08-10 Huawei Technologies Co., Ltd. Virtual reality display method and terminal
KR101740213B1 (en) * 2017-01-09 2017-05-26 오철환 Device for playing responsive augmented reality card game
CN106850774A (en) * 2017-01-13 2017-06-13 Inventec Appliances (Nanjing) Co., Ltd. Environmental interaction system and method for a virtual reality terminal
WO2018140397A1 (en) * 2017-01-25 2018-08-02 Furment Odile Aimee System for interactive image based game
WO2018195475A1 (en) * 2017-04-20 2018-10-25 Saysearch, Inc. Communication sessions between computing devices using dynamically customizable interaction environments
US10338695B1 (en) * 2017-07-26 2019-07-02 Ming Chuan University Augmented reality edugaming interaction method
CN207380667U (en) * 2017-08-10 2018-05-18 Suzhou Shenmigu Digital Technology Co., Ltd. Augmented reality interactive system based on radar eye
CN107509030A (en) * 2017-08-14 2017-12-22 Vivo Mobile Communication Co., Ltd. Focusing method and mobile terminal
CN111201069A (en) * 2017-09-29 2020-05-26 Sony Interactive Entertainment America LLC Spectator view of an interactive game world presented in a live event held in a real-world venue
CN107888823A (en) * 2017-10-30 2018-04-06 Vivo Mobile Communication Co., Ltd. Shooting processing method, apparatus and system
CN108057246A (en) * 2017-11-08 2018-05-22 Jiangsu Mingtong Information Technology Co., Ltd. Mobile game augmented reality method based on deep neural network learning
CN108022301A (en) * 2017-11-23 2018-05-11 Tencent Technology (Shanghai) Co., Ltd. Image processing method, device and storage medium
CN108089704A (en) * 2017-12-15 2018-05-29 Goertek Technology Co., Ltd. VR device and experience control method, system, apparatus and storage medium therefor
CN108525305A (en) * 2018-03-26 2018-09-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method, device, storage medium and electronic equipment
CN108595005A (en) * 2018-04-20 2018-09-28 Shenzhen Tiangui Nianhua Culture Technology Co., Ltd. Interaction method and device based on augmented reality, and computer-readable storage medium
CN108995590A (en) * 2018-07-26 2018-12-14 Guangzhou Xiaopeng Motors Technology Co., Ltd. Human-vehicle interaction method, system and device
WO2020021319A1 (en) * 2018-07-27 2020-01-30 Yogesh Chunilal Rathod Augmented reality scanning of real world object or enter into geofence to display virtual objects and displaying real world activities in virtual world having corresponding real world geography
CN109240576A (en) * 2018-09-03 2019-01-18 NetEase (Hangzhou) Network Co., Ltd. In-game image processing method and device, electronic equipment and storage medium
CN109876450A (en) * 2018-12-14 2019-06-14 Shenzhen OneConnect Smart Technology Co., Ltd. AR-game-based implementation method, server, computer equipment and storage medium
CN109806584A (en) * 2019-01-24 2019-05-28 NetEase (Hangzhou) Network Co., Ltd. Game scene generation method and device, electronic equipment and storage medium
CN110298925A (en) * 2019-07-04 2019-10-01 Zhuhai Kingsoft Online Game Technology Co., Ltd. Augmented reality image processing method and device, computing equipment and storage medium
CN110362209A (en) * 2019-07-23 2019-10-22 Liaoning Sunflower Education Technology Co., Ltd. MR (mixed reality) intelligent perception interactive system
CN110354495A (en) * 2019-08-13 2019-10-22 NetEase (Hangzhou) Network Co., Ltd. Target object determination method, apparatus and electronic equipment
CN110750161A (en) * 2019-10-25 2020-02-04 Zheng Zilong Interactive system, method, mobile device and computer readable medium
CN110917619A (en) * 2019-11-18 2020-03-27 Tencent Technology (Shenzhen) Co., Ltd. Interactive property control method, device, terminal and storage medium
CN111282272A (en) * 2020-02-05 2020-06-16 Tencent Technology (Shenzhen) Co., Ltd. Information processing method, computer readable medium and electronic device
CN111263118A (en) * 2020-02-18 2020-06-09 Zhejiang Dahua Technology Co., Ltd. Image acquisition method and device, storage medium and electronic device
CN111309276A (en) * 2020-02-25 2020-06-19 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Information display method and related product

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ethical Challenges and Opportunities of Augmented Reality Technology; Su Lingyin; Journal of Changsha University of Science and Technology (Social Science Edition), No. 01; full text *
Interactivity in Games; Tao Ye; Wang Futing; Art and Design (Theory), No. 08; full text *

Also Published As

Publication number Publication date
CN111773658A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN110166827B (en) Video clip determination method and device, storage medium and electronic device
CN109173263B (en) Image data processing method and device
CN110364146B (en) Speech recognition method, speech recognition device, speech recognition apparatus, and storage medium
CN106897372B (en) Voice query method and device
CN105451029B (en) A kind of processing method and processing device of video image
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN111773658B (en) Game interaction method and device based on computer vision library
CN108681390B (en) Information interaction method and device, storage medium and electronic device
US20230044146A1 (en) Video processing method, video searching method, terminal device, and computer-readable storage medium
CN111643900B (en) Display screen control method and device, electronic equipment and storage medium
CN112426724B (en) Matching method and device for game users, electronic equipment and storage medium
WO2022116604A1 (en) Image captured image processing method and electronic device
CN112102157B (en) Video face changing method, electronic device and computer readable storage medium
CN113254683B (en) Data processing method and device, and tag identification method and device
CN113627402B (en) Image identification method and related device
CN110062163B (en) Multimedia data processing method and device
CN111432206A (en) Video definition processing method and device based on artificial intelligence and electronic equipment
CN115713715A (en) Human behavior recognition method and system based on deep learning
CN111586466A (en) Video data processing method and device and storage medium
CN112190921A (en) Game interaction method and device
CN114064974A (en) Information processing method, information processing apparatus, electronic device, storage medium, and program product
CN107844765A (en) Photographic method, device, terminal and storage medium
CN111046209B (en) Image clustering retrieval system
CN107704471A (en) A kind of information processing method and device and file call method and device
CN111798367B (en) Image processing method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Kingsoft Digital Network Technology Co., Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co., Ltd.

GR01 Patent grant