CN109063662B - Data processing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN109063662B
Application number: CN201810904405.6A
Authority: CN (China)
Original language: Chinese (zh)
Other versions: CN109063662A
Prior art keywords: game, elements, strategy, game strategy, image
Inventor: 张云 (Zhang Yun)
Original and current assignee: Tencent Technology Chengdu Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed; the priority date is likewise an assumption)
Application filed by Tencent Technology Chengdu Co Ltd; priority to CN201810904405.6A; published as CN109063662A, granted and published as CN109063662B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/41: Analysis of document content
    • G06V30/413: Classification of content, e.g. text, photographs or tables
    • G06V30/10: Character recognition


Abstract

The application discloses a data processing method, a data processing apparatus, a terminal, a server and a storage medium, intended to reduce the manual operations required of a user. The method comprises the following steps: acquiring a game image sent by a client; identifying game elements in the game image; generating a game strategy description sentence from the identified game elements, where the description keywords in the game strategy description sentence comprise some or all of the identified game elements; retrieving a matching game strategy according to the generated game strategy description sentence; and returning the game strategy to the client. In the embodiments of the application, game elements can be identified from the acquired game image, and a game strategy description sentence generated from the identified elements is used for retrieval. In this process the user does not need to enter keywords manually, which simplifies the user's operation.

Description

Data processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
Background
A user playing a game will often search for game-related information, for example for a game strategy. The common search mode at present is: the user manually enters keywords into a search engine, and the search engine searches according to the manually entered keywords and displays the search results.
This search mode requires the user to think of and enter the keywords, which is inconvenient to operate.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method, apparatus, device, and storage medium to reduce manual operations by a user.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a data processing method is applied to a server side and comprises the following steps:
acquiring a game image sent by a client;
identifying a game element in the game image;
generating a game strategy description sentence by using the identified game elements; the description keywords in the game strategy description sentence comprise part or all of the identified game elements;
retrieving according to the generated game strategy description sentence to obtain a matched game strategy;
and returning the game strategy to the client.
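The server-side steps above can be sketched as a minimal pipeline in Python. Every function body below is a hypothetical stand-in (the patent does not specify the recognition or retrieval implementation), and the element names and URL are invented for illustration:

```python
# Illustrative sketch of the claimed server-side pipeline.
# Every function body is a hypothetical stand-in, not the patented implementation.

def identify_game_elements(game_image):
    # In practice: image recognition and/or OCR over the received image.
    return ["Wilson", "yak", "autumn"]

def generate_strategy_description(elements):
    # The description keywords comprise part or all of the identified elements.
    return " ".join(elements) + " strategy"

def retrieve_strategy(description_sentence):
    # In practice: search an internal strategy library and/or an external engine.
    return [{"title": description_sentence, "url": "http://example.com/guide"}]

def handle_game_image(game_image):
    elements = identify_game_elements(game_image)
    sentence = generate_strategy_description(elements)
    return retrieve_strategy(sentence)  # returned to the client

print(handle_game_image(b"fake-image-bytes"))
```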
A data processing method is applied to a client and comprises the following steps:
acquiring a game image and sending the game image to a server side;
receiving the game strategy which is returned by the server and is associated with the game image, and displaying the game strategy; the game strategy is a retrieval result obtained by the server side retrieving according to a game strategy description sentence, and the description keywords in the game strategy description sentence comprise game elements in the game image; the game element is obtained by the server side through recognition of the game image.
A data processing apparatus comprising:
the first acquisition unit is used for acquiring a game image and sending the game image to a server side;
the display unit is used for receiving the game strategy which is returned by the server and is associated with the game image and displaying the game strategy; the game strategy is a retrieval result obtained by the server side retrieving according to a game strategy description sentence, and the description keywords in the game strategy description sentence comprise game elements in the game image; the game element is obtained by the server side through recognition of the game image.
A data processing apparatus comprising:
a second acquisition unit configured to acquire a game image transmitted by the client;
a data processing unit for:
identifying a game element in the game image;
generating a game strategy description sentence by using the identified game elements; the description keywords in the game strategy description sentence comprise part or all of the identified game elements;
retrieving according to the generated game strategy description sentence to obtain a matched game strategy;
and returning the game strategy to the client.
A server comprising at least a processor and a memory; the processor implements the above data processing method by executing the program stored in the memory and calling other components.
A terminal comprising at least a processor and a memory; the processor implements the above data processing method by executing the program stored in the memory and calling other components.
An embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to perform any of the steps of the data processing method provided in the embodiments of the present invention.
As can be seen, in the embodiment of the present application, a game element may be identified based on an acquired game image, and a game strategy description sentence may be generated for retrieval using the identified game element. In the process, the user does not need to input the keywords manually, so that the user operation is simplified.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
FIG. 2a is a block diagram illustrating an exemplary data processing apparatus according to an embodiment of the present invention;
FIG. 2b is another exemplary block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 2c is an exemplary structural diagram of a terminal or a server provided in an embodiment of the present invention;
fig. 3, fig. 5, fig. 7 and fig. 8 are exemplary flowcharts of a data processing method according to an embodiment of the present invention;
fig. 4, fig. 6a to fig. 6e are schematic diagrams of user interfaces provided by the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a data processing method, a data processing device, data processing equipment (a terminal and a server) and a storage medium.
The core idea of the technical scheme provided by the invention is as follows: game elements are identified based on the game images, and game strategy description sentences are generated for retrieval using the identified game elements.
After the core idea is introduced, a data processing apparatus according to an embodiment of the present invention is described below.
The data processing device may include a user-side data processing device (which may be referred to as a first data processing device) and a server-side data processing device (which may be referred to as a second data processing device).
The first data processing apparatus may be applied, in the form of software or hardware, to terminals such as mobile terminals (e.g., smartphones), tablets (e.g., iPad) and PCs.
When applied to a terminal in the form of software, in one example the first data processing apparatus may be stand-alone software (for example, an APP deployed on the terminal), or may be the terminal's operating system or an operating-system-level program (for example, the operating system of a robot). The first data processing apparatus may also be a WeChat Mini Program or an H5 (HTML5) web page; a "Mini Program" is an application that can be used without downloading and installing.
The stand-alone software, Mini Program or H5 web page identifies the game elements of a game and provides a game strategy search service to the user.
In another example, the first data processing device may also be used as a component of a game to provide a game strategy search service for the game.
In the following, "application" will be used to refer to the first data processing apparatus in software form. The application may call hardware on the terminal (e.g., one or more of the display screen and the mobile communication module) to implement the technical solution provided by the present application.
When the first data processing apparatus is implemented in a terminal in hardware, the first data processing apparatus may be exemplified as a controller/processor of the terminal.
Similarly, the second data processing apparatus may be applied to the server in the form of software or hardware. When the second data processing device is applied to the server in the form of software, the second data processing device may be independent software, an applet, an H5 webpage, a component in a game, or the like.
When the second data processing apparatus is applied to a server in a hardware form, the second data processing apparatus may be a controller/processor of the server.
For the server, the client is the terminal containing the first data processing apparatus, or the first data processing apparatus itself.
Fig. 1 illustrates one application scenario of the above technical solution, involving a user, a client and a server; the user can log in to the server through the client.
In the application scenario shown in fig. 1, under the operation of a user, the client may invoke the camera of the terminal to photograph a game interface, a game poster, promotional material, and the like to obtain a game image. Alternatively, under the operation of the user, the client may obtain the game image by scanning a captured game interface, game poster or promotional picture. Of course, the client may also receive a game image transmitted by another external device.
The client transmits the game images to the server after acquiring the game images, the server identifies the game elements in the game images and generates game strategy description sentences by using the identified game elements, and then the server searches according to the generated game strategy description sentences to obtain matched game strategies and returns the game strategies to the client. The client then presents the game strategy to the user.
From the user's perspective, the flow is: after photographing or scanning the game image, the client presents the game strategy associated with that image. The user does not need to manually enter search keywords, which simplifies the user's operation.
The internal structure of the data processing apparatus is described below.
Fig. 2a shows an exemplary structure of the first data processing apparatus described above, including: a first acquisition unit 201 and a presentation unit 202. The first obtaining unit 201 may be configured to obtain a game image and provide the game image to the server, and the displaying unit 202 may be configured to receive a game strategy returned by the server and associated with the game image and display the game strategy.
Fig. 2b shows an exemplary structure of the second data processing apparatus described above, including: a second acquisition unit 203 and a data processing unit 204. The second obtaining unit 203 may be configured to receive a game image sent by the client, and the data processing unit 204 may be configured to identify a game element in the game image, generate a game strategy description sentence by using the identified game element, then perform retrieval according to the generated game strategy description sentence, obtain a matched game strategy, and return the game strategy to the client.
The functions of the above-mentioned units will be described later in this document in connection with a data processing method.
Fig. 2c shows a possible structure diagram of the terminal, which includes:
a bus, a processor 1, a memory 2, a communication interface 3, an input device 4, and an output device 5. The processor 1, the memory 2, the communication interface 3, the input device 4, and the output device 5 are connected to each other by a bus. Wherein:
a bus may include a path that transfers information between components of a computer system.
The processor 1 may be a general-purpose processor, such as a general-purpose central processing unit (CPU), a network processor (NP) or a microprocessor, or an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention. It may also be a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The memory 2 stores programs or scripts for executing the technical solution of the present invention, and may also store an operating system and other key services. In particular, the program may include program code including computer operating instructions. Scripts are typically saved as text (e.g., ASCII) and are interpreted or compiled only when called.
More specifically, the memory 2 may include a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, magnetic disk storage, flash memory, and so on.
The input device 4 may include means for receiving data and information input by a user, such as a keyboard, mouse, camera, voice input means, touch screen, etc.
The output device 5 may comprise means allowing output of information to a user, such as a display screen, a loudspeaker, etc.
The communication interface 3 may comprise means, such as any transceiver, for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN) or a wireless local area network (WLAN). In this embodiment, the communication interface 3 can implement part or all of the functions of the aforementioned first obtaining unit 201 (e.g., sending the game image) and second obtaining unit 203 (e.g., receiving the game image).
Specifically, the communication interface 3 may include one or more of a bluetooth module, a WI-FI module, and a mobile communication module.
It will be appreciated that fig. 2c only shows a simplified design of the data processing device. In practical applications, the data processing device may comprise any number of transmitters, receivers, processors, controllers, memories, communication interfaces, etc., and all servers/intelligent terminals that can implement the present invention are within the scope of the present invention.
The processor 1 can implement the data processing method provided by the following embodiments by executing the program stored in the memory 2 and calling other devices.
The functions of the units shown in fig. 2a or fig. 2b can be realized by the processor 1 executing the program stored in the memory 2 and calling other devices.
The server and the terminal have similar structures and are not described in detail herein.
Based on the above-mentioned common aspects related to the present invention, the technical solution provided by the present invention will be further described in detail by taking the application scenario shown in fig. 1 as an example.
Fig. 3 shows an exemplary flow of the above data processing method, which may include the following steps:
part 300: the client acquires the game image and sends the game image to the server.
More specifically, under the operation of the user, the client may call the camera of the terminal to photograph a game interface, a game poster, promotional material, a video, and the like to obtain a game image. For example, when a user plays a game on a PC or a virtual reality device, a mobile terminal may be used to photograph the game interface on the PC or the virtual reality host.
Alternatively, under the operation of the user, the client may obtain a game image by scanning a captured game interface, game poster, promotional material or video frame (a game running on the client itself cannot be photographed by that client, so its game image needs to be obtained by scanning a screenshot).
Alternatively, the client may receive game images transmitted by other external devices.
Part 300 may be executed by the aforementioned first obtaining unit 201. Alternatively, the terminal's processor may, by executing a program stored in the memory, call the photographing device to photograph or scan, and call the communication interface to execute the sending of the game image.
On the server side, the second obtaining unit 203 may receive the game image, or a communication interface of the server may receive the game image, or a processor may call a communication interface by executing a program stored in a memory to receive the game image.
Part 301: the server identifies game elements in the game image.
The portion 301 may be executed by the aforementioned data processing unit 204, or may be executed by the processor of the server by executing a program stored in the memory.
Game elements are described next. Exemplary categories/attributes of game elements may include at least one of: the name of the game, a game character, a prop, a player-personalized element, a game event, and a scene.
Game characters may include player characters and non-player characters. The characters and props generally differ from game to game.
The player-personalized elements may illustratively include the player's rank, health points (HP, "blood volume"), defense value, and the like; these are typically represented in the game by additional elements, such as halos of different colors or different badges above the player character.
Game events may include in-game announcement events. For example, first blood, double kill, triple kill, penta kill, "super god" and team wipes in some games are announced in the form of text, voice or graphics; a double kill, for instance, may be broadcast as "double kill".
The scene elements may illustratively include a level, a game mode, a background environment, and the like. Scenes typically differ from game to game.
Taking Honor of Kings ("royal glory") as an example, the game modes may include a breakthrough mode, 1v1 ("v" stands for versus), 5v5, and other modes. Taking the Mario ("marrio") games as an example, there are multiple levels, including above-ground and underground levels. As a further example, the background environment of Don't Starve (the "hungry game") may include spring, summer, autumn and winter, as well as day and night.
In the present embodiment, in addition to identifying the category/attribute of each game element in the game image, a specific keyword (attribute value) is also identified.
Taking fig. 4 as an example: in addition to identifying element 1 as a player character, element 2 as a non-player character and element 3 as a prop, the keyword corresponding to the player character (e.g., the character's name, "Tom"), the keyword corresponding to the non-player character (e.g., "piglet") and the keyword corresponding to the prop (e.g., the prop's name, "Wind Sword") also need to be identified.
Of course, the game elements may also correspond to certain areas in the game image.
As for the recognition technology, image recognition may be used; furthermore, since the game image may contain text, a character recognition technology such as OCR (Optical Character Recognition) may also be used.
Wherein, when image recognition is carried out, the game elements in the game image can be recognized by the recognition model.
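As a rough illustration of this recognition step, the sketch below combines a stand-in "recognition model" with an OCR pass for on-screen text. The detections, categories, keywords and regions are invented examples; a real system would use a trained detector and an OCR engine:

```python
# Hypothetical element-recognition step: a stand-in "recognition model" plus an
# OCR pass for text in the image. All detections below are invented examples.

def run_recognition_model(game_image):
    # A trained model would return (category, keyword, region) detections.
    return [
        {"category": "player character", "keyword": "Tom", "region": (10, 10, 60, 120)},
        {"category": "non-player character", "keyword": "piglet", "region": (200, 40, 70, 90)},
        {"category": "prop", "keyword": "Wind Sword", "region": (130, 180, 30, 80)},
    ]

def run_ocr(game_image):
    # OCR picks up on-screen text such as event announcements.
    return ["double kill"]

def identify_game_elements(game_image):
    elements = run_recognition_model(game_image)
    for text in run_ocr(game_image):
        elements.append({"category": "game event", "keyword": text, "region": None})
    return elements

for element in identify_game_elements(b"fake-image-bytes"):
    print(element["category"], "->", element["keyword"])
```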
Part 302: the server generates a game strategy description sentence using the identified game elements.
More specifically, a game strategy description sentence in natural language may be generated using the identified game elements. The description keywords in the game strategy description sentence may include some or all of the identified game elements (keywords).
Natural language generally refers to language that evolves naturally within a human culture.
A game strategy description sentence in natural language can be obtained by performing natural language generation on the game elements.
Natural language generation can automatically produce a passage of high-quality natural-language text through a planning process, based on key information and the form in which that information is represented in the machine.
In one example, where the game elements include a scene element, the game strategy description sentence may be generated by combining the scene element with other game elements.
Taking Don't Starve (the "hungry game") as an example: if the background environment in the game image is recognized as autumn (the background environment belongs to the scene) and the image includes the player character Wilson and a non-player yak, the generated game strategy description sentence may be: "Wilson tames a yak in autumn".
Taking a Mario game as an example: if the level in the game image (a level belongs to the scene) is recognized as, specifically, the underground level, the generated game strategy description sentence may be: "Mario underground-level clearing strategy".
Taking Honor of Kings as an example (the breakthrough mode, 1v1, 5v5 and other game modes were mentioned above): if the game mode in the game image is recognized as 5v5 (the game mode belongs to the scene) and the image includes the player character Dian Wei, the generated game strategy description sentence may be: "Honor of Kings 5v5 Kings Canyon Dian Wei build".
In another example, where the game elements include a player-personalized element, the game strategy description sentence may be generated by combining the player-personalized element with other game elements.
Taking Honor of Kings as an example: suppose the player characters identified in the game image include Li Bai and Dian Wei, the rank of player A (playing Li Bai) is Bronze III, and the rank of player B (playing Dian Wei) is King V, i.e., player B is ranked higher than player A. For player A, whose rank and competitive strength are lower, the game strategy description sentence may include: "defending against Dian Wei".
For player B, whose rank and competitive strength are higher, the game strategy description sentence may include: "taking Li Bai's head" (scoring a kill on Li Bai).
Portion 302 may be performed by the aforementioned data processing unit 204, or alternatively, portion 302 may be performed by the processor of the server by executing a program stored in memory.
In the game strategy description sentence above, besides the game elements themselves, there are associated words related to the game elements, such as verbs (defend, take, tame) and adjectives.
The associated words can be obtained through learning. Specifically, they can be learned from descriptions of strategies in game communities, forums and bullet-screen comments, and from the descriptions written by those who publish the strategies.
For example, if someone has published a strategy titled "How Wilson tames an angry yak and soothes it", associated words such as "tame", "angry" and "soothe" can be learned from it.
The associated words are corpus material that actually occurs in connection with the game elements, and they can be stored in a corpus.
After the target keywords are obtained, the associated words corresponding to the target keywords can be looked up in the corpus, and the game strategy description sentence can then be generated by combining the target keywords and associated words with the scene elements and player-personalized elements.
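The corpus lookup and sentence generation described above might be sketched as follows. The corpus contents, the sentence template and the simple pluralization are all illustrative assumptions, not the patent's specified method:

```python
# Sketch of description-sentence generation. The corpus maps a target keyword
# to associated words learned from community guides; the mapping and the
# sentence template here are invented for illustration.

CORPUS = {
    "yak": ["tame", "soothe"],
    "Dian Wei": ["build", "defend against"],
}

def generate_description(scene, target_keyword, other_elements=()):
    # Pick the first learned associated word (fall back to a generic one).
    associated = CORPUS.get(target_keyword, ["strategy"])[0]
    parts = list(other_elements) + [associated + "s", target_keyword, "in", scene]
    return " ".join(parts)

print(generate_description("autumn", "yak", other_elements=["Wilson"]))
# -> "Wilson tames yak in autumn"
```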
Part 303: and the server searches by using the description keywords in the game strategy description sentences to obtain the matched game strategy.
Part 303 may be executed by the aforementioned data processing unit 204, or part 303 may be executed by the processor of the server by executing a program stored in the memory.
More specifically, the description keyword in the game strategy description sentence can be used for retrieval.
During retrieval, the server may search an internal strategy library, which may include game strategies entered by players or by staff.
The server may also search using an external search engine; alternatively, it may search both the internal strategy library and a search engine, and finally aggregate the internal and external search results.
In the search, the matching game strategy may be determined by matching the description keywords against the titles or tags of game strategies. Since the game strategy is retrieved based on the game elements in the game image, the resulting game strategy is associated with the game image.
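One simple way to realize this keyword-to-title/tag matching is overlap scoring, sketched below. The library entries and the scoring rule are illustrative assumptions:

```python
# Sketch of matching description keywords against strategy titles/tags.
# Overlap scoring is one simple assumption; the library entries are invented.

STRATEGY_LIBRARY = [
    {"title": "How Wilson tames an angry yak", "tags": ["Wilson", "yak", "tame"]},
    {"title": "Mario underground level walkthrough", "tags": ["Mario", "level"]},
]

def match_strategies(description_keywords):
    scored = []
    for strategy in STRATEGY_LIBRARY:
        terms = set(strategy["title"].lower().split())
        terms |= {tag.lower() for tag in strategy["tags"]}
        score = sum(1 for keyword in description_keywords if keyword.lower() in terms)
        if score:  # keep only strategies matching at least one keyword
            scored.append((score, strategy))
    scored.sort(key=lambda pair: -pair[0])
    return [strategy for _, strategy in scored]

print(match_strategies(["Wilson", "yak", "autumn"])[0]["title"])
```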
The retrieved game strategy may be a video or an article, or rich-media content combining pictures, text, video and live broadcast.
Of course, in other embodiments of the present application, after the game elements are identified, their keywords may be used directly for retrieval, skipping the step of generating the game strategy description sentence.
Part 304: the server returns the retrieved game play strategy to the client.
More specifically, the server may return a list of game strategies to the client.
Of course, a matching game strategy may fail to be retrieved; in that case, information indicating the retrieval failure may be returned.
Portion 304 may be performed by the aforementioned data processing unit 204, or may be performed by the processor of the server by executing a program stored in memory and calling a communication interface.
305: the client displays the retrieved game strategy.
Of course, if nothing is retrieved, the client may prompt that the retrieval failed, and may return to part 300 or enter an interface waiting for the user to photograph or scan.
Portion 305 may be performed by presentation unit 202 as described above, or may be performed by the processor of the terminal by executing a program stored in memory and invoking a display screen.
As can be seen, in the embodiment of the present application, a game element may be identified based on an acquired game image, and a game strategy description sentence may be generated for retrieval using the identified game element. In the process, the user does not need to input the keywords manually, so that the user operation is simplified.
In addition, with the conventional search method the user must think of the keywords and organize the wording before inputting them. If the keywords the user comes up with are not accurate enough, or the wording is imprecise, the retrieved results will be inaccurate and unfocused, and making the user search again within the results is inefficient.
In the embodiments of the present application, besides eliminating the user's manual input steps, identifying the game image yields comprehensive and accurate game elements (such as the game name, game characters, player rank, props used, current level and game mode), and a game strategy retrieved from these identified elements is more accurate and targeted than one retrieved from the user's manual input.
And under the condition that the personalized elements of the player are identified, the game strategy with higher association degree and more personalization can be pushed to the user.
Typically, a game image may include multiple elements, such as the props and characters illustrated in fig. 4, and some of these elements may not interest the user.
The following embodiments therefore provide a service that lets the user select elements, so that the search results better match the user's needs.
Taking the application scenario shown in fig. 1 as an example, fig. 5 shows another exemplary flow of the data processing method, which may include the following steps:
part 500: the user shoots or intercepts game images through the client.
The interface seen by the user when capturing the game image is exemplary as shown in fig. 6 a.
Part 501: the client sends the game image to the server.
Reference is made to the description of the aforementioned part 300 for parts 500 and 501, which are not described in detail herein.
Part 502: the server identifies game elements in the game image.
Part 502 is the same as part 301 and is not described again here.
If the server does not recognize any game element, the client may prompt the user to shoot or scan again, and the flow returns to part 500 or enters a state of waiting for the client to shoot or scan.
Part 503: the server marks key elements of the identified game elements on the game image and returns the key elements to the client.
In one example, the key elements may include at least one of characters and props.
Taking the game image shown in fig. 6a as an example, which contains the player character Tom 61, the non-player character "porkling" 62, and the prop "FengJian" 63, these three key elements can be marked.
The mark can take various forms, for example framing the area of the key element with a selection-box control, highlighting the area of the key element, and so on.
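As a rough sketch of part 503, the code below filters recognized game elements down to key elements (characters and props) and attaches the selection-box frames the client would draw. All names, field layouts, and the padding choice are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch: keep only "key elements" from recogniser output and
# produce overlay frames for the client's selection-box controls.

KEY_ELEMENT_TYPES = {"character", "prop"}  # per the embodiment, characters and/or props

def mark_key_elements(detections):
    """Each detection is an assumed dict like
    {"id": 61, "label": "Tom", "type": "character", "box": (x, y, w, h)}.
    Returns the overlays the client would render over the game image."""
    overlays = []
    for det in detections:
        if det["type"] in KEY_ELEMENT_TYPES:
            x, y, w, h = det["box"]
            overlays.append({
                "id": det["id"],
                "label": det["label"],
                # Pad the frame slightly so it does not clip the element.
                "frame": (x - 2, y - 2, w + 4, h + 4),
            })
    return overlays

detections = [
    {"id": 61, "label": "Tom", "type": "character", "box": (10, 20, 40, 80)},
    {"id": 62, "label": "porkling", "type": "character", "box": (120, 30, 35, 60)},
    {"id": 99, "label": "autumn", "type": "scene", "box": (0, 0, 320, 240)},
]
marks = mark_key_elements(detections)  # scene element 99 is filtered out
```

Scene elements such as "autumn" are still used later when generating the description sentence; they are merely not offered for selection here.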
Part 503 may be executed by the aforementioned data processing unit 204, or part 503 may be executed by the processor of the server by executing a program stored in the memory and calling the communication interface.
Part 504: the client presents the game image with the key elements marked.
Specifically, the portion 504 may be executed by the presentation unit 202, or the portion 504 may be executed by the processor of the terminal by executing a program stored in the memory and calling the display screen.
Taking the game image shown in fig. 6a as an example, which contains the player character Tom 61, the non-player character "porkling" 62, and the prop "FengJian" 63, the interface with these key elements marked is shown in fig. 6b.
The user can select one or more of the marked key elements, for example by clicking on the area where a key element is located. A selected key element may be referred to as a target key element.
Further, the client may highlight the target key element, for example in a different color, to distinguish it from unselected key elements. Alternatively, referring to fig. 6c, a check box may be displayed in the area of each key element; when the user clicks the check box, a check mark appears to indicate that the key element is selected.
In other embodiments of the present application, the user of the client may be allowed to mark key elements himself or herself. For example, the user may circle content in the game image using a rectangular selection box or a free-form selection box, and the game element corresponding to the circled content becomes the selected key element (target key element).
Part 505: the client generates a first selection instruction according to the first selection operation of the user and sends the first selection instruction to the server.
The first selection operation may illustratively include: the user selects (by touching, clicking, etc.) one or more key elements and clicks a certain control; alternatively, the user circles content in the game image with a rectangular or free-form selection box and clicks the confirm control.
The first selection instruction is used for indicating a target key element. More specifically, the first selection instruction may include at least one of an identification, a number, and an index of the target key element, so that the server side may determine which key element or elements are selected according to the first selection instruction.
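A toy wire format for the first selection instruction might look as follows. The JSON field names and the choice of element ids (rather than numbers or indexes) are assumptions for illustration only; the patent does not specify the protocol.

```python
import json

def build_first_selection_instruction(image_id, selected_ids):
    """Client side (part 505): encode which marked key elements the user selected."""
    return json.dumps({
        "image_id": image_id,
        # Any of identification / number / index would do; ids are used here.
        "selected_element_ids": sorted(selected_ids),
    })

def resolve_first_selection_instruction(payload, marked_ids):
    """Server side: map the instruction back to key elements, dropping unknown ids."""
    msg = json.loads(payload)
    return [eid for eid in msg["selected_element_ids"] if eid in marked_ids]

payload = build_first_selection_instruction("img-001", {62, 61})
targets = resolve_first_selection_instruction(payload, marked_ids={61, 62, 63})
```

Dropping unknown ids on the server side guards against the client referencing an element the server never marked.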
Part 505 may be executed by the aforementioned first acquisition unit 201, or part 505 may be executed by a communication interface of the terminal, or part 505 may be executed by a processor of the terminal by executing a program stored in a memory and calling the communication interface.
Part 506: the server generates game strategy description sentences in natural language form at least according to the target key elements.
Specifically, game strategy description sentences can be generated by combining the target key elements and the scenes.
Taking the game "Don't Starve" as an example, assume the scene in the game image is identified as autumn and the image contains the player character "Wilson" and the non-player character "yak"; if the user clicks "Wilson" and "yak", the generated game strategy description sentence may include: "Wilson tames the yak in autumn".
Taking "Honor of Kings" as an example, if the scene in the game image is recognized as 5v5 and the user clicks the character "Dian Wei" among the multiple game characters, the generated game strategy description sentence may be: "Honor of Kings 5v5 Kings Canyon Dian Wei strategy".
In addition, when the identified game elements include a player-personalized element, the game strategy description sentence may be generated by combining the target key elements, the scene elements, and the player-personalized elements.
Taking "Honor of Kings" as an example, assume the user clicks the player characters "Li Bai" and "Dian Wei" in the game image, and it is recognized that the rank of player A, who controls Li Bai, is bronze three stars, while the rank of player B, who controls Dian Wei, is king five stars (i.e., player B is ranked higher than player A). If player A is the user of the current client, the generated game strategy description sentence may include: "How 'Li Bai' defends against 'Dian Wei'".
Specifically, when generating the natural-language game strategy description sentence, the target key elements, scene elements, player-personalized elements, and so on may be input into the corpus, and a natural-language game strategy description text (sentence) is generated from the corpus.
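A toy sketch of this corpus-based generation is shown below. The corpus contents, the subject/object template, and the verb list are all invented for illustration; the patent only specifies that the corpus stores pre-learned associated words (verbs, adjectives) per game element.

```python
# Toy corpus: pre-learned associated words (verbs here) keyed by game element.
CORPUS = {
    "yak": ["tames", "fights"],  # words with a learned correlation to "yak"
}

def generate_strategy_sentences(key_elements, scene=None):
    """Combine the first key element (as subject) with each remaining element
    (as object) and every associated word, optionally prefixed by the scene."""
    subject, objects = key_elements[0], key_elements[1:]
    sentences = []
    for obj in objects:
        for verb in CORPUS.get(obj, ["uses"]):  # fall back to a generic verb
            prefix = f"in {scene}, " if scene else ""
            sentences.append(f"{prefix}{subject} {verb} the {obj}")
    return sentences

sentences = generate_strategy_sentences(["Wilson", "yak"], scene="autumn")
```

Because each associated word yields one sentence, several candidate description sentences naturally emerge from a single selection, which is the situation handled by the sentence-selection embodiment below.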
For related introduction, reference may be made to the description of section 302 above, which is not repeated herein.
507: and the server searches according to the generated game strategy description sentence to obtain the matched game strategy.
Portion 507 is similar to portion 303 described above and will not be described in detail herein.
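As a rough illustration of the retrieval in parts 303/507, the sketch below ranks a library of stored strategies by keyword overlap with the description sentence. A real system would use a proper search index; the library contents and scoring are invented for illustration.

```python
def retrieve_strategies(description_sentence, strategy_library):
    """Rank stored strategy titles by how many description keywords they share."""
    keywords = set(description_sentence.lower().split())
    scored = []
    for title in strategy_library:
        score = sum(1 for word in set(title.lower().split()) if word in keywords)
        if score:                      # keep only titles sharing a keyword
            scored.append((score, title))
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # best match first
    return [title for _, title in scored]

library = [
    "How to tame a yak",
    "Wilson autumn survival guide",
    "Base building basics",
]
results = retrieve_strategies("Wilson tames the yak in autumn", library)
```

An empty result corresponds to the "no matching game strategy" branch, in which the client prompts the user to shoot or scan again.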
Of course, if the server does not retrieve a matching game strategy, the client may prompt the user that there is no matching result, or prompt the user to shoot or scan again, and return to part 500 or enter a state of waiting for the client to shoot or scan.
Portions 508 and 509 are the same as portions 304-305 described above, and are not described in detail here.
In this embodiment, the user only needs to shoot or scan and click the key elements of interest to obtain the corresponding strategy content, which improves both the efficiency of finding strategies and the precision of the strategy content.
In practice, multiple game strategy description sentences may be generated from the target key elements. For example, if the user clicks the player character "Wilson" and the non-player character "yak", the generated sentences may include: "Wilson tames the yak in autumn", "Calming techniques for an angry yak", "How to kill the yak", "Riding the yak", "Shaving the yak", and so on. Not all of these sentences necessarily express what the user actually wants. The following embodiment therefore provides a service that lets the user select among the game strategy description sentences, so that the search results are more precise and better match the user's needs.
Taking the application scenario shown in fig. 1 as an example, fig. 7 shows another exemplary flow of the data processing method, which may include the following steps:
Parts 700-706 are similar to parts 500-506 above and are not described again here.
Part 707: the server returns the generated game strategy description sentences to the client.
Part 707 may be executed by the aforementioned data processing unit 204, or part 707 may be executed by the processor of the server by executing a program stored in the memory and calling the communication interface.
Part 708: the client displays the game strategy description sentences returned by the server, guiding the user to select the sentence closest to his or her needs.
Portion 708 may be performed by presentation unit 202 as previously described, or portion 708 may be performed by the processor of the terminal by executing a program stored in memory and invoking a display.
Taking the presentation interfaces shown in fig. 6a-6c as an example, the presentation interface of the game strategy description sentence seen by the user is exemplified as shown in fig. 6 d.
Part 709: the client generates a second selection instruction according to the second selection operation of the user and sends the second selection instruction to the server.
The second selection operation may illustratively include: the user selects (by touching, clicking, etc.) one or some of the game strategy description sentences and clicks on the decision control.
The user can select one or more game strategy description sentences. Further, the client may highlight the selected sentences, for example in a different color or with another visual effect, to distinguish them from the unselected sentences.
The second selection instruction is for indicating a selected game strategy description sentence. For convenience, the selected game strategy description sentence can be referred to as a target game strategy description sentence.
More specifically, the second selection instruction may include at least one of an identification, a number, and an index of the target game strategy descriptive sentence, so that the server can determine which game strategy descriptive sentence or sentences are selected according to the second selection instruction.
Taking fig. 6d as an example, if the user selects "Calming techniques for the porkling", the second selection instruction may include at least one of the identification, number, and index of that game strategy description sentence.
In other embodiments of the present application, none of the generated game strategy description sentences may meet the user's needs. For this special case, an input bar may be provided on the interface for the user to manually enter keywords or a game strategy description sentence.
The section 709 may be executed by the aforementioned first acquisition unit 201, or the section 709 may be executed by a communication interface of the terminal, or the section 709 may be executed by a processor of the terminal by executing a program stored in a memory and calling the communication interface.
Part 710: the server uses the description keywords in the target game strategy description sentence for retrieval.
The portion 710 is similar to the portions 507 and 303, and will not be described herein.
Part 711: the server returns the retrieved game strategy to the client.
More specifically, the server may return a list of game plays to the client.
Of course, a matching game strategy may fail to be retrieved; in that case, information indicating the retrieval failure may be returned.
The 711 portion may be executed by the aforementioned data processing unit 204, or may be executed by the processor of the server by executing a program stored in the memory and calling the communication interface.
Part 712: the client displays the retrieved game strategy.
If the user selects "Calming techniques for the porkling", the interface presented by the client is shown by way of example in fig. 6e.
Of course, if nothing is retrieved, the client may prompt that the retrieval failed, and may return to part 700 or enter an interface waiting for the user to shoot or scan.
Portion 712 may be performed by presentation unit 202 as previously described, or may be performed by the processor of the terminal by executing a program stored in memory and invoking a display screen.
The main steps performed in time sequence can be seen in fig. 8.
In this embodiment, the user only needs to shoot or scan, click the key elements of interest, and click a game strategy description sentence to obtain a targeted game strategy, further improving both retrieval efficiency and the precision of the strategy content.
The training process of the aforementioned recognition model is described below, which may include:
Step A: input sample images labeled with game elements into the recognition model to be trained.
For example, sample images that label characters, props, checkpoints, game patterns, events, etc. may be input into the recognition model to be trained.
The more sample images there are and the finer-grained the labels, the higher the recognition accuracy finally achieved.
Step B: the recognition model to be trained performs feature extraction on the game elements labeled in the sample images, and classifies the extracted features to obtain a classification result.
The recognition model may specifically be a neural network or another model capable of extracting features from the labeled portions of the sample images and classifying them to obtain a classification result.
Step C: adjust the parameters of the recognition model to be trained according to the classification result, and return to step A, until the accuracy of the classification results meets a preset requirement; the recognition model is then obtained.
Taking a neural network as an example, the parameters may be filter parameters in the neural network.
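As a loose analogy for steps A-C (not the patent's actual neural-network training), the sketch below adjusts the parameters of a one-feature logistic model from its classification results until a preset accuracy requirement is met. The data, learning rate, and stopping rule are all invented for illustration.

```python
import math

def train(samples, lr=0.5, target_accuracy=1.0, max_epochs=1000):
    """samples: list of (feature, label) pairs with label 0 or 1."""
    w, b = 0.0, 0.0                                    # the "filter parameters"
    acc = 0.0
    for _ in range(max_epochs):
        for x, label in samples:
            # Step B analogue: classify the (already-extracted) feature.
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            # Step C analogue: adjust parameters from the classification error.
            w += lr * (label - p) * x
            b += lr * (label - p)
        # Check whether accuracy now meets the preset requirement.
        acc = sum(((w * x + b > 0) == bool(label))
                  for x, label in samples) / len(samples)
        if acc >= target_accuracy:
            break                                      # model obtained
    return w, b, acc

samples = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]   # toy labeled "samples"
w, b, acc = train(samples)
```

The loop structure mirrors the described training: classify, adjust parameters from the result, and repeat until accuracy meets the preset requirement.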
In other embodiments of the present application, speech recognition may also be performed on a piece of audio to identify game elements in the audio.
Taking a Mario game as an example, different levels play different music, and success and failure events trigger different sound effects, so game elements can be identified by recognizing the audio.
The subsequent operations are the same as in the foregoing embodiments and are not described again here.
Embodiments of the present application also provide a terminal or a server, which includes at least a processor and a memory; the processor executes the data processing method described above by executing a program stored in the memory and calling other devices.
Embodiments of the present invention also provide a storage medium, which stores a plurality of instructions suitable for being loaded by a processor to execute the steps of the data processing method provided in any embodiment of the present invention.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention should not be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A data processing method is applied to a server side and comprises the following steps:
acquiring a game image sent by a client;
identifying game elements in the game image by using an identification model, wherein the identification model is obtained by training based on a sample image marked with the game elements, the sample image comprises at least one of characters, props, stages, game modes and events, the game elements comprise scene elements, key elements and player personalized elements, and the key elements comprise at least one of the characters and the props;
marking the identified key elements on the game image and returning the key elements to the client;
receiving a first selection instruction sent by the client, wherein the first selection instruction is used for indicating a target key element, and the target key element is a selected key element;
inquiring relevant words corresponding to the target key elements from a corpus, wherein the corpus stores pre-learned relevant words having a correlation relation with game elements, and the relevant words comprise at least one of verbs and adjectives;
generating a game strategy description sentence in a natural language form according to the target key element, the associated word corresponding to the target key element, the scene element and the player personalized element, wherein the description keyword in the game strategy description sentence comprises part or all of the identified game elements;
retrieving according to the generated game strategy description sentences to obtain matched game strategies;
and returning the game strategy to the client.
2. The method of claim 1,
the retrieving according to the generated game strategy description sentence comprises:
and using the description keywords in the game strategy description sentence for retrieval.
3. The method of any one of claims 1-2,
when the number of the generated game strategy description sentences is multiple, the searching by using the description keywords in the game strategy description sentences comprises:
receiving a second selection instruction sent by the client, wherein the second selection instruction is used for indicating a selected game strategy description sentence; the selected game strategy description sentence is a target game strategy description sentence;
and retrieving by using the description keywords in the target game strategy description sentence.
4. The method of claim 1, wherein the training process of the recognition model comprises:
inputting a sample image of a marked game element into a recognition model to be trained;
carrying out feature extraction on the game elements marked in the sample image by using the identification model to be trained, and carrying out classification and identification on the extracted features to obtain a classification and identification result;
and adjusting parameters of the recognition model to be trained according to the classification recognition result until the accuracy of the classification recognition result of the recognition model to be trained meets a preset requirement, so as to obtain the recognition model.
5. A data processing method is applied to a client and comprises the following steps:
acquiring a game image and sending the game image to a server side;
displaying the game image which is returned by the server and marked with the key elements;
generating a first selection instruction according to a first selection operation of a user, and sending the first selection instruction to the server side; the first selection instruction is used for indicating a target key element, and the target key element is a selected key element;
receiving the game strategy which is returned by the server and is associated with the game image, and displaying the game strategy; the game strategy is a retrieval result obtained by the server side retrieving according to a game strategy description sentence, wherein a description keyword in the game strategy description sentence comprises a game element in the game image, the game element comprises a scene element, a key element and a player personalized element, the key element comprises at least one of a character and a prop, the game strategy description sentence is specifically a game strategy description sentence in a natural language form, and the generation process of the game strategy description sentence comprises the following steps: inquiring relevant words corresponding to the target key elements from a corpus, wherein the corpus stores pre-learned relevant words having a correlation relation with game elements, and the relevant words comprise at least one of verbs and adjectives; generating a game strategy description sentence in a natural language form according to the target key element, the associated word corresponding to the target key element, the scene element and the player personalized element; the game elements are obtained by the server side through recognition of the game images through a recognition model, the recognition model is obtained through training based on sample images marked with the game elements, and the sample images comprise at least one of characters, props, stages, game modes and events.
6. The method of claim 5, wherein when the number of game strategy descriptive sentences to be presented is plural, the method further comprises:
generating a second selection instruction according to a second selection operation of the user;
and sending the second selection instruction to the server, wherein the second selection instruction is used for indicating a selected game strategy description sentence, and the selected game strategy description sentence is used for the server to retrieve the game strategy.
7. A data processing apparatus, comprising:
the first acquisition unit is used for acquiring a game image and sending the game image to the server;
the display unit is used for displaying the game image which is returned by the server and marks the key elements;
the first obtaining unit is further configured to generate a first selection instruction according to a first selection operation of a user, and send the first selection instruction to the server; the first selection instruction is used for indicating a target key element, and the target key element is a selected key element;
the display unit is further used for receiving the game strategy which is returned by the server and is associated with the game image, and displaying the game strategy; the game strategy is a retrieval result obtained by the server side retrieving according to a game strategy description sentence, a description keyword in the game strategy description sentence comprises a game element in the game image, the game element comprises a scene element, a key element and a player personalized element, the key element comprises at least one of a character and a prop, the game strategy description sentence is specifically a game strategy description sentence in a natural language form, and the generation process of the game strategy description sentence comprises the following steps: inquiring relevant words corresponding to the target key elements from a corpus, wherein the corpus stores pre-learned relevant words having a correlation relation with game elements, and the relevant words comprise at least one of verbs and adjectives; generating a game strategy descriptive statement in a natural language form according to the target key element, the associated word corresponding to the target key element, the scene element and the player personalized element; the game elements are obtained by the server side through recognition of the game images through a recognition model, the recognition model is obtained through training based on sample images marked with the game elements, and the sample images comprise at least one of characters, props, stages, game modes and events.
8. A data processing apparatus, comprising:
a second acquisition unit configured to acquire a game image transmitted by the client;
a data processing unit for:
identifying game elements in the game image by using an identification model, wherein the identification model is obtained by training based on a sample image marked with the game elements, the sample image comprises at least one of characters, props, stages, game modes and events, the game elements comprise scene elements, key elements and player personalized elements, and the key elements comprise at least one of the characters and the props;
marking the identified key elements on the game image and returning the key elements to the client;
receiving a first selection instruction sent by the client, wherein the first selection instruction is used for indicating a target key element, and the target key element is a selected key element;
inquiring relevant words corresponding to the target key elements from a corpus, wherein the corpus stores pre-learned relevant words having a correlation relation with game elements, and the relevant words comprise at least one of verbs and adjectives;
generating a game strategy description sentence in a natural language form according to the target key element, the associated word corresponding to the target key element, the scene element and the player personalized element, wherein the description keyword in the game strategy description sentence comprises part or all of the identified game elements;
retrieving according to the generated game strategy description sentence to obtain a matched game strategy;
and returning the game strategy to the client.
9. A data processing device comprising at least a processor and a memory; the processor executes the data processing method of any one of claims 1 to 4 or any one of claims 5 to 6 by executing a program stored in the memory and calling other devices.
10. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the data processing method according to any one of claims 1 to 4 or 5 to 6.
CN201810904405.6A 2018-08-09 2018-08-09 Data processing method, device, equipment and storage medium Active CN109063662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810904405.6A CN109063662B (en) 2018-08-09 2018-08-09 Data processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109063662A CN109063662A (en) 2018-12-21
CN109063662B true CN109063662B (en) 2022-05-17



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294918A (en) * 2013-05-30 2013-09-11 佛山电视台南海分台 Method and system for realizing virtual games in real images
CN104166707A (en) * 2014-08-08 2014-11-26 百度在线网络技术(北京)有限公司 Search recommendation method and device
CN106075913A (en) * 2016-06-16 2016-11-09 深圳市金立通信设备有限公司 A kind of information processing method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant