CN117714728A - Information generation method, device, electronic equipment and storage medium - Google Patents

Information generation method, device, electronic equipment and storage medium

Info

Publication number
CN117714728A
CN117714728A (application CN202311732479.3A)
Authority
CN
China
Prior art keywords
target
information
user
target object
live broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311732479.3A
Other languages
Chinese (zh)
Inventor
张连生
朱磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Interactive Entertainment Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202311732479.3A
Publication of CN117714728A
Legal status: Pending

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an information generation method, an apparatus, an electronic device, and a storage medium. The method includes: during a live broadcast, acquiring a target operation performed by a target user on a target object in the live broadcast; and, in response to the target operation, acquiring target information corresponding to the target object and generating the target information in an information input box. With this information generation method, the target user can quickly query the target information of a target object by performing the target operation on it in the live view, and the target information is generated automatically in the information input box, reducing the time the target user spends querying and entering it. The method and device address the technical problems that a user must type quickly when describing the content of a live view, and that scene descriptions are inaccurate when the user is unfamiliar with the target information of a target object.

Description

Information generation method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an information generating method, an information generating device, an electronic device, and a storage medium.
Background
With the continuous development of the internet, live broadcast technology is widely used. While watching a live broadcast, users often interact with the anchor or with other users by describing the content of the live view. To keep pace with the live view, a user must type as quickly as possible while also quickly looking up the names of objects that appear in the view; otherwise those objects cannot be described in text.
However, when a user needs to describe a specific scene in detail, the generated information often fails to clearly express the corresponding scene content because the user cannot state the proper nouns for the objects, causing unnecessary misunderstanding; information generation methods in the related art therefore do not handle this situation well.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided an information generating method including:
in a live broadcast process, acquiring target operation of a target user on a target object in the live broadcast;
and responding to the target operation, acquiring target information corresponding to the target object, and generating the target information in an information input box.
According to a second aspect of the present disclosure, there is provided an information generating apparatus including:
the data acquisition module is used for acquiring target operation of a target user on a target object in live broadcast in the live broadcast process;
and the data processing module is used for responding to the target operation, acquiring target information corresponding to the target object and generating the target information in an information input frame.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor; and
a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to an exemplary embodiment of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform a method according to an exemplary embodiment of the present disclosure.
With the one or more technical solutions provided by the embodiments of the present disclosure, the target user can quickly query the target information of a target object by performing the target operation on it in the live view, and the target information is generated automatically in the information input box, reducing the time the target user spends querying and entering it. The method and device address the technical problems that a user must type quickly when describing the content of a live view, and that scene descriptions are inaccurate when the user is unfamiliar with the target information of a target object.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 shows a flow diagram of a method of information generation;
FIG. 2 shows a standard picture of a game element;
FIG. 3 shows a variation picture of a game element;
FIG. 4 shows a game element overlay schematic;
FIG. 5 illustrates a flowchart of an information generation method of an exemplary embodiment of the present disclosure;
fig. 6 shows a functional block diagram of an information generating apparatus according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure;
fig. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit its scope of protection.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other relevant terms are given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure merely distinguish between different devices, modules, or units and do not define an order or interdependence of the functions they perform.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, a prompt is sent to the user to explicitly inform the user that the requested operation will require obtaining and using the user's personal information. The user can then autonomously choose, according to the prompt, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that executes the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt may be sent, for example, as a pop-up window in which the prompt is presented as text. The pop-up window may also carry a selection control with which the user can choose "agree" or "disagree" to providing personal information to the electronic device. It will be appreciated that the above notification and authorization process is merely illustrative and does not limit implementations of the present disclosure; other ways of satisfying relevant laws and regulations may also be applied.
At present, information content is mainly text, supplemented by pictures. When text information is generated, the generated information often cannot clearly express the corresponding scene content because the user cannot state the proper noun for an object, causing unnecessary misunderstanding. For example, in a live game room, users often need to describe a game scene using the names of on-screen game elements, which may include: peashooters, tombstone devourers, charming mushrooms, dance-accompaniment zombies, diving zombies, sled zombies, and the like. Because there are many game elements, users are often unsure of the names of the elements they want to describe, so the generated game information struggles to clearly convey the corresponding game screen content. If a user looks up element names while editing the information content, information generation slows down and game progress may be missed.
To solve the above problem, the present disclosure proposes an information generation method: when a user clicks a target screen, the target element at the click position is obtained. When it is detected that the user has dragged the target element into the information input box area, the name corresponding to the target element is obtained and displayed as text in the input box. Finally, the user's editing and sending operations on the text in the input box are received.
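The click, drag, and name-generation flow described above can be sketched as follows. This is a hypothetical Python illustration only: `ElementLibrary`, `handle_drag_to_input`, and the element ids are invented names, not part of the disclosed system.

```python
# Hypothetical sketch only: ElementLibrary, handle_drag_to_input, and the
# element ids are invented names, not part of the disclosed system.

class ElementLibrary:
    """Maps element identifiers (standing in for pictures) to display names."""
    def __init__(self, names):
        self._names = dict(names)

    def lookup(self, element_id):
        return self._names.get(element_id, "unknown")

def handle_drag_to_input(element_id, library, input_box):
    """Called when the selected element is dropped on the input box:
    look up the element's name and generate it in the box as text."""
    name = library.lookup(element_id)
    input_box.append(name)
    return name

library = ElementLibrary({"elem-42": "Peashooter"})
input_box = []
handle_drag_to_input("elem-42", library, input_box)
# input_box now holds the name, ready for the user to edit and send
```

The user then edits the text in the input box and sends it, which is the final step of the method.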
By way of example, FIG. 1 shows a flow diagram of a method of information generation. In this embodiment, the target object is referred to as a target element.
Step S110: and maintaining the name corresponding to the target element.
Illustratively, taking a game element as an example, the name of the game element may be obtained by querying a game element library, by anchor naming, by user naming, and the like.
Illustratively, maintaining a game element name by querying a library of game elements may include the steps of:
in the game development process, a key task is to create a game element library, which can contain all game elements in the game, including game characters, game props, game backgrounds (such as buildings and the like), and provide standard pictures thereof and various changing form pictures. By way of example, fig. 2 shows a standard picture of a game element, and fig. 3 shows a variation picture of a game element.
In addition, the game element library also contains a name corresponding to each game element. For example, FIG. 2 and the name "Xiaomei" corresponding to the game element may be stored in the game element library.
In practical applications, the game element library may be embedded in game software or may be published as a separate component. After the game element library is released, the manager can rename the game elements in the game element library. In addition, the game software or third party application needs to support a picture alignment function, that is, the user can provide a picture as input, the system will query and align the library of game elements, and then return the names of the game elements that are most similar to the input picture.
Specifically, the user or game developer provides the picture to be queried, which may be a screenshot of some element in the game or a picture from another source.
The game software or third-party application supports a picture comparison function: it extracts features from the input picture and converts the picture into a set of numbers or a vector for subsequent comparison.
The extracted picture features are then compared with the features of each element in the game element library. This step may be implemented using various image processing and computer vision techniques, such as feature matching, similarity metrics, or deep learning models (e.g., convolutional neural networks).
Finally, the game element most similar to the input picture is found, and the name of the game element is returned to the user.
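The comparison step above (feature extraction followed by a similarity search over the element library) might be sketched as follows. The feature vectors and names are invented for illustration; a real system would extract features with, e.g., a convolutional network rather than use hand-written vectors.

```python
import math

def cosine_similarity(a, b):
    """A simple similarity metric between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_features, element_library):
    """Return the name of the library element most similar to the query picture."""
    return max(element_library,
               key=lambda name: cosine_similarity(query_features,
                                                  element_library[name]))

# Toy library: element name -> pre-extracted feature vector (invented values)
element_library = {
    "Xiaomei": [0.9, 0.1, 0.3],
    "Zombie":  [0.1, 0.8, 0.5],
}
match = best_match([0.85, 0.15, 0.25], element_library)
```

Swapping in a learned embedding model would only change how the vectors are produced; the library lookup stays the same.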
Illustratively, maintaining a game element name by way of anchor naming may specifically include the following steps:
First, the anchor client may provide add, modify, and delete functions so that the anchor can perform the following operations:
Add an element: the anchor may upload a new element picture and assign it a name. If the name already exists in the library, an association is established.
Modify an element: the anchor may change the name of an element or replace its picture. The associated elements must be updated accordingly.
Delete an element: the anchor may delete an element through the client's delete function.
Illustratively, the anchor may give the game element of FIG. 2 the personalized name "big beautiful".
Second, the anchor client may also display existing elements and names: the anchor client may provide an interface that displays the game element pictures and corresponding names already in the game element library. This helps the anchor know which elements the current library contains and manage them.
Illustratively, maintaining a game element name by way of user naming may specifically include the steps of:
the user may add pictures to and name the library of game elements. When adding content to the game element library, the owner of the game element library has control authority of auditing, modification and the like. In addition, owners of the library of game elements may also set a variety of mechanisms including voting, etc., to manage and control the new and modification of content.
Specifically, the live broadcast server receives a picture of a game element and a corresponding candidate name, wherein the picture of the game element and the candidate name can come from a host, an operator or a game user.
The live service end can screen and audit the game element pictures and names and then upload the game element pictures and names into a game element library.
The live broadcast server may push the game element picture and the candidate name to the live broadcast client, which presents the game element picture and the candidate name to the audience. The game user can vote for the candidate name of the self-heart instrument through the voting interface of the live client.
And the live client takes the candidate name with the highest vote count as the target name of the game element, and sends the voting result to the game element library. The game element library stores the pictures and the target names of the game elements, and associates the pictures of the game elements with the corresponding target names.
By way of example, the candidate names for the game element of FIG. 2 may include "little beauty" and "big beautiful". When "big beautiful" receives the most votes, it is taken as the target name of the game element and sent to the game element library, which stores the picture and target name of the game element and associates them.
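The vote-counting step above can be sketched as follows; a minimal illustration only, with the storage key invented for the example.

```python
from collections import Counter

def elect_target_name(votes):
    """Return the candidate name with the most votes (ties resolved by first
    occurrence, per Counter.most_common ordering)."""
    return Counter(votes).most_common(1)[0][0]

votes = ["big beautiful", "little beauty", "big beautiful",
         "little beauty", "big beautiful"]
target = elect_target_name(votes)

# The winning name is then stored in the element library together with the
# element picture ("fig2_picture_id" is an invented key for this sketch).
element_library = {}
element_library["fig2_picture_id"] = target
```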
Maintaining game element names by querying the game element library allows the game development team to manage element pictures and names precisely, ensuring consistent and accurate associations between them. This helps keep game elements standardized within the game and reduces confusion and error. This approach suits games whose element names are highly standardized and fixed.
Maintaining game element names by anchor naming adds personality and interactivity, making the game more entertaining. The anchor can give elements unique names, attract the attention of the audience, and increase players' sense of immersion. Element names can also be updated quickly to fit different live scenarios and anchors, bringing freshness to the game.
Maintaining game element names by user naming improves diversity and social interaction. Different users can contribute creative names, enriching the game content. Voting and auditing mechanisms help ensure the quality and suitability of names, avoiding improper or offensive naming. User participation in naming also increases users' sense of investment in and loyalty to the game and improves the interactivity of the game community.
Step S120: a target element is determined.
Taking a live game room as an example, when a user watching the live broadcast clicks the game screen, the live broadcast software obtains the screen position clicked by the user. An image segmentation algorithm may then be used to identify the picture element containing the click point and mark it, for example with a dashed edge outline or highlighting.
When several overlapping elements exist at the clicked position, the elements differ from one another but all cover the click position. In this case the system marks one of the elements. If the user clicks the same position again within a short time, the system marks the second element, and so on, cycling through the elements continuously.
By way of example, FIG. 4 shows overlapping game elements. As shown in FIG. 4, where the cup element, character element, and car element overlap, the system first marks the cup element when the user clicks any point within it. If the user clicks the same position again, the system marks the character element; clicking again marks the car element, and so on until every element has been shown once, after which the cycle restarts.
In determining the target element, the criterion for judging whether two successive click positions are the same may be that the elements associated with each click are identical, i.e., the number of elements is the same and each element covers the same area.
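The cyclic selection behaviour of this step can be sketched as follows. The bounding-box hit test stands in for real image segmentation, and all data structures are invented for illustration.

```python
def elements_at(point, elements):
    """Elements whose area contains the click point, top-most layer first
    (a bounding-box stand-in for real image segmentation)."""
    x, y = point
    hits = [e for e in elements
            if e["x0"] <= x <= e["x1"] and e["y0"] <= y <= e["y1"]]
    return sorted(hits, key=lambda e: -e["layer"])

class CyclicSelector:
    """On repeated clicks at the same position, cycle through the overlapping
    elements; positions count as the same when they hit the identical set."""
    def __init__(self):
        self._last_hit_names = None
        self._index = 0

    def select(self, point, elements):
        hits = elements_at(point, elements)
        if not hits:
            return None
        names = [e["name"] for e in hits]
        if names == self._last_hit_names:
            self._index = (self._index + 1) % len(hits)   # same spot: advance
        else:
            self._last_hit_names, self._index = names, 0  # new spot: restart
        return hits[self._index]["name"]

# The FIG. 4 overlap: cup on top of person on top of car (invented boxes)
elements = [
    {"name": "cup",    "layer": 3, "x0": 0, "y0": 0, "x1": 10, "y1": 10},
    {"name": "person", "layer": 2, "x0": 0, "y0": 0, "x1": 20, "y1": 20},
    {"name": "car",    "layer": 1, "x0": 0, "y0": 0, "x1": 30, "y1": 30},
]
selector = CyclicSelector()
picks = [selector.select((5, 5), elements) for _ in range(4)]
```

Four clicks at the same point cycle cup, person, car, and then back to cup, matching the looped display described above.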
Through effective image segmentation and cyclic display, this method lets users interactively identify and learn about different elements in the picture, providing a richer user experience.
Step S130: the name of the target element is determined.
When the live broadcast software detects that the user has dragged the currently marked target element into the input box area, the system sends the target element picture to the element comparison component. The comparison component may be flexibly integrated into the live broadcast client, the live broadcast server, the game client, or the game server.
After the element comparison component receives the target element picture, it may compare the picture against the game elements in the game element library using similarity comparison techniques and then return the corresponding element name.
This way of determining the name of the target element lets the user quickly obtain an element name by dragging the target element's picture into the input box area, providing an intuitive way to interact with and identify elements in the game. It also offers flexibility, allowing the element comparison function to be integrated at different locations as desired.
Step S140: information content is generated.
Illustratively, after the input box of the live broadcast software receives the name of the target element, the name is generated in the input box as text, so the user can clearly see the name of the target element.
Finally, the user can edit the information content in the input box and send it.
The technical solutions provided in the exemplary embodiments of the present disclosure increase the diversity and interactivity of games by letting game anchors and users participate in naming game elements. Different anchors and users can provide their own unique element names, which not only enriches game content and improves user participation, but also allows element names to be adjusted in real time for different live broadcast scenarios and audience demands, improving flexibility.
In addition, because the name of the target element is obtained through image recognition and comparison, the dependence on proper nouns in a specific scene is removed. The user does not need to know an element's name in advance; simply dragging the element into the input box causes the system to identify it and generate the corresponding element name, so the user can clearly express the corresponding scene content and the misunderstandings caused by unclear element names are avoided.
The information generation method provided in the exemplary embodiments of the present disclosure therefore not only manages element names scientifically, but also significantly improves the game experience and user engagement. It also solves the technical problems that a user must type quickly when describing scene content and that scene descriptions are inaccurate when the user is unfamiliar with element names.
Based on the above embodiments, the present disclosure further provides an information generating method, and fig. 5 shows a flowchart of the information generating method according to an exemplary embodiment of the present disclosure, and as shown in fig. 5, the method may include the following steps:
step S510: in the live broadcast process, target operation of a target user on a target object in live broadcast is obtained.
In an embodiment, the target operation may include interactive gestures or operations such as object selection, object dragging, clicking, double-clicking, and long-pressing. This diversity accommodates the preferences of different users and different types of live content. The target operation selects a target object and issues a request to acquire the target object's target information.
During the live broadcast, a target user can thus select a target object through the target operation and issue a request for its target information.
Step S520: and responding to the target operation, acquiring target information corresponding to the target object, and generating the target information in an information input box.
In an embodiment, the target information may include the name, attributes, description, picture features, and the like of the target object.
In response to the target operation performed by the target user, target information corresponding to the target object is acquired, and the target information is generated in the information input box of the live broadcast room.
For example, when generating the target information, natural language processing techniques may be used to convert the target information into natural language text so that the user can better describe or comment on the selected object.
For example, the target information may also be automatically combined with the target user's interactions or comments to form a meaningful context, improving the user experience.
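The two examples above, converting the target information to natural-language text and combining it with the user's own comment, might look like the following template-based sketch. A real system could use an NLP model instead, and the field names here are invented.

```python
def compose_message(target_info, user_comment=""):
    """Turn the target object's info into text and, when present, merge it
    with the user's own comment to form a single message."""
    text = f"{target_info['name']} ({target_info['description']})"
    return f"{user_comment}: {text}" if user_comment else text

# Invented example data: a target object and a viewer's comment
info = {"name": "Peashooter", "description": "shoots peas at zombies"}
message = compose_message(info, "Look at this")
```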
With the one or more technical solutions provided by the embodiments of the present disclosure, the target user can quickly query the target information of a target object by performing the target operation on it in the live view, and the target information is generated automatically in the information input box, reducing the time the target user spends querying and entering it. The method and device address the technical problems that a user must type quickly when describing the content of a live view, and that scene descriptions are inaccurate when the user is unfamiliar with the target information of a target object.
Based on the above embodiment, in still another embodiment provided by the present disclosure, the information generating method may further include:
acquiring position information corresponding to target operation in live broadcast;
and determining the object corresponding to the position information as a target object.
In an embodiment, when a target user watching the live broadcast performs a target operation such as clicking the live view, the system may acquire the specific position clicked by the user, that is, the click coordinates.
An image segmentation algorithm can then identify the object at that position. Taking a live game as an example, objects may include various items in the game scene, such as game characters, props, and backgrounds. By analyzing the image data, the system can determine the object at the click position and mark the target object with a visual identifier, such as a dashed edge outline, highlighting, or another visualization, helping the user see which object is selected.
By way of example, the target operation may be a click, double-click, long press, drag, or another gesture, to accommodate different user preferences and types of live content. For example, in a live game a target user may select a target object with a click, while when watching a live sports broadcast they may select it with a double-click or long press.
This process responds to the user's target operation in real time, associating the clicked position with a specific object through image analysis and recognition to determine the target object. It gives users a convenient way to interact with and select target objects in live game scenes and other live scenes, enabling precise operation and retrieval of the required information.
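Mapping the click coordinates to an object can be sketched as a lookup into a segmentation result. The label grid below is an invented stand-in for the output of a real image-segmentation algorithm.

```python
def object_at(click, label_mask):
    """Return the segmentation label under the click point.
    label_mask is a row-major grid: label_mask[row][column]."""
    x, y = click
    return label_mask[y][x]

# Invented 3x4 segmentation of a game frame: background, a character, a prop
label_mask = [
    ["bg", "bg",   "prop", "prop"],
    ["bg", "hero", "prop", "prop"],
    ["bg", "hero", "bg",   "bg"],
]
selected = object_at((1, 1), label_mask)
```

Once the label is known, the client can draw the outline or highlight over the matching region to confirm the selection to the user.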
Based on the above embodiments, in still another embodiment provided in the present disclosure, the step of determining the object corresponding to the location information as the target object may include:
when a plurality of objects exist in the target area corresponding to the position information, taking a preset object among the plurality of objects as the target object;
when a target operation of the target user on the target area is received again, re-determining the target object among the plurality of objects.
In an embodiment, the target area corresponding to the position information is the area where the target user performs the target operation. When a plurality of objects exist in the target area, one preset object may be set; for example, the object on the uppermost layer of the live view may be taken as the preset object. Taking FIG. 4 as an example, the "cup" on the uppermost layer may be taken as the preset object, and the overlapping area of the "cup", "person", and "car" as the target area; when the target user performs the target operation in the target area, the "cup" object is selected as the target object.
When the target user performs the target operation again in the target area, the target object is re-determined among the plurality of objects. Taking FIG. 4 as an example, when the target user's first target operation in the target area is received, the preset object "cup" is the target object. When the target user performs the target operation again in the target area, the "person" object becomes the new target object, and so on, until all objects in the target area have been cycled through.
Based on this, when a plurality of objects exist in the target area, the system can switch intelligently among them, allowing the user to quickly select the target object.
Based on the above embodiment, in still another embodiment provided in the present disclosure, the target operation includes an object selection operation and an object information extraction operation, and the step S520 may include:
responding to the object selection operation of a target user for the target object, and selecting the target object in the live broadcast picture;
and responding to the object information extraction operation of the target user on the target object, acquiring target information corresponding to the target object, and generating the target information in the information input box.
In an embodiment, the target operation includes an object selection operation for selecting the target object and an object information extraction operation; the latter may be a drag operation that drags the selected target object to a target area, where the target area may be the information input box area.
In the embodiment, taking live game as an example, in the case where the target user performs an object selection operation on a target object in a live view, the target object is selected in the live view in response to the operation.
In an embodiment, the target user may select a target object of interest through a variety of object selection operations, which may include clicking, double-clicking, long-pressing, dragging, or other gestures, to accommodate different user preferences and live content types. For example, in a live game the target user may select a target object with a click, while when watching a sports broadcast they may select it by double-clicking or long-pressing.
In an embodiment, when there are multiple overlapping objects at the position where the target user performs the object selection operation, the target user may also select different objects through different object selection operations. For example, when the object selection operation is a click, the object at the uppermost layer at the corresponding position in the live broadcast picture is selected; when the object selection operation is a double-click, the object at the second layer at the corresponding position is selected; when the object selection operation is a long press, the object at the third layer at the corresponding position is selected, and so on, cycling continuously. Taking fig. 4 as an example, when the object selection operation is a click, the uppermost "cup" object is selected; when it is a double-click, the second-layer "person" object is selected; and when it is a long press, the third-layer "car" object is selected.
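The gesture-to-layer mapping in this paragraph might be sketched as follows. All names here are hypothetical; a real client would resolve the layers from hit-testing the live broadcast picture:

```python
def select_layer(gesture, layers):
    """Map an object selection gesture to an object layer.

    layers is ordered uppermost first; the gesture index wraps around
    when there are fewer layers than gestures, mirroring the cyclic
    selection described above.
    """
    gesture_order = {"click": 0, "double_click": 1, "long_press": 2}
    return layers[gesture_order[gesture] % len(layers)]


layers = ["cup", "person", "car"]  # fig. 4 example, uppermost first
print(select_layer("click", layers))         # cup
print(select_layer("double_click", layers))  # person
print(select_layer("long_press", layers))    # car
```

With only two overlapping layers, a long press would wrap back to the uppermost object, keeping every gesture meaningful.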
In an embodiment, taking a live game as an example, the target user can perform the object information extraction operation while watching the live broadcast, moving the selected target object to the information input box by dragging. In response to this operation, the system acquires the target information corresponding to the target object and automatically fills it into the information input box so that the target user can describe, interact, or comment.
In an embodiment, the target user may configure a variety of object information extraction operations, which may include click-and-drag, double-click-and-drag, long-press-and-drag, or other user-friendly gestures, to accommodate different user preferences and live content types. For example, in a live combat game the target user may prefer to select and obtain target information by clicking and dragging, while in a live strategy game double-clicking and dragging may be more practical. This diversity and personalized choice of operations helps enhance the user experience, lets users interact more freely with live content and obtain the information they need, and provides greater applicability for different live scenes and user requirements.
Based on this, setting flexible and diverse object selection operations and object information extraction operations improves the user experience, allows the target user to interact more freely with the live content and acquire the required target information, and provides wider applicability for different live broadcast scenes and user requirements.
Based on the above embodiment, in still another embodiment provided by the present disclosure, the step of obtaining target information corresponding to the target object may include:
acquiring identification information of a target object;
determining target information corresponding to the target object in a preset object library based on the identification information; the preset object library comprises a plurality of objects, and the objects respectively correspond to different identification information.
In an embodiment, the identification information of the target object may include a standard picture and various variant-form pictures. The standard picture may be an image of the normal or reference form of the target object: its typical, basic presentation, representing the appearance or state of the target object under standard conditions.

The variant-form pictures may be images of the different appearances or states the target object may present under different circumstances, conditions, or points in time. They reflect the variations the target object may undergo, such as different expressions, postures, colors, and sizes, and can be used to describe the aspects and states of the target object more fully, to meet the needs of different viewers or users.
In an embodiment, once the target user selects the target object, the system obtains the identification information of the target object, which may include a screenshot of the target object at the moment it is selected.
In an embodiment, determining, in the preset object library, the target information corresponding to the target object based on the identification information may include:
first, the identification information of the target object is used to compare with the identification information in the preset object library.
Illustratively, once the target user selects the target object, the system obtains its identification information, which may include a screenshot of the target object at the moment of selection. This screenshot is compared with the standard pictures and variant-form pictures of each object in the preset object library. The preset object library stores a plurality of objects, each with a corresponding object name, a standard picture, and various variant-form pictures.
Specifically, an image recognition technology can be used to calculate the matching degree of the screenshot when the target object is selected and the standard picture and the change form picture of each object in the preset object library respectively, so as to find the object with the highest matching degree.
After determining the object with the highest matching degree, the system can acquire the corresponding target information. The target information may include object names, descriptions, key attribute information, and the like.
Based on this, the target user can easily identify the target object and acquire the target information related to it without manual input or cumbersome query operations, providing a more convenient and accurate way for a live audience to interact with the anchor, give commentary, or describe the scene.
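One way the preset object library and the matching-degree lookup could be organized is sketched below. The schema, the `exact_match` stand-in, and all names are assumptions made for illustration; a real system would use an image recognition model to score the matching degree:

```python
# Hypothetical preset object library: each entry holds the object's
# identification pictures (standard form plus variant forms) and the
# associated target information (name, description, key attributes).
PRESET_LIBRARY = [
    {"pictures": ["cup_standard.png", "cup_tilted.png"],
     "info": {"name": "cup", "description": "a drinking vessel"}},
    {"pictures": ["car_standard.png"],
     "info": {"name": "car", "description": "a vehicle"}},
]


def exact_match(screenshot, picture):
    # toy matching-degree function standing in for image recognition
    return 1.0 if screenshot == picture else 0.0


def lookup_target_info(screenshot, match_fn=exact_match):
    """Return the info of the library object with the highest matching degree."""
    best_info, best_score = None, 0.0
    for entry in PRESET_LIBRARY:
        # an object can match via its standard picture or any variant picture
        score = max(match_fn(screenshot, p) for p in entry["pictures"])
        if score > best_score:
            best_info, best_score = entry["info"], score
    return best_info
```

Because the maximum is taken over all of an object's pictures, a screenshot of a variant form still resolves to the same object name as the standard form.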
Based on the above embodiment, in still another embodiment provided by the present disclosure, the step of determining the target information corresponding to the target object in the preset object library based on the identification information may include:
traversing identification information respectively corresponding to a plurality of objects in a preset object library;
respectively acquiring the similarity between the identification information of the target object and the identification information corresponding to the plurality of objects;
and under the condition that the similarity is larger than a preset threshold value, acquiring object information associated with the corresponding object in a preset object library, and determining the object information as target information corresponding to the target object.
In an embodiment, the system traverses the identification information corresponding to each of the plurality of objects in the preset object library, where the identification information includes a standard picture and variant-form pictures.
For each object in the library, the system compares the similarity between the identification information of the target object and that of the object. Specifically, feature extraction may be performed on the picture of the target object, converting it into a set of numbers or a vector for subsequent comparison. The extracted picture features are then compared with the picture features of each object in the preset object library, and the similarity between the identification information of the target object and the identification information corresponding to the plurality of objects is calculated. This step may be implemented using various image processing and computer vision techniques, such as feature matching, similarity metrics, or deep learning models (e.g., convolutional neural networks).
A preset threshold is then set to decide when two objects are considered similar: if the similarity between an object and the target object is higher than the preset threshold, the object is considered to match the target object.
After the matched objects are determined, the associated object information such as object names, descriptions, key attributes and the like is acquired from a preset object library, and the information is determined as target information corresponding to the target objects.
Based on the above, the similarity between the target object and a plurality of objects in the preset object library is calculated, and the matched object is found according to the similarity and the object information of the object is acquired, so that the target information of the target object is obtained. By setting a preset threshold, the system can control the strictness degree of the matching result so as to adapt to different application scenes. The method can be used for improving the accurate identification of the target object and automatically generating corresponding information, thereby providing more convenient and intelligent user experience.
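The traversal-plus-threshold step could be implemented, for instance, with cosine similarity over the extracted feature vectors. This is a sketch under the assumption that features have already been extracted; the function names and the 0.8 threshold are illustrative, not prescribed by the disclosure:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def match_target(target_feat, library, threshold=0.8):
    """Traverse the library, keep the most similar object, and return its
    info only if the similarity exceeds the preset threshold."""
    best_name, best_sim = None, 0.0
    for name, entry in library.items():
        # compare against the standard picture and every variant-form picture
        sim = max(cosine_similarity(target_feat, f) for f in entry["features"])
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim > threshold:
        return library[best_name]["info"]
    return None  # no object is similar enough; matching is rejected
```

Raising the threshold makes matching stricter (fewer false positives, more rejections), which is the "strictness degree" control mentioned above.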
Based on the above embodiment, in still another embodiment provided in the present disclosure, the step S520 may include:
displaying a plurality of pieces of information to be selected in an information input box;
and receiving editing operation of the target user on the plurality of pieces of information to be selected, and generating target information based on the editing operation.
In an embodiment, the candidate information may include an object name, attribute information, and description content of the target object.
Specifically, in response to the target operation, the information to be selected corresponding to the target object may be acquired and displayed in the information input box. Taking a live game as an example, when the target object is a "pea spray", the object name is "pea spray", the attribute information is "spray weapon: pea; moderate firing speed; easily gnawed by zombies", and the description is "Pea sprays are plant fighters with rich skills. They can cause a large amount of explosive damage and are effective in attacking smaller targets." The plurality of pieces of information to be selected displayed for the target object in the information input box are, respectively: "object name: pea spray"; "attribute information: spray weapon: pea; moderate firing speed; easily gnawed by zombies"; and "description: pea sprays are plant fighters with rich skills. They can cause a large amount of explosive damage and are effective in attacking smaller targets."
In an embodiment, the target user can perform editing operations on the plurality of pieces of information to be selected and generate the target information based on the editing operations. For example, parts of the information to be selected may be added, modified, or deleted to produce target information such as "Pea sprays are great; their firing speed is not that fast, but they are well suited to dealing with small zombies."
Based on this, the plurality of pieces of information to be selected provide the user with various information choices, including the object name, attribute information, and description, improving information accuracy. Secondly, the user can edit the information to be selected, enabling personalized customization and generating target information that matches the user's viewpoint and style. In addition, the scheme is widely applicable across different live broadcast scenes and fields, meets the information generation needs of different types of live content, provides users with a more flexible and personalized way to generate information, and can effectively improve the experience of the live audience.
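The candidate-display and editing flow might look like the following sketch; the helper names and the function-based editing operation are illustrative assumptions:

```python
def build_candidates(obj):
    """Render the candidate lines shown in the information input box."""
    return [
        f"Object name: {obj['name']}",
        f"Attributes: {obj['attributes']}",
        f"Description: {obj['description']}",
    ]


def generate_target_info(candidates, edit):
    """Apply the user's editing operation (a function that adds, modifies,
    or deletes candidate lines) and join the result into target info."""
    return " ".join(edit(candidates))


obj = {"name": "pea shooter", "attributes": "moderate firing speed",
       "description": "good against smaller targets"}
lines = build_candidates(obj)
# e.g. the user deletes the attributes line, keeping name and description
info = generate_target_info(lines, lambda c: [c[0], c[2]])
print(info)
```

In a real client the `edit` step would be interactive text editing in the input box rather than a function, but the shape of the data flow is the same: candidates in, edited target information out.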
One or more technical solutions provided in exemplary embodiments of the present disclosure increase diversity and interactivity of objects in a live view by allowing a host and a user to participate in the process of object naming. Different anchor and users can provide respective unique object names, so that not only are live broadcast contents enriched and user participation improved, but also the object names can be adjusted in real time according to different live broadcast scenes and audience demands, and flexibility is improved.
In addition, the name of the target object is acquired through image recognition and comparison, removing the dependence on proper nouns in a specific scene. The user does not need to know the name or description of the object in advance; by simply dragging the object into the input box, the system can accurately identify it and generate the corresponding object name, so that the user can clearly express the corresponding scene content and misunderstandings caused by unclear object names are avoided.
Therefore, the information generation method provided by the exemplary embodiments of the present disclosure not only manages object names scientifically but also significantly improves the user experience and user participation of live broadcasting. It also solves the technical problems that users need to input quickly when describing scene content and that scene descriptions become inaccurate when object names are unfamiliar.
The foregoing description of the embodiments of the present disclosure has been presented primarily in terms of methods. It will be appreciated that, in order to implement the above-mentioned functions, the apparatus corresponding to the method of the exemplary embodiment of the present disclosure includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiments of the present disclosure may divide functional units of a server according to the above method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
In the case where respective functional modules are divided with corresponding respective functions, exemplary embodiments of the present disclosure provide an information generating apparatus, which may be a server or a chip applied to the server. Fig. 6 shows a functional block diagram of an information generating apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 6, the information generating apparatus 600 includes:
the data acquisition module 610 is configured to acquire, in a live broadcast process, a target operation of a target user on a target object in the live broadcast;
the data processing module 620 is configured to obtain target information corresponding to the target object in response to a target operation, and generate the target information in an information input box.
In yet another embodiment provided by the present disclosure, the data processing module 620 is further configured to obtain location information corresponding to the target operation in the live broadcast; and determining the object corresponding to the position information as the target object.
In still another embodiment provided by the present disclosure, the data processing module 620 is further configured to, when a plurality of objects exist in a target area corresponding to the location information, take a preset object of the plurality of objects as the target object; and re-determining a target object in the plurality of objects when the target operation of the target user on the target area is received again.
In yet another embodiment provided in the present disclosure, the target operation includes an object selection operation and an object information extraction operation, and the data processing module 620 is further configured to select the target object in a live view in response to an object selection operation of the target user for the target object; and responding to the object information extraction operation of the target user on the target object, acquiring target information corresponding to the target object, and generating the target information in an information input box.
In yet another embodiment provided in the present disclosure, the data processing module 620 is further configured to obtain identification information of the target object; determining the target information corresponding to the target object in a preset object library based on the identification information; the preset object library comprises a plurality of objects, and the objects respectively correspond to different identification information.
In yet another embodiment provided in the present disclosure, the data processing module 620 is further configured to traverse the identification information corresponding to each of the plurality of objects in the preset object library; respectively acquiring the similarity between the identification information of the target object and the identification information corresponding to the objects; and under the condition that the similarity is larger than a preset threshold value, acquiring object information associated with the corresponding object in a preset object library, and determining the object information as the target information corresponding to the target object.
In yet another embodiment provided in the present disclosure, the data processing module 620 is further configured to display a plurality of information to be selected in the information input box; and receiving editing operation of the target user on the plurality of pieces of information to be selected, and generating target information based on the editing operation.
Fig. 7 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the chip 700 includes one or more (including two) processors 701 and a communication interface 702. The communication interface 702 may support a server to perform the data transceiving steps of the method described above, and the processor 701 may support a server to perform the data processing steps of the method described above.
Optionally, as shown in fig. 7, the chip 700 further includes a memory 703, where the memory 703 may include a read only memory and a random access memory, and provides operating instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some embodiments, as shown in FIG. 7, the processor 701 performs the corresponding operation by invoking an operating instruction stored in memory (which may be stored in an operating system). The processor 701 controls the processing operations of any one of the terminal devices and may also be referred to as a central processing unit (CPU). The memory 703 may include read-only memory and random access memory and provides instructions and data to the processor; a portion of the memory 703 may also include NVRAM. The processor, the communication interface, and the memory are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 704 in fig. 7.
The method disclosed by the embodiments of the present disclosure can be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present disclosure may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Referring to fig. 8, a block diagram of an electronic device 800 will now be described; the electronic device may be a server or a client of the present disclosure and is an example of a hardware device that can be applied to aspects of the present disclosure. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the electronic device 800, and the input unit 806 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks, optical disks. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices over computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The computing unit 801 performs the various methods and processes described above. Each of the methods described above may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid-state drive (SSD)).
Although the present disclosure has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the disclosure. Accordingly, the specification and drawings are merely exemplary illustrations of the disclosure as defined by the appended claims, and are intended to cover any and all modifications, variations, combinations, or equivalents that come within the scope of the appended claims or their equivalents.

Claims (10)

1. An information generation method, the method comprising:
in a live broadcast process, acquiring a target operation performed by a target user on a target object in the live broadcast; and
in response to the target operation, acquiring target information corresponding to the target object, and generating the target information in an information input box.
2. The method according to claim 1, wherein the method further comprises:
acquiring position information corresponding to the target operation in the live broadcast; and
determining the object corresponding to the position information as the target object.
3. The method according to claim 2, wherein the determining the object corresponding to the location information as the target object includes:
in a case where a plurality of objects exist in a target area corresponding to the position information, taking a preset object among the plurality of objects as the target object; and
re-determining the target object from among the plurality of objects when the target operation of the target user on the target area is received again.
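The selection behavior of claims 2-3 can be sketched in code. This is a purely illustrative, non-limiting sketch: the scene representation, bounding-box hit test, and the choice of "first candidate" as the preset object are all assumptions, not details fixed by the claims.

```python
# Sketch of claims 2-3: collect the objects overlapping the operated
# position, pick a preset (here: first) object, and cycle to the next
# candidate when the same target area is operated on again.
# All names and data shapes here are illustrative.

def objects_at(position, scene):
    """Return all objects whose bounding box contains the position."""
    x, y = position
    return [obj for obj in scene
            if obj["x0"] <= x <= obj["x1"] and obj["y0"] <= y <= obj["y1"]]

class TargetSelector:
    def __init__(self):
        self._last_area = None
        self._index = 0

    def select(self, position, scene):
        candidates = objects_at(position, scene)
        if not candidates:
            return None
        area_key = tuple(sorted(o["id"] for o in candidates))
        if area_key == self._last_area:
            # Same target area operated on again: re-determine the target.
            self._index = (self._index + 1) % len(candidates)
        else:
            self._last_area = area_key
            self._index = 0  # preset object: first candidate
        return candidates[self._index]
```

Repeated operations on an area with overlapping objects thus walk through the candidates one by one, which is one plausible reading of "re-determining a target object" in claim 3.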
4. The method of claim 1, wherein the target operation includes an object selection operation and an object information extraction operation, and wherein the acquiring, in response to the target operation, target information corresponding to the target object and generating the target information in an information input box includes:
in response to the object selection operation of the target user on the target object, selecting the target object in a live broadcast picture; and
in response to the object information extraction operation of the target user on the target object, acquiring the target information corresponding to the target object, and generating the target information in the information input box.
5. The method of claim 4, wherein the obtaining the target information corresponding to the target object includes:
acquiring identification information of the target object; and
determining, based on the identification information, the target information corresponding to the target object in a preset object library, wherein the preset object library comprises a plurality of objects, and the plurality of objects respectively correspond to different identification information.
6. The method of claim 5, wherein the determining the target information corresponding to the target object in a preset object library based on the identification information comprises:
traversing the identification information respectively corresponding to the plurality of objects in the preset object library;
respectively acquiring a similarity between the identification information of the target object and the identification information corresponding to each of the plurality of objects; and
in a case where the similarity is greater than a preset threshold, acquiring object information associated with the corresponding object in the preset object library, and determining the object information as the target information corresponding to the target object.
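The library lookup of claims 5-6 can be illustrated with a short sketch. The similarity measure (here Python's `difflib` sequence ratio), the threshold value, and the library layout are assumptions for illustration only; the claims do not fix any of them.

```python
# Sketch of claims 5-6: traverse a preset object library, compare the
# target object's identification information against each stored entry,
# and return the associated object information once the similarity
# exceeds a preset threshold. Library contents are illustrative.
from difflib import SequenceMatcher

PRESET_OBJECT_LIBRARY = [
    {"identification": "red running shoe", "info": "Model X trainer, sizes 36-44"},
    {"identification": "blue water bottle", "info": "600 ml insulated bottle"},
]

def lookup_target_info(identification, library, threshold=0.8):
    """Return the info of the first library object whose identification
    similarity exceeds the threshold, or None if no object matches."""
    for obj in library:
        similarity = SequenceMatcher(None, identification,
                                     obj["identification"]).ratio()
        if similarity > threshold:
            return obj["info"]
    return None
```

A recognized label such as "red running shoes" matches the stored "red running shoe" entry well above a 0.8 threshold, while an unrelated label matches nothing and yields no target information.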
7. The method of claim 1, wherein generating the target information in an information input box comprises:
displaying a plurality of pieces of information to be selected in the information input box; and
receiving an editing operation of the target user on the plurality of pieces of information to be selected, and generating the target information based on the editing operation.
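One simple reading of the editing operation in claim 7 is that the user picks and orders some of the displayed candidate pieces. The function below is a hypothetical sketch of that reading; the claim itself does not specify the form of the editing operation.

```python
# Sketch of claim 7: several candidate pieces of information are shown,
# and an editing operation (modeled here as a list of selected indices,
# in the user's chosen order) produces the final target information.
# The representation of the editing operation is an assumption.

def generate_from_candidates(candidates, selected_indices):
    """Keep the candidate pieces the user selected, in the order
    selected, joined into a single piece of target information."""
    return " ".join(candidates[i] for i in selected_indices)
```

For example, selecting the third and then the first of three candidate pieces yields those two pieces concatenated in that order.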
8. An information generating apparatus, comprising:
a data acquisition module configured to acquire, in a live broadcast process, a target operation of a target user on a target object in the live broadcast; and
a data processing module configured to, in response to the target operation, acquire target information corresponding to the target object and generate the target information in an information input box.
9. An electronic device, comprising:
a processor; and
a memory storing a program;
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to any of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
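The overall flow of claim 1 can be condensed into a self-contained sketch: a target operation on a live-broadcast object triggers an information lookup, and the result is generated in the comment input box. The `InputBox` class and the dictionary-backed library are purely illustrative stand-ins, not structures described in the patent.

```python
# Sketch of claim 1 end to end: the target operation identifies an
# object, its information is fetched, and that information is generated
# in the information input box. All names here are hypothetical.

class InputBox:
    """Stand-in for the live-broadcast comment input box."""
    def __init__(self):
        self.text = ""

def handle_target_operation(target_object_id, object_info_library, input_box):
    """In response to a target operation, acquire the target object's
    information and generate it in the input box; leave the box
    unchanged if the object is unknown."""
    info = object_info_library.get(target_object_id)
    if info is not None:
        input_box.text = info
    return input_box.text
```

The user can then edit or send the pre-filled text, which matches the stated aim of generating comment content from an object the viewer operates on during the live broadcast.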
CN202311732479.3A 2023-12-15 2023-12-15 Information generation method, device, electronic equipment and storage medium Pending CN117714728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311732479.3A CN117714728A (en) 2023-12-15 2023-12-15 Information generation method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117714728A 2024-03-15

Family

ID=90143914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311732479.3A Pending CN117714728A (en) 2023-12-15 2023-12-15 Information generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117714728A (en)

Similar Documents

Publication Publication Date Title
US11800192B2 (en) Bullet screen processing method and apparatus, electronic device, and computer-readable storage medium
CN106658200B (en) Live video sharing and acquiring method and device and terminal equipment thereof
US8245124B1 (en) Content modification and metadata
KR20220127887A (en) Method and apparatus for displaying live broadcast data, device and storage medium
US20170161931A1 (en) Adapting content to augmented reality virtual objects
CN109091861B (en) Interactive control method in game, electronic device and storage medium
CN113225572B (en) Page element display method, device and system of live broadcasting room
WO2022057722A1 (en) Program trial method, system and apparatus, device and medium
CN109964275A (en) For providing the system and method for slow motion video stream simultaneously with normal speed video flowing when detecting event
CN109309842B (en) Live broadcast data processing method and device, computer equipment and storage medium
CN109829064B (en) Media resource sharing and playing method and device, storage medium and electronic device
CN109144652B (en) View display method and device, electronic equipment and storage medium
CN113178015A (en) House source interaction method and device, electronic equipment and storage medium
CN109299326A (en) Video recommendation method and device, system, electronic equipment and storage medium
CN106471819A (en) System and method for improving the accuracy in media asset recommended models
US11632585B2 (en) Systems and methods for streaming media menu templates
CN113573090A (en) Content display method, device and system in game live broadcast and storage medium
CN113938696B (en) Live broadcast interaction method and system based on custom virtual gift and computer equipment
US20160118084A1 (en) Apparatus and method for calculating and virtually displaying football statistics
CN112752132A (en) Cartoon picture bullet screen display method and device, medium and electronic equipment
CN111309428B (en) Information display method, information display device, electronic apparatus, and storage medium
CN117714728A (en) Information generation method, device, electronic equipment and storage medium
CN113569089B (en) Information processing method, device, server, equipment, system and storage medium
CN111225266B (en) User interface interaction method and system
US11249823B2 (en) Methods and systems for facilitating application programming interface communications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination