CN111209068A - Information processing method and device and computer readable storage medium - Google Patents
- Publication number
- CN111209068A (application number CN202010001716.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- processed
- interaction
- control
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the invention disclose an information processing method and apparatus and a computer-readable storage medium. An information processing page is displayed, the page comprising an information display canvas and an information entry interface, where the canvas includes at least one interaction area. Information to be processed, entered through the information entry interface, is received and its type is identified. A target interaction control corresponding to that type is then screened out of a preset interaction control set according to the identification result, and the information to be processed together with the target interaction control is added to the interaction area and displayed on the information display canvas. The scheme can greatly reduce the information processing time in the interaction process.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to an information processing method, an information processing apparatus, and a computer-readable storage medium.
Background
In recent years, with the development of Internet technology, person-to-person interaction has become very convenient, especially interaction on social platforms, such as the "like". A like is an emotional interaction, and likes with a uniform interaction control style constitute a single interaction mode. Under this single interaction mode, at present a user adds a label to dynamic content, the labels are manually matched one by one with interaction control styles in the background, and the label and the matched interaction control style are displayed after matching is finished.
In the research and practice of the prior art, the inventors found that processing label information manually in the background is time-consuming. In addition, from the user's perspective, a label must be added manually when publishing dynamic content, which correspondingly increases operation time, so the information processing time in the interaction process is greatly increased.
Disclosure of Invention
Embodiments of the invention provide an information processing method and apparatus and a computer-readable storage medium, which can reduce the time spent on information processing during interaction.
An information processing method comprising:
displaying an information processing page, wherein the information processing page comprises an information display canvas and an information input interface, and the information display canvas at least comprises an interaction area;
receiving information to be processed input in the information input interface;
identifying the type of the information to be processed;
screening out a target interaction control corresponding to the type of the information to be processed from a preset interaction control set according to the identification result;
and adding the information to be processed and the target interaction control to the interaction area, and displaying on the information display canvas.
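The five steps above can be sketched, for illustration only, as a minimal pipeline. Everything here is a hypothetical stand-in for demonstration — the keyword-based type identifier, the control names, and the list-based canvas are not the patented implementation:

```python
# Hypothetical sketch of the claimed five-step flow (all names are illustrative only).
PRESET_CONTROLS = {"sports": "thumbs_up_ball", "game": "gamepad_like", "default": "thumbs_up"}

def identify_type(info):
    """Stand-in for the recognition step: classify by a keyword in the text."""
    text = info.get("text", "").lower()
    if "football" in text or "match" in text:
        return "sports"
    if "game" in text:
        return "game"
    return "default"

def screen_control(info_type):
    """Screen the target interaction control out of the preset set."""
    return PRESET_CONTROLS.get(info_type, PRESET_CONTROLS["default"])

def process(info, canvas):
    """Receive the info, identify its type, pick a control, add both to an interaction area."""
    info_type = identify_type(info)
    control = screen_control(info_type)
    canvas.append({"info": info, "control": control})  # one interaction area per entry
    return control

canvas = []
control = process({"text": "Great football match today"}, canvas)
```

The point of the sketch is only the ordering of the steps: type identification happens automatically after receipt, so no manual matching step is needed.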
Optionally, an embodiment of the present invention further provides another information processing method, including:
displaying an information processing page, wherein the information processing page comprises an information sharing control and an information display canvas, and the information canvas at least comprises an interaction area;
when receiving a triggering operation aiming at an information sharing control, displaying an information publishing page, wherein the information publishing page comprises at least one information input control and a publishing control;
when receiving the information to be processed input through the information input control, displaying the information to be processed on the information publishing page;
when a trigger operation aiming at the release control is received, displaying the information to be processed on an information display canvas of the information processing page, and displaying a target interaction control corresponding to the information to be processed in the interaction area;
and when the trigger operation aiming at the target interaction control is detected, displaying an interaction record in the interaction area.
Accordingly, an embodiment of the present invention provides an information processing apparatus, including:
the display unit is used for displaying an information processing page, the information processing page comprises an information display canvas and an information input interface, and the information display canvas at least comprises an interaction area;
the receiving unit is used for receiving the information to be processed which is input in the information input interface;
the identification unit is used for identifying the type of the information to be processed;
the screening unit is used for screening out a target interaction control corresponding to the type of the information to be processed from a preset interaction control set according to the identification result;
and the processing unit is used for adding the information to be processed and the target interaction control to the interaction area and displaying the information on the information display canvas.
Optionally, an embodiment of the present invention further provides an information processing apparatus, including:
the information processing and displaying unit is used for displaying an information processing page, the information processing page comprises an information sharing control and an information display canvas, and the information canvas at least comprises an interaction area;
the information sharing device comprises a publishing page display unit, a processing unit and a processing unit, wherein the publishing page display unit is used for displaying an information publishing page when receiving triggering operation aiming at an information sharing control, and the information publishing page comprises at least one information input control and a publishing control;
the information to be processed display unit is used for displaying the information to be processed on the information publishing page when the information to be processed input through the information input control is received;
the interaction display unit is used for displaying the information to be processed on the information display canvas of the information processing page and displaying a target interaction control corresponding to the information to be processed in the interaction area when the trigger operation aiming at the release control is received;
and the interactive record display unit is used for displaying the interactive record in the interactive area when the trigger operation aiming at the target interactive control is detected.
Optionally, in some embodiments, the identification unit may be specifically configured to extract at least one image to be identified from the information to be processed; identifying the content of the image to be identified; and determining the type of the information to be processed according to the identification result.
Optionally, in some embodiments, the recognition unit may be specifically configured to, when there is one image to be recognized in the information to be processed, recognize the content of the image to be recognized by using a trained recognition model to obtain the recognition result; and when at least two images to be recognized exist in the information to be processed, recognizing the content of the images to be recognized by adopting the trained recognition model to obtain at least two initial recognition results, and fusing the initial recognition results to obtain the recognition results.
Optionally, in some embodiments, the identification unit may be specifically configured to classify the initial identification result; according to the classification result, obtaining a weighting coefficient corresponding to the initial identification result; weighting the initial recognition result based on the weighting coefficient; and fusing the weighted initial recognition results to obtain the recognition result.
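The weighting-and-fusion step just described can be illustrated with a small sketch. The classification labels and weighting coefficients below are assumptions chosen for demonstration; the patent does not specify them:

```python
# Illustrative weighting and fusion of per-image initial recognition results.
from collections import defaultdict

# Hypothetical mapping from classification of an initial result to its weighting coefficient.
CLASS_WEIGHTS = {"high_confidence": 1.0, "low_confidence": 0.5}

def fuse_results(initial_results):
    """initial_results: list of (label, score, classification) tuples, one per image."""
    totals = defaultdict(float)
    for label, score, classification in initial_results:
        weight = CLASS_WEIGHTS.get(classification, 0.5)  # weighting coefficient
        totals[label] += weight * score                  # weighted initial result
    # Fuse: the label with the highest accumulated weighted score is the recognition result.
    return max(totals, key=totals.get)

results = [("sports", 0.9, "high_confidence"),
           ("game", 0.8, "low_confidence"),
           ("sports", 0.4, "low_confidence")]
fused = fuse_results(results)
```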
Optionally, in some embodiments, the identification unit may be specifically configured to perform feature extraction on the image to be identified, so as to obtain a local feature of the image to be identified; carrying out feature coding on the local features to obtain coded local features; fusing the coded local features to obtain fused features; and identifying the content of the image to be identified based on the fused features.
Optionally, in some embodiments, the identification unit may be specifically configured to acquire a plurality of image samples, where the image samples include images to which identification results have been labeled; predicting the recognition result of the image sample by adopting a preset recognition model to obtain the predicted recognition result of the image sample; and converging the preset recognition model according to the predicted recognition result and the label recognition result to obtain the trained recognition model.
Optionally, in some embodiments, the screening unit may be specifically configured to obtain a type of an interaction control in the preset interaction control set; matching the type of the information to be processed with the type of the interactive control; and screening out a target interaction control corresponding to the type of the information to be processed from the preset interaction control set according to a matching result.
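A minimal sketch of this type-matching step, assuming the preset interaction control set is a list of type/style records (a hypothetical data shape, not the patent's):

```python
def match_control(info_type, control_set):
    """Match the type of the information to be processed against the type of each
    interaction control, and screen out the corresponding target control.
    control_set: list of {"type": ..., "style": ...} dicts (illustrative shape)."""
    for control in control_set:
        if control["type"] == info_type:  # matching result
            return control                # target interaction control
    return None                           # no control of this type is preset

controls = [{"type": "sports", "style": "football_like"},
            {"type": "game", "style": "gamepad_like"}]
target = match_control("game", controls)
```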
Optionally, in some embodiments, the processing unit may be specifically configured to obtain a first receiving time of the information to be processed; identifying a target interaction area corresponding to the first receiving time in the interaction area; adding the information to be processed and a target interaction control to the target interaction area; displaying the content of the target interaction region on the information display canvas.
Optionally, in some embodiments, the processing unit may be specifically configured to query the display content of the interaction area; when the display content exists in the interactive area, acquiring second receiving time for the interactive area to receive the display content; sorting the first and second receive times; and identifying a target interaction area corresponding to the first receiving time in the interaction area according to the sequencing result.
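One way to realize this sorting step, assuming (consistent with the description elsewhere) that later-received content is placed at the head of the canvas; the function name and integer time representation are illustrative:

```python
# Hypothetical sketch: locate the target interaction area by sorting receive times.
def target_area_index(first_receive_time, second_receive_times):
    """Return the position of the newly received content among the existing display
    content, with the most recent receive time placed at the head of the canvas."""
    all_times = second_receive_times + [first_receive_time]
    all_times.sort(reverse=True)  # newest first
    return all_times.index(first_receive_time)

# Existing display content was received at t=100 and t=200; new info arrives at t=300.
idx = target_area_index(300, [100, 200])
```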
Optionally, in some embodiments, the processing unit may be specifically configured to divide the target interaction area into an information display area and an interaction area; adding the information to be processed to the information display area; adding the target interaction control to the interaction region.
Optionally, in some embodiments, the processing unit may be specifically configured to generate an interaction record when a trigger operation for the target interaction control is detected; and displaying the interaction record in the interaction area.
Optionally, in some embodiments, the processing unit may be specifically configured to, when the trigger operation for the target interaction control is detected, obtain an identity of a terminal that triggers the operation, and count once; accumulating the counting times to obtain the total number of the counting times; and taking the total number of the counting times and the identity as an interaction record.
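The counting behavior described here can be sketched as follows; the class and method names are hypothetical:

```python
# Illustrative sketch of building an interaction record from trigger operations.
class InteractionRecord:
    """Counts trigger operations on a target control and keeps the triggering identities."""
    def __init__(self):
        self.total = 0
        self.identities = []

    def on_trigger(self, terminal_id):
        """Called when a trigger operation for the target control is detected."""
        self.total += 1                       # count once and accumulate the total
        self.identities.append(terminal_id)   # identity of the triggering terminal

record = InteractionRecord()
record.on_trigger("terminal-A")
record.on_trigger("terminal-B")
```

The total count together with the identities then forms the interaction record displayed in the interaction area.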
In addition, an electronic device is further provided in an embodiment of the present invention, and includes a processor and a memory, where the memory stores an application program, and the processor is configured to run the application program in the memory to implement the information processing method provided in the embodiment of the present invention.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform steps in any one of the information processing methods provided by the embodiments of the present invention.
After an information processing page is displayed, where the page comprises an information display canvas and an information entry interface and the canvas includes at least one interaction area, the information to be processed entered at the information entry interface is received and its type is identified. A target interaction control corresponding to that type is then screened out from the preset interaction control set according to the identification result, and the information to be processed together with the target interaction control is added to the interaction area and displayed on the information display canvas. In this scheme, after the information to be processed is received, its type is identified automatically and the target interaction control is matched directly according to the identified type; no manual matching is needed, and the user does not need to add a label manually during publication, so the time spent on information processing during interaction can be greatly reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of a scene of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an information processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an information handling page provided by an embodiment of the present invention;
FIG. 4 is a diagram of a trigger information handling page provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of an information entry page provided by an embodiment of the invention;
FIG. 6 is a schematic structural diagram of a simple recognition module of a recognition model according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a dimension-reduction recognition module of a recognition model according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating preset interactive control styles according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a target interaction area provided by an embodiment of the invention;
FIG. 10 is a schematic diagram of an interactive recording display provided by an embodiment of the present invention;
FIG. 11 is another schematic diagram of an interactive recording display according to an embodiment of the present invention;
FIG. 12 is a schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating an information publication page provided by an embodiment of the invention;
FIG. 14 is a flowchart illustrating a process for displaying a target interaction control according to an embodiment of the present invention;
FIG. 15 is another flow diagram of information processing provided by an embodiment of the invention;
fig. 16 is a schematic structural diagram of an information processing apparatus provided by an embodiment of the present invention;
fig. 17 is a schematic structural diagram of an identification unit of an information processing apparatus according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of a filtering unit of the information processing apparatus according to the embodiment of the present invention;
fig. 19 is another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 20 is another schematic structural diagram of an information processing apparatus provided by an embodiment of the present invention;
fig. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an information processing method, an information processing device and a computer readable storage medium. The information processing apparatus may be integrated into an electronic device, and the electronic device may be a server or a terminal.
For example, referring to fig. 1, taking an example that an information processing apparatus is integrated in an electronic device, an information processing page is displayed, where the information processing page includes an information display canvas and an information entry interface, the information display canvas includes at least one interaction area, receives information to be processed entered at the information entry interface, identifies a type of the information to be processed, screens out a target interaction control corresponding to the type in a preset interaction control set according to an identification result, adds the information to be processed and the target interaction control to the interaction area, and displays the information on the information display canvas.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The present embodiment will be described from the perspective of an information processing apparatus, which may be specifically integrated in an electronic device, which may be a terminal or the like; the terminal may include a tablet Computer, a notebook Computer, a Personal Computer (PC), and other devices.
An information processing method comprising:
the information processing method comprises the steps of displaying an information processing page, wherein the information processing page comprises an information display canvas and an information input interface, the information display canvas at least comprises an interaction area, receiving information to be processed input into the information input interface, identifying the type of the information to be processed, screening target interaction controls corresponding to the type in a preset interaction control set according to an identification result, adding the information to be processed and the target interaction controls into the interaction area, and displaying the information on the information display canvas.
As shown in fig. 2, the specific flow of the information processing method is as follows:
101. and displaying the information processing page.
The information processing page comprises an information display canvas and an information entry interface, wherein the information display canvas at least comprises an interaction area.
The information display canvas can be a display carrier for displaying information flow. For example, the information display canvas may include a plurality of interaction regions, each interaction region is independent from another interaction region, and information content and a comment or a praise interaction control for the information content may be displayed in the interaction region.
For example, the user operates through the login page of the client, thereby triggering the display of the information processing page, which is shown in fig. 3. For example, the user may input information such as a preset identity identifier on the login page through the client, and after the information is verified, the information processing page is triggered and displayed, as shown in fig. 4, the user may also directly trigger and display the information processing page through the client.
For example, the information processing page in fig. 3 may include a search control; in this embodiment the control may take various forms, such as an input box, an icon, or a button. After the user inputs the content to be searched in the search input box, processed information related to the searched content can be displayed on the information display canvas of the information processing page based on a triggering operation on the search control.

The information processing page may also include a personal center control. When the user triggers this control, a personal information page matching the user's identity information can be displayed; the personal information page may include the identity information, information to be processed and/or processed information that the user has saved to the application through the client. The personal center control may also trigger a login page, and when the user logs in successfully through the login page, the personal identity information, information to be processed and/or processed information can be displayed.

The information processing page may also include an information entry interface, which may be, for example, a "transmit" or "upload" entry. Through this interface the user uploads information to be processed to the information processing apparatus for processing, so that the processed information can be displayed on the information display canvas for the user and other users to browse.
The information processing page may further include a classification control. When the user triggers the classification control, the information processed or displayed on the information display canvas can be classified and a classification list generated; when the user triggers a type in the classification list, the displayed information corresponding to that type can be shown on the information display canvas. The information processing page may further include a follow control; when the user has successfully logged in to the application through the login page and triggers the follow control, processed or displayed information published by other users followed under the user's identity can be displayed on the information display canvas.
It should be emphasized that the processed information on the information display canvas can be dynamically updated, using the time of publication by the user as the reference. For example, user A publishes a first piece of information to be processed at ten a.m., and after being processed by the information processing apparatus it is displayed on the information display canvas; user A or another user then publishes a second piece of information to be processed, which is likewise displayed after processing. At this point the information on the information display canvas changes from one item to two, and the second, later-published piece may be displayed before the first; in other words, information published later is displayed preferentially at the head of the information display canvas.
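A minimal sketch of this newest-first dynamic update, modeling the canvas as a plain list (the item strings are illustrative):

```python
# Information published later is inserted at the head of the information display canvas.
canvas = []

def publish(canvas, item):
    canvas.insert(0, item)  # later publications are displayed first
    return canvas

publish(canvas, "first info (10:00)")
publish(canvas, "second info (later)")
```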
102. And receiving the information to be processed which is input in the information input interface.
The information to be processed may be information that a user wants to enter and issue to the information display canvas through a client, and the content of the information may be any one of text, image, audio and/or video, or may be a plurality of combinations, for example, a combination of an image and a text, or a combination of an image, a video and a text.
For example, the information to be processed is received after the user triggers the information entry interface control of the information processing page; at this time an information entry page may be displayed, as shown in fig. 5. The information entry page may include a text entry control, an image entry control, an expression entry control, an image preview control, a video entry control, an audio entry control, a cancel control, a publish control, and the like. The user enters the information to be processed by operating these controls: for example, text is entered through the text entry control and images through the image entry control, and after entry is completed a preview or thumbnail of the entered image can be generated on the information entry page; the user can also enter a desired expression through the expression entry control. After entering information through the various entry controls, if the user is not satisfied with the entered information to be processed, the cancel control can be triggered directly to return to the information processing page; if the user is satisfied, the publish control can be triggered to generate the information to be processed, which the information processing apparatus then receives.
103. And identifying the type of the information to be processed.
The type of the information to be processed may be a category that characterizes its content. For example, if the information to be processed is news or opinion about sports, its type may be considered sports; if it is a promotional image of a certain game, its type may be considered a game.
For example, the type of the image in the information to be processed may be identified, and then the type of the information to be processed may be determined, specifically as follows:
and S1, extracting at least one image to be recognized from the information to be processed.
For example, the content of the information to be processed is classified according to the content format, and according to the classification result, the image to be recognized, which is input by the user through the image input control, is extracted from the information to be processed, for example, the user may input files in image formats such as jpg, bmp, tif (format corresponding to image data) through the image input control. The content of the information to be processed is classified by format, for example, the content corresponding to the text format is classified into text type, the content corresponding to the audio format is classified into audio type, the content corresponding to the video format is classified into video type, and the content corresponding to the image format is classified into image type. And extracting at least one image to be identified from the information to be processed according to the classification result.
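The format-based classification and image extraction can be sketched as follows; the extension sets are illustrative examples, not an exhaustive list from the patent:

```python
# Hypothetical sketch: classify the content of the information to be processed by
# format and extract the images to be recognized.
IMAGE_EXTS = {".jpg", ".jpeg", ".bmp", ".tif", ".png"}
AUDIO_EXTS = {".mp3", ".wav"}
VIDEO_EXTS = {".mp4", ".avi"}

def classify_content(filenames):
    classes = {"image": [], "audio": [], "video": [], "text": []}
    for name in filenames:
        ext = name[name.rfind("."):].lower() if "." in name else ""
        if ext in IMAGE_EXTS:
            classes["image"].append(name)
        elif ext in AUDIO_EXTS:
            classes["audio"].append(name)
        elif ext in VIDEO_EXTS:
            classes["video"].append(name)
        else:
            classes["text"].append(name)   # everything else treated as text here
    return classes

def extract_images(info_files):
    """Extract at least one image to be recognized from the information to be processed."""
    return classify_content(info_files)["image"]

images = extract_images(["photo.jpg", "clip.mp4", "note.txt", "scan.tif"])
```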
And S2, identifying the content of the image to be identified.
(1) And when the information to be processed has one image to be recognized, recognizing the content of the image to be recognized by adopting the trained recognition model to obtain a recognition result.
For example, when there is an image to be recognized in the information to be processed, the content of the image to be recognized is recognized by using the trained recognition model, and the specific recognition process may be as follows:
and C1, performing feature extraction on the image to be recognized to obtain local features of the image to be recognized.
For example, a convolutional neural network in the trained recognition model may extract the underlying features with a fixed step size and scale; for instance, an Inception simple module (simple recognition module) of the GoogLeNet model (an image recognition model) may be used. As shown in fig. 6, the Inception simple module extracts the commonly used local features of the image to be recognized, such as Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and/or Local Binary Pattern (LBP) features.
It should be emphasized that if the Inception simple module is adopted, the pooling layer does not change the number of feature channels, so the channel count grows after concatenation; after several such modules are stacked, the number of channels becomes larger and larger, increasing the parameters and the amount of computation. To mitigate this drawback, an Inception dimension-reduction module (dimension reduction identification module) may be used instead. As shown in fig. 7, the Inception dimension-reduction module introduces three 1 × 1 convolution layers for dimension reduction; here dimension reduction means reducing the number of channels, and the 1 × 1 convolutions used can also correct the linear characteristics.
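The dimension-reduction effect of a 1 × 1 convolution can be shown without any deep-learning framework: per pixel, it is simply a linear map from the input channels to a smaller number of output channels. A minimal pure-Python sketch (illustrative, not the GoogLeNet implementation):

```python
def conv1x1(feature_map, weights):
    """feature_map: H x W x C_in nested lists; weights: C_out x C_in.
    A 1x1 convolution mixes channels per pixel, reducing C_in channels to C_out."""
    h, w = len(feature_map), len(feature_map[0])
    c_out, c_in = len(weights), len(weights[0])
    return [[[sum(weights[o][i] * feature_map[y][x][i] for i in range(c_in))
              for o in range(c_out)]
             for x in range(w)]
            for y in range(h)]

# A 2x2 feature map with 4 channels, reduced to 2 channels.
fmap = [[[1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, 1.0]],
        [[2.0, 2.0, 2.0, 2.0], [1.0, 0.0, 1.0, 0.0]]]
w = [[0.25, 0.25, 0.25, 0.25],  # channel-averaging filter
     [1.0, 0.0, 0.0, 0.0]]      # filter that picks the first channel
reduced = conv1x1(fmap, w)
```

Each pixel's channel vector shrinks from 4 values to 2, which is exactly the channel-count reduction the module relies on.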
And C2, carrying out feature coding on the local features to obtain coded local features.
The feature encoding may be an encoding operation of the underlying features using a feature transformation algorithm.
For example, the extracted local features are mainly low-level features, which contain a large amount of redundancy and noise. To improve the robustness of the feature expression, the local features need to be feature-encoded to obtain the encoded local features; for example, encoding schemes such as vector quantization coding, sparse coding, locality-constrained linear coding, and Fisher vector coding may be adopted.
And C3, fusing the local features after encoding to obtain fused features.
Feature fusion, which can also be called feature aggregation, mainly means taking the maximum or average value of each feature dimension over a spatial range, which yields a feature expression with a certain degree of invariance to deformation.
The fused features may be vectors describing a fixed dimension of the image to be recognized.
For example, within a certain spatial range, the maximum or average value is taken for each dimension of the coded local features, yielding an invariant feature expression of the image to be recognized. For instance, feature fusion may be performed on the coded local features in a pyramid feature matching manner to obtain the fused features. Specifically, the image to be recognized is uniformly partitioned, feature fusion is performed on each partition to obtain per-partition fused features, and then the per-partition features are fused again to obtain the fused features of the image to be recognized, i.e., a fixed-dimension feature vector of the image.
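The partition-then-pool step can be sketched as follows; this is a minimal NumPy illustration under assumed shapes (an 8×8 grid of 16-dimensional codes and a 2×2 partitioning), not the patent's exact pyramid.

```python
import numpy as np

def fuse_by_partition(coded, rows=2, cols=2):
    """Uniformly partition a (H, W, D) grid of coded local features,
    max-pool inside each partition, then concatenate the per-partition
    results into one fixed-dimension vector for the whole image."""
    h, w, d = coded.shape
    parts = []
    for i in range(rows):
        for j in range(cols):
            block = coded[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            parts.append(block.max(axis=(0, 1)))  # per-partition fusion
    return np.concatenate(parts)                  # image-level fused feature

coded = np.random.rand(8, 8, 16)  # 8x8 grid of 16-d coded local features
fused = fuse_by_partition(coded)
print(fused.shape)  # (64,) = 2*2 partitions * 16 dims
```

Because every image yields the same output length regardless of its size, the fused vector can be fed directly to a fixed-input classifier.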
And C4, identifying the content of the image to be identified based on the fused features.
For example, a classifier is used to classify the fixed-dimension feature vector of the image to be recognized, yielding the recognition result for the content of the image; the recognition result may be the content of the image and the type corresponding to that content, which completes the recognition process. For instance, a Support Vector Machine (SVM) or a random forest can be used to classify the fixed-dimension feature vector.
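As a stand-in for the SVM / random-forest classifier mentioned above, the sketch below uses a nearest-centroid rule over hypothetical class templates (the centroids and type names are invented for illustration); the interface is the same: a fixed-dimension fused feature in, a content type out.

```python
import numpy as np

# Hypothetical class templates: a mean fused-feature vector per content type.
centroids = {
    "game":  np.array([0.9, 0.1, 0.0]),
    "music": np.array([0.1, 0.9, 0.2]),
}

def classify(fused_feature):
    """Nearest-centroid stand-in for the SVM / random-forest classifier:
    return the type whose template is closest to the fused feature."""
    best = min(centroids.items(),
               key=lambda kv: np.linalg.norm(fused_feature - kv[1]))
    return best[0]

print(classify(np.array([0.8, 0.2, 0.1])))  # game
```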
(2) When at least two images to be recognized exist in the information to be processed, recognizing the content of the images to be recognized by adopting the trained recognition model to obtain at least two initial recognition results, and fusing the initial recognition results to obtain a recognition result.
For example, when there are at least two images to be recognized in the information to be processed, the trained recognition model is used to recognize the content of each image (the recognition process follows steps C1-C4), yielding at least two initial recognition results. Taking three images to be recognized as an example, the three images are recognized separately, producing three recognition results: a probability value for type A, a probability value for type B, and a probability value for type C. The initial recognition results are then classified; for example, if type A is a scene graph of a certain game while types B and C are character images of that game, the initial results can be classified into game scenes and game character images. According to the classification result, a weighting coefficient corresponding to each initial result is obtained, for example a preset weight of 0.7 for game scenes and 0.3 for game characters. The initial results are weighted accordingly and the weighted results are fused to obtain the recognition result: after weighting the probability values of types A, B, and C, the maximum of the three can be selected as the recognition result, which can be regarded as a comprehensive analysis of the three results concluding that the type of the images to be recognized is a certain game.
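The weighted-fusion step above can be sketched in plain Python; the category weights (0.7 for game scenes, 0.3 for game characters) follow the example, while the probability values are illustrative assumptions.

```python
# Preset weighting coefficients from the example above.
WEIGHTS = {"game_scene": 0.7, "game_character": 0.3}

def fuse_results(initial_results):
    """initial_results: list of (type, category, probability) tuples, one
    per image to be recognized. Weight each probability by its category's
    coefficient, then keep the highest-scoring type as the final result."""
    weighted = [(t, p * WEIGHTS[c]) for t, c, p in initial_results]
    return max(weighted, key=lambda tp: tp[1])[0]

results = [
    ("type_A", "game_scene", 0.8),      # scene graph of a game
    ("type_B", "game_character", 0.9),  # character image
    ("type_C", "game_character", 0.6),
]
print(fuse_results(results))  # type_A (0.8*0.7 = 0.56 beats 0.9*0.3 = 0.27)
```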
The trained recognition model may be trained from a plurality of image samples, and specifically may be trained by other devices and then provided to the information processing apparatus, or may be trained by the information processing apparatus, that is, before the step "recognizing the image to be recognized by using the trained recognition model", the information processing method may further include:
(1) a plurality of image samples are acquired, the image samples including images with labeled recognition results.
For example, a plurality of images may be acquired as original image samples through a database, a network and an image acquisition device, then the original image samples are preprocessed, such as de-duplicated, cropped, rotated and/or flipped, to obtain image samples meeting the input criteria of the preset recognition model, and then the preprocessed image samples are subjected to type labeling to obtain labeled image samples.
(2) And predicting the recognition result of the image sample by adopting a preset recognition model to obtain the predicted recognition result of the image sample.
For example, feature extraction is performed on an image sample by using a preset identification model to obtain local features of the image sample, feature coding is performed on the local features to obtain coded local features, the coded local features are fused to obtain fused features, and the image sample is identified based on the fused features to obtain a prediction result of image sample identification.
(3) And converging the preset recognition model according to the predicted recognition result and the marked recognition result to obtain the trained recognition model.
For example, the preset recognition model may be converged according to the predicted recognition result and the labeled recognition result by an interpolation loss function, so as to obtain a trained recognition model. For example, the following may be specifically mentioned:
and adjusting parameters for feature fusion in the preset recognition model according to the recognition result marked by the image sample and the predicted recognition result by adopting a Dice function (a loss function) to obtain the trained recognition model.
Optionally, in order to improve the accuracy of the context feature, besides the Dice function, other loss functions such as a cross entropy loss function may be used for convergence, which may specifically be as follows:
adjusting the parameters used for feature fusion in the preset recognition model according to the recognition result labeled on the image sample and the predicted recognition result by adopting a cross entropy loss function, and adjusting those parameters according to the labeled and predicted recognition results by adopting an interpolation loss function, to obtain the trained recognition model.
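For reference, the cross entropy loss mentioned above compares the model's predicted class probabilities with the labeled (one-hot) recognition result; a minimal NumPy version, with illustrative probability values, is:

```python
import numpy as np

def cross_entropy(predicted, labeled):
    """Cross entropy between predicted class probabilities and the one-hot
    labeled recognition result; this is the quantity the training loop
    would drive down when converging the recognition model."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(labeled * np.log(predicted + eps))

predicted = np.array([0.7, 0.2, 0.1])  # model output over three types
labeled = np.array([1.0, 0.0, 0.0])    # annotated ground-truth type
loss = cross_entropy(predicted, labeled)
print(round(loss, 4))  # 0.3567 = -ln(0.7)
```

The loss shrinks toward 0 as the predicted probability of the labeled type approaches 1, which is what "convergence" means in the training step above.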
And S3, confirming the type of the information to be processed according to the recognition result.
For example, the type of the information to be processed is determined according to the recognition result of the image to be recognized. The recognition result may be the content of the image and the type corresponding to that content, so the type of the image content can be obtained, and the type of the image content is associated with the type of the information to be processed. For instance, matching is performed in a preset type set for the information to be processed according to the type of the image content: if the type of the image content is a certain game, and the preset type set includes games, dances, music, animations, drawings, and the like, then matching determines that the type of the information to be processed is a game. As another example, if the image content is a promotional image for a certain piece of music, the type of the information to be processed can be determined to be music.
104. And according to the identification result, screening out the target interaction control corresponding to the type of the information to be processed in the preset interaction control set.
For example, the interaction control may be a praise button, and the user triggers the praise button to indicate a preference for the displayed information.
For example, the type of each interaction control in the preset interaction control set is obtained; each type corresponds to an information type, and as shown in fig. 8, the information types may include games, dances, music, animations, drawings, and the like. The type of the information to be processed is matched against the types of the interaction controls: for example, taking the type of the information to be processed as a game, the interaction control whose type corresponds to games is matched in the preset interaction control set. After the matching succeeds, the successfully matched interaction control is screened out of the preset set; taking the game as an example, the interaction control corresponding to games is screened out and used as the target interaction control.
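The screening step can be sketched as a simple lookup over the preset set; the control names and the shape of the records below are hypothetical, invented only to illustrate the matching.

```python
# Hypothetical preset interaction control set: each control carries the
# information type it corresponds to.
PRESET_CONTROLS = [
    {"control": "like_button_game",  "type": "game"},
    {"control": "like_button_music", "type": "music"},
    {"control": "like_button_dance", "type": "dance"},
]

def match_target_control(info_type):
    """Screen the preset set for the control whose type matches the
    recognized type of the information to be processed."""
    for ctrl in PRESET_CONTROLS:
        if ctrl["type"] == info_type:
            return ctrl["control"]
    return None  # no control in the preset set matches this type

print(match_target_control("game"))  # like_button_game
```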
105. And adding the information to be processed and the target interaction control to the interaction area, and displaying on the information display canvas.
A1, acquiring a first receiving time of the information to be processed, and identifying a target interaction area corresponding to the first receiving time in the interaction area.
The receiving time may be the time at which the information to be processed entered through the information entry interface is received; for example, if the user enters the information to be processed at XX:05:02 on X month X day, 201X, that moment is taken as the first receiving time. The display content of the interaction area is then queried. When display content already exists in the interaction area, a second receiving time, at which the interaction area received that content, is acquired; for example, if the interaction area already displays another piece of processed information, the second receiving time at which the information processing apparatus received it is acquired, say XX:01:01 on X month X day, 201X. The first and second receiving times are sorted. Clearly, the interaction area for the information to be processed (corresponding to the first receiving time) should come before the interaction area for the processed information (corresponding to the second receiving time): if two interaction areas exist on the information display canvas, the interaction area corresponding to the first receiving time should be above the one corresponding to the second receiving time, and that upper interaction area is taken as the target interaction area.
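The sort described above is just a newest-first ordering by receiving time; a minimal sketch with made-up timestamps (the concrete dates are illustrative, standing in for the placeholder times in the text):

```python
from datetime import datetime

# Each entry pairs a piece of information with the time it was received;
# newer entries occupy the upper interaction areas on the canvas.
entries = [
    ("processed_info", datetime(2019, 3, 1, 10, 1, 1)),  # second receiving time
    ("pending_info",   datetime(2019, 3, 1, 10, 5, 2)),  # first receiving time
]

def order_interaction_areas(items):
    """Sort newest-first so the most recently received information is
    assigned the topmost interaction area."""
    return [name for name, t in sorted(items, key=lambda e: e[1], reverse=True)]

print(order_interaction_areas(entries))  # ['pending_info', 'processed_info']
```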
And A2, adding the information to be processed and the target interaction control to the target interaction area.
For example, the target interaction area is divided into an information display area and an interaction area: the information to be processed is added to the information display area, and the target interaction control is added to the interaction area. As shown in fig. 9, the information display area is used to display the information to be processed, including its images, texts, audios, and/or videos. The interaction area is used to hold the target interaction control; for example, taking the interactive activity as liking (praise), the interaction area is used to place the like button.
A3, displaying the content of the target interaction region on the information display canvas.
For example, after the information to be processed and the target interaction control are added, the information to be processed and the target interaction control in the target interaction area are displayed on the information display canvas. Taking the interactive activity as liking (praise) as an example, the content information published by the user and a like button are displayed on the information display canvas, and the like button can be triggered by the user or by other users to interact with the published content. The user identity of the publisher of the information to be processed and the publishing time (i.e., the first receiving time) can be displayed in the information display area. The publishing time may be displayed directly, for example as XX:XX:XX on X month X day, 20XX, or as the difference between the publishing time and the current time: for example, if the publishing time is 10:50 and the current time is 10:51, "1 minute ago" may be displayed.
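The relative-time display can be sketched as below; the exact cutoff (fall back to the absolute time after an hour) and format strings are illustrative assumptions, not specified by the text.

```python
from datetime import datetime

def display_time(publish_time, now):
    """Show either the difference from the current time ('1 minute(s) ago',
    per the example above) or, for older posts, the absolute publish time."""
    delta = now - publish_time
    minutes = int(delta.total_seconds() // 60)
    if minutes < 60:  # assumed cutoff: within the last hour, show relative time
        return f"{minutes} minute(s) ago"
    return publish_time.strftime("%Y-%m-%d %H:%M")

print(display_time(datetime(2020, 1, 1, 10, 50),
                   datetime(2020, 1, 1, 10, 51)))  # 1 minute(s) ago
```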
Optionally, after the information to be processed and the target interaction control are displayed on the information display canvas, the method may further include the following steps:
(1) and when the trigger operation aiming at the target interaction control is detected, generating an interaction record.
For example, when a trigger operation for the target interaction control is detected, the identifier of the terminal that triggered the operation is obtained and counted once; for instance, when a user (the publisher or another user) triggers the target interaction control, that user's identifier, such as an account number or nickname, is obtained and one interaction is recorded. The counts are accumulated to obtain a total: when multiple users trigger the target interaction control, each trigger records one interaction, and accumulating them yields the total number of interactions or of interacting users. Taking liking as the interactive activity, when three users like the content, the identifiers of the three users are obtained and the total number of interactions, or of interacting users, is recorded as 3. The total count and the identifiers together form the interaction record. For example, when the first user interacts, the record holds that user's identifier and a count of 1, forming the first interaction record; when a second user appears, the second identifier is obtained and the count accumulates to 2, so the two identifiers and the total of 2 form the second interaction record; when a third user appears, the third identifier is obtained and the count accumulates to 3, so the three identifiers and the total of 3 form the third interaction record.
It can be seen that the interaction record changes dynamically: a new interaction record needs to be generated each time the target interaction control is triggered.
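The accumulating record described above can be sketched with a small class; the class and field names are hypothetical, chosen only to mirror the identifier-plus-running-total structure of the example.

```python
class InteractionRecord:
    """Minimal sketch of the dynamically changing interaction record:
    each trigger of the target control appends the user's identifier
    and increments the running total."""

    def __init__(self):
        self.user_ids = []
        self.total = 0

    def trigger(self, user_id):
        self.user_ids.append(user_id)
        self.total += 1
        # The regenerated record: all identifiers plus the running total.
        return {"users": list(self.user_ids), "total": self.total}

record = InteractionRecord()
record.trigger("user_a")
record.trigger("user_b")
print(record.trigger("user_c"))
# {'users': ['user_a', 'user_b', 'user_c'], 'total': 3}
```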
(2) And displaying the interaction record in the interaction area.
For example, the interaction record is displayed in the interaction area; for instance, the total number of interactions in the record may be displayed beside the target interaction control. As shown in fig. 10, the total-interactions display area and the target interaction control are treated as a whole, and each time the target interaction control is triggered by a user, the number shown in the total-interactions display area beside it increases by 1. The user identifiers in the interaction record may also be displayed below the total-interactions display area, as shown in fig. 11. Alternatively, a separate interaction-count control can be added: the total number of interactions is displayed on it, and the identifiers are associated with it in the form of a list, so that when a user triggers the interaction-count control, the identifier list can be displayed.

As can be seen from the above, after the information processing page is displayed in this embodiment of the application, the page includes an information display canvas and an information entry interface, and the canvas includes at least one interaction area. The information to be processed entered through the information entry interface is received, its type is identified, the target interaction control corresponding to that type is then screened out of the preset interaction control set according to the identification result, and the information to be processed and the target interaction control are added to the interaction area and displayed on the information display canvas. With this scheme, after the information to be processed is received, its type is identified automatically and the target interaction control is matched directly according to the identified type, so no manual matching is needed and the user does not need to add labels manually during publishing; the time required for information processing in the interaction process can therefore be greatly reduced.
The method described in the above examples is further illustrated in detail below by way of example.
In this embodiment, a description will be given taking as an example that the information processing apparatus is specifically integrated with a device such as a terminal, and specifically may include a tablet Computer, a notebook Computer, a Personal Computer (PC), and the like.
As shown in fig. 12, an information processing method specifically includes the following steps:
201. the terminal displays an information processing page.
The information processing page can comprise an information sharing control and an information display canvas, and the information display canvas comprises at least one interaction area.
For example, a user may input information such as a preset identity identifier on a login page through a client of the terminal; after the information is verified, the electronic device is triggered to display the information processing page, as shown in fig. 3.
202. And when receiving the triggering operation aiming at the information sharing control, displaying an information publishing page by the terminal.
The information publishing page comprises at least one information input control and a publishing control. The information posting page may be a page where a user inputs and posts image, text, audio, and/or video pending information.
For example, when the user triggers the information sharing control on the information processing page through the client of the terminal, the information publishing page is displayed, as shown in fig. 13. For example, the user may trigger the information sharing control through a touch, a slide, a scroll, or other trigger action, and when the information processing apparatus receives a trigger operation of the user for the information sharing control through the terminal, the information publishing page is displayed on the terminal of the user.
203. And when receiving the information to be processed input through the information input control, the terminal displays the information to be processed on the information publishing page.
The information input control can comprise a character input control, an image input control, an expression input control, a video input control, an audio input control and the like, and information such as texts, images, expressions, videos, audios and the like can be input through the information input control.
For example, a user can enter the required information to be processed by operating these controls: text through the text entry control, images through the image entry control, and so on. After an image is entered, a preview or thumbnail of it can be generated on the information publishing page. If the user is unsatisfied with the preview or thumbnail and wants to replace it or change its position, the image preview control can be triggered to delete it or move it: a deletion can be performed through the delete button of the image preview control, or the preview or thumbnail can be long-pressed and dragged to change its position. The user can also enter required expressions and the like through the expression entry control. As the user enters information through these entry controls, the entered information can be displayed on the information publishing page in real time.
204. When receiving a trigger operation aiming at the release control, the terminal displays information to be processed on an information display canvas of the information processing page and displays a target interaction control corresponding to the information to be processed in an interaction area.
For example, when a user triggers a publishing control on an information publishing page of a client through a terminal, the to-be-processed information input by the user on the publishing page is stored, an image is extracted from the to-be-processed information, the content of the image is identified, for example, the content of the image can be identified by using a trained identification model, and the type of the to-be-processed information is determined according to the identification result. And screening target interaction controls matched with the type of the information to be processed from the preset interaction control set according to the determined type of the information to be processed, for example, if the type of the information to be processed is a game, screening the target interaction controls corresponding to the game type from the preset interaction control set. And displaying the information to be processed on the information display canvas of the information processing page, and displaying the target interaction control in the interaction area, as shown in fig. 14.
205. And when the triggering operation aiming at the target interaction control is detected, the terminal displays the interaction record in the interaction area.
The interaction record may include the number of interactions with the target user, the user identities of the interactions, and the interaction content, for example record information such as likes or comments from one or more users on the information to be processed shared by the target user, including the number of likes, the number of visits, the visitor record, and the comment information.
For example, while the information to be processed shared by the target user is displayed on the information processing page, the target user or another user triggers the target interaction control. At this time, the total number of triggers is recorded and the identity of the triggering user is acquired; if some users also input comment information after triggering the target interaction control, that comment information is acquired as well. An interaction record is then generated based on the total number of triggers, the user identities, the comment information, and so on, and displayed in the interaction area of the information processing page, as shown in fig. 11.
As can be seen from the above, after the information processing page is displayed in this embodiment of the application, the page includes an information sharing control and an information display canvas, and the canvas includes at least one interaction area. When a trigger operation by the user on the information sharing control is detected, the information publishing page is displayed, which includes at least one information entry control and a publishing control. When the information to be processed entered by the user through the information entry control is received, it is displayed on the information publishing page. When a trigger operation by the user on the publishing control is detected, the information to be processed is displayed on the information display canvas of the information processing page and the corresponding target interaction control is displayed in the interaction area; when a trigger operation by the user on the target interaction control is detected, the interaction record is displayed in the interaction area. With this scheme, after the information to be processed is received, its type is identified automatically and the target interaction control is matched directly according to the identified type, so no manual matching is needed and the user does not need to add labels manually during publishing; the time required for information processing in the interaction process can therefore be greatly reduced.
The method described in the above examples is further illustrated in detail below by way of example.
In this embodiment, the information processing apparatus is specifically integrated in an electronic device; the electronic device may be a terminal or another device, such as a tablet computer, a notebook computer, or a Personal Computer (PC), and the interaction control is taken to be a like (praise) control.
As shown in fig. 15, an information processing method specifically includes the following steps:
301. the electronic device displays an information processing page.
For example, a user may input information such as a preset identity identifier on a login page through a client; after the information is verified, the electronic device is triggered to display the information processing page, as shown in fig. 3.
302. The electronic equipment receives the information to be processed which is input in the information input interface.
For example, a user may trigger an information entry interface control of an information processing page, at this time, the electronic device may display the information entry page, as shown in fig. 5, the information entry page may include a text entry control, an image entry control, an expression entry control, an image preview control, a video entry control, an audio entry control, a cancel control, a release control, and the like, and the electronic device receives information of various formats entered by the user through various entry controls of the information entry page, and generates information to be processed from the information.
303. The electronic equipment extracts at least one image to be identified from the information to be processed.
For example, the electronic device may classify the content of the information to be processed by content format and, according to the classification result, extract the images to be recognized that the user entered through the image entry control; for instance, the user may enter files in image formats such as jpg, bmp, and tif (formats corresponding to image data) through the image entry control. The content of the information to be processed is classified by format: content in a text format is classified as the text type, content in an audio format as the audio type, content in a video format as the video type, and content in an image format as the image type. According to the classification result, at least one image to be recognized is extracted from the information to be processed.
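The classify-then-extract step can be sketched as a lookup on file extensions; the extension-to-category table and file names below are hypothetical, standing in for whatever format detection the device actually uses.

```python
# Hypothetical extension-to-category table mirroring the classification
# step described above.
FORMAT_CATEGORIES = {
    "jpg": "image", "bmp": "image", "tif": "image",
    "txt": "text", "mp3": "audio", "mp4": "video",
}

def extract_images(contents):
    """Classify each content item of the information to be processed by
    its format and keep only the images to be recognized."""
    return [name for name in contents
            if FORMAT_CATEGORIES.get(name.rsplit(".", 1)[-1]) == "image"]

items = ["cover.jpg", "note.txt", "theme.mp3", "scene.bmp"]
print(extract_images(items))  # ['cover.jpg', 'scene.bmp']
```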
304. The electronic equipment identifies the content of the image to be identified.
(1) And when the information to be processed has one image to be recognized, recognizing the content of the image to be recognized by adopting the trained recognition model to obtain a recognition result.
For example, the electronic device may recognize the type of the image to be recognized by using the trained recognition model, where the specific recognition process is as follows:
and B1, the electronic equipment performs feature extraction on the image to be recognized to obtain local features of the image to be recognized.
For example, the electronic device may set a fixed step size and scale in the trained recognition model and perform bottom-layer feature extraction with a convolutional neural network; for instance, an Inception module of the GoogLeNet model (an image recognition model) may be used, as shown in fig. 6, to extract the commonly used local features of the image to be recognized, such as Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and/or Local Binary Pattern (LBP) features.
B2, the electronic equipment performs feature coding on the local features to obtain coded local features.
For example, the electronic device may perform feature coding on the local features by using coding modes such as vector quantization coding, sparse coding, local linear constraint coding, and Fisher vector coding, so as to obtain the coded local features.
And B3, fusing the coded local features by the electronic equipment to obtain fused features.
For example, the electronic device may perform feature fusion on the encoded local features by using a pyramid feature matching method to obtain fused features. Specifically, the image to be recognized is uniformly partitioned, feature fusion is performed on each partition to obtain fused features of each partition, then, feature fusion is performed on the fused features of each partition again to obtain the fused features of the image to be recognized, and the feature vector of the fixed dimension of the image to be recognized is obtained.
And B4, the electronic equipment identifies the content of the image to be identified based on the fused features.
For example, the electronic device may classify feature vectors of fixed dimensions of an image to be recognized by using a Support Vector Machine (SVM), a random forest and other methods, so as to obtain a recognition result of the content of the image to be recognized, where the recognition result may be the content of the image to be recognized and a type corresponding to the content, thereby completing a recognition process.
(2) When at least two images to be recognized exist in the information to be processed, recognizing the content of the images to be recognized by adopting the trained recognition model to obtain at least two initial recognition results, and fusing the initial recognition results to obtain a recognition result.
For example, when at least two images to be recognized exist in the information to be processed, the electronic device recognizes the content of each image to be recognized by using the trained recognition model (the recognition process follows steps B1-B4) and obtains at least two initial recognition results. Taking three images to be recognized as an example, the three images are recognized separately, yielding three initial recognition results: a probability value for type A, a probability value for type B, and a probability value for type C. The initial recognition results are then classified; for example, if type A is a scene graph of a certain game while type B and type C are character images of that game, the initial recognition results can be classified into a game scene and game character images. According to the classification result, a weighting coefficient corresponding to each initial recognition result is obtained; for example, the preset weighting coefficient of a game scene is 0.7 and that of a game character is 0.3. The initial recognition results are weighted by these coefficients, and the weighted initial recognition results are fused to obtain the recognition result; for example, after the probability values of type A, type B, and type C are weighted, the maximum of the three may be screened out as the recognition result, which amounts to comprehensively analyzing the three results and concluding that the type of the images to be recognized is a certain game.
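The weighted fusion described above can be sketched directly; the class names and probabilities are the example values from the text:

```python
def fuse_recognition_results(initial_results, class_weights):
    """Weight each initial result by its class coefficient and keep the
    highest weighted score as the fused recognition result."""
    weighted = [(label, prob * class_weights[cls])
                for label, cls, prob in initial_results]
    return max(weighted, key=lambda t: t[1])

# Three images with example probabilities; weights follow the text:
# game scene 0.7, game character 0.3
results = [("type A", "game scene", 0.8),
           ("type B", "game character", 0.9),
           ("type C", "game character", 0.6)]
weights = {"game scene": 0.7, "game character": 0.3}
label, score = fuse_recognition_results(results, weights)  # "type A" wins: 0.8*0.7 = 0.56
```

Note that the weighting can flip the ranking: type B has the highest raw probability (0.9), but after weighting (0.9 × 0.3 = 0.27) it loses to the game-scene result.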
The trained recognition model may be trained from a plurality of image samples, and specifically may be trained by other devices and then provided to the information processing apparatus, or may be trained by the information processing apparatus, that is, before the step "recognizing the image to be recognized by using the trained recognition model", the information processing method may further include:
(1) a plurality of image samples are acquired, the image samples including images with labeled recognition results.
For example, a plurality of images may be acquired as original image samples through a database, a network, or an image acquisition device; the original image samples are then preprocessed, for example de-duplicated, cropped, rotated, and/or flipped, to obtain image samples meeting the input criteria of the preset recognition model; and the preprocessed image samples are then type-labeled to obtain labeled image samples.
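The crop/rotate/flip preprocessing can be sketched as follows (de-duplication is omitted; the input size of 32 is a hypothetical model input criterion):

```python
import numpy as np

def preprocess_sample(image, size=32):
    """Crop an original sample to the model's input size and generate
    flipped/rotated variants as additional image samples."""
    img = image[:size, :size]                    # crop to the input criteria
    return [img, np.fliplr(img), np.rot90(img)]  # original, flipped, rotated

raw = np.random.rand(48, 40)       # oversized original image sample
samples = preprocess_sample(raw)   # three samples, each 32x32
```

Flips and rotations serve double duty here: they normalize the samples to the model's input size and augment the training set with label-preserving variants.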
(2) And predicting the recognition result of the image sample by adopting a preset recognition model to obtain the predicted recognition result of the image sample.
For example, feature extraction is performed on an image sample by using a preset identification model to obtain local features of the image sample, feature coding is performed on the local features to obtain coded local features, the coded local features are fused to obtain fused features, and the image sample is identified based on the fused features to obtain a prediction result of image sample identification.
(3) And converging the preset recognition model according to the predicted recognition result and the marked recognition result to obtain the trained recognition model.
For example, the preset recognition model may be converged according to the predicted recognition result and the labeled recognition result by using an interpolation loss function, so as to obtain the trained recognition model. Specifically, this may be as follows:
A Dice function (a loss function) is used to adjust the parameters for feature fusion in the preset recognition model according to the recognition result labeled on the image sample and the predicted recognition result, so as to obtain the trained recognition model.
Optionally, in order to improve the accuracy of the context feature, besides the Dice function, other loss functions such as a cross entropy loss function may be used for convergence, which may specifically be as follows:
The parameters for feature fusion in the preset recognition model are adjusted according to the recognition result labeled on the image sample and the predicted recognition result by using a cross entropy loss function, and are further adjusted in the same way by using an interpolation loss function, so as to obtain the trained recognition model.
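The two loss functions named here can be written down concretely; these are textbook NumPy formulations (a sketch, not the embodiment's training code), with the prediction and label vectors chosen as toy values:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Dice loss: 1 - 2*|P.T| / (|P| + |T|); small when the prediction
    overlaps the labeled result well."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_entropy_loss(probs, one_hot, eps=1e-12):
    """Mean cross entropy between predicted probabilities and one-hot labels."""
    return float(-(one_hot * np.log(probs + eps)).sum(axis=-1).mean())

p = np.array([0.9, 0.1, 0.8])  # predicted recognition result
t = np.array([1.0, 0.0, 1.0])  # labeled recognition result
d = dice_loss(p, t)            # ≈ 0.105
ce = cross_entropy_loss(np.array([[0.7, 0.3]]), np.array([[1.0, 0.0]]))  # ≈ 0.357
```

Convergence then amounts to adjusting the feature-fusion parameters by gradient descent on one of these losses (or on both in sequence, as the optional variant describes).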
305. And the electronic equipment confirms the type of the information to be processed according to the identification result.
For example, the electronic device determines the type of the information to be processed according to the recognition result of the image to be recognized. The recognition result may be the content of the image to be recognized and the type corresponding to that content, so the type of the image content can be obtained, and the type of the image content is associated with the type of the information to be processed. For example, matching is performed in a preset type set of the information to be processed according to the type of the image content; if the type of the image content is a certain game and the preset type set includes games, dances, music, animations, drawings, and the like, the matching determines that the type of the information to be processed is a game. Similarly, if the image content is a promotional image for a certain piece of music, it can be determined that the type of the information to be processed is music.
306. And the electronic device screens out a target like control corresponding to the type of the information to be processed from the preset like control set according to the recognition result.
For example, the electronic device obtains the style or type of each like control in the preset like control set, where each style or type corresponds to one information type; as shown in fig. 8, these may include games, dances, music, animations, drawings, and the like. The type of the information to be processed is matched against the types of the like controls. Taking a game as an example, the like-control style corresponding to the game type is matched in the preset like control set; after the matching succeeds, the successfully matched like control, i.e., the like control corresponding to the game, is screened out of the preset like control set and taken as the target like control.
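The matching-and-screening step is essentially a type-keyed lookup. The mapping below is hypothetical (the type set follows fig. 8 as described; the style names are invented for illustration):

```python
# Hypothetical mapping from information type to like-control style,
# following the type set described for fig. 8.
PRESET_LIKE_CONTROLS = {
    "game": "game_like_style",
    "dance": "dance_like_style",
    "music": "music_like_style",
    "animation": "animation_like_style",
    "drawing": "drawing_like_style",
}

def screen_target_like_control(info_type, control_set):
    """Match the information type against the control set and screen out
    the like control whose type matches; None if no match succeeds."""
    return control_set.get(info_type)

target = screen_target_like_control("game", PRESET_LIKE_CONTROLS)  # → "game_like_style"
```

Because each style corresponds to exactly one information type, the match either succeeds with a single control or fails, in which case a default control could be used.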
307. And the electronic device adds the information to be processed and the target like control to the interaction area and displays them on the information display canvas.
E1, the electronic device obtains the first receiving time of the information to be processed and identifies, among the interaction areas, a target interaction area corresponding to the first receiving time.
For example, if the user enters the information to be processed at XX:05:02 on X month X day, 201X, that moment is recorded as the first receiving time. The display content of the interaction areas is then queried; when display content already exists in an interaction area, the second receiving time at which that interaction area received its display content is obtained. For example, if an interaction area already displays one piece of processed information, the second receiving time at which the information processing apparatus received that processed information is obtained, say XX:01:01 on the same day. The first receiving time and the second receiving time are then sorted; obviously, the interaction area of the information to be processed corresponding to the first receiving time should come before the interaction area of the processed information corresponding to the second receiving time. If two interaction areas exist on the information display canvas, the interaction area corresponding to the first receiving time should be above the interaction area corresponding to the second receiving time, and that upper interaction area is taken as the target interaction area.
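The newest-first ordering of interaction areas can be sketched as a sort on receiving times; the concrete timestamps below replace the placeholders in the text and are illustrative only:

```python
from datetime import datetime

def assign_target_area(entries):
    """Sort (receiving_time, content) pairs newest-first so the most
    recently received information takes the topmost interaction area."""
    return sorted(entries, key=lambda e: e[0], reverse=True)

entries = [
    (datetime(2019, 5, 1, 10, 1, 1), "processed information"),        # second receiving time
    (datetime(2019, 5, 1, 10, 5, 2), "information to be processed"),  # first receiving time
]
ordered = assign_target_area(entries)  # index 0 = topmost (target) interaction area
```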
E2, the electronic device adds the information to be processed and the target like control to the target interaction area.
For example, the electronic device divides the target interaction area into an information display area and a like area, adds the information to be processed to the information display area, and adds the target like control to the like area. As shown in fig. 9, the information display area is used for displaying the information to be processed, including its images, texts, audios, videos, and the like, and the like area is used for holding the target like control.
E3, the electronic device displays the content of the target interaction region on the information display canvas.
For example, the electronic device displays the content of the target interaction area on the information display canvas, mainly the content information published by the user, and allows the user or another user to trigger the like control to interact with that content. The user identification of the publisher of the information to be processed and the publication time (i.e., the first receiving time) may also be displayed in the information display area. The publication time may be displayed directly, for example as 20XX year, X month, X day, X minute, X second, or as the difference between the publication time and the current time; for example, if the publication time is 10:50 and the current time is 10:51, "1 minute ago" may be displayed.
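The two publication-time display modes can be sketched together; the one-hour cutoff between relative and absolute display is an assumption, not specified by the text:

```python
from datetime import datetime

def format_publication_time(publish_time, now):
    """Display the publication time as an offset from the current time
    when recent, or directly otherwise (cutoff of one hour assumed)."""
    minutes = int((now - publish_time).total_seconds() // 60)
    if 0 <= minutes < 60:
        return f"{minutes} minute(s) ago"
    return publish_time.strftime("%Y-%m-%d %H:%M:%S")

shown = format_publication_time(datetime(2020, 1, 1, 10, 50),
                                datetime(2020, 1, 1, 10, 51))  # → "1 minute(s) ago"
```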
Optionally, after the electronic device displays the information to be processed and the target like control on the information display canvas, the method may further include the following steps:
(1) And when a trigger operation for the target like control is detected, generating an interaction record.
For example, when a trigger operation for the target like control is detected, the identity of the terminal that triggered the operation is obtained and a count is recorded once. For instance, when a user (the publisher of the information or another user) triggers the target like control, the identity of the user, such as the user's account number or nickname, is obtained and one interaction is recorded. The counts are accumulated to obtain the total count; when a plurality of users trigger the target like control, one interaction is recorded for each, and the totals are accumulated on top of the previous interaction counts to obtain the total number of interactions or of interacting users. Taking liking as the interaction activity, when three users have liked, the identifications of the three users are obtained and the total number of interactions (or of interacting users) is recorded as 3.
The total count and the identifications are taken together as the interaction record. For example, when one user interacts, the interaction record consists of that user's identification and an interaction count of 1; this identification and count form the first interaction record. When a second user appears, the second user's identification is obtained and the count is accumulated to 2; the two users' identifications and the total of 2 form the second interaction record. When a third user appears, the third user's identification is obtained and the count is accumulated to 3; the three users' identifications and the total of 3 form the third interaction record. It can be seen that the interaction record changes dynamically: a new interaction record is regenerated each time the target like control is triggered.
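The dynamically regenerated interaction record described above can be sketched as a small accumulator; the user identifiers are illustrative placeholders:

```python
class InteractionRecord:
    """Accumulate triggers of the target like control: keep every user's
    identification and the running total, regenerating the record each time."""
    def __init__(self):
        self.user_ids = []
        self.total = 0

    def trigger(self, user_id):
        self.user_ids.append(user_id)  # identification of the triggering terminal
        self.total += 1                # accumulate the count
        return {"users": list(self.user_ids), "total": self.total}

rec = InteractionRecord()
rec.trigger("user_a")
rec.trigger("user_b")
record = rec.trigger("user_c")  # third record: three identifications, total 3
```

Each call returns a fresh snapshot, matching the text's observation that one interaction record is regenerated per trigger.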
(2) And displaying the interaction record in the interaction area.
For example, the interaction record is displayed in the interaction area; the total number of interactions in the interaction record may be displayed beside the target like control. As shown in fig. 10, the interaction-total display area and the target like control are treated as a whole, and the number displayed in the interaction-total display area beside the target interaction area is increased by 1 each time a user triggers the target like control. The users' identifications in the interaction record may also be displayed below the interaction-total display area, as shown in fig. 11. Alternatively, a separate interaction-count control may be added, with the total number of interactions displayed on it and the identifications associated with it in list form; for example, when a user triggers the interaction-count control, a list of identifications can be displayed.
As can be seen from the above, after the electronic device displays the information processing page, which includes an information display canvas and an information entry interface, where the information display canvas includes at least one interaction area, it receives the information to be processed entered at the information entry interface and identifies its type; then, according to the recognition result, it screens out the target like control corresponding to the type from the preset like control set, adds the information to be processed and the target like control to the interaction area, and displays them on the information display canvas. In this scheme, after the information to be processed is received, its type is automatically identified and the target interaction control is matched directly according to the identified type; no manual matching is needed, and the user does not need to manually add a label during publishing, so the time spent on information processing in the interaction process can be greatly reduced.
In order to better implement the above method, the embodiment of the present invention further provides an information processing apparatus, which may be integrated in an electronic device, such as a server or a terminal, and the terminal may include a tablet computer, a notebook computer, and/or a personal computer.
For example, as shown in fig. 16, the information processing apparatus may include a display unit 401, a receiving unit 402, an identifying unit 403, a screening unit 404, and a processing unit 405, as follows:
(1) a display unit 401;
the display unit 401 is configured to display an information processing page, where the information processing page includes an information display canvas and an information entry interface, and the information display canvas includes at least one interaction area.
For example, the display unit 401 may be specifically configured so that a user inputs information such as a preset identity identifier on a login page through a client, and the information processing page is triggered and displayed after the information is verified; the user may also directly trigger the display of the information processing page through the client.
(2) A receiving unit 402;
the receiving unit 402 is configured to receive information to be processed entered in the information entry interface.
For example, the receiving unit 402 may be specifically configured to receive the to-be-processed information entered by a user by triggering the information entry interface control of the information processing page.
(3) An identification unit 403;
an identifying unit 403, configured to identify a type of the information to be processed.
The identification unit 403 may include an extraction subunit 4031, an identification subunit 4032, and a determination subunit 4033, as shown in fig. 17, specifically as follows:
an extraction subunit 4031, configured to extract at least one image to be identified from the information to be processed;
an identification subunit 4032, configured to identify content of the image to be identified;
and the confirming subunit 4033 is configured to determine the type of the information to be processed according to the identification result.
For example, the extracting subunit 4031 extracts at least one image to be recognized from the information to be processed, the identifying subunit 4032 identifies the content of the image to be recognized, and the determining subunit 4033 determines the type of the information to be processed according to the identification result.
(4) A screening unit 404;
and the screening unit 404 is configured to screen out a target interaction control corresponding to the type from a preset interaction control set according to the identification result.
The screening unit 404 may include an obtaining sub-unit 4041, a matching sub-unit 4042, and a screening sub-unit 4043, as shown in fig. 18, specifically as follows:
an obtaining subunit 4041, configured to obtain a type of an interaction control in a preset interaction control set;
the matching subunit 4042 is configured to match the type of the information to be processed with the type of the interactive control;
and the screening subunit 4043 is configured to screen out, according to the matching result, a target interaction control corresponding to the type of the information to be processed from the preset interaction control set.
For example, the obtaining sub-unit 4041 obtains the type of the interactive control in the preset interactive control set, the matching sub-unit 4042 matches the type of the information to be processed with the type of the interactive control, and the screening sub-unit 4043 screens out the target interactive control corresponding to the type of the information to be processed from the preset interactive control set according to the matching result.
(5) A processing unit 405;
and the processing unit 405 is configured to add the information to be processed and the target interaction control to the interaction area, and display the information on the information display canvas.
For example, the processing unit 405 may be specifically configured to obtain first receiving time of the information to be processed, identify a target interaction area corresponding to the first receiving time in the interaction area, add the information to be processed and the target interaction control to the target interaction area, and display content of the target interaction area on the information display canvas.
Optionally, the processing unit 405 may be further specifically configured to, when the trigger operation for the target interaction control is detected, obtain an identity of a terminal that triggers the operation, count once, accumulate the count times to obtain a total number of the count times, use the total number of the count times and the identity as an interaction record, and display the interaction record in the interaction area.
Optionally, the information processing apparatus may further include an acquisition unit 406 and a training unit 407, as shown in fig. 19, which are specifically as follows:
an acquisition unit 406, configured to acquire a plurality of image samples, where the image samples include an image to which a recognition result has been tagged;
the training unit 407 is configured to predict the recognition result of the image sample by using a preset recognition model to obtain a predicted recognition result of the image sample, and converge the preset recognition model according to the predicted recognition result and the labeled result to obtain a trained recognition model.
For example, the acquiring unit 406 acquires a plurality of image samples, where the image samples include images to which identification results have been labeled, the training unit 407 predicts the identification results of the image samples by using a preset identification model to obtain the predicted identification results of the image samples, and converges the preset identification model according to the predicted identification results and the labeled results to obtain a trained identification model.
Optionally, another information processing apparatus is further provided in this embodiment of the present application, and is applied to a terminal, where the information processing apparatus may include an information processing display unit 408, a published page display unit 409, a to-be-processed information display unit 410, an interaction display unit 411, and an interaction record display unit 412, as shown in fig. 20, the following specifically:
the information processing display unit 408 is configured to display an information processing page, where the information processing page includes an information sharing control and an information display canvas, and the information canvas at least includes an interaction area;
the publishing page display unit 409 is used for displaying an information publishing page when receiving a triggering operation of a user for an information sharing control, wherein the information publishing page comprises at least one information input control and a publishing control;
the information to be processed display unit 410 is used for displaying the information to be processed on an information publishing page when receiving the information to be processed input by a user through the information input control;
the interaction display unit 411 is configured to, when receiving a trigger operation of a user for a publishing control, display information to be processed on an information display canvas of an information processing page, and display a target interaction control corresponding to the information to be processed in an interaction area;
and the interaction record display unit 412 is configured to display an interaction record in the interaction area when a triggering operation of the user for the target interaction control is detected.
For example, the information processing display unit 408 displays an information processing page including an information sharing control and an information display canvas, where the information canvas includes at least one interaction area; the publishing page display unit 409 displays an information publishing page including at least one information input control and a publishing control when receiving a triggering operation of a user for the information sharing control; the to-be-processed information display unit 410 displays the information to be processed on the information publishing page when receiving the information to be processed input by the user through the information input control; the interaction display unit 411 displays the information to be processed on the information display canvas of the information processing page and displays a target interaction control corresponding to the information to be processed in the interaction area when receiving the triggering operation of the user for the publishing control; and the interaction record display unit 412 displays the interaction record in the interaction area when detecting the triggering operation of the user for the target interaction control. In a specific implementation, the above units may be implemented as independent entities or combined arbitrarily into the same or several entities; the specific implementation of the above units may refer to the foregoing method embodiments and is not described here again.
As can be seen from the above, in this embodiment, after the display unit 401 displays the information processing page, which includes an information display canvas and an information entry interface, where the information display canvas includes at least one interaction area, the receiving unit 402 receives the information to be processed entered at the information entry interface, the identifying unit 403 identifies the type of the information to be processed, the screening unit 404 then screens out the target interaction control corresponding to that type from the preset interaction control set according to the identification result, and the processing unit 405 adds the information to be processed and the target interaction control to the interaction area and displays them on the information display canvas. In this scheme, after the information to be processed is received, its type is automatically identified and the target interaction control is matched directly according to the identified type; no manual matching is needed, and the user does not need to manually add a label during publishing, so the time spent on information processing in the interaction process can be greatly reduced.
An embodiment of the present invention further provides an electronic device, as shown in fig. 21, which shows a schematic structural diagram of the electronic device according to the embodiment of the present invention, specifically:
the electronic device may include components such as a processor 501 of one or more processing cores, memory 502 of one or more computer-readable storage media, a power supply 503, and an input unit 504. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 21 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 501 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the electronic device. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The electronic device further comprises a power supply 503 for supplying power to each component, and preferably, the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions of managing charging, discharging, power consumption, and the like are realized through the power management system. The power supply 503 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may also include an input unit 504, where the input unit 504 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 501 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application program stored in the memory 502, so as to implement various functions as follows:
the information processing method comprises the steps of displaying an information processing page, wherein the information processing page comprises an information display canvas and an information input interface, the information display canvas at least comprises an interaction area, receiving information to be processed input by the information input interface, identifying the type of the information to be processed, screening out a target interaction control corresponding to the type of the information to be processed from a preset interaction control set according to an identification result, adding the information to be processed and the target interaction control to the interaction area, and displaying the information on the information display canvas. Or
The information processing method comprises the steps of displaying an information processing page, wherein the information processing page comprises an information sharing control and an information display canvas, the information canvas at least comprises an interaction area, when the triggering operation of a user for the information sharing control is detected, the information publishing page is displayed, the information publishing page comprises at least one information input control and a publishing control, when the information to be processed input by the user through the information input control is received, the information to be processed is displayed on the information publishing page, when the triggering operation of the user for the publishing control is detected, the information to be processed is displayed on the information display canvas of the information processing page, a target interaction control corresponding to the information to be processed is displayed in the interaction area, and when the triggering operation of the user for the target interaction control is detected, the interaction record is displayed in the interaction area.
The above operations can be implemented as described in the foregoing embodiments and are not detailed again here.
As can be seen from the above, an information processing page is displayed, where the page includes an information display canvas and an information input interface, and the canvas includes at least an interaction area; information to be processed entered through the information input interface is received, and its type is identified; then, according to the identification result, a target interaction control corresponding to that type is screened out from a preset set of interaction controls, and the information to be processed and the target interaction control are added to the interaction area and displayed on the information display canvas. In this scheme, after the information to be processed is received, its type is identified automatically and the target interaction control is matched directly from the identified type, so neither manual matching nor manually adding a label during publishing is needed; the time spent on information processing during interaction can therefore be greatly reduced.
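As a rough illustration of the flow summarized above, the following sketch pairs a toy type identifier with a preset set of interaction controls. The keyword lookup merely stands in for the image-recognition model the disclosure describes, and all type names, keywords, and control names are invented for the example:

```python
# Hypothetical preset interaction-control set: type -> control name.
PRESET_CONTROLS = {
    "food": "like-and-rate control",
    "travel": "check-in control",
    "text": "comment control",
}

def identify_type(info: str) -> str:
    """Toy type identification: a keyword lookup stands in for the
    trained recognition model described in the disclosure."""
    keywords = {"restaurant": "food", "flight": "travel"}
    for word, kind in keywords.items():
        if word in info.lower():
            return kind
    return "text"

def select_control(info: str) -> str:
    """Screen the preset control set for the identified type."""
    kind = identify_type(info)
    return PRESET_CONTROLS.get(kind, PRESET_CONTROLS["text"])
```

Because the control is chosen from the identified type rather than by the user, no manual labelling step is needed at publishing time.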
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be completed by instructions, or by related hardware controlled by instructions; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a computer-readable storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps of any information processing method provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
an information processing method comprising: displaying an information processing page, wherein the information processing page comprises an information display canvas and an information input interface, and the information display canvas comprises at least an interaction area; receiving information to be processed entered through the information input interface; identifying the type of the information to be processed; screening out, from a preset set of interaction controls and according to the identification result, a target interaction control corresponding to the type of the information to be processed; and adding the information to be processed and the target interaction control to the interaction area and displaying them on the information display canvas. Or
an information processing method comprising: displaying an information processing page, wherein the information processing page comprises an information sharing control and an information display canvas, and the information display canvas comprises at least an interaction area; when a user's trigger operation on the information sharing control is detected, displaying an information publishing page, wherein the information publishing page comprises at least one information input control and a publishing control; when information to be processed entered by the user through the information input control is received, displaying the information to be processed on the information publishing page; when the user's trigger operation on the publishing control is detected, displaying the information to be processed on the information display canvas of the information processing page and displaying a target interaction control corresponding to the information to be processed in the interaction area; and when the user's trigger operation on the target interaction control is detected, displaying an interaction record in the interaction area.
The above operations can be implemented as described in the foregoing embodiments and are not detailed again here.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the computer-readable storage medium can perform the steps of any information processing method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any of those methods; see the foregoing embodiments for details, which are not repeated here.
The information processing method, information processing apparatus, and computer-readable storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. An information processing method characterized by comprising:
displaying an information processing page, wherein the information processing page comprises an information display canvas and an information input interface, and the information display canvas at least comprises an interaction area;
receiving information to be processed input in the information input interface;
identifying the type of the information to be processed;
screening out a target interaction control corresponding to the type of the information to be processed from a preset interaction control set according to the identification result;
and adding the information to be processed and the target interaction control to the interaction area, and displaying them on the information display canvas.
2. The information processing method according to claim 1, wherein the identifying the type of the information to be processed includes:
extracting at least one image to be identified from the information to be processed;
identifying the content of the image to be identified;
and determining the type of the information to be processed according to the identification result.
3. The information processing method according to claim 2, wherein the identifying the content of the image to be identified includes:
when one image to be recognized exists in the information to be processed, recognizing the content of the image to be recognized by using the trained recognition model to obtain the recognition result;
and when at least two images to be recognized exist in the information to be processed, recognizing the content of each image to be recognized by using the trained recognition model to obtain at least two initial recognition results, and fusing the initial recognition results to obtain the recognition result.
4. The information processing method according to claim 3, wherein the fusing the plurality of initial recognition results to obtain the recognition result includes:
classifying the initial recognition result;
according to the classification result, obtaining a weighting coefficient corresponding to the initial identification result;
weighting the initial recognition result based on the weighting coefficient;
and fusing the weighted initial recognition results to obtain the recognition result.
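The weighted fusion in claim 4 could be sketched as follows, assuming each initial recognition result is a mapping from label to score and each weighting coefficient is obtained from the result's classification. The data shapes and values here are illustrative assumptions, not the disclosed implementation:

```python
from collections import defaultdict

def fuse_results(initial_results, weights):
    """Weight each initial recognition result by its coefficient, sum
    the weighted scores per label, and return the label with the
    highest fused score as the recognition result."""
    fused = defaultdict(float)
    for result, w in zip(initial_results, weights):
        for label, score in result.items():
            fused[label] += w * score
    return max(fused, key=fused.get)
```

With results {"food": 0.9, "travel": 0.1} and {"travel": 0.8, "food": 0.2} and weights 0.7 and 0.3, the fused scores are 0.69 for "food" and 0.31 for "travel", so "food" wins.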
5. The information processing method according to claim 3 or 4, wherein the recognizing the content of the image to be recognized by using the trained recognition model includes:
performing feature extraction on the image to be recognized to obtain local features of the image to be recognized;
carrying out feature coding on the local features to obtain coded local features;
fusing the coded local features to obtain fused features;
and identifying the content of the image to be identified based on the fused features.
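A toy stand-in for the extract → encode → fuse pipeline of claim 5, using patch means, normalisation, and max-pooling in place of a trained network. Every operation here is an illustrative simplification; a real system would use learned feature extractors:

```python
import numpy as np

def extract_local_features(image, patch=4):
    """Crude local-feature extraction: split the image into patches
    and use each patch mean as one local feature."""
    h, w = image.shape
    return np.array([
        image[i:i + patch, j:j + patch].mean()
        for i in range(0, h, patch)
        for j in range(0, w, patch)
    ])

def encode(features):
    """Stand-in for feature coding: zero-mean, unit-variance
    normalisation of the local features."""
    return (features - features.mean()) / (features.std() + 1e-8)

def fuse(encoded):
    """Fuse the coded local features into a single global value by
    max-pooling; classification would run on this fused feature."""
    return float(encoded.max())
```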
6. The information processing method according to claim 3, wherein before recognizing the image to be recognized by using the trained recognition model, the method further comprises:
acquiring a plurality of image samples, wherein the image samples comprise images marked with identification results;
predicting the recognition result of the image sample by adopting a preset recognition model to obtain the predicted recognition result of the image sample;
and converging the preset recognition model according to the predicted recognition result and the marked recognition result to obtain the trained recognition model.
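The training step of claim 6 amounts to predicting on labelled samples and converging the model on the gap between the predicted and annotated results. As a minimal sketch, plain stochastic gradient descent on a one-parameter linear scorer stands in for the unspecified "preset recognition model"; the learning rate and loop structure are assumptions:

```python
def train(samples, labels, lr=0.1, epochs=200):
    """Converge a toy linear model y = w*x + b on annotated samples
    by repeatedly shrinking the prediction error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x + b      # predicted recognition result
            err = pred - y        # gap to the annotated result
            w -= lr * err * x     # gradient step on the weight
            b -= lr * err         # gradient step on the bias
    return w, b
```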
7. The information processing method according to claim 1, wherein the screening out, according to the recognition result, the target interaction control corresponding to the type of the information to be processed in a preset interaction control set includes:
acquiring the type of the interactive control in the preset interactive control set;
matching the type of the information to be processed with the type of the interactive control;
and screening out a target interaction control corresponding to the type of the information to be processed from the preset interaction control set according to a matching result.
8. The information processing method according to claim 1, wherein the adding the information to be processed and the target interactive control to the interactive region and displaying on the information display canvas comprises:
acquiring first receiving time of the information to be processed;
identifying a target interaction area corresponding to the first receiving time in the interaction area;
adding the information to be processed and a target interaction control to the target interaction area;
displaying the content of the target interaction region on the information display canvas.
9. The information processing method according to claim 8, wherein the identifying a target interaction area corresponding to the first receiving time in the interaction area comprises:
inquiring the display content of the interaction area;
when the display content exists in the interactive area, acquiring second receiving time for the interactive area to receive the display content;
sorting the first receiving time and the second receiving time;
and identifying a target interaction area corresponding to the first receiving time in the interaction area according to the sequencing result.
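One plausible reading of claim 9's sorting step: rank the new item's receiving time against the receiving times of the content already displayed, and use its rank as the target slot in the interaction area. The newest-first ordering and function name are assumptions for illustration:

```python
from datetime import datetime

def target_slot(first_time, existing_times):
    """Return the index at which content received at `first_time`
    should appear when all receiving times are sorted newest-first."""
    ordered = sorted(existing_times + [first_time], reverse=True)
    return ordered.index(first_time)
```

For example, an item received at 12:00 slots between existing items received at 13:00 and 11:00.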
10. The information processing method of claim 8, wherein the adding the information to be processed and the target interaction control to the target interaction area comprises:
dividing the target interaction area into an information display area and an interaction area;
adding the information to be processed to the information display area;
adding the target interaction control to the interaction region.
11. The information processing method of claim 10, wherein after displaying the content of the target interaction region on the information display canvas, further comprising:
when the trigger operation aiming at the target interaction control is detected, generating an interaction record;
and displaying the interaction record in the interaction area.
12. The information processing method of claim 11, wherein generating an interaction record when the trigger operation for the target interaction control is detected comprises:
when the trigger operation for the target interaction control is detected, acquiring the identity of the terminal that performed the trigger operation, and counting once;
accumulating the counting times to obtain the total number of the counting times;
and taking the total number of the counting times and the identity as an interaction record.
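The interaction record of claim 12 can be sketched as a small accumulator: each trigger records the triggering terminal's identity and bumps a running count, and the record is the total count plus the identities. The class name and field names are illustrative assumptions:

```python
class InteractionRecord:
    """Accumulates trigger operations on a target interaction control."""

    def __init__(self):
        self.total = 0        # accumulated count of trigger operations
        self.identities = []  # identities of the triggering terminals

    def on_trigger(self, terminal_id: str) -> dict:
        """Record the terminal identity, count once, and return the
        current interaction record."""
        self.identities.append(terminal_id)
        self.total += 1
        return {"total": self.total, "identities": list(self.identities)}
```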
13. An information processing method characterized by comprising:
displaying an information processing page, wherein the information processing page comprises an information sharing control and an information display canvas, and the information display canvas at least comprises an interaction area;
when the triggering operation aiming at the information sharing control is detected, displaying an information publishing page, wherein the information publishing page comprises at least one information input control and a publishing control;
when receiving the information to be processed input through the information input control, displaying the information to be processed on the information publishing page;
when the triggering operation for the publishing control is detected, displaying the information to be processed on an information display canvas of the information processing page, and displaying a target interaction control corresponding to the information to be processed in the interaction area;
and when the triggering operation of the user for the target interaction control is detected, displaying an interaction record in the interaction area.
14. An information processing apparatus characterized by comprising:
the display unit is used for displaying an information processing page, the information processing page comprises an information display canvas and an information input interface, and the information display canvas at least comprises an interaction area;
the receiving unit is used for receiving the information to be processed which is input in the information input interface;
the identification unit is used for identifying the type of the information to be processed;
the screening unit is used for screening out a target interaction control corresponding to the type of the information to be processed from a preset interaction control set according to the identification result;
and the processing unit is used for adding the information to be processed and the target interaction control to the interaction area and displaying the information on the information display canvas.
15. An information processing apparatus characterized by comprising:
the information processing and displaying unit is used for displaying an information processing page, the information processing page comprising an information sharing control and an information display canvas, and the information display canvas at least comprising an interaction area;
the information sharing control comprises a publishing page display unit and a publishing control unit, wherein the publishing page display unit is used for displaying an information publishing page when receiving triggering operation aiming at the information sharing control, and the information publishing page comprises at least one information input control and a publishing control;
the information to be processed display unit is used for displaying the information to be processed on the information publishing page when the information to be processed input through the information input control is received;
the interaction display unit is used for displaying the information to be processed on the information display canvas of the information processing page and displaying a target interaction control corresponding to the information to be processed in the interaction area, when the trigger operation for the publishing control is received;
and the interactive record display unit is used for displaying the interactive record in the interactive area when the trigger operation aiming at the target interactive control is detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010001716.9A CN111209068A (en) | 2020-01-02 | 2020-01-02 | Information processing method and device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111209068A true CN111209068A (en) | 2020-05-29 |
Family
ID=70787932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010001716.9A Pending CN111209068A (en) | 2020-01-02 | 2020-01-02 | Information processing method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111209068A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112306601A (en) * | 2020-10-27 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Application interaction method and device, electronic equipment and storage medium |
WO2022193867A1 (en) * | 2021-03-17 | 2022-09-22 | 北京字跳网络技术有限公司 | Video processing method and apparatus, and electronic device and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |