CN111488186B - Data processing method, device, electronic equipment and computer storage medium - Google Patents


Info

Publication number
CN111488186B
CN111488186B (granted publication of application CN201910072969.2A)
Authority
CN
China
Prior art keywords
data
image
information
attribute information
dynamic image
Prior art date
Legal status (assumption, not a legal conclusion; Google has not performed a legal analysis)
Active
Application number
CN201910072969.2A
Other languages
Chinese (zh)
Other versions
CN111488186A (en)
Inventor
邹亚
张依然
王晓华
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (assumption, not a legal conclusion; Google has not performed a legal analysis)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910072969.2A
Publication of CN111488186A
Application granted
Publication of CN111488186B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 — Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour

Abstract

Embodiments of the invention provide a data processing method and apparatus, an electronic device, and a computer storage medium. The data processing method comprises: determining that a trigger instruction for image rendering of displayed data to be processed has been received; acquiring attribute information of the data to be processed according to the trigger instruction; and acquiring a dynamic image corresponding to the attribute information and displaying it on the display interface of the data to be processed. The embodiments of the invention enrich the modes of expression of interactive scenes and can effectively convey the corresponding scene information.

Description

Data processing method, device, electronic equipment and computer storage medium
Technical Field
Embodiments of the invention relate to the field of computer technology, and in particular to a data processing method, a data processing apparatus, an electronic device, and a computer storage medium.
Background
With the development of computer technology, most applications (APPs) provide functions for user interaction, for example, functions that let users post their own thoughts, raise questions, or hold conversations with other users.
However, most APPs currently offer only a text input path for these interactive functions, so users satisfy their interaction needs through basic text input. Because this single mode is too limited, some APPs now also support such interaction through audio, to better meet the personalized needs of different users.
However, whether in text or audio form, the expression remains simple: richer modes of expression are lacking, scene information cannot be conveyed effectively, and ultimately users' adoption and acceptance of the APP suffer.
Disclosure of Invention
In view of the above, an embodiment of the present invention provides a data processing scheme to solve the above-mentioned problems.
According to a first aspect of embodiments of the present invention, there is provided a data processing method, comprising: determining that a trigger instruction for image rendering of displayed data to be processed has been received; acquiring attribute information of the data to be processed according to the trigger instruction; and acquiring a dynamic image corresponding to the attribute information and displaying it on the display interface of the data to be processed.
According to a second aspect of embodiments of the present invention, there is provided a data processing apparatus, comprising: a determining module configured to determine that a trigger instruction for image rendering of displayed data to be processed has been received; an acquisition module configured to acquire attribute information of the data to be processed according to the trigger instruction; and a rendering module configured to acquire a dynamic image corresponding to the attribute information and display it on the display interface of the data to be processed.
According to a third aspect of embodiments of the present invention, there is provided an electronic device comprising a processor, a memory, a communication interface, and a communication bus, the processor, the memory, and the communication interface communicating with one another via the communication bus; the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the data processing method according to the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the data processing method according to the first aspect.
According to the data processing scheme provided by embodiments of the invention, whether the interaction is textual, audio, or any other suitable mode, it suffices to obtain the attribute information of the displayed data to be processed; the corresponding dynamic image can then be displayed according to that attribute information to assist the intended expression of the data. On one hand, using dynamic images to assist expression enriches the modes of expression of the interactive scene. On the other hand, the dynamic image is tied to the attribute information of the data to be processed, and that attribute information largely represents the scene intention of the data; displaying the dynamic image alongside the data therefore effectively conveys the corresponding scene information. The user's experience with the corresponding APP is improved as well.
Drawings
To illustrate the embodiments of the invention and the prior-art solutions more clearly, the drawings required in their description are briefly introduced below. The drawings described below depict only some embodiments of the invention; a person of ordinary skill in the art may obtain other drawings from them.
FIG. 1 is a flow chart showing steps of a data processing method according to a first embodiment of the present invention;
FIG. 2 is a flow chart showing steps of a data processing method according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of an interactive interface in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of a rendering interface in the embodiment of FIG. 2;
FIG. 5 is a schematic diagram of another rendering interface in the embodiment of FIG. 2;
FIG. 6 is a block diagram showing a data processing apparatus according to a third embodiment of the present invention;
FIG. 7 is a block diagram showing a data processing apparatus according to a fourth embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
To aid understanding of the technical solutions in the embodiments of the invention, they are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments derived from them by a person skilled in the art fall within the scope of protection of the embodiments of the invention.
The implementation of the embodiments of the present invention will be further described below with reference to the accompanying drawings.
Embodiment One
Referring to fig. 1, a flowchart of steps of a data processing method according to a first embodiment of the present invention is shown.
The data processing method of the present embodiment includes the steps of:
step S102: and determining that a trigger instruction for performing image rendering on the displayed data to be processed is received.
In the embodiment of the invention, the data to be processed which is displayed in the display interface is operated so as to realize the auxiliary intention expression of the data. The data to be processed may be any suitable data that can be subjected to semantic analysis or feature extraction or attribute extraction, including but not limited to text data and non-text data such as audio data. If the data to be processed is non-text data, the non-text data is converted into text data, and then semantic analysis, feature extraction, attribute extraction, and the like are performed, for example, audio data is converted into text data, and then the above operations are performed.
The trigger indication for image rendering of the data to be processed can be set by a person skilled in the art according to actual requirements, for example, if an option for image rendering is preset, if the user selects the option, the trigger indication is considered to be received after the data to be processed is released; or if the option of image rendering is selected when the data to be processed is to be published or after the data to be processed is published, the trigger instruction can be considered to be received; or if the user replies to the questions posted by other users, or comments are commented on the viewpoints posted by other users, or further comments are commented on the comments posted by other users, or the user posts questions, comments and the like, after confirmation operation of the operations is performed, if a reply button, a comment button, a posting button, a carriage return button or the like is clicked, the triggering instruction is considered to be received; or, the user issues the set interaction data in the instant interaction process, or the issued interaction data meets the triggering condition of image rendering, which can be considered to receive the triggering indication, and the embodiment of the invention is not limited to this.
Step S104: acquire attribute information of the data to be processed according to the trigger instruction.
The attribute information indicates the expression intention of the data to be processed; concretely, it may be an attribute, a feature, or a scene of the data. For example, for the sentence "today's air PM2.5 index is 220", attribute information expressing its main intention may include "air" and "PM2.5 index 220"; for "this is a truck", it may include "truck". In practical applications, those skilled in the art may determine the specific content of the attribute information according to actual requirements, as long as it indicates the expression intention of the data to be processed; the embodiments of the invention impose no particular limitation.
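As a minimal sketch, simple attribute extraction of this kind could look as follows, assuming a hand-built word list and regular-expression patterns (all purely illustrative; a real system would use semantic analysis or a trained extractor):

```python
import re

# Hypothetical lexicon and patterns for illustration only.
ATTRIBUTE_WORDS = ["air", "truck", "rain", "snow"]
ATTRIBUTE_PATTERNS = [(re.compile(r"PM2\.5 index\D*(\d+)"), "PM2.5 index {}")]

def extract_attributes(text: str) -> list[str]:
    """Collect attribute phrases found in the data to be processed."""
    found = [w for w in ATTRIBUTE_WORDS if w in text.lower()]
    for pattern, template in ATTRIBUTE_PATTERNS:
        found.extend(template.format(m.group(1)) for m in pattern.finditer(text))
    return found

print(extract_attributes("Today's air PM2.5 index is 220"))
# → ['air', 'PM2.5 index 220']
```

The sentence examples match those in the description above; the lexicon-plus-pattern approach corresponds to the simpler keyword-extraction option discussed later, not to full semantic analysis.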
Step S106: acquire a dynamic image corresponding to the attribute information, and display it on the display interface of the data to be processed.
After the attribute information of the data to be processed has been acquired, a dynamic image corresponding to it can be obtained in a suitable manner. In general the dynamic image matches the expression intention of the data to be processed, so as to better assist in expressing the content the data intends to convey.
Through this embodiment, whether the interaction is textual, audio, or any other suitable mode, it suffices to obtain the attribute information of the displayed data to be processed; the corresponding dynamic image can then be displayed according to that attribute information to assist the intended expression of the data. On one hand, using dynamic images to assist expression enriches the modes of expression of the interactive scene. On the other hand, the dynamic image is tied to the attribute information, which largely represents the scene intention of the data, so displaying the dynamic image alongside the data effectively conveys the corresponding scene information. The user's experience with the corresponding APP is improved as well.
The data processing method of this embodiment may be performed by any suitable electronic device with data processing capability, including but not limited to mobile terminals (e.g., mobile phones and tablets), servers, and PCs.
Embodiment Two
Referring to fig. 2, a flowchart of steps of a data processing method according to a second embodiment of the present invention is shown.
The data processing method of the present embodiment includes the steps of:
Step S202: determine that a trigger instruction for image rendering of the displayed data to be processed has been received.
As previously described, the displayed data to be processed may be any suitable data amenable to semantic analysis, feature extraction, or attribute extraction, including but not limited to text data and non-text data such as audio data.
In this embodiment, determining that the trigger instruction for image rendering has been received may take at least one of the following forms:
In the first mode, if a reply operation to a published question is received, it is determined that a trigger instruction for image rendering of the reply data corresponding to that operation has been received. In this mode, the data processing scheme of the embodiments applies to question-and-answer interaction scenarios, such as Q&A communities, Q&A interactions outside Q&A communities, and instant Q&A interactions. Using the reply data as the data to be processed helps express its intention and scene, makes the reply more interesting, and improves the experience of users viewing it.
In the second mode, if a viewing operation on a search result obtained from search information is received, it is determined that a trigger instruction for image rendering of the viewed search result has been received. In this mode, the scheme applies to search scenarios, for example browser searches and in-application searches. Using the viewed search result as the data to be processed helps express its intention and scene, makes searching more interesting, and improves the user's search experience.
In the third mode, if interaction data input by the user that meets a predetermined criterion is received, it is determined that a trigger instruction for image rendering of that interaction data has been received. The predetermined criterion may be set appropriately by those skilled in the art according to actual needs. For example, suppose users A and B interact through text input and user B enables image rendering of their interaction data. The criterion may then follow some rule: a trigger instruction is considered received each time user B finishes inputting and publishes a sentence; or image rendering is performed for every sentence or every several sentences (unless otherwise noted, quantifiers such as "several" or "multiple" in the embodiments of the invention mean two or more); or it is performed whenever the text input by user B matches a preset text. If both users A and B enable image rendering of the interaction data, the above processing may be applied to both, and the results displayed on both users' devices, or on each user's own device. In this mode, the scheme applies to instant-interaction scenarios such as SMS, WeChat, and DingTalk. Using the interaction data as the data to be processed helps express the interaction's intention and scene, and improves the fun and experience of the interaction.
In the fourth mode, if an information-publishing operation, or a comment operation on published information, is received, it is determined that a trigger instruction for image rendering of the published information or the comment data has been received. In this mode, the scheme applies to any community scenario in which information can be published and commented on. Using the information published by the user, or the comment data of comments made on published information, as the data to be processed helps express its intention and scene, and improves the fun of community use and the user experience.
It will be apparent to those skilled in the art that other similar manners and scenarios are equally applicable; the invention is not limited to the exemplary manners and scenarios described above.
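A predetermined criterion such as the one in the third mode could be checked with a simple predicate; the preset texts and the every-N rule below are illustrative assumptions, not part of the claimed scheme:

```python
PRESET_TEXTS = {"congratulations", "haha"}  # hypothetical preset trigger texts
RENDER_EVERY_N = 3                          # e.g. render on every third message

def should_render(message: str, message_count: int) -> bool:
    """Return True when a chat message meets the image-rendering criterion."""
    if message.lower() in PRESET_TEXTS:          # text matches a preset text
        return True
    return message_count % RENDER_EVERY_N == 0   # every N-th published sentence
```

In a real implementation the predicate would be evaluated per user, since each participant may enable or disable image rendering independently.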
Step S204: acquire attribute information of the data to be processed according to the trigger instruction.
The attribute information indicates the expression intention of the data to be processed; concretely, it may be an attribute, a feature, or a scene of the data.
In one possible implementation, this step may be performed as follows: perform semantic analysis on the data to be processed according to the trigger instruction, and acquire the attribute information from the semantic-analysis result. The semantic analysis may be implemented by those skilled in the art in any suitable manner or with any suitable algorithm, including but not limited to convolutional neural network models, N-gram models, and shallow semantic analysis such as Semantic Role Labeling (SRL). Semantic analysis obtains the attribute information more accurately, and works even when the data to be processed contains no explicit attribute or feature words.
In another possible implementation, this step may be performed as follows: extract attributes or features of the data to be processed according to the trigger instruction, and acquire the attribute information from the extraction result. The attribute or feature extraction may be implemented in any suitable manner, including but not limited to keyword extraction, head-word extraction, and neural-network-based extraction. This way of acquiring the attribute information is simpler to implement, reduces implementation cost, and improves the speed and efficiency of acquisition.
Step S206: acquire a dynamic image corresponding to the attribute information.
In one possible manner, this may be implemented as: acquire the dynamic image corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and dynamic images. Correspondences between various attribute information and dynamic images are preset, so that once the attribute information of the data to be processed is obtained it maps directly to the corresponding dynamic image, improving the speed and efficiency of acquisition. Because the dynamic image matches the attribute information, it can well express the intention of the data to be processed.
Optionally, this may include: determining the attribute type of the attribute information, and then, according to the preset correspondence between attribute information and dynamic images, acquiring the dynamic image from the plural dynamic images associated with that type. Attribute information is classified; for example, "raining", "snowing", and "sunny" belong to the "weather" type; "PM2.5", "sand", and "haze" to the "air" type; "cat", "dog", "rabbit", and "whale" to the "animal" type; and so on. Optionally, an "other" type can be set to hold attribute information that cannot be clearly classified. Each attribute type corresponds to multiple dynamic images, and the specific image is determined by the correspondence between attribute information and dynamic images. For example, if the data to be processed is "PM2.5 index 220" and its attribute information includes "PM2.5 index" and "220", the dynamic images of the "air" type are determined first, and then an image matching "PM2.5 index" and "220", such as a dense-haze animation, is selected from them. In this way the number of usable dynamic images is expanded and the intended expression of the data to be processed becomes more accurate.
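The two-step lookup can be sketched as a pair of tables. The type names follow the examples above; the attribute keys and image file names are hypothetical placeholders:

```python
# Step 1 table: attribute information -> attribute type.
ATTRIBUTE_TYPES = {
    "raining": "weather", "snowing": "weather", "sunny": "weather",
    "PM2.5": "air", "sand": "air", "haze": "air",
    "cat": "animal", "dog": "animal",
}

# Step 2 table: per type, attribute information -> dynamic image resource.
IMAGES_BY_TYPE = {
    "weather": {"raining": "rain.json", "snowing": "snow.json", "sunny": "sun.json"},
    "air": {"PM2.5": "haze_dense.json", "haze": "haze.json", "sand": "sand.json"},
    "animal": {"cat": "cat.json", "dog": "dog.json"},
}

def lookup_dynamic_image(attribute: str, default: str = "other.json") -> str:
    """Resolve an attribute to a dynamic image via its attribute type."""
    atype = ATTRIBUTE_TYPES.get(attribute)                 # step 1: find the type
    if atype is None:
        return default                                     # the "other" category
    return IMAGES_BY_TYPE[atype].get(attribute, default)   # step 2: specific image
```

The `default` return models the optional "other" type for attribute information that cannot be clearly classified.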
In another possible manner, the acquisition may be implemented as: acquire at least one image element corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and image elements, and generate the dynamic image from those elements. In this manner the attribute information corresponds to the image elements that compose a dynamic image; a small number of elements can compose a large number of images, which reduces the image-storage burden and allows the required dynamic image to be generated on demand from the data to be processed, making acquisition more flexible and the available content richer and more numerous. The specific method of generating the dynamic image from the image elements may be any suitable one; the embodiments of the invention do not limit it.
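One way such element-based generation might look, sketched under the assumption that each image element occupies its own layer for a fixed number of frames (the element names, frame counts, and output fields are all illustrative, not part of the patent):

```python
def build_animation(elements, fps=30, frames_per_element=15):
    """Assemble a minimal animation spec from reusable image elements.

    Each element gets its own layer with consecutive start/end frames,
    so a small element library can yield many distinct animations.
    """
    layers = []
    for i, name in enumerate(elements):
        layers.append({
            "element": name,
            "start_frame": i * frames_per_element,
            "end_frame": (i + 1) * frames_per_element,
        })
    return {"fps": fps,
            "total_frames": len(elements) * frames_per_element,
            "layers": layers}
```

For instance, `build_animation(["cloud", "raindrop"])` yields a 30-frame, two-layer spec; reusing the same two elements with other combinations yields further animations at no extra storage cost.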
In yet another possible manner, the acquisition may be implemented as: acquire an attribute-data analysis result, match the attribute information against it, and acquire the corresponding dynamic image according to the match. The attribute-data analysis result comprises the outcome of big-data analysis over massive attribute information, and/or of big-data or hot-topic analysis over massive social-interaction data; it characterizes what is widely accepted or used by most users in the current period. By matching the attribute information against this analysis result, the user's true expression intention can be hit with high probability, yielding a more accurate dynamic image.
Optionally, the matching may acquire the corresponding dynamic image from a third party, meaning an application, program, tool, or data provider independent of the user of the present data processing scheme. For example, the dynamic image corresponding to the attribute information may be fetched from the network through a corresponding deep-learning model or by an AI service, or retrieved from a preset server or address. In a concrete implementation, several trending dynamic images from the third party can be fetched first and then matched against the attribute information. Taking a social-interaction scenario as an example, if user A publishes content about an e-sports championship, an animation found on the network of a certain e-sports player or celebrity biting a sandwich can be grabbed by the AI and displayed. Acquiring dynamic images from third parties expands their number and range, and spares users of the scheme from storing and processing them, reducing the burden of data storage and processing.
In embodiments of the invention, the dynamic image may be a coded dynamic image, such as a Lottie animation. A coded dynamic image is drawn, rendered, and displayed through code, so the generated image adapts to the size of the screen on which it is displayed without modification, giving better adaptability. The invention is not limited to this, however; other forms of dynamic image are equally applicable to the data processing scheme of the embodiments.
Step S208: display the dynamic image on the display interface of the data to be processed.
After the dynamic image is obtained, displaying it on the display interface of the data to be processed improves the expressive effect of the data.
In one possible manner, after the dynamic image corresponding to the attribute information is acquired, its structure-layer information, layer information, and per-layer image-element information are further obtained, and the image is rendered and displayed on the display interface of the data to be processed accordingly. The structure-layer information yields the width and height, frame count, background color, duration, and start and end frames of the canvas on which the dynamic image is drawn; the layer information yields the number of layers and each layer's start and end frames; the per-layer image-element information yields the element resources used by the dynamic image and which elements each layer contains. In this way, effective and rapid rendering and drawing of the dynamic image can be achieved.
Taking Lottie animation as an example, its JSON structure can be divided into four levels: the structure level, assets, layers, and shapes. The structure level provides the width and height, frame count, background color, duration, and start and end key frames of the animation canvas; assets is the set of animation-element resources, showing which element resources were referenced when the Lottie animation was produced; layers is the set of layers, giving the number of layers and each layer's start and end frames; shapes is the element set, indicating which animation elements each layer contains. The Lottie information is read level by level, mapped into Java bean objects, and rendered layer by layer onto a Canvas through the key class LottieDrawable, thereby rendering and drawing the Lottie animation.
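A sketch of reading those four levels from a Lottie JSON document, using the field names of the public Lottie/Bodymovin JSON format (`w`/`h` for canvas size, `fr` for frame rate, `ip`/`op` for start and end frames, plus `assets`, `layers`, and each layer's `shapes`); the sample document itself is fabricated for illustration:

```python
import json

# A minimal fabricated Lottie-style document: one asset, one layer, two shapes.
SAMPLE = json.dumps({
    "w": 320, "h": 240, "fr": 30, "ip": 0, "op": 60,
    "assets": [{"id": "img_0"}],
    "layers": [{"nm": "layer1", "ip": 0, "op": 60,
                "shapes": [{"ty": "sh"}, {"ty": "fl"}]}],
})

def read_lottie(doc: str) -> dict:
    """Read the structure level, assets, layers, and shapes of a Lottie JSON."""
    data = json.loads(doc)
    return {
        "canvas": {"width": data["w"], "height": data["h"],
                   "frame_rate": data["fr"],
                   "start_frame": data["ip"], "end_frame": data["op"]},
        "assets": [a["id"] for a in data.get("assets", [])],
        "layer_count": len(data["layers"]),
        "shapes_per_layer": [len(l.get("shapes", [])) for l in data["layers"]],
    }
```

An Android renderer such as the LottieDrawable path described above performs this same traversal before drawing each layer to the Canvas.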
However, the manner of generating a dynamic image in the required format is not limited to the above. In practical applications, a dynamic image in another format may be converted into the required format in any suitable manner; for example, any dynamic image may be exported as a Lottie animation.
Optionally, rendering and displaying the dynamic image on the display interface of the data to be processed according to the structure layer information, the layer information, and the information of the image elements in each layer may include: rendering and displaying the dynamic image on the display interface of the data to be processed according to a display rule, based on the structure layer information, the layer information, and the information of the image elements in each layer; wherein the display rule includes at least one of: a rule that the image elements in each layer are displayed in a predetermined display order, at a predetermined display time, or along a predetermined display path; and a rule that the layers are displayed in a predetermined display order or at a predetermined display time.
As described above, each layer includes its corresponding image elements. When a layer includes a plurality of image elements, those image elements may be displayed in a predetermined display order, which may be set appropriately by those skilled in the art according to actual needs. For example, suppose layer A includes image elements 1, 2, 3, 4, and 5, and the display order of the image elements in layer A is 1-3-5-2-4 from first to last; then, when the dynamic image is displayed and rendering reaches layer A, image element 1 is displayed first, followed by image elements 3, 5, and 2, and finally image element 4. The plurality of image elements may also be displayed at predetermined display times, which may likewise be set according to actual needs: some or all of the image elements may share the same display time, each may have a different display time, or some may share a display time while the others differ. The plurality of image elements may also be displayed along predetermined display paths, which may also be set by those skilled in the art according to actual needs.
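The "predetermined display time" rule for image elements can be sketched as follows. All element numbers and frame values are invented for illustration: each element is tagged with the frame at which it appears, elements sharing a frame appear together, and the others appear later.

```python
# Invented example: element -> start frame at which it becomes visible.
display_times = {1: 0, 2: 0, 3: 10, 4: 10, 5: 20}

def visible_at(frame, times):
    """Which image elements are shown once playback reaches `frame`."""
    return sorted(e for e, t in times.items() if t <= frame)

print(visible_at(0, display_times))   # elements 1 and 2 share frame 0
print(visible_at(15, display_times))  # 3 and 4 have joined by frame 15
```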
These display rules for image elements greatly enrich the display modes of the dynamic image and improve its display effect.
As for the layers of the dynamic image, if the dynamic image includes multiple layers, the layers may be displayed in a predetermined display order or at predetermined display times, which may be set by those skilled in the art according to actual requirements; this is not limited by the embodiment of the present invention. For example, suppose the dynamic image includes three layers A, B, and C, whose display order from first to last is A-B-C; then, when the dynamic image is displayed, layer A and all of its image elements are displayed first, then layer B and all of its image elements, and finally layer C and all of its image elements. When display is performed according to predetermined display times, some or all of the layers may share the same display time, each may have a different display time, or some may share a display time while the others differ. These display rules for layers enrich the display modes of the dynamic image and improve its display effect. In another possible manner, the dynamic image corresponding to the attribute information is acquired and then displayed directly.
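The two order-based rules above (layers drawn in a predetermined order, and the image elements inside each layer drawn in their own predetermined order) can be combined into one draw sequence. The layer names and element numbers below are taken from the examples in the text; the function itself is an illustrative sketch, not part of the claimed method.

```python
# Layer-level rule: layers A, B, C are displayed from first to last.
layer_order = ["A", "B", "C"]

# Element-level rule: layer A uses the 1-3-5-2-4 order from the text;
# the contents of layers B and C are invented for illustration.
element_order = {
    "A": [1, 3, 5, 2, 4],
    "B": [1, 2],
    "C": [1],
}

def render_sequence(layers, elements):
    """Return the flat draw sequence implied by both display rules."""
    sequence = []
    for layer in layers:                  # layer-level display order
        for element in elements[layer]:   # element-level display order
            sequence.append((layer, element))
    return sequence

seq = render_sequence(layer_order, element_order)
print(seq)  # layer A's elements come first, in order 1-3-5-2-4
```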
In addition, optionally, in the embodiment of the present invention, the dynamic image is displayed full screen, so as to further improve the display effect and the auxiliary intention expression effect.
The above process of the present embodiment is described below using a dialogue instance as an example.
The dialogue instance is set in a question-and-answer scenario, in which image rendering is triggered when a question is answered. Initially, as shown in fig. 3, the interactive interface shows the question "How is the air quality?", for which there is not yet an answer. If the user replies to the question, or the APP of the question-and-answer scenario obtains the answer in any appropriate manner (such as through a network query or search, or through a data interface provided by a third party), for example, "PM2.5 of Beijing is 220, the air is slightly polluted, the air is not good, avoid outdoor activities", the image rendering process of this embodiment is triggered. At this point, the attribute information of that answer is obtained, for example "PM2.5 is 220", and a dynamic image corresponding to the attribute information is then acquired, such as a dot-pattern dynamic image with a higher dot density, one frame of which is shown in fig. 4. If the obtained answer is instead "PM2.5 of Beijing is 100, the air is lightly polluted, reduce outdoor activities", the attribute information may be "PM2.5 is 100", and one frame of the corresponding dynamic image is shown in fig. 5; as can be seen, this is a dot-pattern dynamic image with a lower density than that of fig. 4. As figs. 4 and 5 show, even when the attribute type of the attribute information is the same, such as the "air" type, the corresponding dynamic images differ according to the specific attribute information, such as "PM2.5 is 220" versus "PM2.5 is 100": in figs. 4 and 5, the background of the dynamic image is the same but the dot density differs. The auxiliary intention expression of the data to be processed is thus more accurate.
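The mapping in this dialogue instance, from a reply text to an attribute value and then to a dynamic image, can be sketched as follows. The regular expression is a simplified stand-in for the semantic-analysis step, and the file names and density thresholds are invented; the embodiment only requires that the same attribute type select different images for different attribute values.

```python
import re

def extract_pm25(answer: str):
    """Pull the value of 'PM2.5 ... <n>' out of the reply text (a
    simplified stand-in for semantic analysis / attribute extraction)."""
    m = re.search(r"PM2\.5\D*(\d+)", answer)
    return int(m.group(1)) if m else None

def select_dynamic_image(pm25: int) -> str:
    # Invented correspondence table: higher pollution -> denser dots.
    if pm25 > 150:
        return "dots_high_density.json"
    if pm25 > 75:
        return "dots_low_density.json"
    return "clear_sky.json"

answer = "PM2.5 of Beijing is 220, the air is slightly polluted"
print(select_dynamic_image(extract_pm25(answer)))  # prints dots_high_density.json
```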
It can therefore be seen that, according to this embodiment, regardless of whether a text mode, an audio mode, or another suitable interaction mode is adopted, only the attribute information of the displayed data to be processed needs to be obtained, and the corresponding dynamic image can be displayed according to that attribute information to assist the intention expression of the data to be processed. On the one hand, using the dynamic image to assist intention expression enriches the expression modes of the interactive scenario; on the other hand, the dynamic image is related to the attribute information of the data to be processed, and that attribute information can to a large extent represent the scenario intention of the data, so displaying the dynamic image alongside the data to be processed effectively expresses the corresponding scenario information. Furthermore, the user's experience with the corresponding application APP is also improved.
The data processing method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: mobile terminals (such as mobile phones and tablets), servers, PCs, and the like.
Example III
Referring to fig. 6, there is shown a block diagram of a data processing apparatus according to a third embodiment of the present invention.
The data processing apparatus of the present embodiment includes: a determining module 302, configured to determine that a trigger instruction for performing image rendering on the displayed data to be processed is received; an obtaining module 304, configured to obtain attribute information of the data to be processed according to the trigger indication; and the rendering module 306 is configured to obtain a dynamic image corresponding to the attribute information, and display the dynamic image on a display interface of the data to be processed.
The data processing device in this embodiment is configured to implement the corresponding data processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the data processing apparatus of this embodiment may refer to the description of the corresponding portion in the foregoing method embodiment, which is not repeated herein.
Example IV
Referring to fig. 7, there is shown a block diagram of a data processing apparatus according to a fourth embodiment of the present invention.
The data processing apparatus of the present embodiment includes: a determining module 402, configured to determine that a trigger instruction for performing image rendering on the displayed data to be processed is received; an obtaining module 404, configured to obtain attribute information of the data to be processed according to the trigger indication; and the rendering module 406 is configured to obtain a dynamic image corresponding to the attribute information, and display the dynamic image on a display interface of the data to be processed.
Optionally, the obtaining module 404 includes: an analysis module 4042, configured to perform semantic analysis on the data to be processed according to the trigger indication, and acquire attribute information of the data to be processed according to the semantic analysis result; or, an extracting module 4044, configured to perform attribute extraction or feature extraction on the data to be processed according to the trigger indication, and acquire attribute information of the data to be processed according to the extraction result.
Optionally, the rendering module 406 includes an image module 4062 and a display module 4064, where the image module 4062 is configured to obtain a dynamic image corresponding to the attribute information, and the display module 4064 is configured to render and display the dynamic image on a display interface of the data to be processed.
Wherein the image module 4062 includes: a first image module 40622, configured to obtain a dynamic image corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and the dynamic image; or, the second image module 40624 is configured to obtain at least one image element corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and image elements; generating a dynamic image according to the image element; or the third image module 40626 is configured to obtain an attribute data analysis result, match the attribute information with the attribute data analysis result, and obtain a corresponding dynamic image according to the matching result.
Optionally, the first image module 40622 is configured to determine an attribute type corresponding to the attribute information; and acquiring the dynamic image corresponding to the attribute information of the data to be processed from the plurality of dynamic images corresponding to the attribute type according to the corresponding relation between the preset attribute information and the dynamic image.
Optionally, the third image module 40626 is configured to obtain an attribute data analysis result, match the attribute information with the attribute data analysis result, and obtain a corresponding dynamic image from a third party according to the matching result.
Optionally, the rendering module 406 is configured to obtain a dynamic image corresponding to the attribute information, and obtain structural layer information, and information of image elements in each layer of the dynamic image; and rendering and displaying the dynamic image on a display interface of the data to be processed according to the structural layer information, the layer information and the information of the image elements in each layer.
Optionally, the rendering module 406 renders and displays the dynamic image according to a display rule on the display interface of the data to be processed, based on the structure layer information, the layer information, and the information of the image elements in each layer; wherein the display rule includes at least one of: a rule that the image elements in each layer are displayed in a predetermined display order, at a predetermined display time, or along a predetermined display path; and a rule that the layers are displayed in a predetermined display order or at a predetermined display time.
Optionally, the determining module 402 includes: a first instruction determining module 4022, configured to determine, if a reply operation to a posted question is received, that a trigger instruction for performing image rendering on the reply data corresponding to the reply operation is received; or, a second instruction determining module 4024, configured to determine, if a viewing operation on a search result obtained according to search information is received, that a trigger instruction for performing image rendering on the search result targeted by the viewing operation is received; or, a third instruction determining module 4026, configured to determine, if interaction data input by the user and meeting a predetermined standard is received, that a trigger instruction for performing image rendering on the interaction data is received; or, a fourth instruction determining module 4028, configured to determine, if an information posting operation or a comment operation on posted information is received, that a trigger instruction for performing image rendering on the posted information or the comment data is received.
The data processing device in this embodiment is configured to implement the corresponding data processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the data processing apparatus of this embodiment may refer to the description of the corresponding portion in the foregoing method embodiment, which is not repeated herein.
Example five
Referring to fig. 8, a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention is shown, and the specific embodiment of the present invention is not limited to the specific implementation of the electronic device.
As shown in fig. 8, the electronic device may include: a processor 502, a communication interface (Communications Interface) 504, a memory 506, and a communication bus 508.
The processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508.
The communication interface 504 is used to communicate with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the above-described data processing method embodiment.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used to store the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 510 may be specifically operable to cause the processor 502 to: determining that a trigger instruction for performing image rendering on the displayed data to be processed is received; acquiring attribute information of the data to be processed according to the trigger indication; and acquiring a dynamic image corresponding to the attribute information, and displaying the dynamic image on a display interface of the data to be processed.
In an alternative embodiment, the program 510 is further configured to cause the processor 502 to perform semantic analysis on the data to be processed according to the trigger indication when obtaining attribute information of the data to be processed according to the trigger indication; acquiring attribute information of the data to be processed according to semantic analysis results; or extracting attributes or characteristics of the data to be processed according to the trigger indication; and acquiring attribute information of the data to be processed according to the extraction result.
In an alternative embodiment, the program 510 is further configured to, when obtaining the dynamic image corresponding to the attribute information, enable the processor 502 to obtain the dynamic image corresponding to the attribute information of the data to be processed according to a preset correspondence between the attribute information and the dynamic image; or, according to the corresponding relation between the preset attribute information and the image elements, acquiring at least one image element corresponding to the attribute information of the data to be processed; generating a dynamic image according to the image element; or, acquiring an attribute data analysis result, matching the attribute information with the attribute data analysis result, and acquiring a corresponding dynamic image according to the matching result.
In an optional implementation manner, the program 510 is further configured to cause the processor 502 to determine an attribute type corresponding to attribute information of the data to be processed when acquiring a dynamic image corresponding to the attribute information according to a preset correspondence between the attribute information and the dynamic image; and acquiring the dynamic image corresponding to the attribute information of the data to be processed from the plurality of dynamic images corresponding to the attribute type according to the corresponding relation between the preset attribute information and the dynamic image.
In an alternative embodiment, the program 510 is further configured to, when obtaining the corresponding dynamic image according to the matching result, cause the processor 502 to obtain the corresponding dynamic image from the third party according to the matching result.
In an optional implementation manner, the program 510 is further configured to, when acquiring a dynamic image corresponding to the attribute information and displaying the dynamic image on a display interface of the data to be processed, cause the processor 502 to acquire the dynamic image corresponding to the attribute information, and acquire structural layer information, and information of image elements in each layer of the dynamic image; and rendering and displaying the dynamic image on a display interface of the data to be processed according to the structural layer information, the layer information and the information of the image elements in each layer.
In an alternative embodiment, the program 510 is further configured to cause the processor 502 to render and display the dynamic image according to the display rule on the display interface of the data to be processed when rendering and displaying the dynamic image on the display interface of the data to be processed according to the structural layer information, the layer information, and the information of the image elements in each layer; wherein the presentation rule includes at least one of: the image elements display rules in the corresponding layers according to a preset display sequence or preset display time or preset display paths; the layer displays rules according to a preset display sequence or preset display time.
In an optional embodiment, the program 510 is further configured to cause the processor 502, when determining that a trigger instruction for performing image rendering on the displayed data to be processed is received, to: if a reply operation to a posted question is received, determine that a trigger instruction for performing image rendering on the reply data corresponding to the reply operation is received; or, if a viewing operation on a search result obtained according to search information is received, determine that a trigger instruction for performing image rendering on the search result targeted by the viewing operation is received; or, if interaction data input by the user and meeting a predetermined standard is received, determine that a trigger instruction for performing image rendering on the interaction data is received; or, if an information posting operation or a comment operation on posted information is received, determine that a trigger instruction for performing image rendering on the posted information or the comment data is received.
The specific implementation of each step in the program 510 may refer to the corresponding steps and corresponding descriptions in the units in the above data processing method embodiment, which are not repeated herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
According to the electronic equipment, whether a text mode, an audio mode or other proper interaction modes are adopted, only the attribute information of the displayed data to be processed is needed to be obtained, and the corresponding dynamic image can be displayed according to the attribute information so as to assist in the intention expression of the data to be processed. On one hand, the expression mode of the interactive scene is enriched by using the dynamic image to assist in the expression of the intention; on the other hand, the dynamic image is related to the attribute information of the data to be processed, and the attribute information can represent the scene intention of the data to be processed to a large extent, so that the dynamic image is displayed while the data to be processed is displayed, and the corresponding scene information can be effectively expressed. Further, the use experience of the user on the corresponding application APP is also improved.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present invention may be split into more components/steps, or two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the objects of the embodiments of the present invention.
The above methods according to embodiments of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network for storage in a local recording medium, so that the methods described herein can be processed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be understood that a computer, processor, microprocessor controller, or programmable hardware includes a storage component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the data processing methods described herein. Furthermore, when a general-purpose computer accesses code for implementing the data processing methods illustrated herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing the data processing methods illustrated herein.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present invention.
The above embodiments are intended only to illustrate the embodiments of the present invention, not to limit them. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so all equivalent technical solutions also fall within the scope of the embodiments of the present invention, which should be defined by the claims.

Claims (16)

1. A data processing method, comprising:
determining that a trigger instruction for performing image rendering on the displayed data to be processed is received;
acquiring attribute information of the data to be processed according to the trigger indication;
acquiring a dynamic image corresponding to the attribute information, and displaying the dynamic image on a display interface of the data to be processed, which comprises: acquiring the dynamic image corresponding to the attribute information, and acquiring structure layer information, layer information, and information of image elements in each layer of the dynamic image; and rendering and displaying the dynamic image on the display interface of the data to be processed according to the structure layer information, the layer information, and the information of the image elements in each layer.
2. The method of claim 1, wherein the obtaining attribute information of the data to be processed according to the trigger indication comprises:
according to the trigger indication, carrying out semantic analysis on the data to be processed; acquiring attribute information of the data to be processed according to semantic analysis results;
or,
according to the trigger indication, carrying out attribute extraction or feature extraction on the data to be processed; and acquiring attribute information of the data to be processed according to the extraction result.
3. The method according to claim 1 or 2, wherein the acquiring the dynamic image corresponding to the attribute information includes:
acquiring a dynamic image corresponding to the attribute information of the data to be processed according to the corresponding relation between the preset attribute information and the dynamic image;
or,
acquiring at least one image element corresponding to the attribute information of the data to be processed according to the corresponding relation between the preset attribute information and the image element; generating a dynamic image according to the image element;
or,
and acquiring an attribute data analysis result, matching the attribute information with the attribute data analysis result, and acquiring a corresponding dynamic image according to the matching result.
4. The method according to claim 3, wherein the obtaining the dynamic image corresponding to the attribute information of the data to be processed according to the preset correspondence between the attribute information and the dynamic image includes:
determining the attribute type corresponding to the attribute information;
and acquiring the dynamic image corresponding to the attribute information of the data to be processed from the plurality of dynamic images corresponding to the attribute type according to the corresponding relation between the preset attribute information and the dynamic image.
5. The method according to claim 3, wherein the obtaining a corresponding dynamic image according to the matching result comprises:
and acquiring the corresponding dynamic image from the third party according to the matching result.
6. The method of claim 1, wherein rendering and displaying the dynamic image at the presentation interface of the data to be processed according to the structural layer information, the layer information, and the information of the image elements in each layer, comprises:
rendering and displaying the dynamic image on the display interface of the data to be processed according to a display rule, based on the structure layer information, the layer information, and the information of the image elements in each layer;
wherein the display rule includes at least one of:
a rule that the image elements in each layer are displayed in a predetermined display order, at a predetermined display time, or along a predetermined display path;
a rule that the layers are displayed in a predetermined display order or at a predetermined display time.
7. The method of claim 1 or 2, wherein the determining that a trigger indication for performing image rendering on the presented data to be processed is received comprises:
if a reply operation to a posted question is received, determining that a trigger instruction for performing image rendering on the reply data corresponding to the reply operation is received;
or,
if a viewing operation of a search result obtained according to the search information is received, determining that a trigger instruction for performing image rendering on the search result operated by the viewing operation is received;
or,
if the interactive data meeting the preset standard, which is input by the user, is received, determining that a trigger instruction for performing image rendering on the interactive data is received;
or,
if an information posting operation or a comment operation on posted information is received, determining that a trigger instruction for performing image rendering on the posted information or the comment data is received.
8. A data processing apparatus comprising:
the determining module is used for determining that a trigger instruction for performing image rendering on the displayed data to be processed is received;
the acquisition module is used for acquiring attribute information of the data to be processed according to the trigger indication;
the rendering module is configured to obtain a dynamic image corresponding to the attribute information, and display the dynamic image on a display interface of the data to be processed, and includes: acquiring a dynamic image corresponding to the attribute information, and acquiring structural layer information, layer information and information of image elements in each layer of the dynamic image; and rendering and displaying the dynamic image on a display interface of the data to be processed according to the structural layer information, the layer information and the information of the image elements in each layer.
9. The apparatus of claim 8, wherein the acquisition module comprises:
an analysis module, configured to perform semantic analysis on the data to be processed according to the trigger instruction, and acquire the attribute information of the data to be processed according to the semantic-analysis result;
or,
an extraction module, configured to perform attribute extraction or feature extraction on the data to be processed according to the trigger instruction, and acquire the attribute information of the data to be processed according to the extraction result.
10. The apparatus of claim 8 or 9, wherein the rendering module comprises an image module and a display module, the image module being configured to obtain a dynamic image corresponding to the attribute information, and the display module being configured to render and display the dynamic image on the display interface of the data to be processed;
wherein the image module comprises:
a first image module, configured to obtain the dynamic image corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and dynamic images;
or,
a second image module, configured to obtain at least one image element corresponding to the attribute information of the data to be processed according to a preset correspondence between attribute information and image elements, and generate a dynamic image from the image element(s);
or,
a third image module, configured to obtain an attribute-data analysis result, match the attribute information against the attribute-data analysis result, and obtain a corresponding dynamic image according to the matching result.
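The three alternative image-acquisition strategies of claim 10 can be sketched as three small functions. The mappings, the generated-image naming, and the third-party fetch callback are all assumptions for illustration only, not the patented implementation.

```python
# Illustrative sketch of claim 10's three image-acquisition strategies.
# The example attribute values and image names are hypothetical.

ATTR_TO_IMAGE = {"congratulation": "confetti.gif"}         # preset attribute -> dynamic image
ATTR_TO_ELEMENTS = {"celebration": ["balloon", "ribbon"]}  # preset attribute -> image elements


def image_by_mapping(attr):
    """First image module: direct attribute-to-dynamic-image lookup."""
    return ATTR_TO_IMAGE.get(attr)


def image_by_elements(attr):
    """Second image module: fetch image elements, then generate a dynamic image."""
    elements = ATTR_TO_ELEMENTS.get(attr, [])
    # a real system would compose and animate the elements; we just name the result
    return f"generated({'+'.join(elements)})" if elements else None


def image_by_analysis(attr, analysis_result, fetch_from_third_party):
    """Third image module: match the attribute against an attribute-data
    analysis result, then obtain the matching dynamic image from a third party."""
    if attr in analysis_result:
        return fetch_from_third_party(attr)
    return None
```

The three functions are alternatives, mirroring the "or" structure of the claim: an apparatus would implement at least one of them.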
11. The apparatus of claim 10, wherein the first image module is configured to determine an attribute type corresponding to the attribute information, and, according to the preset correspondence between attribute information and dynamic images, obtain the dynamic image corresponding to the attribute information of the data to be processed from the plurality of dynamic images corresponding to that attribute type.
12. The apparatus of claim 10, wherein the third image module is configured to obtain an attribute-data analysis result, match the attribute information against the attribute-data analysis result, and obtain the corresponding dynamic image from a third party according to the matching result.
13. The apparatus of claim 8, wherein the rendering module renders and displays the dynamic image on the display interface of the data to be processed according to a presentation rule, based on the structure-layer information, the layer information, and the information of the image elements in each layer;
wherein the presentation rule comprises at least one of:
the image elements are displayed in their corresponding layers according to a preset display sequence, a preset display time, or a preset display path;
the layers are displayed according to a preset display sequence or a preset display time.
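The layered presentation rule of claim 13 amounts to a two-level ordering: layers render in a preset sequence, and the elements inside each layer render in their own preset sequence. A minimal sketch, with purely illustrative field names (`order`, `elements`, `name`):

```python
# Minimal sketch of claim 13's presentation rule: layers are rendered in a
# preset display sequence, and the image elements within each layer are
# rendered in their own preset sequence.

def render_order(layers):
    """layers: list of dicts like
    {"order": int, "elements": [{"order": int, "name": str}, ...]}
    Returns element names in the order they would be rendered."""
    out = []
    for layer in sorted(layers, key=lambda l: l["order"]):            # layer sequence
        for el in sorted(layer["elements"], key=lambda e: e["order"]):  # element sequence
            out.append(el["name"])
    return out
```

Preset display times or display paths would slot into the same structure as extra per-layer or per-element fields consulted during rendering.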
14. The apparatus of claim 8 or 9, wherein the determining module comprises:
a first instruction determining module, configured to determine, if a reply operation on a published question is received, that a trigger instruction for performing image rendering on reply data corresponding to the reply operation has been received;
or,
a second instruction determining module, configured to determine, if a viewing operation on a search result obtained according to search information is received, that a trigger instruction for performing image rendering on the search result targeted by the viewing operation has been received;
or,
a third instruction determining module, configured to determine, if interactive data input by a user and meeting a preset standard is received, that a trigger instruction for performing image rendering on the interactive data has been received;
or,
a fourth instruction determining module, configured to determine, if an information posting operation or a comment operation on posted information is received, that a trigger instruction for performing image rendering on the posted information or on comment data has been received.
15. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the data processing method of any one of claims 1 to 7.
16. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the data processing method of any one of claims 1 to 7.
CN201910072969.2A 2019-01-25 2019-01-25 Data processing method, device, electronic equipment and computer storage medium Active CN111488186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910072969.2A CN111488186B (en) 2019-01-25 2019-01-25 Data processing method, device, electronic equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN111488186A CN111488186A (en) 2020-08-04
CN111488186B true CN111488186B (en) 2023-04-28

Family

ID=71791226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910072969.2A Active CN111488186B (en) 2019-01-25 2019-01-25 Data processing method, device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111488186B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637518B (en) * 2020-12-21 2023-03-24 北京字跳网络技术有限公司 Method, device, equipment and medium for generating simulated shooting special effect
CN113973224A (en) * 2021-09-18 2022-01-25 阿里巴巴(中国)有限公司 Method for transmitting media information, computing device and storage medium
CN114610920B (en) * 2022-05-09 2022-09-02 宏景科技股份有限公司 Image storage format generation method, image storage format and processing system
CN117130717B (en) * 2023-10-27 2024-02-13 杭州实在智能科技有限公司 Element positioning method and system of HTMLayout application program in RPA scene

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663078A (en) * 2012-04-01 2012-09-12 百度在线网络技术(北京)有限公司 Method and equipment for generating to-be-issued information in network community
WO2012155012A1 (en) * 2011-05-12 2012-11-15 Google Inc. Dynamic image display area and image display within web search results
WO2013005366A1 (en) * 2011-07-05 2013-01-10 Panasonic Corporation Anti-aliasing image generation device and anti-aliasing image generation method
CN105117102A (en) * 2015-08-21 2015-12-02 小米科技有限责任公司 Audio interface display method and device
CN105302428A (en) * 2014-07-29 2016-02-03 腾讯科技(深圳)有限公司 Social network-based dynamic information display method and device
CN105468315A (en) * 2014-09-05 2016-04-06 腾讯科技(深圳)有限公司 Mobile terminal page displaying method and apparatus
CN106250090A (en) * 2016-09-07 2016-12-21 讯飞幻境(北京)科技有限公司 Three-dimensional scene interactive display system and display method
CN106549839A (en) * 2016-11-08 2017-03-29 陈智玲 Dynamic interaction method for electronic equipment
WO2018072470A1 (en) * 2016-10-19 2018-04-26 华为技术有限公司 Image display method, and terminal
CN108255923A (en) * 2017-11-06 2018-07-06 优视科技有限公司 Image presentation method, equipment and electronic equipment
CN108846792A (en) * 2018-05-23 2018-11-20 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN109087159A (en) * 2018-06-13 2018-12-25 北京三快在线科技有限公司 Business object information methods of exhibiting, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154778A1 (en) * 2013-11-29 2015-06-04 Calgary Scientific, Inc. Systems and methods for dynamic image rendering


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Chao; Yang Jingjing; Wang Sheng; Chen Gengsheng. Adaptive image enhancement algorithm based on dynamic scene estimation. Computer Engineering. 2013, (Issue 05), full text. *
Chen Jianzhong; Rao Changchun. Research on a 2D/3D integrated GIS management platform for land and resources. Land and Resources Informatization. 2014, (Issue 01), full text. *


Similar Documents

Publication Publication Date Title
CN111488186B (en) Data processing method, device, electronic equipment and computer storage medium
US11463631B2 (en) Method and apparatus for generating face image
WO2016150083A1 (en) Information input method and apparatus
US9230035B2 (en) Pushing specific content to a predetermined webpage
KR102124466B1 (en) Apparatus and method for generating conti for webtoon
CN111488931A (en) Article quality evaluation method, article recommendation method and corresponding devices
CN111240669B (en) Interface generation method and device, electronic equipment and computer storage medium
CN107958078A (en) Information generating method and device
CN110059212A (en) Image search method, device, equipment and computer readable storage medium
CN104615639B (en) Method and apparatus for providing presentation information of a picture
US20220319082A1 (en) Generating modified user content that includes additional text content
CN116188250A (en) Image processing method, device, electronic equipment and storage medium
CN112507214B (en) User name-based data processing method, device, equipment and medium
CN110661693A (en) Methods, computing device-readable storage media, and computing devices facilitating media-based content sharing performed in a computing device
CN113709681B (en) Method and device for displaying and pushing short message content
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
CN112328871A (en) Reply generation method, device, equipment and storage medium based on RPA module
CN113434633A (en) Social topic recommendation method, device, equipment and storage medium based on head portrait
CN113822521A (en) Method and device for detecting quality of question library questions and storage medium
CN116401394B (en) Object set, image generation method and device, electronic equipment and storage medium
CN113190779B (en) Webpage evaluation method and device
CN113377196B (en) Data recommendation method and device, electronic equipment and readable storage medium
TWI643080B (en) A method to parse network data and simulate specific objects accordingly
CN117010413A (en) Community question and answer method and device, storage medium and computer equipment
CN117892140A (en) Visual question and answer and model training method and device thereof, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant