CN104375815B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN104375815B
CN104375815B (application CN201310356860.4A)
Authority
CN
China
Prior art keywords
type
image
object file
character
files
Prior art date
Legal status
Active
Application number
CN201310356860.4A
Other languages
Chinese (zh)
Other versions
CN104375815A (en)
Inventor
李然 (Li Ran)
戴岩 (Dai Yan)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201310356860.4A
Publication of CN104375815A
Application granted
Publication of CN104375815B
Legal status: Active
Anticipated expiration


Landscapes

  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an information processing method and an electronic device. The method, applied to the electronic device, comprises: obtaining input data; generating an object file of a first type based on the input data; determining an object file of a second type according to a predetermined rule based on the object file of the first type, wherein the object file of the second type is an already-generated object file and the first type is different from the second type; and establishing an association relationship between the object file of the first type and the object file of the second type.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an information processing method and an electronic device.
Background
At present, owing to technological development, the way people socialize has changed. In the past, making friends required face-to-face communication; now, users can communicate through auxiliary tools such as mobile phones or computers without meeting in person. When communicating through an electronic device, pairing text with a matching picture has become a new mode of communication, rather than communicating with plain text alone. For example, when user A updates a status or publishes new information, user A may want to attach a matching picture so as to express a mood more vividly and attract more new friends. In addition, when user A wants to meet new friends, user A also uses the electronic device to browse pictures or moods published by other users.
However, in implementing the present application, the applicant found that when the electronic device provides a matching picture for the user, it merely receives the user's operation of selecting a suitable picture from many candidates. For example, when a user selects a matching picture on a mobile phone, the user usually has to search through a gallery that stores a large number of pictures. Manually searching for a matching picture therefore consumes a great deal of time; moreover, when the user needs a matching picture after entering text, the electronic device cannot actively search for corresponding pictures to offer to the user, so the user experience is poor. In addition, when the electronic device shows the user a picture published by another user, it can only provide some basic information about the picture; other information about the picture's content cannot be obtained, so the user has to search for it manually, which further degrades the user experience.
Therefore, the prior art suffers from the following technical problem: different types of files cannot be automatically associated, resulting in a poor user experience.
Disclosure of Invention
The invention provides an information processing method and an electronic device to solve the technical problem in the prior art that different types of files cannot be automatically associated, resulting in a poor user experience.
In one aspect, the present invention provides the following technical solutions through an embodiment of the present application:
an information processing method, applied to an electronic device, the method comprising: obtaining input data; generating an object file of a first type based on the input data; determining an object file of a second type according to a predetermined rule based on the object file of the first type, wherein the object file of the second type is an already-generated object file and the first type is different from the second type; and establishing an association relationship between the object file of the first type and the object file of the second type.
Preferably, the electronic device includes an image acquisition unit, and the obtaining input data specifically includes: obtaining image data through the image acquisition unit.
Preferably, the generating an object file of a first type based on the input data specifically includes: generating a first object file of an image type based on the image data.
Preferably, the determining the object file of the second type according to the predetermined rule based on the object file of the first type specifically includes: analyzing the first object file of the image type to obtain a portrait element and/or an environment element in the first object file; and determining, based on the portrait element and/or the environment element, N first object files of a text type corresponding to the portrait element and/or the environment element, where N is a positive integer.
Preferably, after the association relationship is established between the object file of the first type and the object file of the second type, the method further includes: correspondingly displaying, based on the portrait element and/or the environment element, one or more of the N first object files of the text type.
Preferably, the obtaining input data specifically includes: receiving, on the input interface, text data input by a user via keys; or receiving, on the input interface, text data input by a user via handwriting.
Preferably, when the input data is text data, the generating an object file of a first type based on the input data is specifically: generating a second object file of a text type based on the text data.
Preferably, the determining the object file of the second type according to the predetermined rule based on the object file of the first type specifically includes: obtaining parameter information of the second object file of the text type according to the second object file of the text type; and obtaining M corresponding second object files of an image type according to the parameter information, where M is a positive integer.
Preferably, the obtaining the parameter information of the second object file of the text type according to the second object file of the text type specifically includes: obtaining a first keyword contained in the second object file of the text type; or obtaining the textual meaning expressed by the second object file of the text type; or obtaining the environment information at the time the second object file of the text type was generated.
Preferably, the obtaining the first keyword contained in the second object file of the text type specifically includes: splitting and recombining the second object file of the text type into K keywords, where K is a positive integer; and determining the priority of each of the K keywords and selecting the keyword with the highest priority as the first keyword.
Preferably, the obtaining the M corresponding second object files of the image type according to the parameter information specifically includes: performing an image search in a local database of the electronic device according to the first keyword to obtain the M second object files of the image type; or performing an image search in a cloud server connected to the electronic device according to the first keyword to obtain the M second object files of the image type.
Preferably, the obtaining the M corresponding second object files of the image type according to the parameter information specifically includes: performing an image search in a local database of the electronic device according to the textual meaning to obtain the M second object files of the image type.
Preferably, the obtaining the M corresponding second object files of the image type according to the parameter information specifically includes: performing an image search in a cloud server connected to the electronic device according to the environment information to obtain the M second object files of the image type.
Preferably, after the association relationship is established between the object file of the first type and the object file of the second type, the method further includes: determining a first image from the M second object files of the image type as a matching picture for the second object file of the text type; and sending the second object file of the text type and the first image to at least one second electronic device.
Preferably, the determining the first image from the M second object files of the image type specifically includes: determining the first image based on a selection operation of the user; or determining, as the first image, the image with the smallest storage size among the M second object files of the image type; or determining, as the first image, the image with the smallest display size among the M second object files of the image type.
In another aspect, the present invention provides, through another embodiment of the present application:
an electronic device, comprising: a first obtaining unit configured to obtain input data; a generating unit configured to generate an object file of a first type based on the input data; a first determining unit configured to determine an object file of a second type according to a predetermined rule based on the object file of the first type, wherein the object file of the second type is an already-generated object file and the first type is different from the second type; and an establishing unit configured to establish an association relationship between the object file of the first type and the object file of the second type.
Preferably, the electronic device includes an image acquisition unit, and the first obtaining unit is specifically configured to obtain image data through the image acquisition unit.
Preferably, the generating unit is further specifically configured to generate a first object file having an image type based on the image data.
Preferably, the first determining unit specifically includes:
an analysis unit configured to analyze the first object file of the image type to obtain a portrait element and/or an environment element in the first object file; and a second determining unit configured to determine, based on the portrait element and/or the environment element, N first object files of a text type corresponding to the portrait element and/or the environment element, where N is a positive integer.
Preferably, the electronic device further includes: a display unit configured to correspondingly display, based on the portrait element and/or the environment element, one or more of the N first object files of the text type after the association relationship is established between the object file of the first type and the object file of the second type.
Preferably, the first obtaining unit is specifically configured to receive, on the input interface, text data input by a user via keys, or to receive, on the input interface, text data input by a user via handwriting.
Preferably, the generating unit is further specifically configured to generate a second object file of a text type based on the text data.
Preferably, the first determining unit specifically includes: a second obtaining unit configured to obtain parameter information of the second object file of the text type according to the second object file of the text type; and a third obtaining unit configured to obtain M corresponding second object files of an image type according to the parameter information, where M is a positive integer.
Preferably, the second obtaining unit is specifically configured to: obtain a first keyword contained in the second object file of the text type; or obtain the textual meaning expressed by the second object file of the text type; or obtain the environment information at the time the second object file of the text type was generated.
Preferably, the second obtaining unit specifically further includes: a splitting unit configured to split and recombine the second object file of the text type into K keywords, where K is a positive integer; and a third determining unit configured to determine the priority of each of the K keywords and select the keyword with the highest priority as the first keyword.
Preferably, the second obtaining unit is further specifically configured to perform an image search in a local database of the electronic device according to the first keyword to obtain the M second object files of the image type, or to perform an image search in a cloud server connected to the electronic device according to the first keyword to obtain the M second object files of the image type.
Preferably, the third obtaining unit is specifically configured to perform an image search in a local database of the electronic device according to the textual meaning to obtain the M second object files of the image type.
Preferably, the third obtaining unit is specifically configured to perform an image search in a cloud server connected to the electronic device according to the environment information to obtain the M second object files of the image type.
Preferably, the electronic device further includes: a fourth determining unit configured to determine, after the association relationship is established between the object file of the first type and the object file of the second type, a first image from the M second object files of the image type as a matching picture for the second object file of the text type; and a sending unit configured to send the second object file of the text type and the first image to at least one second electronic device.
Preferably, the fourth determining unit is specifically configured to: determine the first image based on a selection operation of the user; or determine, as the first image, the image with the smallest storage size among the M second object files of the image type; or determine, as the first image, the image with the smallest display size among the M second object files of the image type.
One or more of the above technical solutions have the following technical effects or advantages:
in the above technical solutions, to enable the electronic device to actively find files of other types to provide to the user based on one type of file, input data is first obtained, and an object file of a first type is generated based on the input data. An object file of a second type is then determined according to a predetermined rule based on the object file of the first type, and an association relationship is established between the two. Taking text and pictures as an example, pictures related to the text content can be searched for according to that content, and once several images are found they are offered to the user to choose from. Thus, after text content is received, candidate pictures can be provided actively according to the text content without requiring any user operation; the pictures provided are richer, and the interaction modes are more diversified.
Drawings
FIG. 1 is a process diagram of an information processing method in an embodiment of the present application;
FIGS. 1A-1B are schematic diagrams of a text search in an embodiment of the present application;
fig. 2 is a schematic diagram of an electronic device in an embodiment of the present application.
Detailed Description
To solve the technical problem in the prior art that different types of files cannot be automatically associated, resulting in a poor user experience, an embodiment of the invention provides an information processing method and an electronic device. The general idea of the solution is as follows:
the invention provides an information processing method and electronic equipment. Specifically, the method is applied to an electronic device, the electronic device includes a display unit, an input interface is displayed on the display unit, and the method includes: receiving the text content input by a user on an input interface; acquiring parameter information in the text content according to the text content; obtaining corresponding M images according to the parameter information, wherein M is a positive integer; and sending the text content and at least one image in the M images to at least one second electronic device.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments are detailed explanations of the technical solutions rather than limitations of them, and that the technical features in the embodiments may be combined with one another where no conflict arises.
The first embodiment is as follows:
in an embodiment of the present application, an information processing method is described.
Specifically, the method is applied to electronic equipment, and the electronic equipment in the embodiment of the application can be a notebook computer, a tablet computer, a mobile phone and the like.
Referring to fig. 1, the information processing method in the embodiment of the present application is implemented as follows.
S101, input data are obtained.
S102, generating a first type of object file based on input data.
S103, determining the object file of the second type according to a preset rule based on the object file of the first type.
The object file of the second type is an already-generated object file, and the first type is different from the second type.
And S104, establishing an association relationship between the object files of the first type and the object files of the second type.
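As a concrete illustration, the four steps S101-S104 can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the patented implementation: the choice of text as the first type, the keyword-lookup rule, and the tiny image library are all assumptions made for the example.

```python
# Minimal sketch of S101-S104: obtain input data, generate a first-type
# object file, determine a second-type file by a predetermined rule, and
# establish an association. The "predetermined rule" here is a simple
# keyword lookup, assumed purely for illustration.

# A tiny library of already-generated image-type object files, keyed by keyword.
IMAGE_LIBRARY = {
    "dolphin": "dolphin.jpg",
    "great wall": "great_wall.jpg",
}

def process(input_data: str) -> dict:
    # S101/S102: the input text becomes the first-type (text) object file.
    text_file = {"type": "text", "content": input_data}
    # S103: predetermined rule -- pick an image whose keyword occurs in the text.
    match = next((img for kw, img in IMAGE_LIBRARY.items()
                  if kw in input_data.lower()), None)
    image_file = {"type": "image", "content": match} if match else None
    # S104: establish the association between the two object files.
    return {"first": text_file, "second": image_file}

result = process("We saw a dolphin on Sunday")
print(result["second"]["content"])  # -> dolphin.jpg
```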
In the implementation of S101, the obtained input data may be of two types: image data or text data. Image data is the basis from which an image is composed, and text data is the basis from which text is composed. Image data is typically obtained through an image acquisition unit, while text data is generally obtained by receiving a user's input operation when the electronic device has an input interface.
When the electronic device includes an image capturing unit, the process of obtaining the input data is generally to obtain the image data through the image capturing unit.
The electronic device in the embodiment of the application is provided with an image acquisition unit, which may specifically be a camera application or a video-camera application. Taking a mobile phone as an example, a mobile phone generally has a camera application that can photograph a scene or a portrait to obtain image data.
After the image data is obtained, the following procedure specifically describes how the subsequent information processing procedure is completed based on the image data.
Specifically, in the processing of S102, a first object file of an image type is generated from the image data. This is essentially a function of the image acquisition unit: it captures the surrounding scene to obtain raw image data, and a first object file of the image type is then generated from that data. The first object file can be regarded as an image generated from the image data; here the image type generally refers to the image format, such as jpg, jpeg, or bmp.
Then, based on the obtained image, the second type of object file is determined according to a predetermined rule.
The second type differs from the first type: if the first type is an image type, the second type may be a text type or a video type, and the object file of the second type may accordingly be text content or video content.
Specifically, the method for determining the object file of the second type according to the predetermined rule based on the object file of the first type includes the following specific steps:
First, the first object file of the image type is analyzed to obtain the portrait elements and/or environment elements it contains.
For example, when the first object file of the image type is a picture, the picture may contain portrait elements, such as a group photo of Xiao Lin and Xiao Zhao. It may also contain environment elements, such as the Imperial Palace or the Great Wall. It may even contain both, such as a picture of Xiao Lin and Xiao Zhao posing together on the Great Wall.
Thus, when a picture of the type described above is obtained, the portrait elements and/or the environmental elements contained therein can be analyzed.
Then, based on the portrait element and/or the environment element, N first object files of a text type corresponding to the portrait element and/or the environment element are determined, where N is a positive integer.
The determination may be made from the portrait element alone, the environment element alone, or the combination of the two. For example, consider a picture of Xiao Lin and Xiao Zhao standing on the Great Wall (as shown in FIG. 1A). The N first object files of the text type may be determined from the combined portrait-and-environment element: taking Xiao Lin as an example, once the portrait element is determined to be Xiao Lin, the combined element of Xiao Lin standing on the Great Wall is used to find the N corresponding first object files of the text type, which may be mood descriptions Xiao Lin published while on the Great Wall. Alternatively, Xiao Lin's portrait element alone can be used to retrieve Xiao Lin's earlier personal posts, and so on. For example, as shown in FIG. 1B, when the user clicks on Xiao Lin with a mouse while viewing the picture, mood descriptions about Xiao Lin appear, such as "The Great Wall is beautiful, the scenery is great", "Beijing is fun", or "Went for hot pot today". If the video type is taken as the example instead, a video shot by Xiao Lin while standing on the Great Wall can be obtained, and so on.
After obtaining the text description or the video, the electronic device will associate the object file of the first type with the object file of the second type. And correspondingly displaying one or more object files in the first object files of the N character types based on the portrait elements and/or the environment elements.
For example, the picture is associated with the text description or the video, and the text or video is then displayed according to that association. From the user's perspective, when viewing a picture of Xiao Lin and Xiao Zhao standing on the Great Wall, moving the mouse onto Xiao Lin causes the system to display the mood descriptions Xiao Lin published on the Great Wall, posts Xiao Lin published earlier, or a video Xiao Lin shot before climbing the Great Wall, letting the user learn more about Xiao Lin. The method embodiment in the application thus overcomes the defect that pictures and text in existing social networks cannot be displayed in association with one another, so that users can learn more about each other and the interactivity of the social network is enhanced.
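The association between an image's elements and previously published texts can be sketched as follows. Extracting portrait or environment elements from actual pixels would require computer-vision analysis, which is out of scope here; the sketch assumes the elements are already tagged, and the names and texts are illustrative placeholders.

```python
# Sketch of associating an image-type object file with text-type object
# files via its portrait/environment elements. Elements are assumed to be
# pre-tagged; real element extraction would need image analysis.

# Texts previously published, indexed by (portrait element, environment element).
TEXTS = {
    ("Xiao Lin", "Great Wall"): ["The Great Wall is beautiful"],
    ("Xiao Lin", None): ["Went for hot pot today"],
}

def texts_for(portrait=None, environment=None):
    # Combined portrait-and-environment element first,
    # then the portrait element alone.
    results = list(TEXTS.get((portrait, environment), []))
    if environment is not None:
        results += TEXTS.get((portrait, None), [])
    return results

print(texts_for("Xiao Lin", "Great Wall"))
```

Clicking a portrait in the picture would then simply call `texts_for` with that person's tag and display the returned texts.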
In addition to the above situation, the embodiment of the present application also supports another approach; from the user's perspective, refer to the following.
In the embodiment of the present application, generally, an electronic device has a display unit, and an input interface can be displayed on the display unit.
The display unit is a display screen having an input interface thereon.
Take a mobile phone as an example: a mobile phone generally has multiple applications installed, such as QQ, a browser, WeChat, and Weibo, as well as built-in functions such as a short-message application. These applications are shown on the phone's display screen. If an application is started (for example, QQ), it provides a dialog box for the user to enter chat content while chatting with other users; the interface presented in the form of this dialog box is an input interface. Likewise, if the user chats with others through the short-message application, that application also provides a dialog box for entering chat content, and the interface presented in that form is also an input interface.
When an input interface exists, the input data in S101 may specifically be obtained in the following ways:
receiving, on the input interface, text data input by the user via keys; or receiving, on the input interface, text data input by the user via handwriting.
The first way: receiving, on the input interface, the text content input by the user via keys.
For example, if the mobile phone has a virtual keyboard, a character string may be formed in response to click operations on several keys. A character string generally corresponds to one or more keywords, and one string can correspond to several different keywords. The phone displays the candidate keywords, the user selects one, and the selected keyword is shown on the input interface. For example, if the character string is the pinyin "haiyang", candidates such as 海洋 (ocean), 还痒 ("still itchy"), 海阳 (the place name Haiyang), and 还养 ("still raising") may be displayed.
The above example matches keywords from a pinyin string; in practical applications, keywords can also be matched from stroke-based strings, which is not described in detail here.
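The key-input candidate mechanism above can be sketched as a lookup table plus a selection step. The candidate table is a tiny assumed stand-in for a real input-method lexicon, not an actual one.

```python
# Sketch of key-input candidate lookup: a pinyin string maps to several
# candidate words, and the user's selection becomes the committed text.
# The candidate table is an assumed mini-lexicon for illustration only.

CANDIDATES = {
    "haiyang": ["海洋", "海阳", "还痒"],  # ocean; Haiyang (place); "still itchy"
}

def candidates_for(pinyin: str):
    # Return every candidate keyword matching the typed string.
    return CANDIDATES.get(pinyin, [])

def choose(pinyin: str, index: int) -> str:
    # The user picks one candidate; it is shown on the input interface.
    return candidates_for(pinyin)[index]

print(choose("haiyang", 0))  # -> 海洋
```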
The second way: receiving, on the input interface, the text content input by the user via handwriting.
For example, taking the mobile phone again: if its display screen is a touch screen, the phone can detect a sliding operation on it and obtain the corresponding sliding track. The track is then matched against the tracks of the keywords in the phone's lexicon to obtain the best-matching candidates. For instance, when the user handwrites "sea" on the touch screen, candidates such as "sea", "thirst", and "basin" may appear according to the sliding track, and the phone displays the keyword the user selects on the input interface.
As can be seen from the above, in the embodiment of the present application the input interface can receive text content entered either via keys or via handwriting, achieving the technical effect of more diversified input modes.
At this time, since the input data is text data, the generating an object file of the first type based on the input data is specifically: generating a second object file of a text type based on the text data.
The second object file of the text type here is a passage or a sentence of text content obtained from the text data. It is of the same type as the first object file of the text type described earlier; only the text content may differ.
When the second object file of the text type is obtained, the following steps are performed:
First, the parameter information of the second object file of the text type is obtained according to the second object file of the text type.
Then, the M corresponding second object files of the image type are obtained according to the parameter information, where M is a positive integer.
In the above process, since the second object file of the text type is text content, its parameter information may take several forms: the keywords constituting the text content, the meaning the text content expresses, or the environment in which the user sent the text content through the electronic device, such as the time, place, and weather.
Therefore, in the above implementation, there are the following steps:
acquiring, according to the second object file of the text type, a first keyword contained in the second object file of the text type; or
acquiring, according to the second object file of the text type, the text meaning expressed by the second object file of the text type; or
acquiring, according to the second object file of the text type, the environment information recorded when the second object file of the text type was obtained.
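The three alternatives just listed can be sketched as a single dispatch. The helper logic here is an illustrative assumption, since the text leaves these steps abstract: the longest token stands in for real keyword extraction, the meaning branch returns a placeholder record, and the environment branch simply relays caller-supplied data.

```python
# Illustrative dispatch over the three kinds of parameter information.

def get_parameter_info(text, mode, environment=None):
    if mode == "keyword":
        # Stand-in for the keyword-splitting step described later: here we
        # simply take the longest whitespace-separated token as the keyword.
        return max(text.split(), key=len)
    if mode == "meaning":
        # Stand-in for real semantic analysis of the text content.
        return {"text": text, "kind": "meaning"}
    if mode == "environment":
        # Environment info (time, place, weather) captured when the text
        # content was obtained; passed in by the caller in this sketch.
        return environment or {}
    raise ValueError("unknown mode: " + mode)
```

Which of the three modes is used would be chosen by the implementation; the text does not fix a single one.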
Specifically, the specific way of obtaining the first keyword is as follows:
First, the second object file of the text type is split and recombined into K keywords, where K is a positive integer.
Generally, the text content may be a single keyword or a combination of several keywords. Therefore, after the text content is obtained, it is split and recombined into one or more keywords; the splitting may yield single keywords as well as keyword groups. For example, suppose the input text content is: "Zhang San and I went to the Polar Ocean Park on Sunday to see dolphins." This sentence can be split and recombined into multiple keywords or keyword groups, such as: I, Zhang San, Sunday, polar, ocean, park, ocean park, polar ocean park, see dolphins, and so on.
Secondly, the priority of each keyword in the K keywords is determined, and the keyword with the highest priority is selected as the first keyword.
When multiple keywords are obtained, their priorities are determined. Generally, the word stock of a mobile phone stores a priority for each keyword; therefore, after the keywords are obtained, their priorities are retrieved from the word stock according to the keywords, and the keywords are then ranked. When ranking, since several keywords can be combined, the priority of a recombined keyword group can be raised: the more keywords a recombined group contains, the higher its priority. Among the keywords above, "polar ocean park" is derived from a combination of several keywords and its priority is the highest, so it is determined as the first keyword.
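The split-and-prioritise step above can be sketched as follows. The lexicon priorities and the rule that a recombined group gains priority for each member keyword are assumptions about details the text leaves open.

```python
# Minimal sketch of selecting the first keyword by priority.

def first_keyword(base_keywords, lexicon_priority, groups):
    """base_keywords: single keywords split from the text content.
    groups: recombined keyword groups, each a list of base keywords.
    Returns the keyword or group with the highest priority."""
    candidates = {}
    for kw in base_keywords:
        candidates[kw] = lexicon_priority.get(kw, 0)
    for group in groups:
        name = " ".join(group)
        # A group sums its members' priorities and gains a bonus per member,
        # so groups containing more keywords rank higher, as described above.
        candidates[name] = (sum(lexicon_priority.get(kw, 0) for kw in group)
                            + len(group))
    return max(candidates, key=candidates.get)
```

With the example sentence above, the fully recombined group "polar ocean park" outranks its individual members and is selected.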
After the first keyword is determined, the M images corresponding to it may be obtained in a variety of ways; two of them are described below, although the specific implementation is of course not limited to these two cases.
First, according to the first keyword, an image search is performed in a local database of the electronic device, and second object files of M image types are obtained.
Specifically, the second object files of the M image types are actually M images; for convenience of description, the term "M images" will be used below.
In the specific implementation, there are multiple ways to search for images in the local database of the electronic device. For example, when storing images, the user may create folders and label them with a name, a date, a specific event, and so on; during the search, these folder labels can be matched, so that almost all contents of a matching folder can be found. Each image is then matched with the first keyword to obtain one or more matching images. If the first keyword is "polar ocean park", the folders containing this keyword are matched first, and the corresponding images are then matched within them. Of course, the first keyword may also be used to match the corresponding images directly. During matching, the search can also use the time, place and specific content attached to an image; for example, the elements in the first keyword "polar ocean park" obtained above can be matched against the content of each image, and one or more images containing those elements can be found. If the keyword contains a person's name, such as the elements "me" or "Zhang San", face recognition can be performed on the images, and an image is determined to match when it is recognized to contain both "me" and "Zhang San". Of course, if the first keyword is a date, images tagged with that date can be searched for.
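The folder-and-metadata matching just described can be sketched as follows. The folder labels and per-image tags are assumed stand-ins for the parameters mentioned above; the face-recognition and date branches are omitted for brevity.

```python
# Hypothetical local search: match the first keyword against folder labels
# and against each image's own metadata tags.

def search_local(first_keyword, folders):
    """folders: {folder_label: [{'name': ..., 'tags': [...]}, ...]}.
    Returns the names of images whose folder label or own tags
    mention the keyword."""
    matches = []
    for label, images in folders.items():
        for image in images:
            if first_keyword in label or first_keyword in image["tags"]:
                matches.append(image["name"])
    return matches
```

A folder-level match returns everything in that folder, while a tag-level match picks out individual images, mirroring the two matching routes described above.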
Of course, the first keyword may be any of various kinds of words, such as a time, a weather condition, a place or a holiday, and is not limited to the keywords listed above; the images searched for based on the first keyword are likewise various. Further, an image may carry parameters such as time, location, faces, season and scene, and the folder storing it may carry parameters such as a name, a calendar date, or information filled in by third-party applications; the image may of course also be an ordinary image of a holiday or an activity.
In addition, for matching, the electronic device may also store a correspondence between keywords and images: one keyword may correspond to one image or to multiple images, and multiple keywords may correspond to one image. After the first keyword, for example "dolphin", is determined, the local database can be searched with "dolphin" as the keyword to obtain multiple images related to dolphins.
As can be seen from the above description, in the embodiment of the present application, the M images corresponding to the first keyword can be obtained directly from the local database; since local storage is accessed directly, the technical effect of a faster obtaining rate is achieved.
Second, images are searched for on a cloud server connected to the electronic device according to the first keyword, to obtain second object files of M image types.
In a specific implementation, since a search on the electronic device is limited by its storage space, the image search can instead be performed on the cloud server based on the first keyword in order to obtain more image resources. For example, more images can be found by searching with "dolphin" as the keyword.
As can be seen from the above description, in the embodiment of the present application, the M images corresponding to the first keyword can also be obtained from the cloud server; since the data volume of the cloud server is larger, the obtained results are richer.
The above describes determining the M images by determining a keyword. Other ways of matching the M images may also be used, as described below.
If the parameter information of the text content is the text meaning it expresses, an image search can be performed in a local database of the electronic device according to that meaning, and second object files of M image types are obtained.
For example, if the text content described above is "Zhang San and I went to the Polar Ocean Park on Sunday to see dolphins", then after the text content is obtained, its meaning is extracted and an image search is performed to obtain the M images. Further, in the search process, the images may be searched for in a local database of the electronic device, and of course in practice the search may also be performed on a cloud server; the present application does not limit where the search is performed.
In addition, in the embodiment of the present application, an image search can also be performed, according to the environment information, on a cloud server connected to the electronic device, and second object files of the M image types are obtained.
Specifically, the environment information is the information about the environment, such as time, place and weather, in which the electronic device is located when it displays the text content. For example, when a user posts a microblog and inputs the text content "the weather is fine today and I am going to work" in the input interface provided by the microblog, the electronic device can associate environment information such as time, place and weather with that text, and then search for suitable photos according to this information.
Of course, in the search process, besides searching on the cloud server, the search can also be performed in the local database; the present application does not limit where the search is performed.
Further, in a specific implementation, the M images may be acquired in one of the above manners or in several of them. When one manner is used, which one may be determined by a default setting, by a user setting, or by detecting the network environment of the electronic device. For example, when the network speed is high, the M images are acquired through the cloud server; when the network speed is lower, they are acquired from the local database, and so on. The embodiment of the present application does not limit the manner in which the M images are obtained.
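The mode-selection logic above can be sketched as follows. The precedence order (user setting, then network speed, then default) and the speed threshold are illustrative assumptions, not values given in the text.

```python
# Sketch of choosing where the M images come from.

FAST_NETWORK_KBPS = 1000  # hypothetical cut-off for a "fast" network

def choose_source(user_setting=None, network_kbps=None, default="local"):
    if user_setting in ("local", "cloud"):
        return user_setting          # an explicit user choice wins
    if network_kbps is not None:
        # Fast network: prefer the richer cloud database; slow: stay local.
        return "cloud" if network_kbps >= FAST_NETWORK_KBPS else "local"
    return default                   # fall back to the default setting
```

Several sources could also be queried together and their results merged, matching the "several manners" case above.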
After the corresponding M images are obtained, the images and the text are associated, and the following steps are performed.
The specific implementation process is as follows:
First, a first image is determined from the second object files of the M image types as the matching picture of the second object file of the text type.
Specifically, the final matching picture can be determined in the following ways:
first, a first image is determined from a second object file of an image type based on a user's selection operation.
In a specific implementation, after the M images are obtained, labels of some or all of them may be displayed on the display interface of the electronic device for the user to select from. If the value of M is small, the labels of all M images can be displayed; when M is larger, labels of only part of the M images can be displayed. When choosing which labels to display, the images with the highest click rate may be screened from the M images, or those with the highest similarity, or the M1 images with the smallest display size; the embodiment of the present application does not limit how images are screened from the M images.
In addition, in a specific implementation, the labels may be thumbnails, partial screenshots, numbers, or full views of the M1 images; the embodiment of the present application does not limit what kind of labels are used.
After the label information of the M images is displayed on the display interface, the final matching picture can be determined based on the user's selection, for example: clicking the label corresponding to the matching picture, or pressing the number key on the keyboard corresponding to the matching picture.
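The display-and-select flow above can be sketched as follows. The limit on how many labels are shown, and the 1-based number-key mapping, are illustrative assumptions.

```python
# Sketch of showing a subset of labels and resolving the user's choice.

MAX_LABELS = 4  # hypothetical limit on labels shown at once

def labels_to_display(images):
    """Show all labels when M is small, otherwise only the first few."""
    return images if len(images) <= MAX_LABELS else images[:MAX_LABELS]

def pick_by_number(displayed, number):
    """Map the number key the user pressed (1-based) to an image."""
    if 1 <= number <= len(displayed):
        return displayed[number - 1]
    return None
```

Clicking a label directly would resolve the choice the same way, just without the number-to-index mapping.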
Second, the image with the smallest storage size is determined from the second object files of the M image types as the first image.
In a specific implementation, when storage space is limited, suppose a search with "dolphin" as the first keyword obtains 5 images: image a (1 kB), image b (1.1 kB), image c (0.8 MB), image d (1.4 MB) and image e (3 MB); then image a, which has the smallest storage size, is determined as the first image. Of course, the first keyword is not limited to "dolphin", and in a specific implementation the M images are not limited to those listed above.
Third, the image with the smallest display size is determined from the second object files of the M image types as the first image.
In a specific implementation, suppose for example that a search with "dolphin" as the first keyword obtains 5 images: image a (34 px × 20 px), image b (13 px × 21 px), image c (124 px × 130 px), image d (345 px × 400 px) and image e (250 px × 855 px); then image b, which has the smallest display size, is determined as the first image.
In addition, in the implementation, the above manners may be combined; that is, the first image may be determined in one manner or in several. For example, a number of images whose storage size is below a threshold may first be determined, and the one with the smallest display size among them then selected as the first image.
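The two size-based rules and their combination can be sketched as follows. The image records, the threshold, and reading "display size" as pixel area are illustrative assumptions.

```python
# Sketch of selecting the first image by storage size, display size,
# or a combination of both.

def by_min_storage(images):
    """Rule two: smallest storage size wins."""
    return min(images, key=lambda im: im["kb"])

def by_min_display(images):
    """Rule three: smallest display area (width x height) wins."""
    return min(images, key=lambda im: im["w"] * im["h"])

def combined(images, kb_threshold):
    """Combined rule: keep images under the storage threshold,
    then pick the one with the smallest display size."""
    small_enough = [im for im in images if im["kb"] < kb_threshold]
    return by_min_display(small_enough) if small_enough else None
```

With the five example images above, rule two picks image a (1 kB) and rule three picks the image with the smallest pixel area.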
Then, the second object file of the text type and the first image are sent to at least one second electronic device.
Based on the same inventive concept, an electronic device is described in the following embodiments.
Example two:
in an embodiment of the present application, an electronic device is provided.
Specifically, the electronic device in the embodiment of the present application may be a notebook computer, a tablet computer, a mobile phone, and the like. Referring next to fig. 2, the electronic device further includes:
a first obtaining unit 201 for obtaining input data;
a generating unit 202 for generating an object file of a first type based on input data;
a first determining unit 203 for determining an object file of a second type according to a predetermined rule based on an object file of a first type; the second type of object file is an object file which is already generated, and the first type is different from the second type;
the establishing unit 204 is configured to establish an association relationship between the object file of the first type and the object file of the second type.
Further, the electronic device includes an image acquisition unit, which is specifically configured to obtain image data. The generating unit 202 is further configured to generate a first object file of the image type based on the image data.
Further, the first determining unit 203 specifically includes:
the analysis unit is used for analyzing the first object file of the image type to obtain a portrait element and/or an environmental element in the first object file;
and the second determining unit is used for determining the first object files of N character types corresponding to the portrait elements and/or the environment elements based on the portrait elements and/or the environment elements, wherein N is a positive integer.
Further, the electronic device further includes:
and the display unit is used for displaying, after the association relationship is established between the object file of the first type and the object file of the second type, one or more of the first object files of the N character types correspondingly, based on the portrait element and/or the environment element.
Further, the first obtaining unit 201 is specifically configured to receive, on an input interface, text data input by a user in a key pressing manner; or receiving character data input by a user in a handwriting mode on the input interface.
Further, the generating unit 202 is specifically configured to generate a second object file of the text type based on the text data.
Further, the first determining unit 203 specifically includes:
the second obtaining unit is used for obtaining the parameter information of the second object file of the character type according to the second object file of the character type;
and the third obtaining unit is used for obtaining second object files of corresponding M image types according to the parameter information, wherein M is a positive integer.
Further, the second obtaining unit is specifically configured to:
acquiring a first keyword contained in a second object file of the character type according to the second object file of the character type; or
Acquiring the character meaning represented by the second object file of the character type according to the second object file of the character type; or
And acquiring, according to the second object file of the character type, the environment information recorded when the second object file of the character type was obtained.
Further, the second obtaining unit specifically includes:
the splitting unit is used for splitting and recombining the second object file of the character type into K keywords, wherein K is a positive integer;
and the third determining unit is used for determining the priority of each keyword in the K keywords and selecting the keyword with the highest priority as the first keyword.
Further, the second obtaining unit is specifically configured to perform image search from a local database of the electronic device according to the first keyword, and obtain second object files of M image types; or according to the first keyword, performing image search from a cloud server connected with the electronic equipment to obtain second object files of M image types.
Further, the third obtaining unit is specifically configured to perform image search from a local database of the electronic device according to the word meaning, and obtain the second object files of the M image types.
Further, the third obtaining unit is specifically configured to perform image search from a cloud server connected to the electronic device according to the environment information, and obtain the second object files of the M image types.
Further, the electronic device further includes:
a fourth determining unit, configured to determine, after an association relationship is established between the object files of the first type and the object files of the second type, the first image from the second object files of the M image types, as a matching image of the second object file of the character type;
and the sending unit is used for sending the second object file and the first image of the character type to at least one second electronic device.
Further, the fourth determining unit is specifically configured to:
determining a first image from a second object file of the M image types based on the selection operation of the user; or
Determining an image with the minimum image storage amount from second object files of the M image types as a first image; or
An image having the smallest image display size is determined as the first image from the second object file of the M image types.
Through one or more embodiments of the invention, the following technical effects can be achieved:
in one or more embodiments of the present application, in order to enable the electronic device to actively provide the user with files of other types based on a file of one type, input data is first obtained, and an object file of a first type is generated based on it. An object file of a second type is then determined according to a preset rule based on the object file of the first type, and an association relationship is established between the two. Taking text and pictures as an example, pictures related to the text content can be searched for according to that content, and after several images are found they are offered to the user for selection. Thus, after text content is received, candidate pictures can be provided actively according to it, without requiring user operations to supply them; the pictures provided are richer and the interaction mode is more diversified.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (16)

1. An information processing method, which is applied to an electronic device, the method comprising:
obtaining input data;
generating a first type of object file based on the input data;
determining a second type of object file according to a preset rule based on the first type of object file, wherein the second type of object file is a generated object file, the first type is different from the second type, the first type of object file is a character type of object file, and the second type of object file is an image type of object file;
establishing an association relationship between the object file of the first type and the object file of the second type;
determining the first image in at least one of the following ways: determining an image with a storage space smaller than a threshold value from M image type object files as the first image, or determining an image with a minimum image display size from M image type object files as the first image, wherein M is a positive integer, and the first image is the second type object file; and
and sending the object file of the first type and the first image to at least one second electronic device.
2. The method of claim 1, wherein the electronic device includes an image capture unit, and the obtaining input data specifically includes:
image data is obtained by the image acquisition unit.
3. The method according to claim 2, characterized in that an object file of a first type is generated based on the input data, in particular:
a first object file having an image type is generated based on the image data.
4. The method of claim 1, wherein the obtaining input data specifically comprises:
receiving character data input by a user in a key mode on an input interface; or receiving character data input by a user in a handwriting mode on the input interface.
5. The method according to claim 4, wherein when the input data is text data, generating an object file of a first type based on the input data, specifically:
and generating a second object file of the character type based on the character data.
6. The method according to claim 2, wherein determining the object file of the second type based on the object file of the first type according to a predetermined rule specifically comprises:
acquiring parameter information of the second object file of the character type according to the second object file of the character type;
and obtaining second object files of corresponding M image types according to the parameter information, wherein M is a positive integer.
7. The method according to claim 6, wherein the obtaining parameter information of the second object file of the text type according to the second object file of the text type specifically includes:
acquiring a first keyword contained in the second object file of the character type according to the second object file of the character type; or
Acquiring the character meaning represented by the second object file of the character type according to the second object file of the character type; or
And acquiring, according to the second object file of the character type, the environment information recorded when the second object file of the character type was obtained.
8. The method according to claim 7, wherein the obtaining a first keyword included in the second object file of the text type according to the second object file of the text type specifically includes:
splitting and recombining the second object file of the character type into K keywords, wherein K is a positive integer;
determining the priority of each keyword in the K keywords, and selecting the keyword with the highest priority as the first keyword.
9. The method according to claim 7, wherein the obtaining of the second object files of the corresponding M image types according to the parameter information includes:
according to the first keyword, image searching is carried out from a local database of the electronic equipment, and second object files of the M image types are obtained; or
And according to the first keyword, performing image search from a cloud server connected with the electronic equipment to obtain second object files of the M image types.
10. The method according to claim 7, wherein obtaining second object files corresponding to the M image types according to the parameter information specifically includes:
and according to the word meaning, carrying out image search from a local database of the electronic equipment to obtain second object files of the M image types.
11. The method according to claim 7, wherein the obtaining of the second object files of the corresponding M image types according to the parameter information specifically includes:
and according to the environment information, performing image search from a cloud server connected with the electronic equipment to obtain second object files of M image types.
12. The method of claim 1, wherein after said associating the object files of the first type with the object files of the second type, the method further comprises:
determining a first image from the second object files of the M image types as a matching picture of the second object files of the character types;
and sending the second object file of the character type and the first image to the at least one second electronic device.
13. The method of claim 12, wherein said determining a first image from a second object file of said M image types further comprises:
and determining the first image from the second object files of the M image types based on the selection operation of the user.
14. An electronic device, comprising:
a first obtaining unit configured to obtain input data;
a generating unit configured to generate an object file of a first type based on the input data;
a first determining unit, configured to determine an object file of a second type according to a predetermined rule based on the object file of the first type; the second type of object file is an object file which is already generated, and the first type is different from the second type, wherein the first type of object file is a character type of object file, and the second type of object file is an image type of object file;
the establishing unit is used for establishing an association relationship between the object file of the first type and the object file of the second type;
a first image determination unit for determining a first image in at least one of the following ways: determining an image with a storage space smaller than a threshold value from M image type object files as the first image, or determining an image with a minimum image display size from M image type object files as the first image, wherein M is a positive integer, and the first image is the second type object file; and
and the sending unit is used for sending the object file of the first type and the first image to at least one second electronic device.
15. Electronic device according to claim 14, characterized in that the electronic device comprises an image acquisition unit, in particular for obtaining image data.
16. The electronic device of claim 15, wherein the generating unit is further specifically configured to generate a first object file having an image type based on the image data.
CN201310356860.4A 2013-08-15 2013-08-15 Information processing method and electronic equipment Active CN104375815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310356860.4A CN104375815B (en) 2013-08-15 2013-08-15 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310356860.4A CN104375815B (en) 2013-08-15 2013-08-15 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN104375815A CN104375815A (en) 2015-02-25
CN104375815B true CN104375815B (en) 2021-12-24

Family

ID=52554764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310356860.4A Active CN104375815B (en) 2013-08-15 2013-08-15 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104375815B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446132B (en) * 2016-09-19 2019-07-23 百度在线网络技术(北京)有限公司 Search processing method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930263A (en) * 2012-09-27 2013-02-13 百度国际科技(深圳)有限公司 Information processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100536532C (en) * 2005-05-23 2009-09-02 北京大学 Method and system for automatic subtilting
JP5566624B2 (en) * 2009-04-08 2014-08-06 三菱電機株式会社 Image display system
JPWO2011125419A1 (en) * 2010-04-09 2013-07-08 日本電気株式会社 Web content conversion apparatus, Web content conversion method, and recording medium
US20110302149A1 (en) * 2010-06-07 2011-12-08 Microsoft Corporation Identifying dominant concepts across multiple sources
CN102314441A (en) * 2010-06-30 2012-01-11 百度在线网络技术(北京)有限公司 Method for user to input individualized primitive data and equipment and system
CN103179093B (en) * 2011-12-22 2017-05-31 腾讯科技(深圳)有限公司 The matching system and method for video caption

Also Published As

Publication number Publication date
CN104375815A (en) 2015-02-25

Similar Documents

Publication Publication Date Title
CN109952610B (en) Selective identification and ordering of image modifiers
US20200057590A1 (en) Gallery of messages from individuals with a shared interest
US10565268B2 (en) Interactive communication augmented with contextual information
US11575639B2 (en) UI and devices for incenting user contribution to social network content
CN102982178B (en) A kind of image searching method, device and system
US9881322B2 (en) Data transfer between mobile computing devices using short-range communication systems
US11356498B2 (en) Method and a device for sharing a hosted application
US11308327B2 (en) Providing travel-based augmented reality content with a captured image
US11769500B2 (en) Augmented reality-based translation of speech in association with travel
CN109274999A (en) A kind of video playing control method, device, equipment and medium
KR101567555B1 (en) Social network service system and method using image
US11983461B2 (en) Speech-based selection of augmented reality content for detected objects
US20210406965A1 (en) Providing travel-based augmented reality content relating to user-submitted reviews
US20230091214A1 (en) Augmented reality items based on scan
KR20220155601A (en) Voice-based selection of augmented reality content for detected objects
CN111158924A (en) Content sharing method and device, electronic equipment and readable storage medium
US11048387B1 (en) Systems and methods for managing media feed timelines
CN109791545A (en) The contextual information of resource for the display including image
CN110036356B (en) Image processing in VR systems
CN104375815B (en) Information processing method and electronic equipment
US20220319082A1 (en) Generating modified user content that includes additional text content
CN113779293A (en) Image downloading method, device, electronic equipment and medium
US11321409B2 (en) Performing a search based on position information
CN106844783B (en) Information processing method and device
CN117014398A (en) Message interaction method, device, computer, readable storage medium and program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant