CN103905773A - Information processing method and electronic devices - Google Patents

Information processing method and electronic devices Download PDF

Info

Publication number
CN103905773A
CN103905773A CN201210590085.4A CN201210590085A CN103905773A CN 103905773 A CN103905773 A CN 103905773A CN 201210590085 A CN201210590085 A CN 201210590085A CN 103905773 A CN103905773 A CN 103905773A
Authority
CN
China
Prior art keywords
electronic equipment
medium data
expressive features
information
features information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210590085.4A
Other languages
Chinese (zh)
Other versions
CN103905773B (en
Inventor
杨丰华
张晓军
朱义国
黄大荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210590085.4A priority Critical patent/CN103905773B/en
Publication of CN103905773A publication Critical patent/CN103905773A/en
Application granted granted Critical
Publication of CN103905773B publication Critical patent/CN103905773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method and electronic devices. The information processing method is applied to the first electronic device. The first electronic device comprises an image acquisition device. The first electronic device can conduct communication with the second electronic device. The information processing method comprises the steps that when the first electronic device and the second electronic device are in communication, first multimedia data corresponding to a first user using the first electronic device are acquired through the image acquisition device; first expression feature information used for representing the expression of the first user is extracted from the first multimedia data; first parameter information related to the first expression feature information is sent to the second electronic device so that the second electronic device can display second multimedia data related to the first expression feature information based on the first parameter information.

Description

A kind of method of information processing and electronic equipment
Technical field
The present invention relates to communication technique field, particularly a kind of method of information processing and electronic equipment.
Background technology
Along with scientific and technical development, electronic technology has also obtained development at full speed, and the kind of electronic product is also more and more, and people have also enjoyed the various facilities that development in science and technology brings.People can pass through various types of electronic equipments now, enjoy the comfortable life bringing along with development in science and technology.
Wherein, a lot of electronic equipments all have communication function, such as: Video chat, in the time of Video chat, under normal circumstances, catch active user's expression by camera, then real-time Transmission is to another user, so that another user can understand active user's state.
Present inventor is realizing in the process of the embodiment of the present application technical scheme, at least finds to exist in prior art following technical problem:
Due in the prior art, in the time carrying out Video chat, is all conventionally that the whole dynamic image that catches active user is sent to another user, and then causes user's portrait to leak, so the lower technical problem of fail safe while existing Video chat in prior art.
Summary of the invention
The embodiment of the present invention provides a kind of method and electronic equipment of information processing, the lower technical problem of fail safe when solving the existing Video chat of prior art.
On the one hand, the application provides following technical scheme by an embodiment:
A method for information processing, described method is applied in the first electronic equipment, and described the first electronic equipment comprises image collecting device, and described the first electronic equipment can be communicated by letter with the second electronic equipment, and described method comprises:
In the time that described the first electronic equipment and described the second electronic equipment carry out communication, obtain corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
From described the first multimedia, extract the first expressive features information of the expression for characterizing described first user;
Described the second electronic equipment first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
Optionally, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, specifically comprises:
Obtain corresponding the first animation image of described first user;
By described the first expressive features information, determine described the second multi-medium data, described the second multi-medium data is described the first parameter information;
Described the second multi-medium data is sent to described the second electronic equipment.
Optionally, described in obtain corresponding the first animation image of described first user, specifically comprise:
Obtain the first image of described first user by described image collecting device collection;
Described the first image is carried out to feature extraction, obtain at least one characteristic information;
Based on described at least one characteristic information, determine described the first animation image.
Optionally, in the time that described the first expressive features information is described the first parameter information, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, is specially:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
Optionally, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment before or after, described method also comprises:
Obtain corresponding the first animation image of described first user;
Described the first animation image is sent to described the second electronic equipment.
On the other hand, the application provides following technical scheme by another embodiment:
A method for information processing, described method is applied in the second electronic equipment, and described the second electronic equipment can communicate with the first electronic equipment, and described method comprises:
Receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Determine second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
Optionally, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, be specially:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
Optionally, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, specifically comprise:
Obtain corresponding the first animation image of described first user;
Based on described the first animation image and described the first expressive features information, determine described the second multi-medium data.
Optionally, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described determine second multi-medium data relevant to described the first expressive features information based on described the first expressive features information after, described method also comprises:
Described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment; And/or
By described the second multi-medium data, described first user is upgraded at the corresponding head portrait of described chat software.
Optionally, described described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment, is specially:
Described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression; And/or
Described the second multi-medium data is shown in to described display interface in the mode of chat video.
On the other hand, the application provides following technical scheme by another embodiment:
A kind of electronic equipment, described electronic equipment comprises image collecting device, and described electronic equipment can be communicated by letter with the second electronic equipment, and described electronic equipment also comprises:
Acquisition module, in the time that described electronic equipment and described the second electronic equipment carry out communication, obtains corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
Extraction module, for extracting the first expressive features information of the expression for characterizing described first user from described the first multimedia;
The first sending module, for first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that described the second electronic equipment can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
Optionally, described the first sending module, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The first determining unit, for by described the first expressive features information, determines described the second multi-medium data, and described the second multi-medium data is described the first parameter information;
Transmitting element, for being sent to described the second electronic equipment by described the second multi-medium data.
Optionally, described the first acquiring unit, specifically comprises:
Gather subelement, for obtain the first image of described first user by described image collecting device collection;
Extract subelement, for described the first image is carried out to feature extraction, obtain at least one characteristic information;
Determine subelement, for based on described at least one characteristic information, determine described the first animation image.
Optionally, in the time that described the first expressive features information is described the first parameter information, described the first sending module, specifically for:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
Optionally, described electronic equipment also comprises:
Acquisition module, for before or after first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, obtains corresponding the first animation image of described first user;
The second sending module, for being sent to described the second electronic equipment by described the first animation image.
On the other hand, the application provides following technical scheme by another embodiment:
A kind of electronic equipment, described electronic equipment can communicate with the first electronic equipment, and described method comprises:
Receiver module, for receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Determination module, for determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
Optionally, described determination module, specifically for:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
Optionally, described determination module, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The second determining unit, for based on described the first animation image and described the first expressive features information, determines described the second multi-medium data.
Optionally, when described the second multi-medium data is described the first electronic equipment while being sent to the multi-medium data of described the second electronic equipment by a chat software, described electronic equipment also comprises:
Display module, for after determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information, described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment; And/or
Update module, for upgrading at the corresponding head portrait of described chat software described first user by described the second multi-medium data.
Optionally, described display module, specifically for:
Described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression; And/or
Described the second multi-medium data is shown in to described display interface in the mode of chat video.
The one or more technical schemes that provide in the embodiment of the present application, at least have following technique effect or advantage:
(1) due in the embodiment of the present application, adopt in the time that the first electronic equipment and the second electronic equipment carry out communication, gather the first multi-medium data of first user, and from described the first multi-medium data, extract the first expressive features information of the expression for characterizing first user, then the first relevant parameter information of described the first expressive features information is sent to the technical scheme of the second electronic equipment, due in the time transmitting data, only need the data that transmission is relevant to expression, and can not expose the portrait of first user, so reached the higher technique effect of fail safe.
(2) due in the embodiment of the present application, described the first parameter information can be only described the first expressive features information, so the data volume that is sent to described the second electronic equipment from described the first electronic equipment is less, reach the technique effect of saving network flow.
(3) due in the embodiment of the present application, described the second electronic equipment is obtaining the second multi-medium data based on described the first expressive features information, can be first chatting facial expression relevant to described the first expressive features information, so reached the more diversified technique effect of mode that obtains chatting facial expression.
(4) due in the embodiment of the present application, the first animation image and described the first expressive features information that can be based on corresponding with described first user, determine described the second multi-medium data, so reached the more accurate technique effect of described the second multi-medium data of determining; And the mode of determining described the second multi-medium data is also more diversified.
(5) due in the embodiment of the present application, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described the second multi-medium data can also be shown in to described chat software corresponding display interface on described the second electronic equipment, so reached the more diversified technique effect of chat mode.
Brief description of the drawings
Fig. 1 is the flow chart of the method for information processing in the embodiment of the present application one;
Fig. 2 is the flow chart that in the embodiment of the present application one method, the first parameter information is sent to the second electronic equipment;
Fig. 3 is the flow chart that obtains the first animation image in the embodiment of the present application one method;
Fig. 4 is the flow chart of the method for information processing in the embodiment of the present application two;
Fig. 5 is the flow chart of determining the second multi-medium data in the embodiment of the present application two methods;
Fig. 6 is the structure chart of electronic equipment in the embodiment of the present application four;
Fig. 7 is the structure chart of electronic equipment in the embodiment of the present application five.
Embodiment
The embodiment of the present invention provides a kind of method and electronic equipment of information processing, the lower technical problem of fail safe when solving the existing Video chat of prior art.
Technical scheme in the embodiment of the present application is to solve above-mentioned technical problem, and general thought is as follows:
In the time that the first electronic equipment and the second electronic equipment carry out communication, obtain by the image collecting device collection of the first electronic equipment corresponding the first multi-medium data of first user that uses described the first electronic equipment; From described the first multimedia, extract the first expressive features information of the expression for characterizing described first user; Described the second electronic equipment first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
While carrying out transfer of data due to employing such scheme, only need the data that transmission is relevant with expression, and can not expose the portrait of first user, so reached the higher technique effect of fail safe.
In order better to understand technique scheme, below in conjunction with Figure of description and concrete execution mode, technique scheme is described in detail.
Embodiment mono-
The embodiment of the present application one provides a kind of method of information processing, the method of described information processing is applied in the first electronic equipment, described the first electronic equipment comprises image collecting device, described the first electronic equipment can be communicated by letter with the second electronic equipment, and described the first electronic equipment is for example: notebook computer, mobile phone, panel computer etc.; Described the second electronic equipment is for example: notebook computer, mobile phone, panel computer etc.
Please refer to Fig. 1, the method for described information processing comprises the steps:
Step S101: in the time that described the first electronic equipment and described the second electronic equipment carry out communication, obtain corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
Step S102: the first expressive features information of extracting the expression for characterizing described first user from described the first multimedia;
Step S103: first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that described the second electronic equipment can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
Wherein, in step S101, the first multi-medium data gathering by described image collecting device, it can be multiple situation, such as: which kind of data still image, dynamic image, video data etc. are for described the first multi-medium data, and the embodiment of the present application is not restricted.
Wherein, in step S102, described the first expressive features information can be also various features information, and two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, is not limited to following two kinds of situations.
The first, the relative displacement information of the face that described the first expressive features information is described first user, such as: the corners of the mouth for example, with reference to moving 0.5cm on image (: the first animation image of described first user), can certainly be other value with respect to one, such as: 0.2cm, 1cm etc.; Such as: the eyeball 0.7cm that moves to left, can certainly be other value again, such as: 0.5cm, 1.2cm etc., be specially which kind of displacement information for described relative displacement information, the embodiment of the present application no longer itemizes, and is not restricted.
The second, the expression information that described the first expressive features information is described first user, such as: " crying ", " laughing at ", " frowning " etc., wherein, after the first multi-medium data of described first user being detected, just can pass through the expression information of described the first multi-medium data.
In specific implementation process, in the time that described the first multi-medium data is still image, described the first expressive features information is only for comprising the information relevant to the expression of described first user; But, in the time that described the first multi-medium data is dynamic image, described the first expressive features information except the relevant information of the expression that comprises described first user, temporal information that can also be corresponding with expression, for example as shown in table 1:
Table 1
Time Expression
12:00:01 Smile
12:00:05 Wild laugh
12:00:10 Frown
Wherein, in step S103, described the first parameter information can be divided into multiple situation, and then sends the also difference of process of described the first parameter information, and two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, are not limited to following two kinds of situations.
The first, please refer to Fig. 2, specifically comprises the steps:
Step S201: obtain corresponding the first animation image of described first user;
Step S202: by described the first expressive features information, determine described the second multi-medium data, described the second multi-medium data is described the first parameter information;
Step S203: described the second multi-medium data is sent to described the second electronic equipment.
Wherein, in step S201, obtain the process of described the first animation image, as shown in Figure 3, can comprise the steps: again
Step S301: the first image that obtains described first user by described image collecting device collection;
Step S302: described the first image is carried out to feature extraction, obtain at least one characteristic information;
Step S303: based on described at least one characteristic information, determine described the first animation image.
Wherein, in step S301, for the ease of described first user is carried out to feature identification, described the first image is generally the face image of described first user, can certainly be other image, and the embodiment of the present application is not restricted.
Wherein, in step S302, described at least one characteristic information can be various features information, such as: the shape of face of described first user, for example: oval face, oval face, state's word face etc.; The face feature of described first user, taking eyes as example, such as: slim eye, standard eye, elongated eye etc.; The face size of described first user and distributing position of face etc., can also be out of Memory certainly, and the embodiment of the present application no longer itemizes and is not restricted.
Wherein, due to the information such as shape of face size, face type, distribution based on determining described first user in step S302, so just can determine the first animation image of described first user based on these information, wherein, described at least one characteristic information is more detailed, and the appearance of described the first animation image and described first user is similar.
In specific implementation process, after determining described the first animation image by described at least one characteristic information, can also it be adjusted the operation based on user, such as: user by shape of face adjust thin, eyes are tuned up etc., and then more can meet user's demand.
In specific implementation process, described the first animation image can just gather in the time that needs are determined described the second multi-medium data, also can be collected in advance described the first animation image and be stored in described the first electronic equipment or server that described the first electronic equipment can be accessed, in the time that needs use described the first animation image, directly from the first animation image described in described the first electronic equipment or described server calls.
Wherein, owing to having determined the first animation image of described first user based on step S201, and determine the first expressive features information of described first user based on step S102, so, based on described the first expressive features information, described the first animation image is adjusted, just can be obtained described the second multi-medium data.
Wherein, in the time that described the first expressive features only comprises expression information, described the second multi-medium data is single image; In the time that described the first expressive features comprises expression information and time, can determine multiple images, then by described multiple images according to Time alignment, just can obtain the second dynamic image or the second video file.
The second, in the time that described the first expressive features information is described the first parameter information, is describedly sent to the second electronic equipment by the first parameter information relevant to described the first expressive features information, is specially:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
In first kind of way, on described the first electronic equipment after synthetic described the second animation image, be sent to described the second electronic equipment, but in order to save network traffics, also can directly described the first expressive features information be sent to described the second electronic equipment, then determine described the second multi-medium data by described the second electronic equipment based on described the first expressive features information.
In this case, described method also comprises:
Obtain corresponding the first animation image of described first user;
Described the first animation image is sent to described the second electronic equipment.
In specific implementation process, described the first animation image can be sent to described the second electronic equipment before described the first parameter information is sent to described the second electronic equipment, also can after described the first parameter information is sent to described the second electronic equipment, send, this embodiment of the present application is not restricted; In addition, described the first animation image is after being sent to described the second electronic equipment, described the second electronic equipment can be stored the mapping table of described first user and described the first animation image, like this, just need not all need to send described the first animation image at every turn.
As seen from the above description, due in the embodiment of the present application, described the first parameter information can be only described the first expressive features information, so the data volume that is sent to described the second electronic equipment from described the first electronic equipment is less, has reached the technique effect of saving network flow.
Embodiment bis-
Based on same inventive concept, the embodiment of the present application two provides a kind of method of information processing, described method is applied in the second electronic equipment, described the second electronic equipment can communicate with described the first electronic equipment, and described the second electronic equipment is for example: notebook computer, panel computer, mobile phone etc.; Described the first electronic equipment is for example: panel computer, notebook computer, mobile phone etc.
Please refer to Fig. 4, the method for described information processing comprises the steps:
Step S401: receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Step S402: determine second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
Wherein, in step S402, determine that described the second multi-medium data can adopt various ways, two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, are not limited to following two kinds of situations.
The first, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, be specially:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
In specific implementation process, after described the second electronic equipment receives described the first expressive features information, just can determine the at a time corresponding expression information of user, such as being: smile, then the expression that search is relevant to smile is as described the second multi-medium data, in specific implementation process, can under the expression file folder of the first chat software, search for the first corresponding chatting facial expression; Also can all search in this locality of described the second electronic equipment and obtain described the first chatting facial expression; Or by web search first chatting facial expression corresponding with " smile ", in specific implementation process, adopting which kind of mode to obtain described the first chatting facial expression, the embodiment of the present application no longer itemizes, and is not restricted.
In specific implementation process, after obtaining described the first chatting facial expression, such as " (* ^_^*) " afterwards, can be inserted in the chat message that first user is sent to, first user is in the time sending chat message like this, even without expression, the chat message of accepting of the second electronic equipment also can comprise expression, such as: suppose that the chat message that the first electronic equipment sends is " need not go to work today ", so, the second electronic equipment can receive following chat message " (* ^_^*) need not go to work today ", certainly, in specific implementation process, described the first chatting facial expression and described chat message can also be other chatting facial expression and information, this the embodiment of the present application is no longer itemized, and be not restricted.
As seen from the above description, due in the embodiment of the present application, described the second electronic equipment is obtaining the second multi-medium data based on described the first expressive features information, can be first chatting facial expression relevant to described the first expressive features information, so reached the more diversified technique effect of mode that obtains chatting facial expression.
The second, as shown in Figure 5, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, specifically comprise:
Step S501: obtain corresponding the first animation image of described first user;
Step S502: based on described the first animation image and described the first expressive features information, determine described the second multi-medium data.
Wherein, in step S501, described the second electronic equipment can obtain from many places described the first animation expression, for example: obtain described the first animation expression that the first electronic equipment is sent to, obtain described the first animation expression that described second electronic equipment this locality prestores, obtain described first animation expression etc. by cloud server, this embodiment of the present application is not restricted.
Wherein, due to described the first animation facial characteristics information that has comprised described first user of expressing one's feelings, and described the first expressive features packets of information is containing the expression information of described first user, so based on synthesizing of the two, just can determine the animation expression of described first user, namely described the second multi-medium data.
In specific implementation process, based on the difference of described the first expressive features information, the mode of synthetic described the second multi-medium data is also different, and two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, are not limited to following two kinds of situations.
The first, in the time of the relative displacement information of described the first expressive features information face that are described first user, can determine described the second multi-medium data by the face on described the first animation image are carried out to displacement, such as: if described the first expressive features information shows that the corners of the mouth of described first user is with respect to moving 0.5cm on described the first animation image, so, on described the first animation image according to the same equal proportion of 0.5cm, the corners of the mouth is upwarped, certainly, the difference of the relative displacement information based on wherein, synthetic data are also different, the embodiment of the present application no longer itemizes, and be not restricted.
The second, in the time of expression information that described the first expressive features information is described first user, can adjust parameter by the expression prestoring in the second electronic equipment and adjust described the first animation image, and then obtain described the second multi-medium data.Such as: in the time that described expression information is " smile ", expression is adjusted parameter and is: the corners of the mouth upwarps 5px, can certainly be other value; Described expression information is when " frowning ", and expression is adjusted parameter and is: eyebrow both ends horizontal shift invariant, the eyebrow center 3px etc. that moves up, can certainly be other value.In specific implementation process, applied environment that can be based on different, arranges different expressions and adjusts parameter, this embodiment of the present application is no longer itemized, and be not restricted.
As seen from the above description, due in the embodiment of the present application, the first animation image and described the first expressive features information that can be based on corresponding with described first user, determine described the second multi-medium data, so reached the more accurate technique effect of described the second multi-medium data of determining; And the mode of determining described the second multi-medium data is also more diversified.
In specific implementation process, described the second multi-medium data is for example: described the first electronic equipment is sent to the multi-medium data of described the second electronic equipment by a chat software; Or described the first electronic equipment is in the time of video calling, video content of described the second electronic equipment being sent to by network communication system etc.
Wherein, after determining described the second multi-medium data, can also carry out multiple operation based on described the second multi-medium data, being sent to two kinds that the multi-medium data of described the second electronic equipment enumerates wherein taking described the one the second multi-medium datas as: described the first electronic equipment by a chat software is below introduced, certainly, in specific implementation process, be not limited to following two kinds of situations.
The first, is shown in described chat software corresponding display interface on described the second electronic equipment by described the second multi-medium data.
In specific implementation process, when being shown in to described display unit, described the second multi-medium data can be divided into again multiple situation, and two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, are not limited to following two kinds of situations.
1. described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression;
Specifically, namely in the time that described the first electronic equipment sends chat message to described the second electronic equipment, described the second multi-medium data is shown together with chat content with the form of chatting facial expression, before can being shown in chatting facial expression, between or afterwards, the embodiment of the present application is not restricted.
2. described the second multi-medium data is shown in to described display interface in the mode of chat video.
Specifically, namely replace the first multi-medium data used in current Video chat with described the second multi-medium data, do not revealed with the privacy that ensures user.
As seen from the above description, due in the embodiment of the present application, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described the second multi-medium data can also be shown in to described chat software corresponding display interface on described the second electronic equipment, so reached the more diversified technique effect of chat mode.
The second, upgrades at the corresponding head portrait of described chat software described first user by described the second multi-medium data.
In specific implementation process, under normal circumstances, the corresponding different user of the first chat software can have different head portraits, and the head portrait of corresponding first user is conventionally set by first user, but in the embodiment of the present application, after the second electronic equipment obtains described the second multi-medium data, can upgrade described head portrait by described the second multi-medium data.
In specific implementation process, can upgrade described head portrait every Preset Time interval; Also can every initiation once new chat, just described head portrait is upgraded; Also can be after obtaining described the second multi-medium data, the selection based on user determines whether described head portrait to upgrade etc.For in specific implementation process, adopt which kind of mode to upgrade described head portrait, the embodiment of the present application is not restricted.
Embodiment tri-
In order to make technical staff described in this area can understand the method for the information processing that the embodiment of the present application one and embodiment bis-introduce, the embodiment of the present application three will stand in user's side and introduce the method for the information processing that the embodiment of the present application one and embodiment bis-introduce.
In the embodiment of the present application, described the first electronic equipment is the first notebook computer, and it comprises a camera; Described the second electronic equipment is the second notebook computer.
In the T1 moment, user A opens the first notebook computer, and user B opens the second notebook computer, and this two people of while opens QQ chat tool and chats.
In the T2 moment, user A sends Video chat request to user B, and described the first notebook computer detection sends it to the second electronic equipment after arriving described Video chat request, after described the second electronic equipment receives described Video chat request, click approve button; Agreement information is sent to described the first electronic equipment by described the second electronic equipment.
Described the first electronic equipment is opened described camera after receiving described agreement information, and catches the first dynamic video of user A, then described the first dynamic video is extracted to the first expressive features information of described first user, is assumed to be content shown in table 2:
Table 2
Time The first expressive features information
21:01:00 Stupefied
21:01:05 Angry
21:01:10 Frown
21:01:15 Stupefied
Then, described the first expressive features information is sent to described the second electronic equipment.
Described the second electronic equipment is after receiving described the first expressive features information, from the corresponding animation image of ontology acquisition user A of the second electronic equipment, then the corresponding relation based between expression and face, described animation image is adjusted, and then the image of four various expressions that comprise user A of acquisition, then by it according to synthetic the second dynamic video of time sequencing, and be presented in the Video chat window of the QQ on the display unit of the second electronic equipment.
Embodiment tetra-
Based on same inventive concept, the embodiment of the present application four provides a kind of electronic equipment, described electronic equipment comprises image collecting device, and described electronic equipment can communicate with the second electronic equipment, and described electronic equipment is the first electronic equipment that the embodiment of the present application one and embodiment bis-introduce.
Please refer to Fig. 6, described electronic equipment also comprises following structure:
Acquisition module 601, in the time that described electronic equipment and described the second electronic equipment carry out communication, obtains corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
Extraction module 602, for extracting the first expressive features information of the expression for characterizing described first user from described the first multimedia;
The first sending module 603, for first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that described the second electronic equipment can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
In specific implementation process, the first multi-medium data that described acquisition module 601 gathers can be polytype multi-medium data, such as: still image, dynamic image, video file etc.
In specific implementation process, the first parameter information that described the first sending module 603 sends can be multiple situation, and then the functional module of described the first sending module 603 is also different, two kinds that enumerate below are wherein introduced, certainly, in specific implementation process, be not limited to following two kinds of situations.
The first, described the first sending module 603, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The first determining unit, for by described the first expressive features information, determines described the second multi-medium data, and described the second multi-medium data is described the first parameter information;
Transmitting element, for being sent to described the second electronic equipment by described the second multi-medium data.
In specific implementation process, described the first acquiring unit, specifically comprises:
Gather subelement, for obtain the first image of described first user by described image collecting device collection;
Extract subelement, for described the first image is carried out to feature extraction, obtain at least one characteristic information;
Determine subelement, for based on described at least one characteristic information, determine described the first animation image.
The second, in the time that described the first expressive features information is described the first parameter information, described the first sending module, specifically for:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
In this case, described electronic equipment also comprises:
Acquisition module, for before or after first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, obtains corresponding the first animation image of described first user;
The second sending module, for being sent to described the second electronic equipment by described the first animation image.
As seen from the above description, due in the embodiment of the present application, described the first parameter information can be only described the first expressive features information, so the data volume that is sent to described the second electronic equipment from described the first electronic equipment is less, has reached the technique effect of saving network flow.
The electronic equipment of introducing due to the embodiment of the present application four, the electronic equipment adopting for implementing the method for the information processing in the embodiment of the present application one, so method of the information processing based on introducing in the embodiment of the present application one, the concrete structure that those skilled in the art can understand the electronic equipment in the present embodiment four with and various version, so introduce no longer in detail for this electronic equipment at this.As long as those skilled in the art implement the electronic equipment that the method for information processing in the embodiment of the present application one adopts, all belong to the scope of the application institute wish protection.
Embodiment five
Based on same inventive concept, the embodiment of the present application five provides a kind of electronic equipment, and described electronic equipment can communicate with the first electronic equipment, and described electronic equipment is the second electronic equipment that the embodiment of the present application one and embodiment bis-introduce.
Please refer to Fig. 7, described electronic equipment comprises:
Receiver module 701, for receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Determination module 702, for determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
In specific implementation process, described determination module 702 can be determined described second multi-medium data of various ways, and then its functional module is also different, and two kinds that enumerate below are wherein introduced, and certainly, in specific implementation process, are not limited to following two kinds of situations.
The first, described determination module 702, specifically for:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
As seen from the above description, due in the embodiment of the present application, described the second electronic equipment is obtaining the second multi-medium data based on described the first expressive features information, can be first chatting facial expression relevant to described the first expressive features information, so reached the more diversified technique effect of mode that obtains chatting facial expression.
The second, described determination module 702, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The second determining unit, for based on described the first animation image and described the first expressive features information, determines described the second multi-medium data.
As seen from the above description, due in the embodiment of the present application, the first animation image and described the first expressive features information that can be based on corresponding with described first user, determine described the second multi-medium data, so reached the more accurate technique effect of described the second multi-medium data of determining; And the mode of determining described the second multi-medium data is also more diversified.
In specific implementation process, when described the second multi-medium data is described the first electronic equipment while being sent to the multi-medium data of described the second electronic equipment by a chat software, described electronic equipment also comprises:
Display module, for after determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information, described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment; And/or
Update module, for upgrading at the corresponding head portrait of described chat software described first user by described the second multi-medium data.
In specific implementation process, described display module, specifically for:
Described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression; And/or
Described the second multi-medium data is shown in to described display interface in the mode of chat video.
As seen from the above description, due in the embodiment of the present application, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described the second multi-medium data can also be shown in to described chat software corresponding display interface on described the second electronic equipment, so reached the more diversified technique effect of chat mode.
The electronic equipment of introducing due to the embodiment of the present application five, the electronic equipment adopting for implementing the method for the information processing in the embodiment of the present application two, so method of the information processing based on introducing in the embodiment of the present application two, the concrete structure that those skilled in the art can understand the electronic equipment in the present embodiment five with and various version, so introduce no longer in detail for this electronic equipment at this.As long as those skilled in the art implement the electronic equipment that the method for information processing in the embodiment of the present application two adopts, all belong to the scope of the application institute wish protection.
One or more technical schemes that the application provides, at least have following technique effect or advantage:
(1) due in the embodiment of the present application, adopt in the time that the first electronic equipment and the second electronic equipment carry out communication, gather the first multi-medium data of first user, and from described the first multi-medium data, extract the first expressive features information of the expression for characterizing first user, then the first relevant parameter information of described the first expressive features information is sent to the technical scheme of the second electronic equipment, due in the time transmitting data, only need the data that transmission is relevant to expression, and can not expose the portrait of first user, so reached the higher technique effect of fail safe.
(2) due in the embodiment of the present application, described the first parameter information can be only described the first expressive features information, so the data volume that is sent to described the second electronic equipment from described the first electronic equipment is less, reach the technique effect of saving network flow.
(3) due in the embodiment of the present application, described the second electronic equipment is obtaining the second multi-medium data based on described the first expressive features information, can be first chatting facial expression relevant to described the first expressive features information, so reached the more diversified technique effect of mode that obtains chatting facial expression.
(4) due in the embodiment of the present application, the first animation image and described the first expressive features information that can be based on corresponding with described first user, determine described the second multi-medium data, so reached the more accurate technique effect of described the second multi-medium data of determining; And the mode of determining described the second multi-medium data is also more diversified.
(5) due in the embodiment of the present application, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described the second multi-medium data can also be shown in to described chat software corresponding display interface on described the second electronic equipment, so reached the more diversified technique effect of chat mode.
Although described the preferred embodiments of the present invention, once those skilled in the art obtain the basic creative concept of cicada, can make other change and amendment to these embodiment.So claims are intended to be interpreted as comprising preferred embodiment and fall into all changes and the amendment of the scope of the invention.
Obviously, those skilled in the art can carry out various changes and modification and not depart from the spirit and scope of the present invention the present invention.Like this, if these amendments of the present invention and within modification belongs to the scope of the claims in the present invention and equivalent technologies thereof, the present invention is also intended to comprise these changes and modification interior.

Claims (20)

1. a method for information processing, described method is applied in the first electronic equipment, and described the first electronic equipment comprises image collecting device, and described the first electronic equipment can be communicated by letter with the second electronic equipment, it is characterized in that, and described method comprises:
In the time that described the first electronic equipment and described the second electronic equipment carry out communication, obtain corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
From described the first multimedia, extract the first expressive features information of the expression for characterizing described first user;
Described the second electronic equipment first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
2. the method for claim 1, is characterized in that, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, specifically comprises:
Obtain corresponding the first animation image of described first user;
By described the first expressive features information, determine described the second multi-medium data, described the second multi-medium data is described the first parameter information;
Described the second multi-medium data is sent to described the second electronic equipment.
3. method as claimed in claim 2, is characterized in that, described in obtain corresponding the first animation image of described first user, specifically comprise:
Obtain the first image of described first user by described image collecting device collection;
Described the first image is carried out to feature extraction, obtain at least one characteristic information;
Based on described at least one characteristic information, determine described the first animation image.
4. the method for claim 1, is characterized in that, in the time that described the first expressive features information is described the first parameter information, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, is specially:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
5. method as claimed in claim 4, is characterized in that, described first parameter information relevant to described the first expressive features information is sent to the second electronic equipment before or after, described method also comprises:
Obtain corresponding the first animation image of described first user;
Described the first animation image is sent to described the second electronic equipment.
6. a method for information processing, described method is applied in the second electronic equipment, and described the second electronic equipment can communicate with the first electronic equipment, it is characterized in that, and described method comprises:
Receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Determine second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
7. method as claimed in claim 6, is characterized in that, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, is specially:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
8. method as claimed in claim 6, is characterized in that, described based on definite second multi-medium data relevant to described the first expressive features information of described the first expressive features information, specifically comprises:
Obtain corresponding the first animation image of described first user;
Based on described the first animation image and described the first expressive features information, determine described the second multi-medium data.
9. method as claimed in claim 8, it is characterized in that, when described the second multi-medium data is that described the first electronic equipment is while being sent to the multi-medium data of described the second electronic equipment by a chat software, described determine second multi-medium data relevant to described the first expressive features information based on described the first expressive features information after, described method also comprises:
Described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment; And/or
By described the second multi-medium data, described first user is upgraded at the corresponding head portrait of described chat software.
10. method as claimed in claim 9, is characterized in that, described described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment, is specially:
Described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression; And/or
Described the second multi-medium data is shown in to described display interface in the mode of chat video.
11. 1 kinds of electronic equipments, described electronic equipment comprises image collecting device, and described electronic equipment can be communicated by letter with the second electronic equipment, it is characterized in that, and described electronic equipment also comprises:
Acquisition module, in the time that described electronic equipment and described the second electronic equipment carry out communication, obtains corresponding the first multi-medium data of first user that uses described the first electronic equipment by described image collecting device collection;
Extraction module, for extracting the first expressive features information of the expression for characterizing described first user from described the first multimedia;
The first sending module, for first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, so that described the second electronic equipment can present second multi-medium data relevant to described the first expressive features information based on described the first parameter information information.
12. electronic equipments as claimed in claim 11, is characterized in that, described the first sending module, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The first determining unit, for by described the first expressive features information, determines described the second multi-medium data, and described the second multi-medium data is described the first parameter information;
Transmitting element, for being sent to described the second electronic equipment by described the second multi-medium data.
13. electronic equipments as claimed in claim 12, is characterized in that, described the first acquiring unit, specifically comprises:
Gather subelement, for obtain the first image of described first user by described image collecting device collection;
Extract subelement, for described the first image is carried out to feature extraction, obtain at least one characteristic information;
Determine subelement, for based on described at least one characteristic information, determine described the first animation image.
14. electronic equipments as claimed in claim 11, is characterized in that, in the time that described the first expressive features information is described the first parameter information, and described the first sending module, specifically for:
Described the first expressive features information is sent to described the second electronic equipment, so that described the second electronic equipment is determined described the second multi-medium data based on described the first expressive features information.
15. electronic equipments as claimed in claim 14, is characterized in that, described electronic equipment also comprises:
Acquisition module, for before or after first parameter information relevant to described the first expressive features information is sent to the second electronic equipment, obtains corresponding the first animation image of described first user;
The second sending module, for being sent to described the second electronic equipment by described the first animation image.
16. 1 kinds of electronic equipments, described electronic equipment can communicate with the first electronic equipment, it is characterized in that, and described electronic equipment comprises:
Receiver module, for receive sent by the first electronic equipment for characterizing the first expressive features information of expression of first user;
Determination module, for determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information.
17. electronic equipments as claimed in claim 16, is characterized in that, described determination module, specifically for:
Determine first chatting facial expression corresponding with described the first expressive features information based on described the first expressive features information, described the first chatting facial expression is described the second multi-medium data.
18. electronic equipments as claimed in claim 16, is characterized in that, described determination module, specifically comprises:
The first acquiring unit, for obtaining corresponding the first animation image of described first user;
The second determining unit, for based on described the first animation image and described the first expressive features information, determines described the second multi-medium data.
19. electronic equipments as claimed in claim 18, is characterized in that, when described the second multi-medium data is described the first electronic equipment while being sent to the multi-medium data of described the second electronic equipment by a chat software, described electronic equipment also comprises:
Display module, for after determining second multi-medium data relevant to described the first expressive features information based on described the first expressive features information, described the second multi-medium data is shown in to described chat software corresponding display interface on described the second electronic equipment; And/or
Update module, for upgrading at the corresponding head portrait of described chat software described first user by described the second multi-medium data.
20. electronic equipments as claimed in claim 19, is characterized in that, described display module, specifically for:
Described the second multi-medium data is shown in to described display interface in the mode of chatting facial expression; And/or
Described the second multi-medium data is shown in to described display interface in the mode of chat video.
CN201210590085.4A 2012-12-28 2012-12-28 A kind of method and electronic equipment of information processing Active CN103905773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210590085.4A CN103905773B (en) 2012-12-28 2012-12-28 A kind of method and electronic equipment of information processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210590085.4A CN103905773B (en) 2012-12-28 2012-12-28 A kind of method and electronic equipment of information processing

Publications (2)

Publication Number Publication Date
CN103905773A true CN103905773A (en) 2014-07-02
CN103905773B CN103905773B (en) 2018-08-10

Family

ID=50996898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210590085.4A Active CN103905773B (en) 2012-12-28 2012-12-28 A kind of method and electronic equipment of information processing

Country Status (1)

Country Link
CN (1) CN103905773B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407313A (en) * 2015-10-28 2016-03-16 掌赢信息科技(上海)有限公司 Video calling method, equipment and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1127388A (en) * 1994-07-28 1996-07-24 株式会社半导体能源研究所 Information processing system
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
US20050196018A1 (en) * 2001-12-31 2005-09-08 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
JP2006121158A (en) * 2004-10-19 2006-05-11 Olympus Corp Videophone system
CN102271241A (en) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1127388A (en) * 1994-07-28 1996-07-24 株式会社半导体能源研究所 Information processing system
US20050196018A1 (en) * 2001-12-31 2005-09-08 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
JP2006121158A (en) * 2004-10-19 2006-05-11 Olympus Corp Videophone system
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
CN102271241A (en) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407313A (en) * 2015-10-28 2016-03-16 掌赢信息科技(上海)有限公司 Video calling method, equipment and system

Also Published As

Publication number Publication date
CN103905773B (en) 2018-08-10

Similar Documents

Publication Publication Date Title
WO2021013158A1 (en) Display method and related apparatus
CN103906010B (en) Method, machine readable storage medium and the server of multiple terminal room synchronization messages
EP2890088A1 (en) Method for displaying schedule reminding information, terminal device and cloud server
CN110166439B (en) Equipment sharing method, terminal, router and server
KR20150024526A (en) Information Obtaining Method and Apparatus
US8866587B2 (en) Remote display control
CN107908765B (en) Game resource processing method, mobile terminal and server
CN107908330B (en) The management method and mobile terminal of application icon
WO2019132564A1 (en) Method and system for classifying time-series data
US20210144197A1 (en) Method for Presenting Schedule Reminder Information, Terminal Device, and Cloud Server
EP2492791A1 (en) Augmented reality-based file transfer method and file transfer system thereof
CN107911547A (en) Interactive system, the method for interface layout
CN108536349B (en) Icon management method and mobile terminal
CN108391253B (en) application program recommendation method and mobile terminal
CN111158815B (en) Dynamic wallpaper blurring method, terminal and computer readable storage medium
CN113238727A (en) Screen switching method and device, computer readable medium and electronic equipment
CN108601048B (en) Flow control method and mobile terminal
CN110223615B (en) Advertisement display control method, device, medium and advertisement push server
CN108200287B (en) Information processing method, terminal and computer readable storage medium
CN111338745A (en) Deployment method and device of virtual machine and intelligent equipment
CN104375963B (en) Control system and method based on buffer consistency
CN110489657B (en) Information filtering method and device, terminal equipment and storage medium
CN109688402A (en) A kind of exchange method based on hologram, client and system
CN103905773A (en) Information processing method and electronic devices
CN110825475A (en) Input method and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant