CN103905772A - Prompting method and electronic equipment - Google Patents
Prompting method and electronic equipment
- Publication number
- CN103905772A (application CN201210587646.5A)
- Authority
- CN
- China
- Prior art keywords
- session
- identification information
- window
- session data
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a prompting method and an electronic device. The prompting method is applied to an electronic device that comprises a display unit. The method comprises the following steps: when first identification information corresponding to a first user is used to hold a multi-user session with N pieces of identification information corresponding to N remote users, N session windows corresponding one-to-one to the N pieces of identification information are displayed on the display unit; when the electronic device receives first session data, second identification information corresponding to the first session data is determined from the N pieces of identification information, the second identification information corresponding to a first session window; and prompt information is displayed in the first session window and the first session data is output, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
Description
Technical field
The present invention relates to the field of information processing, and in particular to a prompting method and an electronic device.
Background art
With the development of science and technology, electronic technology has advanced rapidly and the variety of electronic products keeps growing; people now enjoy, through all kinds of electronic devices, the many conveniences that this development brings.
When an electronic device is connected to a network, its user can hold sessions with users of other electronic devices, such as multi-user video sessions, multi-user voice sessions and multi-user text sessions. Usually, during such a multi-user session, a session window corresponding to each of the remote users is displayed on the display unit of the electronic device.
In the course of implementing the technical solutions of the embodiments of the present application, the inventors found at least the following technical problem in the prior art:
In the prior art, when a multi-user session is held on an electronic device, only the session windows corresponding to the remote users are displayed, and no prompt is given for the user who is currently speaking, so control of the multi-user session is not accurate enough.
Summary of the invention
The embodiments of the present invention provide a prompting method and an electronic device, to solve the prior-art technical problem that control of a multi-user session is not accurate enough.
In one aspect, the present application provides the following technical solution through one embodiment:
A prompting method, applied to an electronic device comprising a display unit, the method comprising:
when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session, displaying, on the display unit, N session windows corresponding one-to-one to the N pieces of identification information, N being an integer greater than or equal to 2;
when the electronic device receives first session data, determining, from the N pieces of identification information, second identification information corresponding to the first session data, wherein the second identification information corresponds to a first session window;
displaying prompt information in the first session window and outputting the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
Optionally, the multi-user session is specifically: a multi-user video session and/or a multi-user voice session and/or a multi-user text session.
Optionally, the multi-user video session is specifically: a video session held by multiple users in the same group and/or video sessions held by multiple users in different groups;
the multi-user voice session is specifically: a voice session held by multiple users in the same group and/or voice sessions held by multiple users in different groups.
Optionally, the determining, from the N pieces of identification information, second identification information corresponding to the first session data is specifically:
extracting the second identification information from the first session data.
Optionally, the displaying prompt information in the first session window is specifically:
setting a label in the first session window; and/or
enlarging the display size of the first session window from a first display size to a second display size different from the first display size; and/or
displaying the first session window at the forefront of the display unit; and/or
generating a vibration for the first session window; and/or
displaying the first session window highlighted.
Optionally, the outputting the first session data is specifically:
outputting the first session data by voice; and/or
outputting the first session data as text and/or images; and/or
outputting the first session data as video.
Optionally, when the multi-user session is the text session, after the determining, from the N pieces of identification information, the second identification information corresponding to the first session data, the method further comprises:
moving a cursor to a first input interface corresponding to the first session window.
In another aspect, the present application provides the following technical solution through another embodiment:
An electronic device comprising a display unit, the electronic device further comprising:
a display chip, configured to display, on the display unit, N session windows corresponding one-to-one to N pieces of identification information when first identification information corresponding to a first user and the N pieces of identification information corresponding to N remote users are used to hold a multi-user session, N being an integer greater than or equal to 2;
a receiving chip, configured to determine, from the N pieces of identification information, second identification information corresponding to first session data when the electronic device receives the first session data, wherein the second identification information corresponds to a first session window;
an output chip, configured to display prompt information in the first session window and output the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
Optionally, the multi-user session is specifically: a multi-user video session and/or a multi-user voice session and/or a multi-user text session.
Optionally, the multi-user video session is specifically: a video session held by multiple users in the same group and/or video sessions held by multiple users in different groups;
the multi-user voice session is specifically: a voice session held by multiple users in the same group and/or voice sessions held by multiple users in different groups.
Optionally, the receiving chip is specifically configured to:
extract the second identification information from the first session data.
Optionally, the output chip is specifically configured to:
set a label in the first session window; and/or
enlarge the display size of the first session window from a first display size to a second display size different from the first display size; and/or
display the first session window at the forefront of the display unit; and/or
generate a vibration for the first session window; and/or
display the first session window highlighted.
Optionally, the output chip is specifically configured to:
output the first session data by voice; and/or
output the first session data as text and/or images; and/or
output the first session data as video.
Optionally, the electronic device further comprises:
a moving chip, configured to move, when the multi-user session is the text session, a cursor to a first input interface corresponding to the first session window after the second identification information corresponding to the first session data is determined from the N pieces of identification information.
The one or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
(1) In the embodiments of the present application, when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session and first session data is received, prompt information can be generated for the first session window corresponding to the first session data and the first session data can be output in that window; the identification information corresponding to the first session data can thus be prompted to the user, so control of the multi-user session becomes more accurate.
(2) In the embodiments of the present application, the prompt information can be generated in a variety of ways, for example by setting a label in the first session window, enlarging the display size of the first session window, displaying the first session window at the forefront of the display unit, generating a vibration for the first session window, or displaying the first session window highlighted, so the ways of generating the prompt information are more diverse.
(3) In the embodiments of the present application, when the multi-user session is a text session and the first session window corresponding to the first session data is determined, the cursor can also be moved to the input interface corresponding to the first session window; since the user does not need to adjust it manually, control of the text session is more accurate and more flexible, and the user experience is better.
Brief description of the drawings
Fig. 1 is a flow chart of the prompting method in Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of multiple session windows displayed on the display unit in the method of Embodiment 1 of the present application;
Fig. 3a-3c are schematic diagrams of the display unit when different kinds of prompt information are generated in the method of Embodiment 1 of the present application;
Fig. 4a-4c are schematic diagrams of multiple chat windows displayed in Embodiment 2 of the present application;
Fig. 5 is a structural diagram of the electronic device in Embodiment 2 of the present application.
Detailed description of the embodiments
The embodiments of the present invention provide a prompting method and an electronic device, to solve the prior-art technical problem that control of a multi-user session is not accurate enough.
The technical solutions in the embodiments of the present application solve the above technical problem; the general idea is as follows:
when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session, N session windows corresponding one-to-one to the N pieces of identification information are displayed on the display unit of the electronic device, N being an integer greater than or equal to 2; when the electronic device receives first session data, second identification information corresponding to the first session data is determined from the N pieces of identification information, wherein the second identification information corresponds to a first session window; prompt information is displayed in the first session window and the first session data is output, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
With the above solution for controlling the multi-user session, if first session data is received, prompt information can be generated for the first session window corresponding to the first session data and the first session data can be output in that window; the identification information corresponding to the first session data can thus be prompted to the user, so control of the multi-user session becomes more accurate.
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific implementations.
Embodiment 1
Embodiment 1 of the present application provides a prompting method. The method is applied to an electronic device comprising a display unit, the electronic device being, for example, a tablet computer, a notebook computer or a mobile phone.
Referring to Fig. 1, the prompting method comprises the following steps:
Step S101: when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session, displaying, on the display unit, N session windows corresponding one-to-one to the N pieces of identification information, N being an integer greater than or equal to 2;
Step S102: when the electronic device receives first session data, determining, from the N pieces of identification information, second identification information corresponding to the first session data, wherein the second identification information corresponds to a first session window;
Step S103: displaying prompt information in the first session window and outputting the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
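For illustration only, the following Python sketch mirrors steps S101-S103 for a simple case; the SessionWindow structure, field names and the prompt flag are assumptions made for the sketch and are not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class SessionWindow:
    identification: str                      # identification information of a remote user
    messages: list = field(default_factory=list)
    prompted: bool = False                   # whether prompt information is currently shown

def start_session(first_user_id: str, remote_ids: list) -> dict:
    # Step S101: display one session window per remote identification (N >= 2)
    assert len(remote_ids) >= 2
    return {rid: SessionWindow(rid) for rid in remote_ids}

def on_session_data(windows: dict, first_session_data: dict) -> None:
    # Step S102: determine the second identification information from the received data
    second_id = first_session_data["identification"]
    window = windows[second_id]              # the first session window
    # Step S103: show prompt information and output the first session data in that window
    window.prompted = True                   # e.g. label / enlarge / raise / highlight
    window.messages.append(first_session_data["content"])

windows = start_session("user A", ["user B1", "user B2", "user B3"])
on_session_data(windows, {"identification": "user B1",
                          "time": "11:40",
                          "content": "example message"})
```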
In step S101, the multi-user session may be a multi-user session of any kind; three kinds are listed and introduced below.
First, the multi-user session is specifically: a multi-user video session.
In a specific implementation, the multi-user video session can again fall into several situations; two of them are listed below, and of course the implementation is not limited to these two.
(1) The multi-user video session is specifically: a video session held by multiple users in the same group.
In a specific implementation, the first user and several remote users may need to discuss a common topic, so the first identification information corresponding to the first user and the N pieces of identification information corresponding to the N remote users are all pulled into one video chat group; the windows corresponding to the first user and the N remote users are then displayed side by side on the display interface, and the first user and the N remote users discuss together, for example discussing food, travel or news.
In a specific implementation, the identification information may be the ID of the first user's account in the first chat software, a login mailbox address, and so on; the embodiment of the present application is not limited in this respect.
Taking the identification information being an ID as an example, suppose the first user holds a video chat with three remote users, where the user ID of the first user is user A and the user IDs of the three remote users are respectively user B1, user B2 and user B3. Then, as shown in Fig. 2, the following content may be displayed side by side on the display unit:
The chat window 201 of user A;
The chat window 202 of user B1;
The chat window 203 of user B2;
The chat window 204 of user B3.
(2) The multi-user video session is specifically: video sessions held by multiple users in different groups.
In a specific implementation, the first identification information corresponding to the first user may also be used to hold separate video sessions with each of the N remote users. Again taking the user ID of the first user being user A and the N remote users being three remote users whose user IDs are respectively user B1, user B2 and user B3 as an example, user A discusses travel with user B1, food with user B2, and news with user B3.
Second, the multi-user session is specifically: a multi-user voice session.
In a specific implementation, the multi-user voice session can also fall into several situations; two of them are listed below, and of course the implementation is not limited to these two.
(1) The multi-user voice session is specifically: a voice session held by multiple users in the same group.
(2) The multi-user voice session is specifically: voice sessions held by multiple users in different groups.
The way multiple users hold a voice session in the same group is similar to the way multiple users hold a video session in the same group, and the way multiple users hold voice sessions in different groups is similar to the way multiple users hold video sessions in different groups, so they are not described in detail here.
Third, the multi-user session is specifically: a multi-user text session.
In a specific implementation, there are different session windows between the first identification information and each of the N pieces of identification information corresponding to the N remote users, each session window corresponds to an input interface, and the input interface can receive the text or image data input by the first user.
In a specific implementation, the input interface may be an input interface contained in the session window, for example when the session window is divided into several interfaces including an input interface, a text/image display interface, a video display interface and so on; the input interface may also be a separate interface, in which case the session window only contains a data display window. The embodiment of the present application is not limited as to which correspondence exists between the input interface and the session window.
In a specific implementation, the multi-user session is not limited to the above three situations; the multi-user session may be held in only one of these ways or in several ways at the same time, and the embodiment of the present application is not limited in this respect.
In step S102, the determining, from the N pieces of identification information, second identification information corresponding to the first session data is specifically:
extracting the second identification information from the first session data.
In a specific implementation, after the electronic device of the first remote user obtains the first remote user's chat data, it processes the chat data, for example combining it with the corresponding second identification information, the chat time and so on, thereby obtaining the first session data, and then sends the first session data to the electronic device where the first identification information is located. For example, as shown in the following table, the first session data comprises the following content:
Identification information | Chat time | Chat content
---|---|---
User B1 | 11:40 | Shall we go for the taro chicken at Gaosheng Bridge today?
Then, the current electronic device parses the first session data and extracts from it the second identification information corresponding to the first session data, namely user B1.
Of course, in the embodiment of the present application the second identification information can also be obtained in other ways, for example by assigning a number to each piece of identification information, carrying in the first session data the number corresponding to the second identification information when it is sent to the electronic device, and then having the electronic device determine the second identification information based on the correspondence between identification information and numbers.
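A minimal sketch of the two extraction approaches just described, assuming the first session data arrives either as a record that carries the identification field directly or as a record carrying only a pre-agreed number; the field names are illustrative only.

```python
from typing import Optional

def extract_second_id(first_session_data: dict,
                      number_to_id: Optional[dict] = None) -> str:
    # Approach 1: the identification information is packed into the session data itself
    if "identification" in first_session_data:
        return first_session_data["identification"]
    # Approach 2: only a number is carried; map it back using the agreed correspondence
    return number_to_id[first_session_data["number"]]

packed = {"identification": "user B1", "time": "11:40", "content": "..."}
numbered = {"number": 2, "time": "11:41", "content": "..."}
table = {1: "user B1", 2: "user B2", 3: "user B3"}

print(extract_second_id(packed))            # user B1
print(extract_second_id(numbered, table))   # user B2
```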
In a specific implementation, when the multi-user session is the text session, after the second identification information is determined based on step S102, the method further comprises:
moving the cursor to the first input interface corresponding to the first session window.
In a specific implementation, when the multi-user session is the text session, the electronic device needs to receive the input operation of the first user through the first input interface and then send the corresponding chat message to the first remote user; in this case the cursor can be moved to the first input interface so that the input operation is convenient for the user, as sketched below.
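On a desktop implementation, moving the cursor can simply mean giving keyboard focus to the input widget of the first session window; the tkinter sketch below illustrates this under the assumption that each session window owns its own entry widget.

```python
import tkinter as tk

root = tk.Tk()
inputs = {}
for user in ("user C1", "user C2", "user C3"):
    frame = tk.LabelFrame(root, text=user)   # one session window per remote user
    frame.pack(side="left", padx=4, pady=4)
    entry = tk.Entry(frame, width=20)        # the window's input interface
    entry.pack()
    inputs[user] = entry

def on_feedback(second_id: str) -> None:
    # Move the cursor to the input interface corresponding to the first session window
    inputs[second_id].focus_set()

on_feedback("user C2")                       # keyboard focus jumps to user C2's input
root.mainloop()
```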
As can be seen from the above description, in the embodiment of the present application, when the multi-user session is a text session and the first session window corresponding to the first session data is determined, the cursor can also be moved to the input interface corresponding to the first session window; since the user does not need to adjust it manually, control of the text session is more accurate and more flexible, and the user experience is better.
In step S103, the first session window may be one session window or several session windows, which is not limited in the embodiment of the present application. In addition, the prompt information displayed in the first session window can take several forms; several of them are listed below, and of course the implementation is not limited to these.
First, a label is set in the first session window.
In a specific implementation, the label can take various forms, such as text, a picture or a figure. When the label is displayed, an ordinary display mode can be used, or a special display mode such as a transparent display, an embossed display or a floating display; the embodiment of the present application is not limited as to how the label is set.
Taking the first remote user being user B as an example, as shown in Fig. 3a, a label 301 can be set for it.
Second, the display size of the first session window is enlarged from a first display size to a second display size different from the first display size.
In a specific implementation, the X-axis dimension of the display size of the first session window can be enlarged; taking the display size of the first session window being 50px*100px as an example, it is enlarged to 70px*100px. The Y-axis dimension can also be enlarged, for example to 50px*120px, or the first session window can be enlarged proportionally, for example to 60px*120px. The embodiment of the present application is not limited as to how the first display size is enlarged.
In addition, after the display size of the first display window is enlarged, the display sizes of the other display windows different from the first display window may be reduced or kept unchanged; the embodiment of the present application is not limited in this respect.
Fig. 3b is a schematic diagram in which the display size of the first display window is enlarged.
Third, the first session window is displayed at the forefront of the display unit.
In a specific implementation, displaying the first session window at the forefront of the display unit can fall into several situations; two of them are listed below, and of course the implementation is not limited to these two.
(1) The N session windows overlap, that is, the windows can occlude one another. In this case, displaying the first session window at the forefront means adjusting the stacking order of the first session window so that it is not occluded.
(2) The N session windows are arranged side by side and no occlusion occurs. In this case, displaying the first session window at the forefront means displaying the first session window floating, namely presenting it visually at the forefront.
Fig. 3c is a schematic diagram in which the first session window is displayed at the forefront.
Fourth, a vibration is generated for the first session window.
In a specific implementation, several motors can be arranged at different positions of the electronic device, with different sessions corresponding to different vibrations; making different motors vibrate causes different regions of the electronic device to produce a tactile sensation, and the first session window can thus be identified.
Fifth, the first session window is displayed highlighted.
In a specific implementation, the first session window can be highlighted in various ways, such as adding a frame to the first session window, thickening the frame of the first session window, or changing the border color of the first session window; the embodiment of the present application is not limited as to how the first session window is highlighted.
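The five ways of showing prompt information can be treated as interchangeable strategies selected per device or per user preference. The sketch below is illustrative only; the window model, the strategy names and the motor stub are assumptions, not the patent's terminology.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Window:
    title: str
    width: int = 50
    height: int = 100
    z_order: int = 1                 # 0 means the window is at the forefront
    label: Optional[str] = None
    highlighted: bool = False

def drive_motor_near(window: Window) -> None:
    print(f"vibrating motor under window '{window.title}'")   # hardware stub

def prompt(window: Window, mode: str) -> None:
    if mode == "label":
        window.label = "new message"           # first way: set a label in the window
    elif mode == "enlarge":
        window.width, window.height = 70, 100  # second way: 50px*100px -> 70px*100px
    elif mode == "raise":
        window.z_order = 0                     # third way: bring the window to the forefront
    elif mode == "vibrate":
        drive_motor_near(window)               # fourth way: vibrate the region of the window
    elif mode == "highlight":
        window.highlighted = True              # fifth way: thicken / recolor the border

w = Window("user B1")
prompt(w, "enlarge")
prompt(w, "highlight")
```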
As can be seen from the above description, in the embodiment of the present application the prompt information can be generated in various ways, for example by setting a label in the first session window, enlarging the display size of the first session window, displaying the first session window at the forefront of the display unit, generating a vibration for the first session window, or displaying the first session window highlighted, so the ways of generating the prompt information are more diverse.
In addition, in a specific implementation, the first session data may be output in only one way or in several ways at the same time; the embodiment of the present application is not limited in this respect.
In a specific implementation, in step S103 the way of outputting the first session data can also fall into several situations; three of them are listed below, and of course the implementation is not limited to these three.
First, the first session data is output by voice.
In a specific implementation, when the first session data is itself voice information, it can be output directly as voice; when the first session data is text data, it can be converted into voice information for output, for example by simulating the voice of the first remote user corresponding to the second identification information; when the first session data is video data, the voice information extracted from it can be output.
Second, the first session data is output as text and/or images.
In a specific implementation, when the first session data is itself text and/or image data, it is displayed directly in the first session window; when it is voice data or video data, the text data extracted from it can be output.
Third, the first session data is output as video.
In a specific implementation, when the first session data is video data, it can be output directly as video; when it is voice data or text data, the text data can first be converted into voice data, the voice data can then be combined with certain pictures, such as pictures related to the content of the first session or pictures related to the first remote user, and the result can then be output.
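A sketch of choosing the output form from the original attribute of the first session data; the helper functions stand in for whatever playback or rendering facilities a real device would use, and the conversion paths (text to synthesized voice, video to extracted text) are deliberately left as comments because they depend on the device.

```python
def play_audio(payload):      print("playing audio...")
def show_in_window(payload):  print("showing:", payload)
def play_video(payload):      print("playing video...")

def output_session_data(data: dict) -> None:
    kind = data["kind"]                    # "voice", "text", "image" or "video"
    if kind == "voice":
        play_audio(data["payload"])        # output directly by voice
    elif kind in ("text", "image"):
        show_in_window(data["payload"])    # display directly in the first session window
    elif kind == "video":
        play_video(data["payload"])        # output directly as video
    # Conversion paths (text -> synthesized voice, video -> extracted text, ...)
    # would plug in here.

output_session_data({"kind": "text",
                     "payload": "Have you received the temporary password?"})
```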
In a specific implementation, the first session content can be output according to its original attribute, for example voice data is output by voice, text and/or image data is output as text and/or images, and video data is output as video; it can also be converted first and then output according to the converted attribute. In the embodiment of the present application, the first session data may be output with only one attribute or in several ways, and the embodiment of the present application is not limited in this respect.
Embodiment 2
In order to enable those skilled in the art to understand the specific implementation of the prompting method introduced in Embodiment 1 of the present application, this embodiment introduces the specific implementation of the method in detail from the user's perspective. In this embodiment, the electronic device being a notebook computer is taken as an example.
At time T1, user A turns on the notebook computer and opens the QQ chat software in it to chat via QQ.
At time T2, the user opens the chat windows whose identification information is respectively user C1, user C2 and user C3; after detecting the user's opening operations, the notebook computer displays these three chat windows one on top of another, as shown in Fig. 4a, respectively as follows:
the first chat window 41, corresponding to user C1, comprising a first input interface 41a;
the second chat window 42, corresponding to user C2, comprising a second input interface 42a;
the third chat window 43, corresponding to user C3, comprising a third input interface 43a.
At time T2, user A inputs the following text message in the first input interface 41a: "Have you received the temporary password on your phone? If you have, send it to me." and then clicks the Enter key; after detecting that user A clicks the Enter key, the notebook computer obtains the above text message, packs the first identification information (user A) into the text message, and sends it to the electronic device corresponding to user C1.
At time T3, user A again sends messages to the electronic devices corresponding to user C2 and user C3 respectively; whenever the notebook computer receives an input operation of user A, it displays the corresponding chat window at the top layer, so, as shown in Fig. 4b, at time T3 the third chat window 43 is displayed on the top layer, the second chat window 42 is displayed in the middle, and the first chat window 41 is displayed at the bottom.
At time T4, the electronic device receives a feedback message and then detects the identification information corresponding to it, finding that it corresponds to user C2; so, as shown in Fig. 4c, the second chat window 42 corresponding to user C2 is displayed on the top layer, and the cursor 44 is moved to the second input interface 42a.
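The T1-T4 sequence above amounts to keeping the chat windows in a stack whose top follows the most recent activity, whether that is the local user typing or a feedback message arriving; the list-based sketch below illustrates that bookkeeping with window names taken from the example, purely as an illustration.

```python
stack = ["the first chat window 41", "the second chat window 42", "the third chat window 43"]
# index 0 is the bottom of the stack; the last element is the window shown on top

def bring_to_top(window: str) -> None:
    stack.remove(window)
    stack.append(window)          # the most recently active window is displayed on top

# T2/T3: user A sends messages to C1, then C2, then C3
for w in ("the first chat window 41", "the second chat window 42", "the third chat window 43"):
    bring_to_top(w)
print(stack[-1])                  # the third chat window 43 is on top (Fig. 4b)

# T4: a feedback message from user C2 arrives
bring_to_top("the second chat window 42")
print(stack[-1])                  # the second chat window 42 is on top (Fig. 4c)
```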
Embodiment 3
Based on the same inventive concept, Embodiment 3 of the present application provides an electronic device. The electronic device comprises a display unit and is, for example, a notebook computer, a mobile phone or a tablet computer.
Referring to Fig. 5, the electronic device comprises the following structure:
a display chip 501, configured to display, on the display unit, N session windows corresponding one-to-one to N pieces of identification information when first identification information corresponding to a first user and the N pieces of identification information corresponding to N remote users are used to hold a multi-user session, N being an integer greater than or equal to 2;
a receiving chip 502, configured to determine, from the N pieces of identification information, second identification information corresponding to first session data when the electronic device receives the first session data, wherein the second identification information corresponds to a first session window;
an output chip 503, configured to display prompt information in the first session window and output the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
In a specific implementation, the multi-user session may take various forms; three of them are listed below, and of course the implementation is not limited to these three.
First, the multi-user session is specifically: a multi-user video session.
In a specific implementation, the multi-user video session is specifically: a video session held by multiple users in the same group and/or video sessions held by multiple users in different groups.
Second, the multi-user session is specifically: a multi-user voice session.
In a specific implementation, the multi-user voice session is specifically: a voice session held by multiple users in the same group and/or voice sessions held by multiple users in different groups.
Third, the multi-user session is specifically: a multi-user text session.
In a specific implementation, the receiving chip 502 is specifically configured to:
extract the second identification information from the first session data.
In a specific implementation, the output chip 503 can output the prompt information in various ways; several of them are listed below, and of course the implementation is not limited to these.
First, the output chip 503 is specifically configured to: set a label in the first session window.
Second, the output chip 503 is specifically configured to: enlarge the display size of the first session window from a first display size to a second display size different from the first display size.
Third, the output chip 503 is specifically configured to: display the first session window at the forefront of the display unit.
Fourth, the output chip 503 is specifically configured to: generate a vibration for the first session window.
Fifth, the output chip 503 is specifically configured to: display the first session window highlighted.
As can be seen from the above description, in the embodiment of the present application the prompt information can be generated in various ways, for example by setting a label in the first session window, enlarging the display size of the first session window, displaying the first session window at the forefront of the display unit, generating a vibration for the first session window, or displaying the first session window highlighted, so the ways of generating the prompt information are more diverse.
In a specific implementation, the output chip 503 can output the first session data in various ways; three of them are listed below, and of course the implementation is not limited to these.
First, the output chip 503 is specifically configured to: output the first session data by voice.
Second, the output chip 503 is specifically configured to: output the first session data as text and/or images.
Third, the output chip 503 is specifically configured to: output the first session data as video.
In a specific implementation, the electronic device further comprises:
a moving chip, configured to move, when the multi-user session is the text session, a cursor to a first input interface corresponding to the first session window after the second identification information corresponding to the first session data is determined from the N pieces of identification information.
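Read as software, the display chip, receiving chip, output chip and moving chip map naturally onto four cooperating components. The class sketch below is only an analogy for how the modules divide the work, under assumed names and interfaces; it is not a statement about the actual hardware structure.

```python
class DisplayChip:
    def show_windows(self, ids):
        return {i: f"window for {i}" for i in ids}    # N windows, one per identification

class ReceivingChip:
    def second_id(self, session_data):
        return session_data["identification"]         # determine the second identification

class OutputChip:
    def prompt_and_output(self, window, session_data):
        print(f"prompt in {window}; output: {session_data['content']}")

class MovingChip:
    def move_cursor(self, window):
        print(f"cursor moved to the input interface of {window}")

class ElectronicDevice:
    def __init__(self):
        self.display, self.recv = DisplayChip(), ReceivingChip()
        self.out, self.move = OutputChip(), MovingChip()

    def on_session_data(self, windows, session_data, text_session=True):
        sid = self.recv.second_id(session_data)
        self.out.prompt_and_output(windows[sid], session_data)
        if text_session:
            self.move.move_cursor(windows[sid])

device = ElectronicDevice()
wins = device.display.show_windows(["user B1", "user B2"])
device.on_session_data(wins, {"identification": "user B2", "content": "hello"})
```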
As can be seen from the above description, in the embodiment of the present application, when the multi-user session is a text session and the first session window corresponding to the first session data is determined, the cursor can also be moved to the input interface corresponding to the first session window; since the user does not need to adjust it manually, control of the text session is more accurate and more flexible, and the user experience is better.
Since the electronic device introduced in this Embodiment 3 is the electronic device used to implement the prompting method in Embodiment 1 of the present application, based on the prompting method introduced in Embodiment 1 those skilled in the art can understand the specific implementation of the electronic device of Embodiment 3 of the present application and its various variations, so the electronic device is not described in detail here. Any electronic device used to implement the prompting method in Embodiment 1 of the present application falls within the scope of protection intended by the present application.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
(1) In the embodiments of the present application, when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session and first session data is received, prompt information can be generated for the first session window corresponding to the first session data and the first session data can be output in that window; the identification information corresponding to the first session data can thus be prompted to the user, so control of the multi-user session becomes more accurate.
(2) In the embodiments of the present application, the prompt information can be generated in a variety of ways, for example by setting a label in the first session window, enlarging the display size of the first session window, displaying the first session window at the forefront of the display unit, generating a vibration for the first session window, or displaying the first session window highlighted, so the ways of generating the prompt information are more diverse.
(3) In the embodiments of the present application, when the multi-user session is a text session and the first session window corresponding to the first session data is determined, the cursor can also be moved to the input interface corresponding to the first session window; since the user does not need to adjust it manually, control of the text session is more accurate and more flexible, and the user experience is better.
Although the preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (14)
1. A prompting method, applied to an electronic device comprising a display unit, characterized in that the method comprises:
when first identification information corresponding to a first user and N pieces of identification information corresponding to N remote users are used to hold a multi-user session, displaying, on the display unit, N session windows corresponding one-to-one to the N pieces of identification information, N being an integer greater than or equal to 2;
when the electronic device receives first session data, determining, from the N pieces of identification information, second identification information corresponding to the first session data, wherein the second identification information corresponds to a first session window; and
displaying prompt information in the first session window and outputting the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
2. The method according to claim 1, characterized in that the multi-user session is specifically: a multi-user video session and/or a multi-user voice session and/or a multi-user text session.
3. The method according to claim 2, characterized in that the multi-user video session is specifically: a video session held by multiple users in the same group and/or video sessions held by multiple users in different groups;
the multi-user voice session is specifically: a voice session held by multiple users in the same group and/or voice sessions held by multiple users in different groups.
4. The method according to claim 1, characterized in that the determining, from the N pieces of identification information, second identification information corresponding to the first session data is specifically:
extracting the second identification information from the first session data.
5. The method according to claim 1, characterized in that the displaying prompt information in the first session window is specifically:
setting a label in the first session window; and/or
enlarging the display size of the first session window from a first display size to a second display size different from the first display size; and/or
displaying the first session window at the forefront of the display unit; and/or
generating a vibration for the first session window; and/or
displaying the first session window highlighted.
6. The method according to claim 1, characterized in that the outputting the first session data is specifically:
outputting the first session data by voice; and/or
outputting the first session data as text and/or images; and/or
outputting the first session data as video.
7. The method according to claim 2, characterized in that when the multi-user session is the text session, after the determining, from the N pieces of identification information, the second identification information corresponding to the first session data, the method further comprises:
moving a cursor to a first input interface corresponding to the first session window.
8. An electronic device comprising a display unit, characterized in that the electronic device further comprises:
a display chip, configured to display, on the display unit, N session windows corresponding one-to-one to N pieces of identification information when first identification information corresponding to a first user and the N pieces of identification information corresponding to N remote users are used to hold a multi-user session, N being an integer greater than or equal to 2;
a receiving chip, configured to determine, from the N pieces of identification information, second identification information corresponding to first session data when the electronic device receives the first session data, wherein the second identification information corresponds to a first session window; and
an output chip, configured to display prompt information in the first session window and output the first session data, so as to prompt the first user that the identification information corresponding to the first session data is the second identification information, which corresponds to a first remote user, wherein the prompt information and the first session data are different information.
9. The electronic device according to claim 8, characterized in that the multi-user session is specifically: a multi-user video session and/or a multi-user voice session and/or a multi-user text session.
10. The electronic device according to claim 9, characterized in that the multi-user video session is specifically: a video session held by multiple users in the same group and/or video sessions held by multiple users in different groups;
the multi-user voice session is specifically: a voice session held by multiple users in the same group and/or voice sessions held by multiple users in different groups.
11. The electronic device according to claim 8, characterized in that the receiving chip is specifically configured to:
extract the second identification information from the first session data.
12. The electronic device according to claim 8, characterized in that the output chip is specifically configured to:
set a label in the first session window; and/or
enlarge the display size of the first session window from a first display size to a second display size different from the first display size; and/or
display the first session window at the forefront of the display unit; and/or
generate a vibration for the first session window; and/or
display the first session window highlighted.
13. The electronic device according to claim 8, characterized in that the output chip is specifically configured to:
output the first session data by voice; and/or
output the first session data as text and/or images; and/or
output the first session data as video.
14. The electronic device according to claim 8, characterized in that the electronic device further comprises:
a moving chip, configured to move, when the multi-user session is the text session, a cursor to a first input interface corresponding to the first session window after the second identification information corresponding to the first session data is determined from the N pieces of identification information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210587646.5A CN103905772B (en) | 2012-12-28 | 2012-12-28 | The method and electronic equipment of a kind of prompting |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210587646.5A CN103905772B (en) | 2012-12-28 | 2012-12-28 | The method and electronic equipment of a kind of prompting |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103905772A true CN103905772A (en) | 2014-07-02 |
CN103905772B CN103905772B (en) | 2018-06-01 |
Family
ID=50996897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210587646.5A Active CN103905772B (en) | 2012-12-28 | 2012-12-28 | The method and electronic equipment of a kind of prompting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103905772B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106325671A (en) * | 2016-08-16 | 2017-01-11 | 浙江翼信科技有限公司 | Message reply method and device |
CN106998438A (en) * | 2016-01-26 | 2017-08-01 | 北京佳讯飞鸿电气股份有限公司 | User video image display methods and device in visualization calling |
CN111158838A (en) * | 2019-12-31 | 2020-05-15 | 联想(北京)有限公司 | Information processing method and device |
CN111258479A (en) * | 2020-01-16 | 2020-06-09 | 上海携程商务有限公司 | Method, system, equipment and storage medium for displaying multiple chat windows on chat interface |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1315113A (en) * | 1998-08-26 | 2001-09-26 | 联合视频制品公司 | Television chat system |
US20030065721A1 (en) * | 2001-09-28 | 2003-04-03 | Roskind James A. | Passive personalization of buddy lists |
US20070094341A1 (en) * | 2005-10-24 | 2007-04-26 | Bostick James E | Filtering features for multiple minimized instant message chats |
CN101159714A (en) * | 2007-11-30 | 2008-04-09 | 腾讯科技(深圳)有限公司 | Instant communication method, device and cluster server |
CN101212751A (en) * | 2006-12-26 | 2008-07-02 | 鸿富锦精密工业(深圳)有限公司 | Mobile communication terminal capable of displaying multi-party video call and the display method |
CN101247364A (en) * | 2008-03-31 | 2008-08-20 | 腾讯科技(深圳)有限公司 | Conversation message managing system and method thereof |
WO2009017573A2 (en) * | 2007-07-31 | 2009-02-05 | Hewlett-Packard Development Company, L.P. | Video conferencing system and method |
CN102255824A (en) * | 2011-01-10 | 2011-11-23 | 北京开心人信息技术有限公司 | Instant messaging method and system |
CN102474671A (en) * | 2009-08-12 | 2012-05-23 | 索尼计算机娱乐公司 | Information processing system and information processing device |
- 2012-12-28: CN201210587646.5A granted as patent CN103905772B (en), status: Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1315113A (en) * | 1998-08-26 | 2001-09-26 | 联合视频制品公司 | Television chat system |
US20030065721A1 (en) * | 2001-09-28 | 2003-04-03 | Roskind James A. | Passive personalization of buddy lists |
US20070094341A1 (en) * | 2005-10-24 | 2007-04-26 | Bostick James E | Filtering features for multiple minimized instant message chats |
CN101212751A (en) * | 2006-12-26 | 2008-07-02 | 鸿富锦精密工业(深圳)有限公司 | Mobile communication terminal capable of displaying multi-party video call and the display method |
WO2009017573A2 (en) * | 2007-07-31 | 2009-02-05 | Hewlett-Packard Development Company, L.P. | Video conferencing system and method |
CN101159714A (en) * | 2007-11-30 | 2008-04-09 | 腾讯科技(深圳)有限公司 | Instant communication method, device and cluster server |
CN101247364A (en) * | 2008-03-31 | 2008-08-20 | 腾讯科技(深圳)有限公司 | Conversation message managing system and method thereof |
CN102474671A (en) * | 2009-08-12 | 2012-05-23 | 索尼计算机娱乐公司 | Information processing system and information processing device |
CN102255824A (en) * | 2011-01-10 | 2011-11-23 | 北京开心人信息技术有限公司 | Instant messaging method and system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106998438A (en) * | 2016-01-26 | 2017-08-01 | 北京佳讯飞鸿电气股份有限公司 | User video image display methods and device in visualization calling |
CN106998438B (en) * | 2016-01-26 | 2019-11-19 | 北京佳讯飞鸿电气股份有限公司 | User video image display methods and device in visualization calling |
CN106325671A (en) * | 2016-08-16 | 2017-01-11 | 浙江翼信科技有限公司 | Message reply method and device |
CN106325671B (en) * | 2016-08-16 | 2019-05-28 | 浙江翼信科技有限公司 | A kind of method and apparatus replied message |
CN111158838A (en) * | 2019-12-31 | 2020-05-15 | 联想(北京)有限公司 | Information processing method and device |
CN111258479A (en) * | 2020-01-16 | 2020-06-09 | 上海携程商务有限公司 | Method, system, equipment and storage medium for displaying multiple chat windows on chat interface |
CN111258479B (en) * | 2020-01-16 | 2023-12-15 | 上海携程商务有限公司 | Method, system, equipment and storage medium for displaying multiple chat windows on chat interface |
Also Published As
Publication number | Publication date |
---|---|
CN103905772B (en) | 2018-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6305033B2 (en) | Method and system for providing a multi-user messenger service | |
US20170302709A1 (en) | Virtual meeting participant response indication method and system | |
CN111066042A (en) | Virtual conference participant response indication method and system | |
EP2699029B1 (en) | Method and device for providing a message function | |
US10553003B2 (en) | Interactive method and apparatus based on web picture | |
CN103186912B (en) | The method and system of word are shown with picture format | |
CN102664009B (en) | System and method for implementing voice control over video playing device through mobile communication terminal | |
CN103209201A (en) | Virtual avatar interaction system and method based on social relations | |
JP2014160467A (en) | Apparatus and method for controlling messenger in terminal | |
CN103313140A (en) | Television receiving terminal, text information input method and system thereof and mobile terminal | |
CN103905772A (en) | Prompting method and electronic equipment | |
US20130318447A1 (en) | Prompting of Recipient Expertise in Collaboration Environment | |
WO2016119165A1 (en) | Chat history display method and apparatus | |
CN107728918A (en) | Browse the method, apparatus and electronic equipment of continuous page | |
WO2019076307A1 (en) | Storage apparatus, application control creation method, and user interface creation method | |
CN106028172A (en) | Audio/video processing method and device | |
Chang | New media, new technologies and new communication opportunities for deaf/hard of hearing people | |
CN103870491B (en) | A kind of information matching method and electronic equipment | |
CN103973542A (en) | Voice information processing method and device | |
JP2024138546A (en) | Message sending method, message receiving method and device, equipment, and computer program | |
CN102591500B (en) | Touch-control drawing disposal system and method | |
CN108022466A (en) | A kind of meeting display device based on multimedia technology | |
CN105893735B (en) | Medical information remote co-screen assistance method and terminal | |
WO2022253132A1 (en) | Information display method and apparatus, and electronic device | |
CN202196390U (en) | Multi-screen display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |