CN112337088A - Information processing method, server, electronic equipment and storage medium - Google Patents
- Publication number
- CN112337088A (application number CN202011238753.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- pitch
- image
- electronic device
- matched
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/54—Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
- A63F13/795—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; for building a team; for providing a buddy list
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
- A63F13/798—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/843—Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiment of the invention provides an information processing method, a server, an electronic device and a storage medium, relates to the technical field of image processing, and aims to solve the problem that interaction between users is not displayed in real time. The method includes the following steps: a server receives first request information sent by a first electronic device, where the first request information includes an identifier of a first user; determines a second user matched with the first user; receives first sound information sent by the first electronic device and second sound information sent by a second electronic device on which the second user is logged in; and sends a first pitch determined based on the first sound information and a second pitch determined based on the second sound information to the first electronic device and the second electronic device, so that the first electronic device and the second electronic device can display corresponding information according to the first pitch and the second pitch. The battle status of the first user and the second user is thus displayed in time, which improves the real-time display of the interaction between users.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an information processing method, a server, an electronic device, and a storage medium.
Background
In the prior art, two users participating in a mic-linked battle can play a game against each other. During the battle, neither party knows the current status of the other party, and the result is only known when the battle ends; that is, the interaction between the users is displayed with low real-time performance.
Disclosure of Invention
The embodiment of the invention provides an information processing method, a server, electronic equipment and a storage medium.
The embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information processing method, applied to a server, including:
receiving first request information sent by first electronic equipment, wherein the first request information comprises an identifier of a first user;
determining a second user matching the first user;
receiving first sound information sent by the first user through the first electronic device and second sound information sent by the second user through the second electronic device;
transmitting, to the first electronic device and the second electronic device, a first pitch determined based on the first sound information and a second pitch determined based on the second sound information.
In a second aspect, an embodiment of the present invention further provides an information processing method, applied to a first electronic device, including:
sending first request information to a server;
sending the collected first sound information to the server;
receiving a first pitch determined based on the first sound information and sent by the server and a second pitch of a second user, wherein the second user is matched with a first user, and the first user is a user sending the first sound information through the first electronic equipment;
adjusting a first image and a second image displayed on the first electronic device under the condition that the first pitch and the second pitch are different, wherein the first image is an image acquired by a first electronic device, the second image is an image acquired by a second electronic device, and the second electronic device corresponds to the second user;
displaying the adjusted first image and the second image.
In a third aspect, an embodiment of the present invention further provides a server, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the information processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when the computer program is executed by the processor, the steps of the information processing method according to the second aspect are implemented.
In a fifth aspect, the embodiments of the present invention further provide a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the information processing method according to the first aspect, or the computer program, when executed by the processor, implements the steps of the information processing method according to the second aspect.
In the embodiment of the invention, a server receives first request information sent by a first electronic device, where the first request information includes an identifier of a first user; determines a second user matching the first user; receives first sound information sent by the first user through the first electronic device and second sound information sent by the second user through the second electronic device; and sends a first pitch determined based on the first sound information and a second pitch determined based on the second sound information to the first electronic device and the second electronic device. The first electronic device and the second electronic device can then display corresponding information according to the first pitch and the second pitch, so that the battle status of the first user and the second user is displayed in time and the real-time display of the interaction between users is improved.
Drawings
FIG. 1 is a flow chart of an information processing method provided by an embodiment of the present invention;
FIG. 2 is another flow chart of an information processing method provided by an embodiment of the invention;
FIGS. 3a to 3c are schematic display diagrams of a first electronic device according to an embodiment of the invention;
FIG. 4 is a block diagram of an implementation apparatus of a server according to an embodiment of the present invention;
fig. 5 is a block diagram of an implementation apparatus of an electronic device according to an embodiment of the present invention;
FIG. 6 is a block diagram of a server provided by an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention, and as shown in fig. 1, the embodiment provides an information processing method applied to a server, including the following steps:
Step 101: receiving first request information sent by a first electronic device, where the first request information includes an identifier of a first user.

The first user logs in through the first electronic device. During a live broadcast, the first user may initiate a game battle through the first electronic device, that is, send first request information to the server. The server receives the first request information sent by the first electronic device; the first request information includes an identifier of the first user and may further include a time parameter of the first request information, for example the time at which the first user sent the first request information, or an expected game start time. The first request information may also be regarded as battle request information.
The identity of the first user may be an account of the first user logged in through the first electronic device.
And 102, determining a second user matched with the first user.
After receiving the first request message, the server matches the first user with the second user, for example, the second user may be found in a matching queue of the server. In determining whether to match, the second user may be determined based on the historical spell information of the first user, e.g., the historical best spell record of the first user, or the historical win rate of the first user, or the historical pitch average of the first user, etc. And if the historical best spelling record of the first user is similar to the historical best spelling record of the second user, or the historical victory rate of the first user is similar to the historical victory rate of the second user, or the historical pitch mean value of the first user is similar to the historical pitch mean value of the second user, the first user is considered to be matched with the second user.
The first sound information may be sound collected by the first electronic device through the first sound collection device, the first sound collection device is installed on the first electronic device, or the first sound collection device is electrically connected to the first electronic device.
The second sound information may be sound collected by the second electronic device through a second sound collection device mounted on the second electronic device, or the second sound collection device is electrically connected to the second electronic device.
The first sound information and the second sound information may include human sound information.
The first sound information and the second sound information may be collected in the same time period, for example, in the same 5 second time period, the first electronic device collects the first sound information and sends the first sound information to the server, and the second electronic device collects the second sound information and sends the second sound information to the server. The first sound information and the second sound information may also be collected at different time periods, for example, during a first 5 second time period, the first electronic device collects the first sound information and sends the first sound information to the server, during a second 5 second time period, the second electronic device collects the second sound information and sends the second sound information to the server, the first 5 second time period is adjacent to the second 5 second time period, and the first 5 second time period is earlier than the second 5 second time period.
Step 104: sending, to the first electronic device and the second electronic device, a first pitch determined based on the first sound information and a second pitch determined based on the second sound information.

After receiving the first sound information, the server can process the human voice in the first sound information to obtain the first pitch; similarly, after receiving the second sound information, the server can process the human voice in the second sound information to obtain the second pitch. Determining a pitch from the human voice can be understood as determining a maximum pitch value from the audio information; the detailed procedure is described below, taking the first user as an example.

The server sends the first pitch and the second pitch to the first electronic device and the second electronic device. The first electronic device may display corresponding information on its display screen according to the magnitudes of the first pitch and the second pitch, for example display the numerical values of the first pitch and the second pitch, or, when the first pitch is larger than the second pitch, enlarge the first image and reduce the second image, where the first image is an image acquired by the first electronic device through a first camera and the second image is an image acquired by the second electronic device through a second camera.

The information processing method can be applied to the field of live broadcasting to improve the interactivity between users and make games more engaging.

In this embodiment, the server receives first request information sent by the first electronic device, where the first request information includes an identifier of the first user; determines a second user matching the first user; receives first sound information sent by the first user through the first electronic device and second sound information sent by the second user through the second electronic device; and sends a first pitch determined based on the first sound information and a second pitch determined based on the second sound information to the first electronic device and the second electronic device. The two devices can then display corresponding information according to the first pitch and the second pitch, so that the battle status of the first user and the second user is displayed in time and the real-time display of the interaction between users is improved.
In an embodiment of the application, the determining the second user matching the first user includes:
acquiring a first historical pitch mean value of the first user;
acquiring a second historical pitch mean value of the candidate users in the matching queue;
acquiring a first pitch interval according to the first historical pitch mean value;
and if the matching queue comprises a first user to be matched, determining the second user from the first user to be matched, wherein the first user to be matched is the user with the second historical pitch mean value in the first pitch interval.
The method for determining a user's historical pitch mean is described below, taking the first historical pitch mean of the first user as an example.

The first historical pitch mean is the average of the highest pitch values obtained by the first user over multiple high-pitch battles. In each high-pitch battle, the highest pitch value of the first user is determined in the following manner:
extracting audio information within the last 1 second (which can also be 2 seconds, and the specific duration is not limited) from the audio information sent by the first user to obtain audio data;
filtering the audio data with a preset band-pass filter (such as a Kaiser-window band-pass filter) so as to retain the components of the audio data whose spectrum is close to that of the human voice;
carrying out Fourier transform on the filtered data, and converting time domain data into frequency domain data;
and taking the absolute value of the obtained frequency-domain data and then the natural logarithm (base e) to obtain a group of data. Pitch and frequency are in one-to-one correspondence, but their relationship is approximately logarithmic rather than directly proportional, so the logarithmic values reflect the pitch of the first user more accurately;

and taking the maximum value in this group of data as the highest pitch value of the first user.
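As an illustrative sketch of this pitch-detection procedure (not the patented implementation itself), the following Python snippet filters the last second of audio with a Kaiser-window band-pass FIR filter, transforms it to the frequency domain, and takes the maximum of the log-magnitudes; the sampling rate, filter order, and band edges are assumptions, since the description does not fix them:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def highest_pitch_value(audio: np.ndarray, fs: int = 16000) -> float:
    """Estimate the highest pitch value from the last second of audio.

    Hypothetical parameters: 16 kHz sampling rate, 101-tap Kaiser-window
    band-pass FIR keeping roughly the human-voice band (80 Hz - 1 kHz).
    """
    segment = audio[-fs:]                      # last 1 second of samples
    taps = firwin(101, [80, 1000], window=("kaiser", 8.6),
                  pass_zero=False, fs=fs)      # Kaiser-window band-pass
    voiced = lfilter(taps, 1.0, segment)       # keep voice-like frequencies
    spectrum = np.fft.rfft(voiced)             # time domain -> frequency domain
    values = np.log(np.abs(spectrum) + 1e-12)  # absolute value, then natural log
    return float(values.max())                 # maximum value as the highest pitch
```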
It should be noted that if the first user has taken part in only 2 historical high-pitch battles, the average of those two highest pitch values is taken as the first historical pitch mean;

if the first user has taken part in only 1 historical high-pitch battle, the highest pitch value of that battle is taken as the first historical pitch mean;

if the first user has no historical high-pitch battles, a default value is taken as the first historical pitch mean; the default value can be configured according to the overall data of the platform, for example based on the average pitch of male or female speech.
The first historical pitch average of the first user and the second historical pitch average of the second user may be determined in the manner described above for determining the historical pitch averages.
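A minimal sketch of the historical pitch mean with the fallbacks just described; the default value and field names are assumptions:

```python
def historical_pitch_mean(highest_values: list[float],
                          default_value: float = 200.0) -> float:
    """Historical pitch mean with the fallbacks described above.

    default_value is a hypothetical platform-configured default (e.g. a
    typical speaking pitch); the description does not fix a number.
    """
    if not highest_values:          # no historical high-pitch battles
        return default_value
    return sum(highest_values) / len(highest_values)  # covers 1, 2 or more battles
```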
The matching queue may contain no candidate users; in that case the second historical pitch mean may be taken as 0.
From the first historical pitch average, the first pitch interval may be obtained with (FAa × 0.8) as the lower limit value of the first pitch interval and (FAa × 1.2) as the upper limit value of the first pitch interval, where FAa is the first historical pitch average.
If the matching queue includes first users to be matched, the second user is determined from them: if there is only one such user, that user is the second user; if there are multiple such users, one of them is selected randomly as the second user, or the matching degree between each of them and the first user is calculated and the user with the highest matching degree is taken as the second user. The calculation of the matching degree is described below.
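A minimal sketch of the first-pitch-interval check, using the 0.8/1.2 factors given above; the queue representation and field name are assumptions:

```python
def first_pitch_interval(faa: float) -> tuple[float, float]:
    # FAa * 0.8 as the lower limit, FAa * 1.2 as the upper limit
    return faa * 0.8, faa * 1.2

def users_to_match(queue: list[dict], faa: float) -> list[dict]:
    """Candidate users whose second historical pitch mean falls in the interval."""
    low, high = first_pitch_interval(faa)
    return [u for u in queue if low <= u["historical_pitch_mean"] <= high]
```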
In this embodiment, the matched second user is determined from the first historical pitch mean of the first user, so that the difference between the pitch means of the first user and the second user is not too large and the two users are evenly matched, which enhances the fairness and challenge of the game and improves the user experience.

In the above, after obtaining the first pitch interval according to the first historical pitch mean, the method further includes:
if the matching queue does not comprise the first user to be matched, determining a second pitch interval according to the first historical pitch average value, wherein the minimum value of the second pitch interval is smaller than the minimum value of the first pitch interval, and the maximum value of the second pitch interval is larger than the maximum value of the first pitch interval;
and if the matching queue comprises a second user to be matched, determining the second user from the second user to be matched, wherein the second user to be matched is the user with the second historical pitch mean value in the second pitch interval.
That is, if the matching queue does not include a first user to be matched, the first pitch interval is widened to obtain the second pitch interval: the minimum value of the second pitch interval is smaller than the minimum value of the first pitch interval, and the maximum value of the second pitch interval is larger than the maximum value of the first pitch interval, so the first pitch interval falls within the second pitch interval.

Further, when determining a pitch interval from the first historical pitch mean, the range of the interval may be adjusted dynamically according to the matching duration: for example, within the first 5 seconds of matching the first pitch interval is used, and from the 5th to the 15th second of matching the second pitch interval is used, which may take (FAa × 0.5) as its lower limit and (FAa × 1.5) as its upper limit.
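A sketch of this time-based widening, assuming the interval switches from the 0.8/1.2 factors to the 0.5/1.5 factors after 5 seconds of matching (the exact schedule is configurable in the description):

```python
def pitch_interval(faa: float, seconds_matching: float) -> tuple[float, float]:
    """Pitch interval that widens with the matching duration."""
    if seconds_matching < 5:          # first 5 seconds: first pitch interval
        return faa * 0.8, faa * 1.2
    return faa * 0.5, faa * 1.5       # 5th to 15th second: second pitch interval
```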
If the matching queue includes second users to be matched, the second user is determined from them: if there is only one such user, that user is the second user; if there are multiple such users, one of them is selected randomly as the second user, or the matching degree between each of them and the first user is calculated and the user with the highest matching degree is taken as the second user. The calculation of the matching degree is described below.
In this embodiment, if the matching queue does not include the first to-be-matched user, a second pitch interval with a larger value range is determined according to the first historical pitch average value, so as to improve the matching success rate of the first user.
In the above, when the matching queue does not include the first user to be matched, in addition to adjusting the pitch interval to obtain the second pitch interval, the following steps may be performed:
and if the matching queue does not comprise the first user to be matched, updating the matching queue.
Specifically, if the matching queue does not include a first user to be matched, second request information may be sent to a third electronic device; when confirmation information for the second request information is received, the matching queue is updated to include a third user logged in through the third electronic device. The second request information may be regarded as invitation information for the third user.

That is, if no first user to be matched is included in the matching queue, the second request message is sent to a third electronic device in an attempt to add a new user to the matching queue. If confirmation information for the second request information is received, the matching queue is updated by adding the third user.

For example, when the server performs matching for the first user and no user in the matching queue falls within the first pitch interval, the server may send a "battle invitation" message (i.e., the second request message) to N third users who are currently live on the platform and not in the matching queue; a third user may click a confirmation button to send a response message (i.e., the confirmation information) to the server to accept the invitation, and that third user then enters the matching queue. By adding third users to the matching queue, the matching success rate of the first user can be improved. The N third users are selected as follows:

(1) N is, for example, 5; N may also take other values, which is not limited herein;

(2) from the users who are currently live on the whole platform and whose historical pitch mean is lower than the first historical pitch mean, selecting 3 users in descending order of historical pitch mean; if fewer than 3 users qualify, selecting as many as there are and making up the shortfall in the manner of (3) below;

(3) from the users who are currently live on the whole platform and whose historical pitch mean is higher than the first historical pitch mean, selecting 2 users in ascending order of historical pitch mean; if fewer than 3 users were selected in (2), making up to 5 users in the manner of (3); if 5 users still cannot be reached, the users actually selected are taken as the final selection.

It should be noted that in the above N is a positive integer, that is, the server sends the second request message to at least one third electronic device.
In the above, in the case that the matching queue does not include the first to-be-matched user, the second pitch interval may be determined according to the first historical pitch average, and/or the second request information may be sent to the third electronic device to update the matching queue.
In this embodiment, if the matching queue does not include the first to-be-matched user, sending second request information to a third electronic device, adding a new user to the matching queue, and improving the matching success rate of the first user.
In the above, if the matching queue includes a second user to be matched, determining the second user from the second user to be matched includes:
if the matching queue comprises second users to be matched, determining the absolute value of the difference value between the second historical pitch mean value of each user in the second users to be matched and the first historical pitch mean value of the first user;
determining a first score of each user in the second users to be matched according to the absolute value;
determining a second score of each user in the second users to be matched according to the waiting time of each user in the second users to be matched in the matching queue;
determining the matching degree of each user in the second users to be matched with the first user according to the first score and the second score;
and determining the user with the maximum matching degree in the second users to be matched as the second user.
For example, the second users to be matched include user B, user C, and user D, and their corresponding historical pitch averages are FBa, FCa, and FDa.
The absolute values of the differences between the historical pitch means of user B, user C, and user D and the historical pitch mean FAa of user A (i.e., the first user) are calculated, namely |FBa - FAa|, |FCa - FAa| and |FDa - FAa|; the three absolute values are denoted Fab, Fac and Fad.

The three absolute values are sorted from largest to smallest (equal values are ordered randomly), and each corresponding user is given a "pitch similarity score" according to the ranking (the smaller the difference, the closer the two users' pitches). The largest value (ranked first) is given 1 point, and the score increases by 1 for each position further down the ranking. For example, if Fab > Fac > Fad, user B ranks first, user C second, and user D third, so the pitch similarity scores (i.e., the first scores) of user B, user C, and user D with respect to user A are 1 point, 2 points, and 3 points respectively.

User B, user C, and user D are also sorted by their waiting time in the matching queue from shortest to longest (users with the same waiting time are ordered randomly), and each user is given a "waiting score" according to the ranking (the longer the wait, the higher the score, so that long-waiting users are matched sooner). The user with the shortest waiting time (ranked first) is given 1 point, and the score increases by 1 for each position further down the ranking. For example, if the waiting times satisfy user B < user C < user D, user B ranks first, user C second, and user D third, so the waiting scores (i.e., the second scores) of user B, user C, and user D are 1 point, 2 points, and 3 points respectively.

The pitch similarity score and the waiting score of each of user B, user C, and user D are added to obtain that user's matching degree with user A. For example: the matching degree of user B is 1 + 1 = 2, the matching degree of user C is 2 + 2 = 4, and the matching degree of user D is 3 + 3 = 6. User D has the highest matching degree, so user D is the second user matched with the first user.
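A compact sketch of this matching-degree calculation, with hypothetical field names; ties between equal values are broken by the initial random shuffle, as described above:

```python
import random

def pick_second_user(candidates: list[dict], faa: float) -> dict:
    """Rank candidates by pitch-similarity score + waiting score; highest wins."""
    random.shuffle(candidates)  # random order breaks ties between equal values
    by_diff = sorted(candidates,
                     key=lambda u: abs(u["historical_pitch_mean"] - faa),
                     reverse=True)                                # largest difference first
    by_wait = sorted(candidates, key=lambda u: u["waiting_seconds"])  # shortest wait first
    score = {id(u): 0 for u in candidates}
    for rank, u in enumerate(by_diff, start=1):
        score[id(u)] += rank                                      # closer pitch -> higher score
    for rank, u in enumerate(by_wait, start=1):
        score[id(u)] += rank                                      # longer wait -> higher score
    return max(candidates, key=lambda u: score[id(u)])
```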
Further, to improve responsiveness, if within 15 seconds no user meets the interval requirement (that is, no user in the matching queue other than the first user has a second historical pitch mean falling in the second pitch interval), one user is selected randomly from the matching queue to complete the matching; if no user can be matched within 20 seconds, a prompt message is returned, for example "not enough players for a battle, please retry later".
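A sketch of this timeout fallback, reusing the pitch_interval() and pick_second_user() helpers sketched above; the 15-second and 20-second thresholds are as stated, while the return convention is an assumption:

```python
import random
from typing import Optional

def match_with_timeout(queue: list[dict], faa: float, waited_s: float) -> Optional[dict]:
    """Try interval matching first; fall back to a random pick after 15 s."""
    low, high = pitch_interval(faa, waited_s)   # interval widens over time
    in_interval = [u for u in queue if low <= u["historical_pitch_mean"] <= high]
    if in_interval:
        return pick_second_user(in_interval, faa)
    if 15 <= waited_s < 20 and queue:
        return random.choice(queue)             # random fallback after 15 s
    return None  # after 20 s the caller returns the "please retry later" prompt
```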
In this embodiment, the matching degree between each user in the second users to be matched and the first user is determined according to the first score and the second score, and the matching degree can be determined by considering both the pitch similarity and the waiting time of the user in the matching queue, so that the user experience is improved.
In the above, after the determining the second user matching the first user, the method further includes:
receiving a first image sent by the first electronic device and a second image sent by the second electronic device;
and sending the first image to the second electronic equipment, and sending the second image to the first electronic equipment.
That is to say, after the server determines the second user, the server forwards the second image sent by the second electronic device to the first electronic device, and forwards the first image sent by the first electronic device to the second electronic device, so that the first image and the second image can be simultaneously viewed on both the first electronic device and the second electronic device, and the visual experience of the user is improved.
Initially, the first image and the second image are displayed in the same size and are not hidden from each other. When the game is played, after the server receives the first sound information, the server can calculate the voice in the first sound information to obtain a first pitch; similarly, after receiving the second sound information, the server may calculate the human voice in the second sound information to obtain the second pitch. The server sends the first pitch and the second pitch to the first electronic device and the second electronic device, and the first electronic device may display corresponding information on the display screen according to the sizes of the first pitch and the second pitch, for example, displaying numerical values of the first pitch and the second pitch, or, in the case that the first pitch is larger than the second pitch, enlarging a first image of the first electronic device and reducing a second image of the second electronic device.
Referring to fig. 2, fig. 2 is another flowchart of an information processing method according to an embodiment of the present invention, and as shown in fig. 2, the embodiment provides an information processing method applied to a first electronic device, including:
Step 201: sending first request information to a server.

The first user is logged in on the first electronic device, and the first electronic device may be the initiator of the high-pitch battle; that is, the first electronic device sends the first request information to the server, for example when the first user clicks a high-pitch battle play button. The server puts the first user into a "waiting for match" user queue (i.e., the matching queue), where the first user waits a certain amount of time (for example, up to 20 seconds) to be matched.
The first request message includes an identification of the first user, and the first request message may further include a time parameter for sending the first request message, for example, a time when the first user sends the first request message, or a time when the server receives the first request message, and so on.
Step 202: sending the collected first sound information to the server.

The first sound information may be sound collected by the first electronic device through a first sound collection device, where the first sound collection device is mounted on the first electronic device or is electrically connected to the first electronic device.

For example, the first electronic device invokes a microphone to collect sound, uses its local voice recognition capability to extract the human-voice part of the sound to obtain the first sound information, and sends it to the server. Similarly, the second electronic device may obtain and send the second sound information in the same way.

Step 203: receiving the first pitch determined based on the first sound information and the second pitch of the second user, both sent by the server.

After receiving the first sound information, the server can process the human voice in the first sound information to obtain the first pitch; similarly, after receiving the second sound information, the server can process the human voice in the second sound information to obtain the second pitch.
The server sends the first pitch and the second pitch to the first electronic device and the second electronic device.
Step 204: adjusting the first image and the second image displayed on the first electronic device when the first pitch and the second pitch are different.

After determining the second user, the server forwards the second image sent by the second electronic device to the first electronic device and forwards the first image sent by the first electronic device to the second electronic device, so that the first image and the second image can be viewed on both the first electronic device and the second electronic device. Initially, the display sizes of the first image and the second image may be the same.

Step 205: displaying the adjusted first image and the adjusted second image.

In this embodiment, the first electronic device sends first request information to the server; sends the collected first sound information to the server; receives the first pitch determined based on the first sound information and the second pitch of the second user sent by the server, where the second user is matched with the first user and the first user is the user who sends the first sound information through the first electronic device; and adjusts the first image and the second image displayed on the first electronic device when the first pitch and the second pitch are different, where the first image is an image acquired by the first electronic device. Because the first electronic device adjusts the first image and the second image according to the first pitch and the second pitch, the pitches of the first user and the second user are expressed through the sizes of the first image and the second image. This presentation is more intuitive, the battle status of the first user and the second user is displayed in time, and the real-time display of the interaction between users is improved.
In the above, in the case where the first pitch and the second pitch are different, adjusting the first image and the second image displayed on the first electronic device includes:
if the first pitch is larger than the second pitch, increasing the display size of the first image and reducing the display size of the second image;
if the first pitch is smaller than the second pitch, reducing the display size of the first image and increasing the display size of the second image.
That is, if the first pitch is larger than the second pitch, the display sizes of the first image and the second image are adjusted so that the display size of the first image is larger than that of the second image; if the first pitch is smaller than the second pitch, they are adjusted so that the display size of the first image is smaller than that of the second image. From the sizes of the first image and the second image displayed on the first electronic device, the first user can intuitively see whether his or her pitch is higher than that of the opponent, the battle status of the first user and the second user is displayed in time, and the real-time display of the interaction between users is improved.
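A minimal sketch of this adjustment rule; the 1.1/0.95 scale factors are borrowed from the transformation parameters described later, and the function signature is an assumption:

```python
def adjust_display_sizes(first_pitch: float, second_pitch: float,
                         size1: float, size2: float,
                         grow: float = 1.1, shrink: float = 0.95):
    """Enlarge the higher-pitched user's image and shrink the other's."""
    if first_pitch > second_pitch:
        return size1 * grow, size2 * shrink
    if first_pitch < second_pitch:
        return size1 * shrink, size2 * grow
    return size1, size2   # equal pitches: sizes unchanged
```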
In the above, the displaying the adjusted first image and the adjusted second image includes:
when the first part of the adjusted first image and the second part of the adjusted second image are displayed and overlapped, if the size of the adjusted first image is larger than that of the adjusted second image, the first part is used for shielding the second part;
when the first part of the adjusted first image and the second part of the adjusted second image are displayed and overlapped, if the size of the adjusted first image is smaller than that of the adjusted second image, the first part is shielded by the second part.
That is, if the first pitch is greater than the second pitch, adjusting the display sizes of the first image and the second image such that the display size of the first image is greater than the display size of the second image, and if the display of the adjusted first portion of the first image and the adjusted second portion of the second image overlap, blocking the second portion with the first portion;
and if the first pitch is smaller than the second pitch, adjusting the display sizes of the first image and the second image to enable the display size of the first image to be smaller than the display size of the second image, and if the first part of the adjusted first image and the second part of the adjusted second image are displayed to be overlapped, shielding the first part by using the second part.
The greater the difference between the first pitch and the second pitch, the larger the occluded portion. From the occluded part of the first image or the second image displayed on the first electronic device, the first user can intuitively see whether his or her pitch is higher than that of the opponent, the battle status of the first user and the second user is displayed in time, and the real-time display of the interaction between users is improved.

The following illustrates the battle process.

As shown in fig. 3a, initially, the picture U1 of the first anchor (i.e., the first user) and the picture U2 of the second anchor (i.e., the second user) are displayed at the same size and do not block each other.

The server starts a timer of X seconds, where X is a configurable duration of the high-pitch battle play mode (for example 20 seconds), and notifies the two anchors' clients that the battle has started;
the client (namely the first electronic equipment or the second electronic equipment) calls a microphone to collect end-side sound;
the client uses its local voice recognition capability to extract the human-voice part of the sound and transmits it to the server;

every 1 second, the server processes the human-voice parts and outputs real-time pitch values FA and FB of the two anchors (an anchor can be understood as a user);

FA and FB are then delivered to the clients. The client obtains the pitch values of both parties and distinguishes three cases according to their magnitudes: the first anchor's value is higher; the second anchor's value is higher; or the two values are equal.

When the first anchor's pitch value is higher or the second anchor's pitch value is higher, the anchor with the larger value and the anchor with the smaller value are distinguished, and the sizes of their pictures are each transitioned within 1 second.

The picture size (i.e., the size of the image) of the anchor with the larger value is transformed as follows:

calculating target values for the picture width and height: the enlarged width target value equals the current width × the enlargement parameter; the enlarged height target value equals the current height × the enlargement parameter. The enlargement parameter is adjustable and defaults to 1.1.

when the calculated width target value exceeds the original width × the width upper-limit parameter, the final target value is taken as (the original width × the width upper-limit parameter); when the calculated height target value exceeds the original height × the height upper-limit parameter, the final target value is taken as (the original height × the height upper-limit parameter);
linearly transforming the size of the picture control from a current value to a target value;
meanwhile, the layer of the picture control of the anchor with the larger value is raised, so that it can occlude the picture control of the anchor with the smaller value.

The picture size of the anchor with the smaller value is transformed as follows:

calculating target values for the picture width and height: the reduced width target value equals the current width × the reduction parameter; the reduced height target value equals the current height × the reduction parameter. The reduction parameter is adjustable and defaults to 0.95.

when a calculated width or height target value is smaller than the original width or height × the lower-limit parameter, the final target value is taken as (the original width or height × the lower-limit parameter), that is, once the minimum size is reached the picture is not reduced any further;
linearly transforming the size of the picture control from a current value to a target value;
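A sketch of one clamped size-update step covering both the enlargement and the reduction cases described above; the upper-limit and lower-limit parameter values are assumptions, since the description leaves them configurable:

```python
def next_picture_size(width: float, height: float,
                      orig_w: float, orig_h: float,
                      enlarge: bool,
                      grow: float = 1.1, shrink: float = 0.95,
                      upper_limit: float = 1.5, lower_limit: float = 0.5):
    """One transformation step: scale the picture and clamp it to its limits."""
    factor = grow if enlarge else shrink
    target_w, target_h = width * factor, height * factor
    if enlarge:   # do not grow past the original size x the upper-limit parameter
        target_w = min(target_w, orig_w * upper_limit)
        target_h = min(target_h, orig_h * upper_limit)
    else:         # do not shrink below the original size x the lower-limit parameter
        target_w = max(target_w, orig_w * lower_limit)
        target_h = max(target_h, orig_h * lower_limit)
    return target_w, target_h   # the UI then animates linearly to these targets
```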
the transformation diagram is shown in fig. 3b and 3c, wherein in fig. 3b, the picture size U1 of the first anchor is larger than the picture size U2 of the second anchor; in fig. 3c, the picture size U1 of the first anchor is smaller than the picture size U1 of the second anchor. The main broadcast with high pitch value gradually enlarges the picture and shields the other main broadcast to obtain stronger exposure; the main broadcast with low pitch value has gradually smaller pictures and is partially shielded. The matching strategy can enable the anchor with similar pitches to be compared, and the pictures on the two sides can be alternately enlarged or reduced, thereby greatly enhancing the entertainment and the competitive feeling.
When the pitch values of the first anchor and the second anchor are equal, the sizes of the controls of the two sides of the picture are not changed.
And when the countdown of the timer of the server is 0, the server informs the clients of the two parties that the high-pitch spelling playing method is finished, the picture controls of the two parties recover to the original size, and the pitch spelling is finished.
In the above, when the anchor initiates pitch spelling, the server matches an opponent with a similar pitch according to the historical pitch data, and after the matching is successful, the server calculates the input sounds of the two parties to obtain respective pitch values, and sends the pitch values to the client. The client enlarges or reduces the main broadcasting picture on one side according to the pitch values of the client and the main broadcasting picture on the other side, improves the control level of the person with the large pitch value, realizes the effect of partially covering the picture of the person with the high pitch value, greatly enhances the entertainment of the game, enriches the experience of the spelling interaction between the main broadcasting pictures on the one hand, and enhances the watching experience of audiences on the other hand.
Referring to fig. 4, fig. 4 is a block diagram of a server according to an embodiment of the present invention, and as shown in fig. 4, the server 400 includes:
a first receiving module 401, configured to receive first request information sent by a first electronic device, where the first request information includes an identifier of a first user;
a first determining module 402, configured to determine a second user matching the first user;
a second receiving module 403, configured to receive first sound information sent by the first user through the first electronic device, and second sound information sent by the second user through the second electronic device;
a first sending module 404, configured to send a first pitch determined based on the first sound information and a second pitch determined based on the second sound information to the first electronic device and the second electronic device.
Further, the first determining module 402 includes:
the first obtaining submodule is used for obtaining a first historical pitch mean value of the first user;
the second obtaining submodule is used for obtaining a second historical pitch mean value of the candidate users in the matching queue;
a third obtaining submodule, configured to obtain a first pitch interval according to the first historical pitch mean value;
a first determining submodule, configured to determine the second user from the first to-be-matched user if the matching queue includes the first to-be-matched user, where the first to-be-matched user is a user whose second historical pitch mean is located in the first pitch interval.
Further, the server 400 further includes:
a second determining module, configured to determine a second pitch interval according to the first historical pitch average if the matching queue does not include the first user to be matched, where a minimum value of the second pitch interval is smaller than a minimum value of the first pitch interval, and a maximum value of the second pitch interval is larger than a maximum value of the first pitch interval;
and a third determining module, configured to determine, if the matching queue includes a second user to be matched, the second user from the second user to be matched, where the second user to be matched is a user whose second historical pitch mean is located in the second pitch interval.
Further, the third determining module includes:
a second determining submodule, configured to determine, if the matching queue includes a second user to be matched, an absolute value of a difference between a second historical pitch mean of each user in the second user to be matched and a first historical pitch mean of the first user;
a third determining submodule, configured to determine, according to the absolute value, a first score of each user in the second to-be-matched users;
a fourth determining submodule, configured to determine a second score of each of the second users to be matched according to a waiting duration of each of the second users to be matched in the matching queue;
a fifth determining submodule, configured to determine, according to the first score and the second score, a matching degree between each user of the second users to be matched and the first user;
and a sixth determining submodule, configured to determine, as the second user, the user with the largest matching degree from among the second users to be matched.
Further, the server 400 further includes:
and the second sending module is used for updating the matching queue if the matching queue does not comprise the first user to be matched.
Further, the server 400 further includes:
the third receiving module is used for receiving the first image sent by the first electronic device and the second image sent by the second electronic device;
and the third sending module is used for sending the first image to the second electronic equipment and sending the second image to the first electronic equipment.
The server 400 can implement each process implemented by the server in the embodiment of the method in fig. 1 and achieve the same beneficial effects, and in order to avoid repetition, the details are not described here.
Referring to fig. 5, fig. 5 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, the first electronic device 500 includes:
a first sending module 501, configured to send first request information to a server;
a second sending module 502, configured to send the collected first sound information to the server;
a receiving module 503, configured to receive a first pitch determined based on the first sound information and sent by the server, and a second pitch of a second user, where the second user is matched with a first user, and the first user is a user who sends the first sound information through the first electronic device;
an adjusting module 504, configured to adjust a first image and a second image displayed on the first electronic device under the condition that the first pitch and the second pitch are different, where the first image is an image captured by a first electronic device, and the second image is an image captured by a second electronic device.
a display module 505, configured to display the adjusted first image and the adjusted second image.
Further, the adjusting module includes:
a first adjustment submodule, configured to increase a display size of the first image and decrease a display size of the second image if the first pitch is larger than the second pitch;
a second adjustment submodule, configured to decrease the display size of the first image and increase the display size of the second image if the first pitch is smaller than the second pitch.
Further, the display module includes:
a first display submodule, configured to, when a first portion of the adjusted first image and a second portion of the adjusted second image are displayed to overlap, block the second portion with the first portion if the size of the adjusted first image is larger than that of the adjusted second image;
and a second display submodule, configured to, when the first portion of the adjusted first image and the second portion of the adjusted second image are displayed to overlap, block the first portion with the second portion if the size of the adjusted first image is smaller than that of the adjusted second image.
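Putting the adjustment submodules and the display submodules together, a minimal client-side sketch could look as follows; the scaling step, the ImageView data structure, and the function names are assumptions, the only behavior taken from the embodiment being that the image of the higher-pitch user is enlarged, the other is reduced, and the larger adjusted image blocks the smaller one where they overlap.

```python
# Illustrative sketch: enlarge the image of whichever user currently has the higher
# pitch, reduce the other, and draw the larger adjusted image on top where the two
# overlap. The step size and the z-index convention are assumptions.
from dataclasses import dataclass


@dataclass
class ImageView:
    width: int
    height: int
    z_index: int = 0  # a larger z_index is drawn on top


def adjust_sizes(first: ImageView, second: ImageView,
                 first_pitch: float, second_pitch: float, step: float = 0.1) -> None:
    if first_pitch > second_pitch:
        grow, shrink = first, second       # first pitch larger: enlarge first image
    elif first_pitch < second_pitch:
        grow, shrink = second, first       # first pitch smaller: enlarge second image
    else:
        return                             # equal pitches: no adjustment
    grow.width, grow.height = int(grow.width * (1 + step)), int(grow.height * (1 + step))
    shrink.width, shrink.height = int(shrink.width * (1 - step)), int(shrink.height * (1 - step))


def order_for_display(first: ImageView, second: ImageView) -> None:
    # Where the adjusted images overlap, the larger one blocks (is drawn over) the smaller one.
    if first.width * first.height > second.width * second.height:
        first.z_index, second.z_index = 1, 0
    else:
        first.z_index, second.z_index = 0, 1
```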
The first electronic device 500 can implement the processes implemented by the first electronic device in the embodiment of the method in fig. 2 and achieve the same beneficial effects, and for avoiding repetition, the details are not described here again.
Fig. 6 is a schematic diagram of a hardware structure of a server according to an embodiment of the present invention. As shown in fig. 6, the server includes a processor 61, a memory 62, and a program or instruction stored in the memory 62 and executable on the processor 61. When the program or instruction is executed by the processor 61, each process of the information processing method embodiment shown in fig. 1 is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 7, the electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The radio frequency unit 701 is configured to send first request information to a server; sending the collected first sound information to the server; receiving a first pitch determined based on the first sound information and sent by the server and a second pitch of a second user, wherein the second user is matched with a first user, and the first user is a user sending the first sound information through the first electronic equipment;
a processor 710, configured to adjust a first image and a second image displayed on the first electronic device when the first pitch and the second pitch are different, where the first image is an image captured by the first electronic device, and the second image is an image captured by the second electronic device;
a display unit 706 for displaying the adjusted first image and the adjusted second image.
Further, processor 710 is configured to increase a display size of the first image and decrease a display size of the second image if the first pitch is greater than the second pitch;
if the first pitch is smaller than the second pitch, reducing the display size of the first image and increasing the display size of the second image.
Further, the display unit 706 is configured to, when a first portion of the adjusted first image and a second portion of the adjusted second image are displayed to overlap, block the second portion with the first portion if a size of the adjusted first image is larger than a size of the adjusted second image;
and, when the first portion of the adjusted first image and the second portion of the adjusted second image are displayed to overlap, block the first portion with the second portion if the size of the adjusted first image is smaller than the size of the adjusted second image.
The electronic device 700 can implement the processes implemented by the first electronic device in the foregoing embodiments, and achieve the same technical effects, and for avoiding repetition, the details are not described here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used to receive and send signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 710 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output it as sound. The audio output unit 703 may also provide audio output related to a specific function performed by the electronic device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 for output.
The electronic device 700 also includes at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or the backlight when the electronic device 700 is moved close to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration-identification-related functions (such as a pedometer and tapping). The sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands returned by the processor 710. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the electronic apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 700 or may be used to transmit data between the electronic apparatus 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the whole electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The electronic device 700 may also include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption may be performed via the power management system.
In addition, the electronic device 700 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement the processes in the embodiment shown in fig. 2, and can achieve the same technical effects, and in order to avoid repetition, the details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the information processing method embodiment shown in fig. 1 or fig. 2, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (12)
1. An information processing method applied to a server is characterized by comprising the following steps:
receiving first request information sent by first electronic equipment, wherein the first request information comprises an identifier of a first user;
determining a second user matching the first user;
receiving first sound information sent by the first user through the first electronic device and second sound information sent by the second user through the second electronic device;
transmitting, to the first electronic device and the second electronic device, a first pitch determined based on the first sound information and a second pitch determined based on the second sound information.
2. The method of claim 1, wherein determining the second user matching the first user comprises:
acquiring a first historical pitch mean value of the first user;
acquiring a second historical pitch mean value of the candidate users in the matching queue;
acquiring a first pitch interval according to the first historical pitch mean value;
and if the matching queue comprises a first user to be matched, determining the second user from the first user to be matched, wherein the first user to be matched is the user with the second historical pitch mean value in the first pitch interval.
3. The method of claim 2, after said obtaining a first pitch interval from said first historical pitch mean, further comprising:
if the matching queue does not comprise the first user to be matched, determining a second pitch interval according to the first historical pitch average value, wherein the minimum value of the second pitch interval is smaller than the minimum value of the first pitch interval, and the maximum value of the second pitch interval is larger than the maximum value of the first pitch interval;
and if the matching queue comprises a second user to be matched, determining the second user from the second user to be matched, wherein the second user to be matched is the user with the second historical pitch mean value in the second pitch interval.
4. The method of claim 3, wherein if the matching queue includes a second user to be matched, determining the second user from the second user to be matched comprises:
if the matching queue comprises second users to be matched, determining the absolute value of the difference value between the second historical pitch mean value of each user in the second users to be matched and the first historical pitch mean value of the first user;
determining a first score of each user in the second users to be matched according to the absolute value;
determining a second score of each user in the second users to be matched according to the waiting time of each user in the second users to be matched in the matching queue;
determining the matching degree of each user in the second users to be matched with the first user according to the first score and the second score;
and determining the user with the maximum matching degree in the second users to be matched as the second user.
5. The method of claim 3, after said obtaining a first pitch interval from the first historical pitch mean, further comprising:
and if the matching queue does not comprise the first user to be matched, updating the matching queue.
6. The method of claim 1, wherein after said determining a second user matching said first user, further comprising:
receiving a first image sent by the first electronic device and a second image sent by the second electronic device;
and sending the first image to the second electronic equipment, and sending the second image to the first electronic equipment.
7. An information processing method applied to a first electronic device is characterized by comprising the following steps:
sending first request information to a server;
sending the collected first sound information to the server;
receiving a first pitch determined based on the first sound information and sent by the server and a second pitch of a second user, wherein the second user is matched with a first user, and the first user is a user sending the first sound information through the first electronic equipment;
adjusting a first image and a second image displayed on the first electronic device under the condition that the first pitch and the second pitch are different, wherein the first image is an image acquired by a first electronic device, the second image is an image acquired by a second electronic device, and the second electronic device corresponds to the second user;
and displaying the adjusted first image and the adjusted second image.
8. The method of claim 7, wherein adjusting the first image and the second image displayed on the first electronic device if the first pitch and the second pitch are different comprises:
if the first pitch is larger than the second pitch, increasing the display size of the first image and reducing the display size of the second image;
if the first pitch is smaller than the second pitch, reducing the display size of the first image and increasing the display size of the second image.
9. The method of claim 7, wherein displaying the adjusted first image and the adjusted second image comprises:
under the condition that a first part of the adjusted first image and a second part of the adjusted second image are displayed to overlap, blocking the second part with the first part if the size of the adjusted first image is larger than that of the adjusted second image;
and under the condition that the first part of the adjusted first image and the second part of the adjusted second image are displayed to overlap, blocking the first part with the second part if the size of the adjusted first image is smaller than that of the adjusted second image.
10. A server, characterized by comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information processing method according to any one of claims 1 to 6.
11. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information processing method according to any one of claims 7 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, realizes the steps of an information processing method according to one of claims 1 to 6, or which computer program, when being executed by a processor, realizes the steps of an information processing method according to one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011238753.8A CN112337088B (en) | 2020-11-09 | 2020-11-09 | Information processing method, server, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112337088A true CN112337088A (en) | 2021-02-09 |
CN112337088B CN112337088B (en) | 2023-07-14 |
Family
ID=74428608
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011238753.8A Active CN112337088B (en) | 2020-11-09 | 2020-11-09 | Information processing method, server, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112337088B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114201095A (en) * | 2021-12-14 | 2022-03-18 | 广州博冠信息科技有限公司 | Control method and device for live interface, storage medium and electronic equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108010541A (en) * | 2017-12-14 | 2018-05-08 | 广州酷狗计算机科技有限公司 | Method and device, the storage medium of pitch information are shown in direct broadcasting room |
CN108900920A (en) * | 2018-07-20 | 2018-11-27 | 广州虎牙信息科技有限公司 | A kind of live streaming processing method, device, equipment and storage medium |
CN109587509A (en) * | 2018-11-27 | 2019-04-05 | 广州市百果园信息技术有限公司 | Live-broadcast control method, device, computer readable storage medium and terminal |
US20190147841A1 (en) * | 2017-11-13 | 2019-05-16 | Facebook, Inc. | Methods and systems for displaying a karaoke interface |
CN110324652A (en) * | 2019-07-31 | 2019-10-11 | 广州华多网络科技有限公司 | Game interaction method and system, electronic equipment and the device with store function |
CN110718239A (en) * | 2019-10-15 | 2020-01-21 | 北京达佳互联信息技术有限公司 | Audio processing method and device, electronic equipment and storage medium |
CN111526406A (en) * | 2020-03-31 | 2020-08-11 | 广州酷狗计算机科技有限公司 | Live broadcast interface display method and device, terminal and storage medium |
CN111698567A (en) * | 2020-06-22 | 2020-09-22 | 北京达佳互联信息技术有限公司 | Game fighting method and device for live broadcast room |
CN111881940A (en) * | 2020-06-29 | 2020-11-03 | 广州华多网络科技有限公司 | Live broadcast and live broadcast matching method and device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
尹俊 (Yin Jun): "关于游戏内的匹配机制" (On in-game matchmaking mechanisms) *
Also Published As
Publication number | Publication date |
---|---|
CN112337088B (en) | 2023-07-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||