CN114866828B - Audio and video playing method and device, server, storage medium and product - Google Patents


Info

Publication number
CN114866828B
CN114866828B (application CN202210299357.9A)
Authority
CN
China
Prior art keywords
playing
audio
intelligent
video
user
Prior art date
Legal status
Active
Application number
CN202210299357.9A
Other languages
Chinese (zh)
Other versions
CN114866828A (en)
Inventor
Li Yongsong (李永松)
Current Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd and Haier Smart Home Co Ltd
Priority claimed from CN202210299357.9A
Publication of CN114866828A
Application granted
Publication of CN114866828B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43078Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen for seamlessly watching content streams when changing device, e.g. when watching the same program sequentially on a TV and then on a tablet
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content

Abstract

The application discloses an audio/video playing method and apparatus, a server, a storage medium, and a program product in the field of smart-home technology. The method comprises: monitoring whether audio/video playback information reported by a first smart device is received; if so, determining the user identifier associated with the first smart device and storing the playback information in association with that identifier; receiving a cross-device play request sent by the first smart device, the request being triggered by the user through a cross-device play voice command, and determining a corresponding second smart device from among the smart devices associated with the user identifier; and sending a play instruction together with the corresponding playback information to the second smart device, which then plays the audio/video accordingly. By acting on a voice-triggered cross-device play request, playback is handed off between different devices and resumed reasonably and effectively.

Description

Audio and video playing method and device, server, storage medium and product
Technical Field
This application relates to smart-home technology, and in particular to an audio/video playing method and apparatus, a server, a storage medium, and a program product.
Background
In recent years, advances in information and control technology have raised expectations for everyday devices, and "stepping into a new era of intelligence" has become a theme of daily life. Smart homes, a product of Internet and artificial-intelligence development, are increasingly popular: they automate and intelligently control household equipment, and they represent the way people now pursue a comfortable, convenient, and safe home life.
In the era of the Internet of Things, many households own several smart devices, but these devices are not interconnected: some audio/video resources can only be played independently on one device and cannot be transferred to another designated device to continue playing. For example, a user watching a television series on television A in the living room who wants to continue on television B in the bedroom must manually start the series on television B and manually seek to the position already watched.
At present there is no cross-device playback scheme that transfers a series from television A in the living room to television B in the bedroom and resumes it there, which degrades the user's viewing experience.
Disclosure of Invention
The invention provides an audio/video playing method and apparatus, a server, a storage medium, and a program product to solve the prior-art problem that playback cannot be effectively resumed across devices.
In a first aspect, the present invention provides an audio/video playing method, including:
monitoring whether audio/video playback information reported by a first smart device is received;
if so, determining a user identifier associated with the first smart device, and storing the playback information in association with the user identifier;
receiving a cross-device play request sent by the first smart device, and determining a corresponding second smart device from among a plurality of smart devices associated with the user identifier, wherein the play request is triggered by the user through a cross-device play voice command; and
sending a play instruction and the corresponding playback information to the second smart device, so that the second smart device plays the corresponding audio/video according to the play instruction and the playback information.
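The four steps of the first aspect can be sketched as a minimal server-side flow. This is an illustrative reading of the claim, not the patented implementation: the class name `PlaybackServer`, the method names, and the dictionary-based stores are all hypothetical.

```python
class PlaybackServer:
    """Hypothetical sketch of the claimed server-side flow (first aspect)."""

    def __init__(self, device_to_user, user_devices):
        self.device_to_user = device_to_user  # device id -> user identifier
        self.user_devices = user_devices      # user identifier -> set of device ids
        self.playback_state = {}              # user identifier -> latest playback info

    def on_playback_report(self, device_id, playback_info):
        # Steps 1-2: monitor for a report, resolve the associated user,
        # and store the playback information in association with that user.
        user_id = self.device_to_user.get(device_id)
        if user_id is None:
            return False
        self.playback_state[user_id] = playback_info
        return True

    def on_cross_device_request(self, source_device_id, target_device_id):
        # Steps 3-4: resolve the second device among the same user's devices
        # and build the play command carrying the stored playback information.
        user_id = self.device_to_user.get(source_device_id)
        if (user_id is None
                or target_device_id not in self.user_devices.get(user_id, ())
                or user_id not in self.playback_state):
            return None
        return {"command": "play", **self.playback_state[user_id]}
```

A report from the living-room TV followed by a voice-triggered request would thus yield a play command for the bedroom TV carrying the latest stored progress.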
In a second aspect, the present invention provides an audio/video playing apparatus, including:
a monitoring unit, configured to monitor whether audio/video playback information reported by a first smart device is received;
a determining unit, configured to, if so, determine a user identifier associated with the first smart device and store the playback information in association with the user identifier;
a receiving unit, configured to receive a cross-device play request sent by the first smart device and determine a corresponding second smart device from among a plurality of smart devices associated with the user identifier, wherein the play request is triggered by the user through a cross-device play voice command; and
a sending unit, configured to send a play instruction and the corresponding playback information to the second smart device, so that the second smart device plays the corresponding audio/video according to the play instruction and the playback information.
In a third aspect, the present invention provides a server, comprising a processor, a memory, and a transceiver;
the processor, the memory, and the transceiver are interconnected by circuitry;
the memory stores computer-executable instructions;
the transceiver transmits and receives data and requests; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method described in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the method according to the first aspect.
In a fifth aspect, the invention provides a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect.
The invention thus provides an audio/video playing method and apparatus, a server, a storage medium, and a program product. The method monitors whether audio/video playback information reported by a first smart device is received; if so, it determines the user identifier associated with the first smart device and stores the playback information in association with that identifier; it receives a cross-device play request sent by the first smart device, triggered by the user through a cross-device play voice command, and determines a corresponding second smart device from among the smart devices associated with the user identifier; and it sends a play instruction and the corresponding playback information to the second smart device, which plays the audio/video accordingly. By acting on a voice-triggered cross-device play request, playback is resumed across different devices reasonably and effectively.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain its principles.
To illustrate the embodiments of the present application or the prior-art solutions more clearly, the drawings needed in their description are briefly introduced below; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a network architecture of an audio/video playing method provided by the present invention;
Fig. 2 is a flowchart of an audio/video playing method according to a first embodiment of the present invention;
Fig. 3 is a flowchart of an audio/video playing method according to a second embodiment of the present invention;
Fig. 4 is a flowchart of an audio/video playing method according to a third embodiment of the present invention;
Fig. 5 is a flowchart of an audio/video playing method according to a fourth embodiment of the present invention;
Fig. 6 is a flowchart of an audio/video playing method according to a seventh embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an audio/video playing apparatus according to an embodiment of the present invention;
Fig. 8 is a first block diagram of a server for implementing an audio/video playing method according to an embodiment of the present invention;
Fig. 9 is a second block diagram of a server for implementing an audio/video playing method according to an embodiment of the present invention.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
To help those skilled in the art better understand the solutions of the present application, the embodiments are described in detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by those of ordinary skill in the art without inventive effort fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For a clear understanding of the technical solutions of the present application, the prior-art solutions are first described in detail.
As noted above, households in the Internet-of-Things era often own several smart devices that are not interconnected: some audio/video resources can only be played independently on one device and cannot be transferred to another designated device to continue playing. A user watching a series on living-room television A who wants to continue on bedroom television B must manually start the series on television B and manually seek to the watched position. One proposed alternative is a "follow" mode of the home terminal: the user's position is determined through the home terminal's camera, and when the user moves away from the device playing the video and approaches another device, a processing center activates that other device to play the program.
The manual approach requires the user to restart the series by hand, which can hardly be called resumed playback and hurts the viewing experience. The camera-based follow mode, which activates the device nearest the user, can in principle resume playback across devices, but it must film the user with a camera, which risks leaking the user's privacy. Moreover, whether a device should be used is the user's own decision: approaching a device does not mean the user wants it switched on. Deciding cross-device playback by distance alone is therefore unreasonable and does not effectively solve the problem of resumed playback across devices.
Therefore, addressing the prior-art problem that playback cannot be effectively resumed across devices, the inventor found through research that the server can monitor whether audio/video playback information reported by a first smart device is received; if so, determine the user identifier associated with the first smart device and store the playback information in association with it; receive a cross-device play request sent by the first smart device; determine a corresponding second smart device from among the smart devices associated with the user identifier; and send a play instruction and the corresponding playback information to the second smart device, which resumes playback accordingly. By acting on a cross-device play request triggered by the user's voice command, playback is resumed across devices reasonably and effectively.
Based on this insight, the inventor proposes the technical solution of the embodiments of the invention. The network architecture and application scenario of the audio/video playing method provided by the embodiments are described below.
As shown in Fig. 1, the network architecture for the audio/video playing method comprises a first smart device 1, a second smart device 2, and a server 3, with the server 3 communicatively connected to the first smart device 1 and the second smart device 2 respectively. The first smart device 1 may, for example, be a bedroom smart TV and the second smart device 2 a living-room smart TV. When the first smart device 1 starts playing audio/video, it sends the playback information to the server 3. The server 3 monitors whether playback information reported by the first smart device 1 is received; if so, it determines the user identifier associated with the first smart device 1 and stores the playback information in association with that identifier. When the user speaks a cross-device play voice command, the first smart device 1 sends a cross-device play request. The server 3 receives the request, determines the corresponding second smart device 2 from among the smart devices associated with the user identifier, and sends a play instruction together with the corresponding playback information to the second smart device 2, which plays the audio/video accordingly. Cross-device resumed playback between different devices is thus achieved through a voice-triggered cross-device play request.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Embodiment 1
Fig. 2 is a flowchart of an audio/video playing method according to the first embodiment of the present invention. As shown in Fig. 2, the method is executed by an audio/video playing apparatus located in a server, and comprises the following steps:
step 101, monitoring whether audio and video playing information reported by the first intelligent equipment is received.
In this embodiment, the server is connected to the user's several smart devices, which the user can control by voice, through a mobile terminal, or with a remote control. For example, the user can instruct a smart TV by voice to play a television series; the smart TV then reports the series' playback information, comprising an audio/video identifier and playback-progress information, to the server. To keep this information synchronized, the smart device reports the playback information to the server at a preset interval, which may be set to 30 s. For ease of later distinction, the smart device that reports playback information to the server is called the first smart device. The server monitors whether playback information reported by the first smart device is received.
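The periodic reporting described above might look like the following device-side sketch, with the 30-second interval the embodiment suggests. The player interface and payload field names are assumptions for illustration.

```python
import json
import time

REPORT_INTERVAL_S = 30  # preset reporting interval suggested by the embodiment

def build_report(device_id, media_id, position_s):
    # Playback information: an audio/video identifier plus progress.
    return {"device_id": device_id, "media_id": media_id, "position_s": position_s}

def report_loop(player, send, interval_s=REPORT_INTERVAL_S):
    # While playback continues, push the current state to the server so the
    # server's stored progress stays synchronized with the device.
    while player.is_playing():
        payload = build_report(player.device_id, player.media_id, player.position())
        send(json.dumps(payload))
        time.sleep(interval_s)
```

In a test or simulation, `interval_s` can be set to 0 and `send` replaced with any callable that delivers the JSON payload to the server.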
Step 102: if so, determine the user identifier associated with the first smart device, and store the playback information in association with the user identifier.
In this embodiment, if playback information reported by the first smart device is received, the information must be tied to a user: the user identifier associated with the first smart device is determined, and the reported playback information is stored in association with that identifier, keeping the first smart device and the server synchronized.
Step 103: receive a cross-device play request sent by the first smart device, and determine a corresponding second smart device from among a plurality of smart devices associated with the user identifier, the play request being triggered by the user through a cross-device play voice command.
In this embodiment, the user can issue a cross-device play command to a smart device by voice. Suppose the user's home has a smart TV in the bedroom and another in the living room, and the user is watching a series on the living-room TV, which is reporting playback information to the server. Wanting to continue the series in the bedroom, the user says to the living-room TV, for example, "continue playing this series on the bedroom TV". The living-room TV, which here is the first smart device, then sends a cross-device play request to the server according to the user's voice command. The server receives the request and determines the corresponding second smart device, namely the device designated by the user (the bedroom smart TV in this example), from among the smart devices associated with the user identifier.
Step 104: send a play instruction and the corresponding playback information to the second smart device, so that the second smart device plays the corresponding audio/video according to the play instruction and the playback information.
In this embodiment, a play instruction and the corresponding playback information are sent to the second smart device, where the corresponding playback information is the latest information reported by the first smart device; since the first smart device reports its playback information at the preset interval while playing, its state and the server's stay synchronized. The second smart device receives the play instruction and the corresponding playback information from the server and plays the corresponding audio/video accordingly.
In this embodiment, the server monitors whether playback information reported by the first smart device is received; if so, it determines the associated user identifier and stores the playback information in association with it; it then receives a cross-device play request from the first smart device, determines the corresponding second smart device from among the devices associated with the user identifier, and sends a play instruction and the corresponding playback information to the second smart device, which resumes playback accordingly. Through this interaction between the smart devices and the server, a cross-device play request triggered by the user's voice command reasonably and effectively resumes playback across devices.
Embodiment 2
Fig. 3 is a flowchart of an audio/video playing method according to the second embodiment of the present invention. As shown in Fig. 3, building on the first embodiment, the determination in step 102 of the user identifier associated with the first smart device is refined into the following steps:
step 1021, obtaining the identifier corresponding to the first intelligent device.
In this embodiment, the server locally stores user identifiers and the identifiers of the smart devices associated with each user; after receiving the information reported by the first smart device, the server obtains the identifier corresponding to that device.
Step 1022: among the locally stored user identifiers, find the one that has a mapping relation with the identifier of the first smart device, and take it as the user identifier associated with the first smart device.
In this embodiment, each smart device in a home is associated with a user identifier, for example the user's mobile-phone number, and the mapping between user identifiers and the identifiers of the corresponding smart devices is stored in advance on the server. The user identifier associated with the first smart device is determined from this mapping: specifically, the server searches the locally stored user identifiers for the one mapped to the identifier of the first smart device and takes it as the user identifier associated with the first smart device.
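The lookup in steps 1021 and 1022 amounts to a query against the pre-stored mapping. A minimal sketch follows; the in-memory dictionary is an illustrative stand-in (a real server would presumably use a database), and the phone-number user identifier follows the embodiment's example.

```python
# Hypothetical pre-stored mapping from device identifiers to user identifiers.
device_user_map = {
    "tv_living": "13800000000",   # user identifier, e.g. a mobile-phone number
    "tv_bedroom": "13800000000",
}

def find_associated_user(device_id, mapping=device_user_map):
    # Search the stored mapping for the user identifier tied to this device;
    # return None if the device is not registered to any user.
    return mapping.get(device_id)
```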
Embodiment 3
Fig. 4 is a flowchart of an audio/video playing method according to the third embodiment of the present invention. As shown in Fig. 4, building on the first embodiment, the determination in step 103 of the corresponding second smart device from among the plurality of smart devices associated with the user identifier is refined into the following steps:
step 1031, analyzing a cross-device playing request sent by the first intelligent device, and obtaining an identifier corresponding to a designated device corresponding to the cross-device playing request.
In this embodiment, the cross-device play request sent by the first smart device is parsed to obtain the identifier of the designated device, that is, the smart device specified by the user.
Step 1032: among the plurality of smart devices associated with the user identifier, find the one matching the identifier of the designated device, and take it as the corresponding second smart device.
In this embodiment, each smart device in the home is associated with a user identifier, for example the user's mobile-phone number, and the mapping between user identifiers and device identifiers is stored in advance on the server; the smart device matching the identifier of the designated device is determined from that identifier and the mapping, and is taken as the corresponding second smart device.
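Steps 1031 and 1032 can be sketched as extracting the designated device from the parsed request and matching it against the user's device list. The request shape and the field name `designated_device_id` are assumptions for illustration.

```python
def resolve_second_device(request, user_device_ids):
    # Step 1031: the parsed cross-device play request carries the identifier
    # of the device the user designated in the voice command.
    designated = request.get("designated_device_id")
    # Step 1032: only a device actually associated with the user identifier
    # qualifies as the second smart device.
    if designated in user_device_ids:
        return designated
    return None
```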
Example IV
Fig. 5 is a flow chart of an audio/video playing method according to a fourth embodiment of the present invention, as shown in fig. 5, based on the audio/video playing method according to the first embodiment of the present invention, step 104 is further refined, and specifically includes the following steps:
step 1041, searching the latest audio/video identifier and playing progress information associated with the user identifier from the locally stored audio/video playing information.
In this embodiment, while playing audio/video, the first intelligent device reports the playing information of the audio/video being played to the server at preset intervals. The audio/video playing information includes an audio/video identifier and playing progress information: the audio/video identifier is the identifier corresponding to the audio/video resource, each resource having a unique identifier, and the playing progress information may be the playback time. The server stores the audio/video playing information in association with the user identifier. When cross-device playing is performed, the server searches the locally stored audio/video playing information for the latest audio/video identifier and playing progress information associated with the user identifier corresponding to the first intelligent device.
Step 1042, the playing instruction, the latest audio/video identifier and the playing progress information are sent to the second intelligent device.
In this embodiment, the server sends the playing instruction, the latest audio/video identifier and the playing progress information to the corresponding second intelligent device. The second intelligent device receives them, determines the audio/video resource to play from the latest audio/video identifier and the playing progress information, and plays the corresponding audio/video. It should be noted that the second intelligent device must be in a powered-on or standby state to play the audio/video; if it is powered off, the audio/video cannot be played. After playing the audio/video, the second intelligent device reports its audio/video playing information to the server; at that point it acts as the first intelligent device of the above embodiments. To distinguish the two roles, "first intelligent device" refers to the device that reports audio/video playing information and sends the cross-device play request, while "second intelligent device" refers to the device that receives the playing instruction and the corresponding audio/video playing information. The same intelligent device can both receive and send such information, and is called the first or second intelligent device only to distinguish these cases.
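Steps 1041 and 1042 can be sketched as follows, assuming each periodic report is stored as a record with a server receipt time used to pick the latest one (the record layout and field names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class PlayRecord:
    user_id: str      # user identifier the report was stored under
    av_id: str        # unique identifier of the audio/video resource
    position_s: int   # playing progress information, here a playback time in seconds
    reported_at: int  # server receipt time, used to pick the latest report

def build_play_instruction(records, user_id):
    """Find the latest audio/video identifier and playing progress associated
    with the user identifier, and package them with a play instruction for
    the second intelligent device."""
    mine = [r for r in records if r.user_id == user_id]
    if not mine:
        return None
    latest = max(mine, key=lambda r: r.reported_at)
    return {"command": "play", "av_id": latest.av_id, "position_s": latest.position_s}
```

Because reports arrive at preset intervals, selecting the record with the greatest receipt time yields the most recent playback position, so the second device resumes close to where the first device left off.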
Example five
On the basis of the audio/video playing method provided by the first embodiment of the present invention, before determining the corresponding second intelligent device from the plurality of intelligent devices associated with the user identifier in step 103, the method further includes the following steps:
step 103a, obtaining user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device.
In this embodiment, when a user issues a cross-device playing voice command, the first intelligent device collects the user voiceprint information corresponding to that command. The carrier of a voiceprint is speech: voiceprint recognition converts the speech signal into an electrical signal, extracts features, builds a model, and performs recognition and judgment according to the degree of match. The first intelligent device collects the user's speech and thereby obtains the voiceprint information.
Step 103b, determining whether the user has cross-device playing authority according to the voiceprint information of the user; if yes, go to step 103.
In this embodiment, whether the user has cross-device playing authority is determined according to the user voiceprint information sent by the first intelligent device. If the user is determined to have the authority, the corresponding second intelligent device is determined from the plurality of intelligent devices associated with the user identifier; if the user is determined not to have the authority, the cross-device play request sent by the first intelligent device is rejected.
Example six
On the basis of the audio/video playing method provided in the fifth embodiment of the present invention, step 103b is further refined, and specifically includes the following steps:
step 103b1, obtaining preset voiceprint information corresponding to the user identifier, and matching the user voiceprint information corresponding to the cross-equipment playing voice instruction acquired by the first intelligent equipment with the preset voiceprint information.
In this embodiment, the server locally pre-stores voiceprint information, acquires preset voiceprint information corresponding to the user identifier, matches user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device with the preset voiceprint information, and determines whether the user has cross-device playing permission according to a voiceprint information matching result.
Step 103b2, if the user voiceprint information corresponding to the cross-device playing voice command acquired by the first intelligent device is matched with the preset voiceprint information, determining that the user has the cross-device playing authority.
In this embodiment, if the user voiceprint information corresponding to the cross-device playing voice command collected by the first intelligent device matches the preset voiceprint information, the user is a designated user, may control multiple intelligent devices, and may instruct another intelligent device to continue playing the audio/video currently being played.
Step 103b3, if the user voiceprint information corresponding to the cross-device playing voice command acquired by the first intelligent device is not matched with the preset voiceprint information, determining that the user does not have the cross-device playing authority.
In this embodiment, if the user voiceprint information corresponding to the cross-device playing voice command collected by the first intelligent device does not match the preset voiceprint information, the user is not a designated user and cannot control multiple intelligent devices or instruct another intelligent device to continue playing the audio/video currently being played. For example, the non-designated users may include children: the adults are set as designated users, while the children have no authority to control the intelligent devices.
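The matching in steps 103b1 to 103b3 can be sketched as follows, assuming the voiceprints have already been reduced to fixed-length feature vectors and that a cosine-similarity threshold stands in for the matching degree. Both are assumptions for illustration; the application does not specify the matcher.

```python
import math

def has_cross_device_permission(user_vp, preset_vp, threshold=0.8):
    """Return True if the collected user voiceprint features match the preset
    voiceprint features closely enough (cosine similarity >= threshold)."""
    dot = sum(a * b for a, b in zip(user_vp, preset_vp))
    norm_user = math.sqrt(sum(a * a for a in user_vp))
    norm_preset = math.sqrt(sum(b * b for b in preset_vp))
    if norm_user == 0 or norm_preset == 0:
        # An empty or silent sample cannot match any preset voiceprint.
        return False
    return dot / (norm_user * norm_preset) >= threshold
```

With this sketch, an identical feature vector grants permission while a dissimilar one is rejected, mirroring the designated-user versus non-designated-user cases above.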
Example seven
Fig. 6 is a flow chart of an audio/video playing method according to a seventh embodiment of the present invention, as shown in fig. 6, on the basis of the audio/video playing method according to the first embodiment of the present invention, after step 104, the method further includes:
Step 105, receiving play prompt information fed back by the second intelligent device.
In this embodiment, the second intelligent device plays the corresponding audio/video according to the playing instruction and the corresponding audio/video playing information. Whether or not playback succeeds, the second intelligent device feeds back play prompt information to the server; the server receives it and determines, according to the prompt information, whether to send a stop-playing instruction to the first intelligent device.
Step 106, if the play prompt information indicates successful playback, sending a stop-playing instruction to the first intelligent device so that the first intelligent device stops playing the corresponding audio/video according to the stop-playing instruction.
In this embodiment, if the play prompt information indicates successful playback, the server sends a stop-playing instruction to the first intelligent device, and the first intelligent device stops playing the corresponding audio/video accordingly. At this point the server by default treats the original first intelligent device as the second intelligent device, and the second intelligent device now playing the audio/video as the first intelligent device.
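Steps 105 and 106, including the role swap after a successful handoff, can be sketched as follows (the message fields and the "success" status value are illustrative assumptions):

```python
def handle_play_prompt(prompt, first_dev, second_dev):
    """If the play prompt reports successful playback, produce a stop-playing
    instruction for the first device and swap the first/second roles;
    otherwise send nothing and keep the roles unchanged."""
    if prompt.get("status") == "success":
        stop_instruction = {"command": "stop", "target": first_dev}
        # After the handoff, the device that now plays becomes the "first"
        # device (it will report playing information), and vice versa.
        return stop_instruction, (second_dev, first_dev)
    return None, (first_dev, second_dev)
```

Swapping the tuple means a later cross-device request from the new playing device is handled by exactly the same logic, which matches the observation above that the same intelligent device takes either role depending on the case.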
Fig. 7 is a schematic structural diagram of an audio/video playing device according to an embodiment of the present invention, as shown in fig. 7, an audio/video playing device 200 provided in this embodiment includes a monitoring unit 201, a determining unit 202, a receiving unit 203, and a transmitting unit 204.
The monitoring unit 201 is configured to monitor whether audio/video playing information reported by the first intelligent device is received. The determining unit 202 is configured to, if it is received, determine a user identifier associated with the first intelligent device and store the audio/video playing information in association with the user identifier. The receiving unit 203 is configured to receive a cross-device play request sent by the first intelligent device and determine a corresponding second intelligent device from the plurality of intelligent devices associated with the user identifier, where the play request is triggered by the user based on a cross-device playing voice command. The sending unit 204 is configured to send the playing instruction and the corresponding audio/video playing information to the second intelligent device, so that the second intelligent device plays the corresponding audio/video according to the playing instruction and the corresponding audio/video playing information.
Optionally, the determining unit is further configured to obtain an identifier corresponding to the first intelligent device; searching a user identifier with a mapping relation with the identifier corresponding to the first intelligent device in the locally stored user identifiers, and determining the user identifier as the user identifier associated with the first intelligent device.
Optionally, the receiving unit is further configured to parse the cross-device play request sent by the first intelligent device, and obtain an identifier corresponding to a designated device corresponding to the cross-device play request; and searching the intelligent device matched with the identification corresponding to the designated device from a plurality of intelligent devices associated with the user identification, and determining the intelligent device as a corresponding second intelligent device.
Optionally, the sending unit is further configured to search for a latest audio/video identifier and playing progress information associated with the user identifier from locally stored audio/video playing information; and sending the playing instruction, the latest audio and video identification and the playing progress information to the second intelligent device.
Optionally, the determining unit is further configured to obtain user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device; determining whether the user has cross-equipment playing authority according to the voiceprint information of the user; if yes, the step of determining a corresponding second intelligent device from a plurality of intelligent devices associated with the user identification is executed.
Optionally, the determining unit is further configured to obtain preset voiceprint information corresponding to the user identifier, and match user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device with the preset voiceprint information; if the user voiceprint information corresponding to the cross-equipment playing voice instruction acquired by the first intelligent equipment is matched with the preset voiceprint information, determining that the user has cross-equipment playing authority; if the user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device is not matched with the preset voiceprint information, determining that the user does not have the cross-device playing authority.
Optionally, the receiving unit is further configured to receive the play prompt information fed back by the second intelligent device. The sending unit is further configured to, if the play prompt information indicates successful playback, send a stop-playing instruction to the first intelligent device, so that the first intelligent device stops playing the corresponding audio/video according to the stop-playing instruction.
Fig. 8 is a first block diagram of a server for implementing an audio/video playing method according to an embodiment of the present invention. As shown in Fig. 8, the server 300 includes: a memory 301, a processor 302 and a transceiver 303.
The processor 302, the memory 301 and the transceiver 303 are interconnected by circuitry;
a transceiver 303 for transceiving data and requests;
memory 301 stores computer-executable instructions;
processor 302 executes computer-executable instructions stored in memory 301, causing the processor to perform the methods provided by any of the embodiments described above.
Fig. 9 is a second block diagram of a server for implementing an audio/video playing method according to an embodiment of the present invention, and as shown in fig. 9, the server may be a computer, a digital broadcasting terminal, a messaging device, a tablet device, a personal digital assistant, a server cluster, or the like.
Server 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the server 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the server 800. Examples of such data include instructions for any application or method operating on server 800, contact data, phonebook data, messages, pictures, video, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the server 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the server 800.
The multimedia component 808 includes a screen between the server 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the server 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the server 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects for the server 800. For example, the sensor component 814 may detect an on/off state of the server 800, a relative positioning of components, such as a display and keypad of the server 800, the sensor component 814 may also detect a change in position of the server 800 or a component of the server 800, the presence or absence of a user's contact with the server 800, an orientation or acceleration/deceleration of the server 800, and a change in temperature of the server 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the server 800 and other devices, either wired or wireless. The server 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the server 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein computer-executable instructions for performing the method of any one of the above embodiments by a processor.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program for executing the method of any of the above embodiments by a processor.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An audio/video playing method, which is characterized by comprising the following steps:
monitoring whether audio and video playing information reported by the first intelligent equipment is received or not;
if yes, determining a user identifier associated with the first intelligent equipment, and storing the audio and video playing information and the user identifier in an associated manner;
receiving a cross-device playing request sent by a first intelligent device, and determining a corresponding second intelligent device from a plurality of intelligent devices associated with the user identifier, wherein the playing request is triggered by a user based on a cross-device playing voice instruction;
sending a playing instruction and corresponding audio/video playing information to the second intelligent device so that the second intelligent device plays corresponding audio/video according to the playing instruction and the corresponding audio/video playing information;
after the playing instruction and the corresponding audio/video playing information are sent to the second intelligent device, the method further comprises:
receiving playing prompt information fed back by the second intelligent equipment;
if the play prompt information indicates successful playback, sending a stop-playing instruction to the first intelligent device, so that the first intelligent device stops playing the corresponding audio/video according to the stop-playing instruction, defaulting the first intelligent device to be the second intelligent device, and defaulting the second intelligent device playing the audio/video to be the first intelligent device.
2. The method of claim 1, wherein the determining the user identification associated with the first smart device comprises:
acquiring an identifier corresponding to a first intelligent device;
searching a user identifier with a mapping relation with the identifier corresponding to the first intelligent device in the locally stored user identifiers, and determining the user identifier as the user identifier associated with the first intelligent device.
3. The method of claim 1, wherein the determining a corresponding second smart device from the plurality of smart devices associated with the user identification comprises:
analyzing a cross-device playing request sent by a first intelligent device, and acquiring an identifier corresponding to a designated device corresponding to the cross-device playing request;
and searching the intelligent equipment matched with the identification corresponding to the appointed equipment from a plurality of intelligent equipment associated with the user identification, and determining the intelligent equipment as a corresponding second intelligent equipment.
4. The method of claim 1, wherein the audiovisual playback information comprises: audio and video identification and playing progress information;
the sending the playing instruction and the corresponding audio/video playing information to the second intelligent device includes:
searching the latest audio and video identifications and playing progress information associated with the user identifications from locally stored audio and video playing information;
and sending the playing instruction, the latest audio and video identification and the playing progress information to the second intelligent device.
5. The method of claim 1, wherein prior to determining the corresponding second smart device from the plurality of smart devices associated with the user identification, further comprising:
acquiring user voiceprint information corresponding to a cross-equipment playing voice instruction acquired by a first intelligent device;
determining whether the user has cross-equipment playing authority according to the voiceprint information of the user;
and if yes, executing the step of determining the corresponding second intelligent equipment from the plurality of intelligent equipment associated with the user identification.
6. The method of claim 5, wherein determining whether the user has cross-device playback rights based on the user voiceprint information comprises:
acquiring preset voiceprint information corresponding to the user identifier, and matching the user voiceprint information corresponding to the cross-equipment playing voice instruction acquired by the first intelligent equipment with the preset voiceprint information;
if the user voiceprint information corresponding to the cross-equipment playing voice instruction acquired by the first intelligent equipment is matched with the preset voiceprint information, determining that the user has cross-equipment playing authority;
if the user voiceprint information corresponding to the cross-device playing voice instruction acquired by the first intelligent device is not matched with the preset voiceprint information, determining that the user does not have the cross-device playing authority.
7. An audio/video playback apparatus, the apparatus comprising:
the monitoring unit is used for monitoring whether audio and video playing information reported by the first intelligent equipment is received or not;
the determining unit is used for, if the audio/video playing information is received, determining a user identifier associated with the first intelligent device and storing the audio/video playing information in association with the user identifier;
the receiving unit is used for receiving a cross-device playing request sent by the first intelligent device, and determining a corresponding second intelligent device from a plurality of intelligent devices associated with the user identifier, wherein the playing request is triggered by the user based on a cross-device playing voice instruction;
the sending unit is used for sending the playing instruction and the corresponding audio/video playing information to the second intelligent equipment so that the second intelligent equipment plays the corresponding audio/video according to the playing instruction and the corresponding audio/video playing information;
the receiving unit is further used for receiving playing prompt information fed back by the second intelligent device;
the sending unit is further configured to, if the play prompt information indicates successful playback, send a stop-playing instruction to the first intelligent device, so that the first intelligent device stops playing the corresponding audio/video according to the stop-playing instruction, defaulting the first intelligent device to be the second intelligent device and the second intelligent device playing the audio/video to be the first intelligent device.
8. A server, comprising: a processor, a memory, and a transceiver;
a processor, memory, and transceiver circuitry interconnect;
a transceiver for transceiving data and requests;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1 to 6.
9. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1 to 6.
CN202210299357.9A 2022-03-25 2022-03-25 Audio and video playing method and device, server, storage medium and product Active CN114866828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210299357.9A CN114866828B (en) 2022-03-25 2022-03-25 Audio and video playing method and device, server, storage medium and product

Publications (2)

Publication Number Publication Date
CN114866828A CN114866828A (en) 2022-08-05
CN114866828B true CN114866828B (en) 2024-03-22

Family

ID=82630020


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105049903A (en) * 2015-07-03 2015-11-11 浪潮软件集团有限公司 Method and system for cross-device synchronization of media files and media playing device
CN109493866A (en) * 2018-10-29 2019-03-19 苏州乐轩科技有限公司 Intelligent sound box and its operating method
CN112258086A (en) * 2020-11-13 2021-01-22 Oppo广东移动通信有限公司 Cross-device task relay method and device, cloud platform and storage medium
CN113141531A (en) * 2020-01-20 2021-07-20 青岛海尔多媒体有限公司 Method and device for cross-device playing control and playing device
CN113765754A (en) * 2020-06-02 2021-12-07 云米互联科技(广东)有限公司 Audio synchronous playing method and device and computer readable storage medium
CN114189729A (en) * 2021-12-14 2022-03-15 海信视像科技股份有限公司 Data relay playing method and intelligent device

Similar Documents

Publication Publication Date Title
CN108520746B (en) Method and device for controlling intelligent equipment through voice and storage medium
RU2654510C2 (en) Method, apparatus and system for playing multimedia data
EP3136793B1 (en) Method and apparatus for awakening electronic device
CN105975828B (en) Unlocking method and device
CN107769881B (en) Information synchronization method, apparatus and system, storage medium
CN105451369A (en) Method and apparatus for updating connection parameter of Bluetooth device with low power consumption
CN105430547A (en) Dormancy method and apparatus of bluetooth earphones
CN106792173B (en) Video playing method and device and non-transitory computer readable storage medium
CN106993265B (en) Communication method based on wearable device, terminal and wearable device
CN111641839B (en) Live broadcast method and device, electronic equipment and storage medium
CN105207994A (en) Account number binding method and device
CN114500442B (en) Message management method and electronic equipment
CN104933071A (en) Information retrieval method and corresponding device
EP3565374A1 (en) Region configuration method and device
CN104394137A (en) Voice call reminding method and device
CN106292316B (en) Working mode switching method and device
CN112217987B (en) Shooting control method and device and storage medium
CN105786561B (en) Method and device for calling process
CN114866828B (en) Audio and video playing method and device, server, storage medium and product
CN106899369B (en) Method and device for reserved playing of intelligent radio
CN105159181A (en) Control method and device for intelligent equipment
CN112489650A (en) Wake-up control method and device, storage medium and terminal
CN111314554A (en) Voice sending method and device
CN105979323B (en) Equipment control method and device and terminal
CN112201236B (en) Terminal awakening method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant