CN113138742A - Screen projection display method and device - Google Patents

Screen projection display method and device

Info

Publication number
CN113138742A
Authority
CN
China
Prior art keywords
user
voice
authority
projection display
target authority
Prior art date
Legal status
Pending
Application number
CN202110279613.3A
Other languages
Chinese (zh)
Inventor
程胜文
Current Assignee
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202110279613.3A
Publication of CN113138742A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification techniques
    • G10L 17/22 - Interactive procedures; Man-machine interfaces

Abstract

The disclosure provides a screen projection display method and device, relates to the technical field of intelligent office, and can solve the problem in the prior art that users cannot easily find their own screen-projection content. The specific technical scheme is as follows: first, a voice instruction input by a target authority user is received, wherein the voice instruction is used to instruct display of the control interface of the target authority user; then, the user information and user authority of the target authority user are determined according to the voice instruction, wherein the user authority comprises an image source list of the target authority user; a control interface of the target authority user is generated according to the user information and the user authority; finally, a voice projection request of a user is received, and the corresponding operation content is executed according to the voice projection request. The present disclosure is used for screen projection display.

Description

Screen projection display method and device
Technical Field
The disclosure relates to the technical field of intelligent office, in particular to a screen projection display method and device.
Background
A screen projection display system generally comprises a control host, a plurality of display screens and external input devices (a keyboard and a mouse). As shown in fig. 1, one of the display screens serves as a management screen that displays thumbnails of all source pictures connected to the control host, and the remaining display screens serve as remote control screens. Through the management screen, the source picture corresponding to a given thumbnail can be projected to a specific remote control screen, and a user then operates that remote control screen to remotely control the corresponding source device.
Existing screen projection display systems are mainly used in offices, traffic command centres, and government and enterprise settings. In some cases the control host must access a large number of capture devices, that is, many source pictures. When different source pictures belong to different users, the same screen projection display system has to present the source pictures required by all of these users at the same time, so all source pictures are gathered and displayed together on the management screen, which makes it difficult for a user to find his or her own screen-projection content.
Disclosure of Invention
The embodiments of the disclosure provide a screen projection display method and device, which can solve the problem in the prior art that a user cannot easily find his or her own screen-projection content. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a screen projection display method, including:
receiving a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of a control interface of the target authority user;
determining user information and user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user;
generating a control interface of the target authority user according to the user information and the user authority;
and receiving a voice projection request of a user, and executing the corresponding operation content according to the voice projection request.
In the screen projection display method provided by the embodiments of the disclosure, a voice instruction input by a target authority user is first received, wherein the voice instruction is used to instruct display of the control interface of the target authority user; then, the user information and user authority of the target authority user are determined according to the voice instruction, wherein the user authority comprises an image source list of the target authority user; a control interface of the target authority user is generated according to the user information and the user authority; finally, a voice projection request of a user is received, and the corresponding operation content is executed according to the voice projection request. With the screen projection display method provided by the disclosure, a user can quickly obtain, by voice input, the projectable content associated with him or her, and can control the projection of that content by voice.
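As a rough illustration only, the following Python sketch strings the four claimed steps together; the speaker-identification routine, the user-database layout and the interface-rendering call are hypothetical placeholders and are not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class UserAuthority:
    user_name: str
    image_sources: List[str]   # image source list the user is allowed to project


class ScreenProjectionHost:
    """Hypothetical control host tying the four claimed steps together."""

    def __init__(self, user_db: Dict[str, UserAuthority]):
        self.user_db = user_db   # voiceprint id -> authority record (assumed layout)

    def identify_speaker(self, audio: bytes) -> Optional[str]:
        # Placeholder for acoustic-feature (voiceprint) recognition, step 2.
        raise NotImplementedError

    def show_control_interface(self, authority: UserAuthority) -> None:
        # Step 3: render only the sources this user may project.
        print(f"Control interface for {authority.user_name}:")
        for source in authority.image_sources:
            print(f"  thumbnail: {source}")

    def handle_voice_instruction(self, audio: bytes) -> Optional[UserAuthority]:
        # Step 1 has already captured `audio`; identify the speaker (step 2),
        # then build and show that user's control interface (step 3).
        speaker_id = self.identify_speaker(audio)
        if speaker_id is None or speaker_id not in self.user_db:
            return None            # unknown speaker: no control interface shown
        authority = self.user_db[speaker_id]
        self.show_control_interface(authority)
        return authority           # step 4 (voice projection requests) follows
```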
In one embodiment, determining the user information and the user authority of the target authority user according to the voice instruction comprises:
performing acoustic feature recognition on the voice instruction to obtain a recognition result;
and matching the recognition result with acoustic characteristics of at least one user pre-stored in a database, and determining the user information and the user authority of the target authority user.
In one embodiment, performing acoustic feature recognition on the voice instruction to obtain a recognition result comprises:
converting the voice instruction into voice parameters, wherein the voice parameters comprise at least one of pitch period, linear prediction coefficients, vocal tract impulse response, vocal tract area function and perceptual linear prediction coefficients;
and identifying the voice parameters to obtain the identification result.
In one embodiment, after receiving a voice projection request of a user, the method further comprises:
performing acoustic feature recognition on the voice projection request of the user;
determining whether the voice projection request comes from the target authority user;
the executing the corresponding operation content according to the voice projection request then comprises:
if the determination is yes, executing the corresponding operation content according to the voice projection request.
In this way, because voiceprint information is private and difficult to copy, the projected content of each user is kept highly confidential.
In one embodiment, the method further comprises:
and when the determination is no, displaying a message that the user has no control authority.
With this method, each user can only access his or her own projectable content by voice and cannot access the projectable content of other users.
In one embodiment, the executing the corresponding operation content according to the voice projection request comprises:
performing semantic recognition on the voice projection request to obtain the corresponding operation content;
and executing corresponding operation according to the operation content.
In one embodiment, the method further comprises:
and when the voice projection request cannot be executed, displaying prompt information indicating that the voice cannot be correctly recognized.
According to a second aspect of the embodiments of the present disclosure, there is provided a screen projection display device, comprising: a receiving module, a determining module, a generating module and a processing module;
the receiving module is used for receiving a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of the control interface of the target authority user;
the determining module is used for determining the user information and the user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user;
the generating module is used for generating a control interface of the target authority user according to the user information and the user authority;
the processing module is used for receiving a voice projection request of a user and executing the corresponding operation content according to the voice projection request.
The screen projection display device provided by the embodiments of the disclosure comprises a receiving module, a determining module, a generating module and a processing module. The receiving module receives a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of the control interface of the target authority user; the determining module determines the user information and user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user; the generating module generates a control interface of the target authority user according to the user information and the user authority; the processing module receives a voice projection request of a user and executes the corresponding operation content according to the voice projection request. With the screen projection display device provided by the disclosure, a user can quickly obtain, by voice input, the projectable content associated with him or her, and can control the projection of that content by voice.
In one embodiment, the determination module includes a recognition unit and a matching unit;
the recognition unit is used for carrying out acoustic feature recognition on the voice command to obtain a recognition result;
the matching unit is used for matching the identification result with acoustic characteristics of at least one user pre-stored in a database, and determining the user information and the user authority of the target authority user.
In one embodiment, the recognition unit is specifically configured to convert the voice instruction into voice parameters, where the voice parameters include at least one of a pitch period, linear prediction coefficients, a vocal tract impulse response, a vocal tract area function, and perceptual linear prediction coefficients; and to recognize the voice parameters to obtain the recognition result.
According to a third aspect of the embodiments of the present disclosure, there is provided a screen projection display device, which includes a processor and a memory, where at least one computer instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the steps executed in any one of the above screen projection display methods.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, having at least one computer instruction stored therein, where the instruction is loaded and executed by a processor to implement the steps performed in the screen projection display method according to any one of the above-mentioned embodiments.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of a prior-art screen projection display system;
fig. 2 is a flowchart of a screen projection display method provided by an embodiment of the present disclosure;
FIG. 3 is an exemplary user control interface provided by embodiments of the present disclosure;
fig. 4 is a flowchart of a screen projection display method provided by an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a screen projection display device provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a screen projection display device provided by an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The present disclosure provides a screen projection display system based on voice control. The control host in the system runs a plurality of receiving programs; each receiving program connects to one capture device, and each capture device is in turn connected to an image source device, from which it captures desktop images. Unlike the prior art, the control host implements authority management and projection control of screen-projection content based on voice.
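For orientation only, the sketch below models the topology just described; the class and field names are illustrative and do not come from the patent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ImageSource:
    name: str                     # device whose desktop is captured


@dataclass
class CaptureDevice:
    device_id: str
    source: ImageSource           # each capture device is attached to one image source


@dataclass
class ReceivingProgram:
    channel: int
    capture: CaptureDevice        # each receiving program connects to one capture device


@dataclass
class ControlHost:
    receivers: List[ReceivingProgram]   # the host runs several receiving programs
```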
Based on this, an embodiment of the present disclosure provides a screen projection display method. As shown in fig. 2, the method includes the following steps:
Step 101, receiving a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of a control interface of the target authority user.
Specifically, when the control host is started, the main display interface is displayed directly, and at that moment no thumbnail of any capture-end picture is shown on it. With the main display interface displayed, a user calls up his or her own control interface by speaking specified voice content, for example: "Please open my control interface", "Please display my source pictures", or "Please show my display content".
Step 102, determining the user information and user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user.
in one embodiment, determining the user information and the user authority of the target authority user according to the voice instruction comprises:
carrying out acoustic feature recognition on the voice command to obtain a recognition result;
and matching the recognition result with acoustic characteristics of at least one user pre-stored in a database, and determining the user information and the user authority of the target authority user.
In practical implementation, different people have different timbres, so even when different people speak the same sentence the recognized acoustic features differ, and the differences between people are obvious. Acoustic feature recognition is in effect voiceprint recognition: based on acoustic features, the voices of different users can be distinguished, and the identity of the user currently speaking can therefore be determined.
Specifically, after the acoustic features of the currently input voice have been determined, they are compared with the acoustic features of different users pre-stored in a database to find the matching features, and the user information and user authority corresponding to those features are determined.
In one embodiment, performing acoustic feature recognition on the voice instruction to obtain a recognition result comprises:
converting the voice instruction into voice parameters, wherein the voice parameters comprise at least one of a pitch period, linear prediction coefficients, a vocal tract impulse response, a vocal tract area function and perceptual linear prediction coefficients;
and recognizing the voice parameters to obtain a recognition result.
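As a minimal sketch of such acoustic-feature recognition and database matching: the listed parameters are stood in for here by MFCC and LPC features, and the matching step by cosine similarity against enrolled templates; the libraries (librosa, numpy) and the threshold value are assumptions, not part of the disclosure.

```python
import numpy as np
import librosa


def extract_features(wav_path: str) -> np.ndarray:
    """Convert an utterance into a fixed-length acoustic feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # spectral-envelope features
    lpc = librosa.lpc(y, order=12)                        # linear prediction coefficients
    return np.concatenate([mfcc.mean(axis=1), lpc])


def match_user(features: np.ndarray,
               templates: dict,          # user_id -> enrolled feature vector
               threshold: float = 0.85):
    """Return the best-matching user id, or None if nothing is close enough."""
    best_user, best_score = None, -1.0
    for user_id, template in templates.items():
        score = float(np.dot(features, template) /
                      (np.linalg.norm(features) * np.linalg.norm(template) + 1e-9))
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user if best_score >= threshold else None
```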
Step 103, generating a control interface of the target authority user according to the user information and the user authority.
Specifically, the control host generates the control interface of the current authority user based on the determined user information and user authority. Thumbnails of the image sources are displayed according to the list of image sources for which the user has projection authority. Fig. 3 shows an exemplary user control interface, which contains at least two display areas: a virtual display position area and a source-image thumbnail area. The virtual display position area contains a plurality of virtual display positions (positions 1-4 in the figure), each corresponding to one remote control screen in the screen projection display system. If no picture has been projected onto the remote control screen corresponding to a virtual display position, that position is left blank; conversely, if a picture has been projected onto the corresponding remote control screen, the thumbnail of the corresponding source image is shown at that position. For simplicity only 4 virtual display positions are shown in fig. 3; in practice there may be more or fewer, which is not limited here. The source-image thumbnail area displays the thumbnails of the source images received by the receiving programs from the capture ends, so that a user can see at a glance how many pictures are currently available and what each picture contains.
Step 104, receiving a voice projection request of a user, and executing the corresponding operation content according to the voice projection request.
Specifically, while the control interface of the current authority user is displayed, a projection request initiated by the user by voice is received, semantic recognition is performed on it, the operation content corresponding to the request is determined from the semantic recognition result, and that operation content is executed.
Specifically, the voice projection request includes, but is not limited to: a request to project a certain picture channel to a specified virtual position; a request to stop projecting a certain channel; or a request to return to the main display interface. The user may phrase the projection request in different ways. For example, to project image 3 to position 1 the user may say: "Please project image 3 to position 1"; the receiving program that receives image 3 then outputs the corresponding picture through the corresponding graphics-card port to the remote control screen at position 1 for display.
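A minimal sketch of the semantic-recognition step for these requests, assuming the speech has already been transcribed to text; the phrase patterns and intent labels are illustrative only.

```python
import re


def parse_projection_request(text: str):
    """Map a transcribed sentence to (intent, image_no, position_no)."""
    text = text.strip().lower()
    m = re.search(r"(?:project|put)\s+image\s+(\d+)\s+to\s+position\s+(\d+)", text)
    if m:
        return ("project", int(m.group(1)), int(m.group(2)))
    m = re.search(r"stop\s+(?:projecting\s+)?image\s+(\d+)", text)
    if m:
        return ("stop", int(m.group(1)), None)
    if "main display interface" in text or "main interface" in text:
        return ("home", None, None)
    return None   # not recognised: the host should prompt the user


print(parse_projection_request("Please project image 3 to position 1"))
# -> ('project', 3, 1)
```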
In the above scheme, the identity of the user who issues a voice command is not verified during projection control; that is, anyone who speaks a correct control sentence can exercise voice control.
In one embodiment, after receiving a voice projection request from a user, the method further includes:
performing acoustic feature recognition on the voice projection request of the user;
determining whether the voice projection request comes from the target authority user;
the executing of the corresponding operation content according to the voice projection request then comprises:
if the determination is yes, executing the corresponding operation content according to the voice projection request.
In one embodiment, the method further comprises:
and when the determination is no, displaying a message that the user has no control authority.
In actual use, it can further be specified that, while the control interface of the current authority user is displayed, voice control can only be exercised by that authority user. In this case it is first determined, based on acoustic feature recognition, whether the current sentence comes from the current authority user. If not, the current user is notified that he or she has no control authority; if so, semantic recognition continues, the operation requested by the user is determined from the semantic recognition result, and the corresponding operation is performed. In this way the projectable content of each user remains highly confidential, and each user can only access his or her own projectable content by voice and cannot access the projectable content of other users.
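A sketch of that permission gate is given below; the injected helpers (speaker identification, request parsing, execution and notification) are placeholders for the steps described above.

```python
def handle_projection_request(audio, transcript, current_authority_user,
                              identify_speaker, parse_request, execute, notify):
    """Execute a voice projection request only if it comes from the authority user."""
    speaker = identify_speaker(audio)                 # voiceprint recognition
    if speaker != current_authority_user:
        notify("You have no control authority over this interface.")
        return
    operation = parse_request(transcript)             # semantic recognition
    if operation is None:
        notify("The voice request could not be recognised correctly.")
        return
    execute(operation)                                # carry out the projection
```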
In one embodiment, the executing the corresponding operation content according to the voice projection request includes:
performing semantic recognition on the voice projection request to obtain the corresponding operation content;
and executing corresponding operation according to the operation content.
In one embodiment, the method further comprises:
and when the voice projection request cannot be executed, displaying prompt information indicating that the voice cannot be correctly recognized.
In practical use, if the corresponding operation content cannot be determined from the semantic recognition result, prompt information indicating that the voice cannot be correctly recognized is returned to the user. The prompt may take the form of, but is not limited to: speech, a text message, a picture, a flashing indicator light, lighting, or a colour change.
In the screen projection display method provided by the embodiments of the disclosure, a voice instruction input by a target authority user is first received, wherein the voice instruction is used to instruct display of the control interface of the target authority user; then, the user information and user authority of the target authority user are determined according to the voice instruction, wherein the user authority comprises an image source list of the target authority user; a control interface of the target authority user is generated according to the user information and the user authority; finally, a voice projection request of a user is received, and the corresponding operation content is executed according to the voice projection request. With the screen projection display method provided by the disclosure, a user can quickly obtain, by voice input, the projectable content associated with him or her, and can control the projection of that content by voice.
Based on the screen projection display method provided in the embodiment corresponding to fig. 2, another embodiment of the present disclosure provides a screen projection display method, as shown in fig. 4, including the following steps:
Step 201, receiving a content-acquisition voice instruction input by a user while the main display interface is displayed.
Specifically, when the control host is started, the main display interface is displayed directly, and at that moment no thumbnail of any capture-end picture is shown on it. From the main display interface, a user can call up his or her own control interface by voice input.
Specifically, the user calls up the control interface by speaking specified voice content, for example: "Please open my control interface", "Please display my source pictures", or "Please show my display content".
Step 202, after receiving the voice instruction of the user, the control host performs acoustic feature recognition on the corresponding sentence.
In practical implementation, different people have different timbres, so even when different people speak the same sentence the recognized acoustic features differ, and the differences between people are obvious. Acoustic feature recognition is in effect voiceprint recognition: based on acoustic features, the voices of different users can be distinguished, and the identity of the user currently speaking can therefore be determined. The basic step of acoustic feature recognition is to convert the input speech into speech parameters such as the pitch period, linear prediction coefficients, the vocal tract impulse response, the vocal tract area function, perceptual linear prediction coefficients, and so on.
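As one concrete example of such a speech parameter, the pitch period of a voiced frame can be estimated by autocorrelation; the frame length, sample rate and search range below are illustrative, and a real system would add voicing detection and smoothing.

```python
import numpy as np


def pitch_period(frame: np.ndarray, sr: int = 16000,
                 f_min: float = 80.0, f_max: float = 400.0) -> float:
    """Estimate the pitch period (in seconds) of one voiced speech frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / f_max), int(sr / f_min)   # plausible lag range for speech
    lag = lo + int(np.argmax(corr[lo:hi]))
    return lag / sr
```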
Step 203, the control host determines the current user information from the acoustic feature recognition result and determines the user authority from the user information, where the user authority mainly comprises the list of image sources for which the user has projection authority.
Specifically, after the acoustic features of the currently input voice have been determined, they are compared with the acoustic features of different users pre-stored in a database to find the matching features, and the user information and user authority corresponding to those features are determined.
Step 204, the control host generates a control interface for the current user based on the user authority.
Specifically, the control host generates the control interface of the current user based on the determined user information and user authority. The user authority mainly refers to the list of image sources for which the user has projection authority, and thumbnails of the corresponding image sources are displayed according to this list.
Step 205, while the control interface of the current user is displayed, receiving a projection request initiated by the user by voice, performing semantic recognition on it, determining the corresponding operation content from the semantic recognition result, and executing that operation content.
Specifically, the projection request includes, but is not limited to: a request to project a certain picture channel to a specified virtual position; a request to stop projecting a certain channel; or a request to return to the main display interface. The user may phrase the projection request in different ways; for example, to project image 3 to position 1 the user may say: "Please project image 3 to position 1".
After a sentence input by the user is received, semantic recognition is performed on it, the operation content corresponding to the sentence is determined, and the corresponding operation is then executed. For example, if semantic recognition determines that the user wishes to project image 3 to position 1, the receiving program that receives image 3 outputs the corresponding picture through the corresponding graphics-card port to the remote control screen at position 1 for display.
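The dispatch of such a parsed request might look like the sketch below; the mapping from virtual positions to graphics-card ports and the receiver methods are assumptions, since the patent does not specify this API.

```python
class ProjectionDispatcher:
    """Routes a parsed request to the receiving program that holds the channel."""

    def __init__(self, receivers, position_to_port):
        self.receivers = receivers                # image channel no. -> receiver object
        self.position_to_port = position_to_port  # virtual position -> graphics-card port

    def execute(self, operation):
        kind, image_no, position_no = operation
        if kind == "project":
            port = self.position_to_port[position_no]
            self.receivers[image_no].output_to(port)    # hypothetical receiver call
        elif kind == "stop":
            self.receivers[image_no].stop_output()      # hypothetical receiver call
```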
In the above scheme, the identity of the user who issues a voice command is not verified during projection control; that is, anyone who speaks a correct control sentence can exercise voice control.
In an alternative embodiment, it may be specified that, while the control interface of the current user is displayed, voice control can only be exercised by that user. The current user is then treated as the authority user. When a control sentence is input, it is first determined, based on acoustic feature recognition, whether the sentence comes from the authority user. If not, the current user is notified that he or she has no control authority; if so, semantic recognition continues, the operation requested by the user is determined from the semantic recognition result, and the corresponding operation is performed.
That is, the current user is marked as the authority user; when a projection request initiated by voice is received, acoustic feature recognition is performed on it to determine whether the recognized acoustic features match the current authority user.
During semantic recognition, if the corresponding operation content cannot be determined from the semantic recognition result, prompt information indicating that the voice cannot be correctly recognized is returned to the user. The prompt may take the form of, but is not limited to: speech, a text message, a picture, a flashing indicator light, lighting, or a colour change.
In the screen projection display method provided by the embodiments of the disclosure, a voice instruction input by a target authority user is first received, wherein the voice instruction is used to instruct display of the control interface of the target authority user; then, the user information and user authority of the target authority user are determined according to the voice instruction, wherein the user authority comprises an image source list of the target authority user; a control interface of the target authority user is generated according to the user information and the user authority; finally, a voice projection request of a user is received, and the corresponding operation content is executed according to the voice projection request. With the screen projection display method provided by the disclosure, a user can quickly obtain, by voice input, the projectable content associated with him or her, and can control the projection of that content by voice.
Based on the screen projection display method described in the embodiments corresponding to fig. 2 and fig. 4, the following is an embodiment of the apparatus of the present disclosure, which can be used to execute the embodiment of the method of the present disclosure.
The embodiment of the present disclosure provides a screen projection display device, as shown in fig. 5, including: a receiving module 301, a determining module 302, a generating module 303 and a processing module 304;
the receiving module 301 is configured to receive a voice instruction input by a target authority user, where the voice instruction is used to instruct display of the control interface of the target authority user;
the determining module 302 is configured to determine the user information and user authority of the target authority user according to the voice instruction, where the user authority includes an image source list of the target authority user;
the generating module 303 is configured to generate a control interface of the target authority user according to the user information and the user authority;
the processing module 304 is configured to receive a voice projection request of a user and execute the corresponding operation content according to the voice projection request.
The screen projection display device provided by the disclosure comprises: a receiving module 301, a determining module 302, a generating module 303 and a processing module 304. The receiving module 301 receives a voice instruction input by a target authority user, where the voice instruction is used to instruct display of the control interface of the target authority user; the determining module 302 determines the user information and user authority of the target authority user according to the voice instruction, where the user authority includes an image source list of the target authority user; the generating module 303 generates a control interface of the target authority user according to the user information and the user authority; the processing module 304 receives a voice projection request of a user and executes the corresponding operation content according to the voice projection request. With the screen projection display device provided by the disclosure, a user can quickly obtain, by voice input, the projectable content associated with him or her, and can control the projection of that content by voice.
In one embodiment, as shown in fig. 6, the determination module 302 includes a recognition unit 3021 and a matching unit 3022;
the recognition unit 3021 is configured to perform acoustic feature recognition on the voice instruction to obtain a recognition result;
the matching unit 3022 is configured to match the recognition result with acoustic features of at least one user pre-stored in the database, and to determine the user information and user authority of the target authority user.
In one embodiment, the recognition unit 3021 is specifically configured to convert the voice instruction into voice parameters, where the voice parameters include at least one of a pitch period, linear prediction coefficients, a vocal tract impulse response, a vocal tract area function, and perceptual linear prediction coefficients; and to recognize the voice parameters to obtain the recognition result.
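Purely as a structural illustration of fig. 5 and fig. 6, the classes below mirror the four modules and the two units of the determining module; all method bodies are placeholders.

```python
class RecognitionUnit:
    def recognize(self, voice_instruction):
        # acoustic feature recognition -> recognition result
        raise NotImplementedError


class MatchingUnit:
    def match(self, recognition_result, database):
        # compare with pre-stored acoustic features -> user info and authority
        raise NotImplementedError


class DeterminingModule:
    def __init__(self):
        self.recognition_unit = RecognitionUnit()   # unit 3021
        self.matching_unit = MatchingUnit()         # unit 3022


class ReceivingModule:        # module 301: receives the voice instruction
    pass


class GeneratingModule:       # module 303: builds the control interface
    pass


class ProcessingModule:       # module 304: handles voice projection requests
    pass


class ScreenProjectionDevice:
    def __init__(self):
        self.receiving_module = ReceivingModule()
        self.determining_module = DeterminingModule()   # module 302
        self.generating_module = GeneratingModule()
        self.processing_module = ProcessingModule()
```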
Based on the screen projection display method described in the embodiment corresponding to fig. 2 and fig. 4, another embodiment of the present disclosure further provides a screen projection display device, where the screen projection display device includes a processor and a memory, and the memory stores at least one computer instruction, and the instruction is loaded and executed by the processor to implement the screen projection display method described in the embodiment corresponding to fig. 2 and fig. 4.
Based on the screen projection display method described in the embodiment corresponding to fig. 2 and fig. 4, the embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The storage medium stores at least one computer instruction for executing the screen projection display method described in the embodiment corresponding to fig. 2 and fig. 4, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A screen projection display method, the method comprising:
receiving a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of a control interface of the target authority user;
determining user information and user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user;
generating a control interface of the target authority user according to the user information and the user authority;
and receiving a voice projection request of a user, and executing the corresponding operation content according to the voice projection request.
2. The screen projection display method of claim 1, wherein the determining the user information and the user authority of the target authority user according to the voice instruction comprises:
performing acoustic feature recognition on the voice instruction to obtain a recognition result;
and matching the recognition result with acoustic characteristics of at least one user pre-stored in a database, and determining the user information and the user authority of the target authority user.
3. The screen projection display method of claim 2, wherein the performing acoustic feature recognition on the voice command to obtain a recognition result comprises:
converting the voice instruction into voice parameters, wherein the voice parameters comprise at least one of pitch period, linear prediction coefficients, vocal tract impulse response, vocal tract area function and perceptual linear prediction coefficients;
and identifying the voice parameters to obtain the identification result.
4. The screen projection display method according to claim 1, wherein after receiving a voice projection request from a user, the method further comprises:
performing acoustic feature recognition on the voice projection request of the user;
determining whether the voice projection request comes from the target authority user;
the executing the corresponding operation content according to the voice projection request then comprises:
if the determination is yes, executing the corresponding operation content according to the voice projection request.
5. The screen projection display method according to claim 4, further comprising:
when the determination is no, displaying a message that the user has no control authority.
6. The screen projection display method according to claim 1, wherein the executing the corresponding operation content according to the voice projection request comprises:
performing semantic recognition on the voice projection request to obtain the corresponding operation content;
and executing the corresponding operation according to the operation content.
7. The screen projection display method according to claim 1, further comprising:
when the voice projection request cannot be executed, displaying prompt information indicating that the voice cannot be correctly recognized.
8. A screen projection display device, comprising: a receiving module, a determining module, a generating module and a processing module;
the receiving module is used for receiving a voice instruction input by a target authority user, wherein the voice instruction is used to instruct display of the control interface of the target authority user;
the determining module is used for determining the user information and the user authority of the target authority user according to the voice instruction, wherein the user authority comprises an image source list of the target authority user;
the generating module is used for generating a control interface of the target authority user according to the user information and the user authority;
the processing module is used for receiving a voice projection request of a user and executing the corresponding operation content according to the voice projection request.
9. The screen projection display device of claim 8, wherein the determining module comprises a recognition unit and a matching unit;
the recognition unit is used for carrying out acoustic feature recognition on the voice command to obtain a recognition result;
the matching unit is used for matching the identification result with acoustic characteristics of at least one user pre-stored in a database, and determining the user information and the user authority of the target authority user.
10. The screen projection display device of claim 9, wherein the recognition unit is specifically configured to convert the voice instruction into voice parameters, and the voice parameters comprise at least one of a pitch period, linear prediction coefficients, a vocal tract impulse response, a vocal tract area function, and perceptual linear prediction coefficients; and to recognize the voice parameters to obtain the recognition result.
Application CN202110279613.3A, filed 2021-03-16 (priority date 2021-03-16): Screen projection display method and device. Status: Pending. Publication: CN113138742A (en).

Priority Applications (1)

Application Number: CN202110279613.3A; Publication: CN113138742A (en); Priority Date: 2021-03-16; Filing Date: 2021-03-16; Title: Screen projection display method and device

Applications Claiming Priority (1)

Application Number: CN202110279613.3A; Publication: CN113138742A (en); Priority Date: 2021-03-16; Filing Date: 2021-03-16; Title: Screen projection display method and device

Publications (1)

Publication Number: CN113138742A; Publication Date: 2021-07-20

Family

ID=76811144

Family Applications (1)

Application Number: CN202110279613.3A; Publication: CN113138742A (en); Priority Date: 2021-03-16; Filing Date: 2021-03-16; Title: Screen projection display method and device; Status: Pending

Country Status (1)

Country: CN; Publication: CN113138742A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination