CN112416281A - Screen projection method and device based on voice recognition - Google Patents

Screen projection method and device based on voice recognition

Info

Publication number
CN112416281A
Authority
CN
China
Prior art keywords
candidate target
instruction
voice
screen
screen projection
Prior art date
Legal status
Pending
Application number
CN202011315293.4A
Other languages
Chinese (zh)
Inventor
龙腾
丁凯
镇立新
陈青山
Current Assignee
Shanghai Linguan Data Technology Co.,Ltd.
Shanghai Shengteng Data Technology Co.,Ltd.
Shanghai yingwuchu Data Technology Co.,Ltd.
Shanghai Hehe Information Technology Development Co Ltd
Original Assignee
Shanghai Hehe Information Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hehe Information Technology Development Co Ltd
Priority to CN202011315293.4A
Publication of CN112416281A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 - Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Abstract

The application discloses a screen projection method based on voice recognition. The source device receives a voice instruction and judges whether it is a first instruction to start screen projection. All candidate target devices are then numbered, the numbers of all candidate target devices are displayed, and the initially selected candidate target device is indicated. The source device receives a voice instruction and/or a gesture instruction and judges whether it is a second instruction for switching the candidate target device or a third instruction confirming screen projection. The selected candidate target device is switched among all candidate target devices according to the second instruction. The source device receives a voice instruction and judges whether it is a third instruction confirming screen projection. The screen display content of the source device is then projected onto the screen of the currently selected candidate target device. Through a combination of at least two voice instructions, the screen projection target device can be selected accurately, which greatly improves the accuracy of screen projection as well as working efficiency and user experience.

Description

Screen projection method and device based on voice recognition
Technical Field
The present disclosure relates to a method and an apparatus for screen projection, and more particularly, to a method and an apparatus for screen projection based on voice recognition.
Background
Screen projection, also called screen sharing, screen casting, screen mirroring, co-screen interaction, or multi-screen interaction, means that the screen display content of one electronic device (the source device) is shown in real time on the screen of another electronic device (the target device) by technical means, where "screen" also covers non-screen display forms such as a projected picture. Common application scenarios include projecting the picture of a mobile phone or a computer onto a television or a projector. With the growing number of smart devices, screen projection is used more and more widely.
Existing screen projection methods based on voice recognition usually carry out the screen projection operation according to a voice instruction. When multiple candidate target devices are connected to the source device by wire or wirelessly, this easily leads to projecting onto the wrong device and affects working efficiency.
Disclosure of Invention
The technical problem to be solved by the application is to provide a screen projection method based on voice recognition that performs the screen projection operation through a combination of at least two voice instructions, so that screen projection errors can be effectively reduced. The application accordingly also provides a screen projection device based on voice recognition.
To solve the above technical problem, the screen projection method based on voice recognition provided by the application comprises the following steps.
Step S10: the source device receives a voice instruction, obtains its specific content through voice recognition, and judges whether it is a first instruction to start screen projection. If so, proceed to step S20; if not, return to step S10.
Step S20: number all candidate target devices, display the numbers of all candidate target devices, and indicate the initially selected candidate target device. A candidate target device is a display device already connected to the source device by wire or wirelessly.
Step S30: the source device receives a voice instruction and/or a gesture instruction, obtains its specific content through voice recognition and/or gesture recognition, and judges whether it is a second instruction for switching the candidate target device or a third instruction confirming screen projection. If it is the second instruction, proceed to step S40; if it is the third instruction, proceed to step S60; otherwise, return to step S30.
Step S40: switch the selected candidate target device among all candidate target devices according to the second instruction.
Step S50: the source device receives a voice instruction, obtains its specific content through voice recognition, and judges whether it is a third instruction confirming screen projection. If so, proceed to step S60; if not, return to step S50.
Step S60: project the screen display content of the source device onto the screen of the currently selected candidate target device.
In this way, the candidate target devices connected to the source device are numbered, the display device is selected by a voice or gesture instruction, and finally the screen projection target is confirmed by a voice instruction, completing the screen projection operation.
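As an illustration only, the control flow of steps S10 to S60 can be modeled as a small state machine. In the Python sketch below, the device list, the next_instruction callback, and the project_to callback are hypothetical placeholders introduced for this example; they are not part of the application.

```python
# Illustrative sketch only: models the S10-S60 control flow as a small state machine.
# next_instruction() and project_to() are hypothetical callbacks, not the application's API.

def run_screen_projection(devices, next_instruction, project_to):
    """devices: candidate target devices already connected to the source device.
    next_instruction(): returns ("first" | "second" | "third" | None, payload) for one
        recognized voice or gesture input.
    project_to(device): starts mirroring the source screen onto the given device."""
    # Step S10: wait until the first instruction (start screen projection) is recognized.
    while next_instruction()[0] != "first":
        print("Error prompt: please repeat the instruction")

    # Step S20: number all candidates and take No. 1 as the initial selection.
    numbered = {i + 1: dev for i, dev in enumerate(devices)}
    selected = 1
    print(f"Candidate numbers: {sorted(numbered)}, initially selected: {selected}")

    # Steps S30-S50: switch the selection until projection is confirmed.
    while True:
        kind, payload = next_instruction()
        if kind == "second" and payload in numbered:
            selected = payload                      # step S40: switch the selection
            print(f"Selected candidate: {selected}")
        elif kind == "third":
            break                                   # confirmation received, go to step S60
        else:
            print("Error prompt: please repeat the instruction")

    # Step S60: project the source screen onto the currently selected candidate.
    project_to(numbered[selected])
```

In practice, next_instruction would wrap the source device's speech and gesture recognition (for example, the microphone and camera input handled by the APP), and project_to would wrap whatever casting mechanism connects the devices; both are outside the scope of this sketch.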
Further, in steps S10, S30, and S50, the source device is a mobile terminal that receives the voice instruction and/or gesture instruction input by the user through an APP installed on it. This is a common way of collecting user instructions.
Further, in step S10, error prompt information is displayed on the source device when returning to step S10; in step S30, error prompt information is displayed when returning to step S30; in step S50, error prompt information is displayed when returning to step S50. This prompts the user to repeat the corresponding operation so that the instruction can be recognized correctly.
Preferably, in step S20, the screens of all candidate target devices are turned on simultaneously or sequentially, and each candidate target device's number is displayed on its screen; the initially selected candidate target device additionally presents specific information on its screen to indicate that it is selected. This is one way of displaying the numbers of all candidate target devices.
Preferably, in step S20, the numbers of all candidate target devices are displayed on the source device, together with the initially selected candidate target device. This is another way of displaying the numbers of all candidate target devices.
Preferably, in step S20, if the source device is not currently projecting a screen, the candidate target device with a specific number is initially selected; if the source device is currently projecting a screen, the candidate target device it is currently projecting to is initially selected. This is one way of determining the initially selected candidate target device.
Further, in step S30, the gesture instruction corresponds only to the second instruction; that is, gestures apply only to the second instruction, not to the first or third instruction.
Preferably, in step S40, when the user speaks a number, the candidate target device with that number becomes the newly selected candidate target device once voice recognition is completed. This is one way of switching the candidate target device with a voice instruction.
Preferably, in step S40, when the user performs a swipe gesture, the selection switches to a new candidate target device once gesture recognition is completed, where the number of the newly selected candidate target device and the number of the previously selected candidate target device have a numerical relationship corresponding to the swipe operation. This is one way of switching the candidate target device with a gesture instruction.
The application also provides a screen projection device based on voice recognition, which comprises a recognition module, a numbering module, a switching module, and a screen projection module. The recognition module is used for receiving the voice instruction and/or gesture instruction input by the user, performing voice recognition on the voice instruction or gesture recognition on the gesture instruction to obtain its specific content, and judging whether that content is a first instruction to start screen projection, a second instruction for switching the candidate target device, or a third instruction confirming screen projection. The numbering module is used for numbering all candidate target devices after the recognition module confirms that the first instruction has been received, displaying the numbers of all candidate target devices, and indicating the initially selected candidate target device. The switching module is used for switching the selected candidate target device according to the second instruction after the recognition module confirms that the second instruction has been received. The screen projection module is used for projecting the screen display content of the source device onto the screen of the currently selected candidate target device after the recognition module confirms that the third instruction has been received.
The technical effect obtained by the application is that, through a combination of at least two voice instructions, the screen projection target device can be selected accurately, which greatly improves the accuracy of screen projection as well as working efficiency and user experience.
Drawings
Fig. 1 is a flowchart of a screen projection method based on speech recognition proposed in the present application.
Fig. 2 is a schematic structural diagram of a screen projection device based on voice recognition according to the present application.
Reference numbers in the figures: recognition module 10, numbering module 20, switching module 30, screen projection module 40.
Detailed Description
Referring to fig. 1, the screen projection method based on speech recognition provided by the present application includes the following steps.
Step S10: the source device receives a voice instruction, obtains its specific content through voice recognition, and judges whether it is a first instruction to start screen projection. For example, the source device is a mobile terminal that receives the voice instruction input by the user through an installed APP (application), using the source device's microphone unit. When the received voice instruction is recognized as the first instruction, the flow proceeds to step S20. When the specific content of the received voice instruction cannot be recognized, or it is recognized as not being the first instruction, the flow returns to step S10; in that case error prompt information is optionally displayed on the source device, for example through the APP installed on it.
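As an illustration only, the step S10 decision can be reduced to a keyword check over the recognized text; the keyword list and function name below are assumptions, since the application does not fix the exact wording of the first instruction.

```python
# Hypothetical step S10 check: is the recognized text the "start screen projection"
# (first) instruction? The keyword list is an assumption made for this example.
START_KEYWORDS = ("start screen projection", "start casting", "project my screen")

def is_first_instruction(recognized_text: str) -> bool:
    text = recognized_text.strip().lower()
    return any(keyword in text for keyword in START_KEYWORDS)
```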
Step S20: number all candidate target devices, display the numbers of all candidate target devices, and indicate the initially selected candidate target device. A candidate target device is a display device already connected to the source device by wire or wirelessly. The display may take place on the screens of the candidate target devices or on the screen of the source device, where "screen" also covers non-screen display forms such as a projected picture. For example, the screens of all candidate target devices are turned on simultaneously or sequentially and each device's number is displayed on its screen; the initially selected candidate target device additionally presents specific information on its screen to indicate that it is selected. In another example, the numbers of all candidate target devices are displayed in the APP on the source device, together with the initially selected candidate target device.
Preferably, if the source device is not currently projecting a screen, a candidate target device with a specific number is initially selected, for example candidate target device No. 1. If the source device is currently projecting a screen, the candidate target device it is currently projecting to is initially selected.
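A minimal sketch of this numbering and initial-selection rule is shown below; the argument names and the choice of No. 1 as the default are illustrative assumptions.

```python
# Hypothetical step S20 sketch: number the connected candidates 1..N and pick the
# initial selection according to whether the source device is already projecting.
def number_and_select(candidates, currently_projecting_to=None):
    numbered = {i + 1: dev for i, dev in enumerate(candidates)}
    if currently_projecting_to in candidates:
        # Already projecting: keep the current target as the initial selection.
        initial = candidates.index(currently_projecting_to) + 1
    else:
        # Not projecting: fall back to a fixed number, e.g. candidate No. 1.
        initial = 1
    return numbered, initial
```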
Step S30: the source device receives a voice instruction and/or a gesture instruction, obtains its specific content through voice recognition and/or gesture recognition, and judges whether it is a second instruction for switching the candidate target device or a third instruction confirming screen projection. For example, the source device is a mobile terminal that receives the voice and/or gesture instruction input by the user through an installed APP, with the voice instruction captured by the source device's microphone unit and the gesture instruction captured by its camera unit.
If a voice instruction is received in this step, its specific content is obtained through voice recognition and it is judged whether it is the second instruction or the third instruction. When it is recognized as the second instruction, the flow proceeds to step S40. When it is recognized as the third instruction, the flow proceeds to step S60. When its specific content cannot be recognized, or it is recognized as neither the second nor the third instruction, the flow returns to step S30; in that case error prompt information is optionally displayed on the source device, for example through the installed APP.
If a gesture instruction is received in this step, its specific content is obtained through gesture recognition and it is judged whether it is the second instruction. When it is recognized as the second instruction, the flow proceeds to step S40. When its specific content cannot be recognized, or it is recognized as not being the second instruction, the flow returns to step S30; in that case error prompt information is optionally displayed on the source device, for example through the installed APP.
If both a voice instruction and a gesture instruction are received in this step, they are processed in the order in which they were received.
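Viewed as code, step S30 is a small dispatcher over the recognition result. In the sketch below the confirmation phrases and gesture labels are placeholders, and the rule that a gesture can only yield the second instruction follows the embodiment described above.

```python
# Hypothetical step S30 dispatch. A voice input may be the second or third instruction;
# a gesture input can only be the second instruction. Vocabularies are assumptions.
CONFIRM_PHRASES = ("confirm projection", "cast now", "confirm")
SWIPE_GESTURES = ("swipe_left", "swipe_right", "swipe_up", "swipe_down")

def classify_s30_input(kind: str, content: str):
    """Return "second", "third", or None when the input is not recognized."""
    if kind == "voice":
        if content.strip().isdigit():               # a spoken candidate number
            return "second"
        if content.strip().lower() in CONFIRM_PHRASES:
            return "third"
        return None
    if kind == "gesture":
        return "second" if content in SWIPE_GESTURES else None
    return None
```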
Step S40: switch the selected candidate target device among all candidate target devices according to the second instruction. For example, when the user speaks a number, the source device takes the candidate target device with that number as the newly selected candidate target device once voice recognition is completed. As another example, when the user swipes right, the source device switches to a new selected candidate target device once gesture recognition is completed, the number of the new selection being the number of the previously selected candidate target device plus 1. The right swipe can instead be a left swipe, an upward swipe, a downward swipe, and so on, each of which can be mapped to incrementing or decrementing the candidate target device number by 1, and the like.
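One possible realization of this switching rule is sketched below. The mapping from swipe direction to plus or minus one and the clamping at the ends of the numbering are assumptions; the application only requires that the new and old numbers stand in a numerical relationship corresponding to the swipe.

```python
# Hypothetical step S40 sketch: switch the selection to a spoken number, or step it
# by an offset derived from the swipe direction (right/up = +1, left/down = -1 here).
SWIPE_OFFSETS = {"swipe_right": +1, "swipe_up": +1, "swipe_left": -1, "swipe_down": -1}

def switch_selection(selected: int, total: int, spoken_number=None, swipe=None) -> int:
    if spoken_number is not None and 1 <= spoken_number <= total:
        return spoken_number                                         # voice: jump to that number
    if swipe in SWIPE_OFFSETS:
        return min(max(selected + SWIPE_OFFSETS[swipe], 1), total)   # gesture: step by one
    return selected                                                  # unrecognized: keep selection
```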
Step S50: the source equipment receives the voice instruction, acquires the specific content of the received voice instruction through voice recognition, and judges whether the received voice instruction is a third instruction for confirming screen projection. When it is recognized that the received voice instruction is the third instruction, the flow proceeds to step S60. When the specific content of the received voice instruction cannot be recognized or it is recognized that the received voice instruction is not the third instruction, returning to step S50; at this time, error prompt information is optionally displayed on the source device, for example, the error prompt information is displayed through an APP installed on the source device.
Step S60: project the screen display content of the source device onto the screen of the currently selected candidate target device, where "screen" also covers non-screen display forms such as a projected picture.
The following is the first application scenario of the application, in which the whole screen projection process is completed with only two voice instructions. First, the user inputs, by voice, a first instruction to start screen projection; once this voice instruction is recognized successfully, the numbers of all candidate target devices and the initially selected candidate target device are displayed. If the user wants to project onto the initially selected candidate target device, the user inputs, by voice, a third instruction confirming screen projection; once this voice instruction is recognized successfully, the screen display content of the source device is projected onto the screen of the currently selected candidate target device.
In the second application scenario, the whole screen projection process is completed with only three voice instructions. First, the user inputs, by voice, a first instruction to start screen projection; once this voice instruction is recognized successfully, the numbers of all candidate target devices and the initially selected candidate target device are displayed. If the target device the user wants to project onto is not the initially selected candidate target device, the user inputs, by voice, a second instruction to switch the candidate target device; once this voice instruction is recognized successfully, the selected candidate target device is switched. Finally, the user inputs, by voice, a third instruction confirming screen projection; once this voice instruction is recognized successfully, the screen display content of the source device is projected onto the screen of the currently selected candidate target device.
In the third application scenario, the whole screen projection process is completed with only two voice instructions plus one or more gesture instructions. First, the user inputs, by voice, a first instruction to start screen projection; once this voice instruction is recognized successfully, the numbers of all candidate target devices and the initially selected candidate target device are displayed. If the target device the user wants to project onto is not the initially selected candidate target device, the user inputs a second instruction to switch the candidate target device by gesture; once the gesture instruction is recognized successfully, the selected candidate target device is switched. The user may need to give the second instruction by gesture one or more times until the intended target device is selected. Finally, the user inputs, by voice, a third instruction confirming screen projection; once this voice instruction is recognized successfully, the screen display content of the source device is projected onto the screen of the currently selected candidate target device.
Referring to fig. 2, the screen projection device based on voice recognition provided by the present application includes a recognition module 10, a numbering module 20, a switching module 30, and a screen projection module 40.
The recognition module 10 is configured to receive the voice instruction and/or gesture instruction input by the user, perform voice recognition on the voice instruction or gesture recognition on the gesture instruction to obtain its specific content, and determine whether that content is a first instruction to start screen projection, a second instruction for switching the candidate target device, or a third instruction confirming screen projection. A voice instruction can be any of the three instructions, whereas a gesture instruction can only be the second instruction.
The numbering module 20 is configured to number all candidate target devices after the recognition module 10 confirms that the first instruction has been received, display the numbers of all candidate target devices, and indicate the initially selected candidate target device.
The switching module 30 is configured to switch the selected candidate target device according to the second instruction after the recognition module 10 confirms the receipt of the second instruction.
The screen projection module 40 is configured to project the screen display content of the source device onto the screen of the currently selected candidate target device after the recognition module 10 confirms that the third instruction is received.
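Purely as an illustration of how the four modules could be organized in software, the following skeleton mirrors the structure of Fig. 2; every class and method name is a hypothetical placeholder rather than the application's actual implementation.

```python
# Hypothetical skeleton of the recognition module 10, numbering module 20,
# switching module 30 and screen projection module 40. Names are illustrative only.
class RecognitionModule:
    def classify(self, voice=None, gesture=None):
        """Return "first", "second" or "third"; a gesture can only yield "second"."""
        raise NotImplementedError   # would wrap the actual speech/gesture recognition

class NumberingModule:
    def number_and_display(self, candidates):
        """Number all candidates, display the numbers, and return the initial selection."""
        return {i + 1: dev for i, dev in enumerate(candidates)}, 1

class SwitchingModule:
    def switch(self, numbered, selected, second_instruction):
        """Switch the selected candidate according to the second instruction."""
        return second_instruction if second_instruction in numbered else selected

class ScreenProjectionModule:
    def project(self, source_device, target_device):
        """Project the source device's screen content onto the selected target."""
        raise NotImplementedError   # would wrap the actual casting mechanism
```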
The screen projection method and device based on voice recognition have the following beneficial effects.
First, screen projection is carried out through a combination of two voice instructions, three voice instructions, or two voice instructions plus gesture instructions, with no manual operation required; the operation is simple and working efficiency is improved.
Second, all candidate target devices are numbered and the user can select the target device precisely with voice or gesture instructions, so screen projection errors are effectively reduced, the accuracy of target selection is greatly improved, and working efficiency and user experience are further improved.
Third, the design is simple, easy to implement, low-cost, and performs well.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A screen projection method based on voice recognition is characterized by comprising the following steps:
step S10: the method comprises the steps that source equipment receives a voice instruction, obtains the specific content of the received voice instruction through voice recognition, and judges whether the received voice instruction is a first instruction for starting screen projection or not; if yes, go to step S20; if not, returning to the step S10;
step S20: numbering all candidate target devices, displaying the numbers of all candidate target devices, and simultaneously displaying one initially selected candidate target device; the candidate target device is a display device which is connected with the source device in a wired or wireless mode;
step S30: the method comprises the steps that source equipment receives a voice instruction and/or a gesture instruction, specific contents of the received voice instruction and/or gesture instruction are obtained through voice recognition and/or gesture recognition, and whether the received voice instruction and/or gesture instruction is a second instruction used for switching candidate target equipment or a third instruction representing screen projection confirmation or not is judged; if the instruction is the second instruction, go to step S40; if the instruction is the third instruction, go to step S60; otherwise, return to step S30;
step S40: switching the selected candidate target device among all the candidate target devices according to a second instruction;
step S50: the source equipment receives a voice instruction, acquires the specific content of the received voice instruction through voice recognition, and judges whether the received voice instruction is a third instruction for confirming screen projection; if yes, go to step S60; if not, returning to the step S50;
step S60: and projecting screen display content of the source device to a screen of the candidate target device which is selected currently.
2. The screen projection method based on voice recognition according to claim 1, wherein in the steps S10, S30 and S50, the source device is a mobile terminal, and the APP installed therein receives a voice command and/or a gesture command input by a user.
3. The screen projection method based on voice recognition according to claim 1, wherein in step S10, error prompt information is displayed on the source device when returning to step S10;
in step S30, error prompt information is displayed on the source device when returning to step S30;
in step S50, error prompt information is displayed on the source device when returning to step S50.
4. The screen projection method based on voice recognition according to claim 1, wherein in step S20, the screens of all candidate target devices are turned on simultaneously or sequentially, and each candidate target device's number is displayed on its screen; the initially selected candidate target device presents certain information on its screen to indicate that it is selected.
5. The screen projection method based on voice recognition according to claim 1, wherein in step S20, numbers of all candidate target devices are displayed on the source device, and one candidate target device initially selected is displayed.
6. The screen projection method based on voice recognition according to claim 1, wherein in step S20, if the source device is not currently projected, a candidate target device with a specific number is initially selected; if the source device is currently projecting a screen, then the candidate target device currently projecting a screen is initially selected.
7. The screen projection method based on voice recognition according to claim 1, wherein in the step S30, the gesture command corresponds to only the second command.
8. The screen projection method based on voice recognition according to claim 1, wherein in step S40, when a number spoken by the user is received, the numbered candidate target device is used as the new selected candidate target device after the voice recognition is completed.
9. The screen projection method based on voice recognition according to claim 1, wherein in step S40, when a swipe operation presented by a gesture of the user is received, switching to a new selected candidate target device after the gesture recognition is completed, and the number of the new selected candidate target device and the number of the previously selected candidate target device have a numerical relationship corresponding to the swipe operation.
10. A screen projection device based on voice recognition, characterized by comprising a recognition module, a numbering module, a switching module, and a screen projection module;
the recognition module is used for receiving a voice instruction and/or a gesture instruction input by a user, performing voice recognition on the voice instruction to obtain its specific content or gesture recognition on the gesture instruction to obtain its specific content, and judging whether the specific content is a first instruction for starting screen projection, a second instruction for switching the candidate target device, or a third instruction for confirming screen projection;
the numbering module is used for numbering all the candidate target devices after the recognition module confirms that the first instruction is received, displaying the numbers of all the candidate target devices, and simultaneously displaying one initially selected candidate target device;
the switching module is used for switching the selected candidate target equipment according to the second instruction after the recognition module confirms that the second instruction is received;
and the screen projection module is used for projecting the screen display content of the source equipment to the screen of the currently selected candidate target equipment after the recognition module confirms that the third instruction is received.
CN202011315293.4A (priority and filing date 2020-11-20) Screen projection method and device based on voice recognition, status: Pending, published as CN112416281A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011315293.4A | 2020-11-20 | 2020-11-20 | Screen projection method and device based on voice recognition

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011315293.4A | 2020-11-20 | 2020-11-20 | Screen projection method and device based on voice recognition

Publications (1)

Publication Number | Publication Date
CN112416281A | 2021-02-26

Family

ID=74778705

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011315293.4A | Screen projection method and device based on voice recognition (Pending) | 2020-11-20 | 2020-11-20

Country Status (1)

Country Link
CN (1) CN112416281A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853520A (en) * 2012-12-03 2014-06-11 联想(北京)有限公司 Input method and input device
CN103699023A (en) * 2013-11-29 2014-04-02 安徽科大讯飞信息科技股份有限公司 Multi-candidate POI (Point of Interest) control method and system of vehicle-mounted equipment
CN105843371A (en) * 2015-01-13 2016-08-10 上海速盟信息技术有限公司 Man-machine space interaction method and system
CN106886275A (en) * 2015-12-15 2017-06-23 比亚迪股份有限公司 The control method of car-mounted terminal, device and vehicle
CN106776683A (en) * 2016-10-28 2017-05-31 努比亚技术有限公司 Browser window switching method and terminal
CN108012169A (en) * 2017-11-30 2018-05-08 百度在线网络技术(北京)有限公司 Voice interaction screen projection method, apparatus and server
CN108055558A (en) * 2017-12-27 2018-05-18 浙江大华技术股份有限公司 A kind of on-screen display system and method
CN109120970A (en) * 2018-09-30 2019-01-01 珠海市君天电子科技有限公司 Wireless screen projection method, terminal device and storage medium
CN110333836A (en) * 2019-07-05 2019-10-15 网易(杭州)网络有限公司 Information screen projection method, apparatus, storage medium and electronic device
CN110673773A (en) * 2019-08-29 2020-01-10 思特沃克软件技术(北京)有限公司 Item selection method and device for vehicle-mounted multimedia screen
CN110602319A (en) * 2019-09-04 2019-12-20 深圳市乐得瑞科技有限公司 Wireless screen projection connection method and device
CN111897507A (en) * 2020-07-30 2020-11-06 Tcl海外电子(惠州)有限公司 Screen projection method and device, second terminal and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097929A (en) * 2022-03-31 2022-09-23 Oppo广东移动通信有限公司 Vehicle-mounted screen projection method and device, electronic equipment, storage medium and program product


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 20210305
Address after: Room 1105-1123, 1256 and 1258 Wanrong Road, Jing'an District, Shanghai, 200436
Applicant after: Shanghai hehe Information Technology Co., Ltd; Shanghai Shengteng Data Technology Co.,Ltd.; Shanghai Linguan Data Technology Co.,Ltd.; Shanghai yingwuchu Data Technology Co.,Ltd.
Address before: Room 1105-1123, 1256 and 1258 Wanrong Road, Jing'an District, Shanghai, 200436
Applicant before: Shanghai hehe Information Technology Co., Ltd