CN114879923A - Multi-screen control method and device, electronic equipment and storage medium

Multi-screen control method and device, electronic equipment and storage medium

Info

Publication number: CN114879923A
Authority: CN (China)
Prior art keywords: screen, user, page information, server, information
Legal status: Pending
Application number: CN202210423029.5A
Other languages: Chinese (zh)
Inventor: 朴哲
Current Assignee: FAW Group Corp
Original Assignee: FAW Group Corp
Application filed by: FAW Group Corp
Priority to: CN202210423029.5A
Publication of: CN114879923A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 - Digital output to display device; cooperation and interconnection of the display device with other functional units; controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G06F 9/452 - Remote windowing, e.g. X-Window System, desktop virtualisation
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/1822 - Parsing for meaning understanding
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a multi-screen control method and device, electronic equipment and a storage medium. The method comprises the following steps: receiving a voice instruction sent by a user; performing semantic analysis on the voice instruction to obtain an analysis result, and sending the analysis result to the server so that the server determines the page information corresponding to the analysis result; and receiving the page information issued by the server and sending it to the target screen corresponding to the user, so that the page information is displayed on that screen. With the technical scheme of this embodiment, the user's voice instruction can be analyzed and the target screen controlled to respond to it, realizing interactive communication among multiple screens. Because the target screens are controlled by an independent central control screen, the vehicle's memory usage is optimized and resource waste is reduced. When a new requirement arises, only the central control screen needs to be modified, which saves labor.

Description

Multi-screen control method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of intelligent automobiles, in particular to a multi-screen control method and device, electronic equipment and a storage medium.
Background
With the rapid development of intelligent electronic technology and Internet technology, ever more attention is paid to the design of intelligent automobiles, and vehicles with multiple screens are increasingly common. The emergence and development of human-computer interaction systems such as vehicle-mounted infotainment systems, voice interaction systems and central control systems are gradually changing the way a driver interacts with a vehicle. A "what you see is what you can say" (see-and-say) human-computer interaction scheme lets a user trigger text information, button controls and the like on the vehicle-mounted screen by voice, without any manual operation.
Existing see-and-say schemes are implemented on a single central control screen and cannot realize interactive communication among multiple screens. Moreover, because the existing schemes are fully customized in advance, every screen has to be changed whenever a new requirement appears, which hinders later maintenance.
Disclosure of Invention
The invention provides a multi-screen control method, a multi-screen control device, electronic equipment and a storage medium, which enable interactive communication among multiple screens to be controlled.
In a first aspect, an embodiment of the present invention provides a multi-screen control method, including:
receiving a voice instruction sent by a user;
performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to a server so that the server determines page information corresponding to the analysis result;
and receiving the page information issued by the server, and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
In a second aspect, an embodiment of the present invention further provides a multi-screen control device, including:
the instruction receiving module is used for receiving a voice instruction sent by a user;
the semantic analysis module is used for carrying out semantic analysis on the voice instruction to obtain an analysis result of the voice instruction and sending the analysis result to the server so that the server can determine page information corresponding to the analysis result;
and the information receiving module is used for receiving the page information issued by the server and sending the page information to a target screen corresponding to a user so as to display the page information based on the target screen.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the multi-screen control method according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the multi-screen control method according to any embodiment of the present invention.
In the embodiment of the invention, a voice instruction sent by a user is received; semantic analysis is performed on the voice instruction to obtain an analysis result, which is sent to the server so that the server determines the page information corresponding to the analysis result; and the page information issued by the server is received and sent to the target screen corresponding to the user, so that the page information is displayed on that screen. That is, in the embodiment of the invention, the user's voice instruction can be analyzed and the target screen controlled to respond to it, realizing interactive communication among multiple screens. Because the target screens are controlled by an independent central control screen, the vehicle's memory usage is optimized and resource waste is reduced. When a new requirement arises, only the central control screen needs to be modified later on, which facilitates maintenance of the see-and-say scheme and saves labor.
Drawings
Fig. 1 is a flowchart of a multi-screen control method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a distribution structure of vehicle-mounted screens provided by an embodiment of the invention;
Fig. 3 is a flowchart of another multi-screen control method according to a second embodiment of the present invention;
Fig. 4 is a timing diagram of multi-screen interaction provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a multi-screen control system according to a third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a multi-screen control apparatus according to a fourth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a multi-screen control method according to an embodiment of the present invention. The embodiment is applicable to controlling interactive communication among multiple screens. The method can be executed by the multi-screen control device provided by an embodiment of the present invention, and the device can be integrated in a vehicle control server and implemented in software and/or hardware. The multi-screen control method provided by this embodiment specifically comprises the following steps:
and S110, receiving a voice command sent by a user.
A voice instruction sent by a user is speech the user produces based on the user's own needs and the page information on the vehicle-mounted screen; it includes questions, commands and the like, such as "How long until we arrive at the destination?" or "Play a song."
The number of vehicle-mounted screens may be one or more. Fig. 2 is a schematic diagram of the distribution of the vehicle-mounted screens provided by an embodiment of the invention. As shown in Fig. 2, when the vehicle has only a central control screen, the main driver and the co-driver can issue voice instructions based on the screen information displayed on the central control screen, and the central control screen can receive the voice instructions they issue. When the vehicle has a central control screen and a co-driver screen, the central control screen can receive voice instructions issued by the main driver and the co-driver; the main driver issues voice instructions based on the screen information of the central control screen and the driver's own needs, while the co-driver does so based on the screen information of the co-driver screen. When the vehicle has three vehicle-mounted screens, namely a central control screen, a left-rear screen and a right-rear screen, the central control screen can receive voice instructions issued by the main driver, the co-driver, the left-rear passenger and the right-rear passenger: the main driver and the co-driver issue voice instructions based on the screen information displayed on the central control screen, the left-rear passenger based on the screen information of the left-rear screen, and the right-rear passenger based on the screen information of the right-rear screen. Likewise, when the vehicle has four vehicle-mounted screens, namely a central control screen, a co-driver screen, a left-rear screen and a right-rear screen, the central control screen can receive the voice instructions issued by the main driver, the co-driver, the left-rear passenger and the right-rear passenger.
The main driver, the co-driver, the left-rear passenger and the right-rear passenger can issue voice instructions based on the screen information of the central control screen, the co-driver screen, the left-rear screen and the right-rear screen respectively. The screen information of each screen includes the vehicle controls, music information, navigation information and video information displayed on that screen, as well as a position identifier (ID), among other things.
Specifically, after receiving a voice instruction sent by a user, the central control screen can judge the user's position from the direction the voice came from. For example, suppose the right-rear passenger issues the voice instruction "How long until we arrive at Happy Avenue?". Upon receiving the instruction, the central control screen determines that it was issued from the right-rear position. The central control screen can then control the screen at that position to respond according to the position from which the voice instruction was issued, without affecting the working states of the other screens, which improves the user experience.
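As an illustration only (the patent does not specify how the direction of the voice is resolved), the following Python sketch maps a direction-of-arrival angle, assumed to come from the vehicle's microphone array, to a seat position ID; the angle sectors and position names are hypothetical:

    # Hypothetical sketch: the angle sectors and position IDs are illustrative
    # assumptions, not values taken from the patent.
    SEAT_SECTORS = {
        "main_driver": (315, 45),   # front-left seat; sector wraps around 0 degrees
        "co_driver": (45, 135),     # front-right seat
        "right_rear": (135, 225),
        "left_rear": (225, 315),
    }

    def position_id_from_angle(angle_deg: float) -> str:
        """Map a sound direction-of-arrival angle (degrees) to a seat position ID."""
        angle = angle_deg % 360
        for position_id, (start, end) in SEAT_SECTORS.items():
            if start <= end and start <= angle < end:
                return position_id
            if start > end and (angle >= start or angle < end):
                return position_id  # sector that wraps around 0 degrees
        return "main_driver"  # default to the central control screen's user

    print(position_id_from_angle(180.0))  # -> right_rear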
S120, performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to the server so that the server determines the page information corresponding to the analysis result.
The central control screen is communicatively connected to the server.
Specifically, after receiving a voice instruction sent by a user, the central control screen can perform semantic analysis on it: the instruction is converted into text, and the text is analyzed semantically, which extracts the key information in the instruction. For example, suppose the user utters "How long until we arrive at Happy Avenue?". After receiving the instruction, the central control screen performs semantic analysis on it and, from key information such as "how long" and "Happy Avenue", determines that the user needs to view real-time navigation information. The central control screen then takes "view real-time navigation information", obtained through semantic analysis, as the analysis result and sends it to the server. After receiving the analysis result, the server analyzes it, matches it against the real-time screen information, and determines the page information corresponding to the analysis result from the matching result.
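The patent does not disclose the parsing algorithm itself, so the following is a minimal keyword-rule sketch of step S120, assuming the voice instruction has already been transcribed to text by a speech recognizer; the rule table and the server URL are hypothetical:

    import json
    from urllib import request

    # Hypothetical rule table mapping key phrases to an analysis result; the
    # patent's own example is that "how long" plus a place implies navigation.
    INTENT_RULES = [
        (("how long",), "view real-time navigation information"),
        (("play", "song"), "play music"),
    ]

    def parse_instruction(text: str) -> dict:
        """Semantic analysis sketch: extract key information from the transcript."""
        lowered = text.lower()
        for keywords, analysis_result in INTENT_RULES:
            if all(k in lowered for k in keywords):
                return {"result": analysis_result, "utterance": text}
        return {"result": "unknown", "utterance": text}

    def send_analysis_result(result: dict, position_id: str,
                             url: str = "http://sds-cloud.example/analyze") -> None:
        """POST the analysis result and position ID to the (hypothetical) server."""
        body = json.dumps({"analysis": result, "position_id": position_id}).encode()
        req = request.Request(url, data=body,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:  # page information would come back here
            resp.read()

    print(parse_instruction("How long until we arrive at Happy Avenue?"))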
Using an independent central control screen to control the working states of the other screens makes it possible to handle different service requirements on different screens, and facilitates later maintenance.
S130, receiving page information issued by the server, and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
The page information comprises the screen information corresponding to the analysis result. For example, the page information corresponding to the analysis result "view real-time navigation information" includes the real-time navigation map, road condition information, the vehicle's position, the remaining distance to the destination, and the like.
Specifically, after determining the page information corresponding to the analysis result, the server sends it to the central control screen. The central control screen receives the page information and forwards it to the target screen corresponding to the user, i.e. the screen at the position from which the user issued the voice instruction. For example, if the user issued the voice instruction from the right-rear position, the target screen is the right-rear screen. While the page information is displayed on the right-rear screen, the other screens continue in their original working states without interference.
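As a sketch of this routing step (the class names and the in-memory hand-off are assumptions; in the patent the actual delivery goes through adaptation layers, described in the second embodiment), the central control screen can be modeled as a dispatcher that forwards page information only to the screen whose position ID matches:

    from dataclasses import dataclass, field

    @dataclass
    class Screen:
        position_id: str
        current_page: dict = field(default_factory=dict)

        def display(self, page_info: dict) -> None:
            # Only this screen's state changes; the others keep their pages.
            self.current_page = page_info

    class CentralControlScreen:
        """Hypothetical router from position ID to target screen."""

        def __init__(self, screens: list) -> None:
            self._by_position = {s.position_id: s for s in screens}

        def dispatch(self, position_id: str, page_info: dict) -> None:
            target = self._by_position.get(position_id)
            if target is not None:
                target.display(page_info)  # other screens are left untouched

    screens = [Screen("central"), Screen("co_driver"),
               Screen("left_rear"), Screen("right_rear")]
    hub = CentralControlScreen(screens)
    hub.dispatch("right_rear", {"type": "navigation", "eta_minutes": 12})
    print(screens[3].current_page)  # the right-rear screen shows the page
    print(screens[1].current_page)  # the co-driver screen is unchanged: {}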
In this embodiment, the target screen is used both to display and to broadcast the page information.
The central control screen is equipped with a microphone and with a loudspeaker or a Text-To-Speech (TTS) broadcaster, among other devices, for receiving and broadcasting voice information. The positions of the other vehicle-mounted screens are equipped with a loudspeaker or TTS broadcaster and the like for broadcasting voice information.
Specifically, when the vehicle has several vehicle-mounted screens, the central control screen receives and processes the user's voice instruction, while the other screens play and display the page information corresponding to it. For example, if the right-rear passenger's instruction is "How long until we arrive at Happy Avenue?", the page information received by the target screen includes the real-time navigation map, road condition information, the vehicle's position and the remaining distance to the destination. After issuing the instruction, the user thus sees the vehicle's real-time navigation information displayed on the right-rear screen and hears it played through the TTS broadcaster or a similar device.
In this scheme, a voice instruction sent by a user is received; semantic analysis is performed on it to obtain an analysis result, which is sent to the server so that the server determines the corresponding page information; and the page information issued by the server is received and sent to the target screen corresponding to the user, to be displayed there. The scheme realizes interactive communication among the screens: the other screens are controlled by an independent central control screen, which optimizes the vehicle's memory usage and reduces resource waste. When a new requirement arises, only the central control screen needs to be modified, saving labor.
Example two
Fig. 3 is a flowchart of another multi-screen control method according to the second embodiment of the present invention. This embodiment details the multi-screen control method further. As shown in Fig. 3, the detailed multi-screen control method mainly comprises the following steps:
S210, receiving the real-time screen information of each screen, and sending the real-time screen information to the server so that the server stores the real-time screen information.
The real-time screen information comprises the vehicle controls, music information, navigation information, video information, position IDs and the like displayed on each screen.
The screen information of each screen may change with the environment; for example, when the network status is poor or there is no network, each screen may load page information that does not require a network connection, such as locally stored music and video. It is therefore necessary to acquire the screen information of each screen in real time when the screens first wake up or the vehicle starts. The central control screen then sends the received real-time screen information to the server, which receives and stores it.
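A minimal sketch of step S210 under assumed data shapes: each entry mirrors the items the patent lists for real-time screen information (controls, music/navigation/video information, position ID), and a stand-in server object stores the latest entry per position ID:

    # The field names and control labels here are assumptions for illustration.
    SCREENS_INFO = [
        {"position_id": "central",    "controls": ["music", "navigation", "video"]},
        {"position_id": "left_rear",  "controls": ["music", "video"]},
        {"position_id": "right_rear", "controls": ["navigation"]},
    ]

    class ServerStore:
        """Stand-in for the server: keeps the latest screen info per position ID."""

        def __init__(self) -> None:
            self.screen_info = {}

        def store(self, infos: list) -> None:
            for info in infos:
                self.screen_info[info["position_id"]] = info

    # On first wake-up or vehicle start, the central control screen would gather
    # SCREENS_INFO from the adaptation layers and upload it to the server.
    server = ServerStore()
    server.store(SCREENS_INFO)
    print(sorted(server.screen_info))  # -> ['central', 'left_rear', 'right_rear']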
S220, receiving a voice instruction sent by a user.
S230, determining the position ID of the voice instruction based on the position from which the user issued it.
The position ID is used to mark the specific position of a vehicle-mounted screen.
Specifically, after receiving the voice instruction sent by the user, the central control screen determines the specific position from which the instruction was issued according to the direction the voice came from, and then determines the position ID of the voice instruction from that position. Conversely, given the position ID, the specific position of the vehicle-mounted screen corresponding to the user can be determined.
S240, performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to the server so that the server determines the page information corresponding to the analysis result.
Specifically, after receiving the voice instruction, the central control screen converts it into text, performs semantic analysis on the text to obtain the analysis result of the voice instruction, and sends the analysis result to the server. After receiving the analysis result, the server processes it further.
Optionally, the determining, by the server, page information corresponding to the analysis result includes:
and the server matches the analysis result of the voice instruction with the real-time screen information stored in the server so as to determine the page information corresponding to the analysis result.
Specifically, after receiving the analysis result, the server analyzes it, matches it against the real-time screen information, and determines the page information corresponding to the analysis result from the matching result. For example, suppose the left-rear passenger issues the voice instruction "Play the song Sunny Day". After receiving the instruction, the central control screen performs semantic analysis on it and sends the analysis result and the position ID to the server. After receiving the analysis result, the server looks up the real-time screen information corresponding to that position ID and matches the analysis result against it, which includes determining whether a music control exists in the real-time screen information for that position ID. If the real-time screen information contains a music control, the screen can play music normally, so the analysis result matches the real-time screen information successfully. After a successful match, the server converts the semantic analysis result into the corresponding page information and sends it to the central control screen. The page information includes the playing progress of the song "Sunny Day", real-time lyrics or the song's video, and the like, displayed for the user, and the song can be played for the user through a TTS broadcaster or a similar device. If the real-time screen information has no music control, the screen cannot play music normally, so the match between the semantic analysis result and the real-time screen information fails. After determining that the match failed, the server sends match-failure page information to the central control screen. That page information includes the text "Failed, please instruct again" displayed for the user, and the same prompt can be played as voice through a TTS broadcaster or a similar device.
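The matching logic can be restated as the following sketch; the mapping from analysis result to required control and the stored screen-info shape are illustrative assumptions, not structures disclosed by the patent:

    # Assumed mapping from an analysis result to the control it requires.
    REQUIRED_CONTROL = {
        "play music": "music",
        "view real-time navigation information": "navigation",
    }

    # Assumed stored real-time screen information, keyed by position ID.
    SCREEN_INFO = {
        "left_rear":  {"controls": ["music", "video"]},
        "right_rear": {"controls": ["navigation"]},
    }

    def match(position_id: str, analysis_result: str) -> dict:
        """Server-side sketch: match the analysis result against screen info."""
        info = SCREEN_INFO.get(position_id, {})
        needed = REQUIRED_CONTROL.get(analysis_result)
        if needed is not None and needed in info.get("controls", []):
            # Match succeeded: build the page information for the target screen.
            return {"status": "ok", "page": analysis_result,
                    "position_id": position_id}
        # Match failed: page information that displays and broadcasts a prompt.
        return {"status": "failed",
                "text": "Failed, please instruct again",
                "tts": "Failed, please instruct again"}

    print(match("left_rear", "play music"))   # succeeds: music control present
    print(match("right_rear", "play music"))  # fails: no music control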
By sending the analysis result to the server and receiving the page information the server returns, the user's voice instruction can be matched against the real-time screen information to obtain the page information that should be displayed and broadcast for the user. This avoids situations where the user's instruction does not match the page information, and improves the user experience.
S250, receiving the page information issued by the server.
S260, determining the screen corresponding to the position ID of the voice instruction based on that position ID, and taking the screen as the target screen corresponding to the user.
After receiving the page information issued by the server, the central control screen needs to determine which specific screen to send it to.
Specifically, the central control screen determines the screen corresponding to the position ID of the voice instruction, takes that screen as the target screen, and sends the page information to it.
S270, determining the path information of the target screen corresponding to the user based on the position ID of the voice instruction.
After the target screen is determined, the page information can be sent to it over the communication path between the central control screen and the target screen. Specifically, the central control screen can determine the communication path information between itself and the target screen from the position ID of the voice instruction, and information interaction between the central control screen and the target screen is realized over that path.
S280, based on the path information, sending the page information through the adaptation layer to the adaptation layer of the target screen corresponding to the user.
Each vehicle-mounted screen is provided with an adaptation layer, which distributes the instructions used for communication among the screens. Fig. 4 is a timing diagram of the multi-screen interaction provided by an embodiment of the invention. In Fig. 4, User represents the user, APP UI represents the screen interface, System service represents the adaptation layer, SDS represents the voice client of the central control screen, and SDS cloud represents the server.
Specifically, when a screen first wakes up or the vehicle starts, each adaptation layer acquires its screen's information in real time and sends it to the voice client, and the voice client forwards the screen information to the server; the server receives and stores it. When a user issues a voice instruction, the voice client of the central control screen receives it, performs semantic analysis on it, and sends the analysis result to the server. After receiving the analysis result, the server matches it against the stored screen information and sends the matching result (the page information) back to the voice client of the central control screen. The voice client sends the path information and page information to the adaptation layer of the central control screen, which passes them to the screen interface. When there are several screens, the adaptation layer of the central control screen instead sends the information to the adaptation layer of the target screen, and that adaptation layer then delivers the page information to the target screen's interface.
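A condensed, hypothetical mirror of the Fig. 4 sequence is sketched below; the class names follow the figure's roles, but every message shape and method name is an assumption:

    class AdaptationLayer:
        """System service role in Fig. 4: relays messages toward the APP UI."""

        def __init__(self, name: str) -> None:
            self.name = name
            self.ui_messages = []  # what the screen interface (APP UI) received

        def deliver(self, page_info: dict) -> None:
            self.ui_messages.append(page_info)  # final hop to the APP UI

    class Cloud:
        """SDS cloud role: matches an analysis result to page information."""

        def match(self, analysis: dict) -> dict:
            return {"page": "navigation", "for": analysis["utterance"]}

    class VoiceClient:
        """SDS role: the voice client on the central control screen."""

        def __init__(self, cloud: Cloud, layers: dict) -> None:
            self.cloud = cloud
            self.layers = layers  # position ID -> that screen's adaptation layer

        def on_voice_instruction(self, text: str, position_id: str) -> None:
            analysis = {"utterance": text}          # semantic analysis elided
            page_info = self.cloud.match(analysis)  # server returns page info
            # The central adaptation layer forwards to the target screen's layer.
            self.layers[position_id].deliver(page_info)

    layers = {pid: AdaptationLayer(pid) for pid in ("central", "right_rear")}
    sds = VoiceClient(Cloud(), layers)
    sds.on_voice_instruction("How long until we arrive at Happy Avenue?",
                             "right_rear")
    print(layers["right_rear"].ui_messages)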
According to the technical scheme of this embodiment, the real-time screen information of each screen is received and sent to the server for storage; a voice instruction sent by a user is received; the position ID of the instruction is determined from the position it was issued from; semantic analysis is performed on the instruction and the analysis result is sent to the server; the page information issued by the server is received; the screen corresponding to the position ID is determined and taken as the target screen corresponding to the user; the path information of the target screen is determined from the position ID; and, based on the path information, the page information is sent through the adaptation layer to the adaptation layer of the target screen. In this way, the vehicle-mounted screen corresponding to a user can be identified from the position where the user issued the voice instruction, and the page information is delivered to that target screen through the central control screen, realizing interactive communication among multiple screens while keeping the screens independent of one another. The vehicle's memory usage is optimized and resource waste is reduced; when a new requirement arises, only the central control screen needs to be modified, saving labor.
EXAMPLE III
Fig. 5 is a schematic structural diagram of a multi-screen control system according to a third embodiment of the present invention. An embodiment of the present invention provides a multi-screen control system, including:
and the main control screen comprises a Dispatcher, a WysContext, a Systemservice and an APP. Wherein, Dispatcher is used for dispatching instructions; WysContext is used for managing screen information; the RpcAllBack is used for receiving a message sent by the Systemservice; systemservice is an adaptation layer, used for multi-screen communication; the APP is used for receiving user instructions and displaying page information to a user.
The other screens, each comprising an APP, an SDS Client and a Systemservice. The APP is used for receiving and displaying page information; the SDS Client is used for supporting TTS broadcast; and the Systemservice is the adaptation layer, used for multi-screen communication.
The server, used for matching the analysis result of the voice instruction with the real-time screen information stored in the server, so as to determine the page information corresponding to the analysis result.
The multi-screen control system provided by the embodiment of the invention can execute the multi-screen control method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method.
Example four
Fig. 6 is a schematic structural diagram of a multi-screen control device according to a fourth embodiment of the present invention. An embodiment of the present invention provides a multi-screen control apparatus, including:
an instruction receiving module 610, configured to receive a voice instruction sent by a user;
the semantic analysis module 620 is configured to perform semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and send the analysis result to a server, so that the server determines page information corresponding to the analysis result;
an information receiving module 630, configured to receive the page information issued by the server, and send the page information to a target screen corresponding to the user, so as to display the page information based on the target screen.
Optionally, before receiving the voice instruction sent by the user, the apparatus is specifically configured to:
receiving real-time screen information of each screen;
and sending the real-time screen information to the server so that the server stores the real-time screen information.
Optionally, the instruction receiving module 610 is specifically configured to:
and determining the position ID of the voice instruction based on the issuing position of the voice instruction issued by the user.
Optionally, the semantic parsing module 620 specifically includes:
and the server matches the analysis result of the voice instruction with the real-time screen information stored in the server so as to determine the page information corresponding to the analysis result.
Optionally, before sending the page information to the target screen corresponding to the user, the device is further configured to:
and determining a screen corresponding to the position ID of the voice instruction based on the position ID of the voice instruction, and taking the screen as a target screen corresponding to the user.
Optionally, the information receiving module 630 is specifically configured to:
determining path information of a target screen corresponding to the user based on the location ID of the voice instruction;
based on the path information, the page information is sent to an adaptation layer of the target screen corresponding to the user through the adaptation layer; wherein the adaptation layer is used for sending and receiving the page information.
Optionally, the target screen is used for displaying and broadcasting the page information.
The multi-screen control device provided by the embodiment of the invention can execute the multi-screen control method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method.
EXAMPLE five
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention; it shows a computer system 12 suitable for implementing that electronic device. The electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components, including the system memory 28, to the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 7, and commonly referred to as a "hard drive"). Although not shown in Fig. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. In the electronic device 12 of the present embodiment, the display 24 is not provided as a separate body but is embedded in the mirror surface, and when the display surface of the display 24 is not displayed, the display surface of the display 24 and the mirror surface are visually integrated. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and multi-screen control by executing programs stored in the system memory 28, for example, implementing a multi-screen control method provided by the embodiment of the present invention: receiving a voice instruction sent by a user; performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to a server so that the server determines page information corresponding to the analysis result; and receiving the page information issued by the server, and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
EXAMPLE six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a multi-screen control method, and the method includes:
receiving a voice instruction sent by a user; performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to a server so that the server determines page information corresponding to the analysis result; and receiving the page information issued by the server, and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the multi-screen control method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the multi-screen control device, the units and modules included in the embodiment are only divided according to the functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A multi-screen control method, comprising:
receiving a voice instruction sent by a user;
performing semantic analysis on the voice instruction to obtain an analysis result of the voice instruction, and sending the analysis result to a server so that the server determines page information corresponding to the analysis result;
and receiving the page information issued by the server, and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
2. A multi-screen control method as recited in claim 1, wherein, prior to receiving the voice command from the user, the method further comprises:
receiving real-time screen information of each screen;
and sending the real-time screen information to the server so that the server stores the real-time screen information.
3. A multi-screen control method according to claim 1, wherein the receiving a voice command from a user comprises:
and determining the position ID of the voice instruction based on the issuing position of the voice instruction issued by the user.
4. A multi-screen control method according to claim 2, wherein the determining, by the server, the page information corresponding to the parsing result includes:
and the server matches the analysis result of the voice instruction with the real-time screen information stored in the server so as to determine the page information corresponding to the analysis result.
5. A multi-screen control method according to claim 3, wherein before sending the page information to the target screen corresponding to the user, the method includes:
and determining a screen corresponding to the position ID of the voice instruction based on the position ID of the voice instruction, and taking the screen as a target screen corresponding to the user.
6. A multi-screen control method according to claim 5, wherein the sending the page information to the target screen corresponding to the user comprises:
determining path information of a target screen corresponding to the user based on the location ID of the voice instruction;
based on the path information, the page information is sent to an adaptation layer of the target screen corresponding to the user through the adaptation layer; wherein the adaptation layer is used for sending and receiving the page information.
7. A multi-screen control method as recited in claim 5, wherein the target screen is used to display and report the page information.
8. A multi-screen control apparatus, comprising:
the instruction receiving module is used for receiving a voice instruction sent by a user;
the semantic analysis module is used for carrying out semantic analysis on the voice instruction to obtain an analysis result of the voice instruction and sending the analysis result to the server so that the server can determine page information corresponding to the analysis result;
and the information receiving module is used for receiving the page information issued by the server and sending the page information to a target screen corresponding to the user so as to display the page information based on the target screen.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-screen control method according to any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing a multi-screen control method according to any one of claims 1 to 7.
CN202210423029.5A 2022-04-21 2022-04-21 Multi-screen control method and device, electronic equipment and storage medium Pending CN114879923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210423029.5A CN114879923A (en) 2022-04-21 2022-04-21 Multi-screen control method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210423029.5A CN114879923A (en) 2022-04-21 2022-04-21 Multi-screen control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114879923A true CN114879923A (en) 2022-08-09

Family

ID=82672676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210423029.5A Pending CN114879923A (en) 2022-04-21 2022-04-21 Multi-screen control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114879923A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115579010A (en) * 2022-12-08 2023-01-06 中国汽车技术研究中心有限公司 Intelligent cabin cross-screen linkage method, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination