CN115291764A - Data interaction method and device, intelligent equipment, system and readable storage medium

Info

Publication number: CN115291764A
Application number: CN202211107369.3A
Authority: CN (China)
Prior art keywords: instruction, mode, display mode, digital, user
Legal status: Pending (an assumption, not a legal conclusion; no legal analysis has been performed)
Inventors: 夏人杰 (Xia Renjie), 邓园园 (Deng Yuanyuan)
Current and original assignee: Suzhou Yuguang Technology Co., Ltd.
Filing and priority date: 2022-09-13
Publication date: 2022-11-04
Original language: Chinese (zh)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of artificial intelligence and discloses a data interaction method, a data interaction apparatus, a smart device, a system, and a readable storage medium. The method comprises the following steps: receiving and identifying an interaction instruction input by a user; when the instruction is a first instruction indicating entry into a display mode, outputting and displaying first digital content, which has no interactive attribute; when the instruction is a second instruction indicating entry into an interaction mode, outputting and displaying second digital content, which has interactive attributes; when the instruction is a third instruction indicating entry into an entertainment mode, outputting and displaying first preset audio/video data; and when the instruction is a fourth instruction indicating entry into a sleep mode, outputting and displaying second preset audio/video data. Compared with the traditional single working mode, the smart device of this embodiment offers rich working modes, improves the degree of intelligence, and meets users' increasingly diverse needs.

Description

Data interaction method and device, intelligent equipment, system and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a data interaction method, apparatus, intelligent device, system, and readable storage medium.
Background
With the development of artificial intelligence, consumer-facing smart devices such as smart speakers have appeared on the market. At present, most smart devices have only a single working mode. Taking a smart speaker as an example, it generally only plays music or searches for information in response to a user's voice instruction; some smart speakers with display screens can present the AI voice function in digital-human form, but their working mode is still limited to music playing and information searching, which cannot meet users' increasingly diverse functional requirements.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a data interaction method, a data interaction apparatus, an intelligent device, an intelligent system, and a computer-readable storage medium.
According to a first aspect of embodiments of the present application, a data interaction method is provided, including:
receiving and identifying an interactive instruction input by a user;
when the interaction instruction is a first instruction indicating to enter a display mode, outputting and displaying first digital content, wherein the first digital content does not have an interaction attribute;
when the interaction instruction is a second instruction indicating to enter an interaction mode, outputting and displaying second digital content, wherein the second digital content has an interaction attribute;
when the interaction instruction is a third instruction indicating entry into the entertainment mode, outputting and displaying first preset audio/video data;
and when the interaction instruction is a fourth instruction indicating entry into the sleep mode, outputting and displaying second preset audio/video data.
In one embodiment, the first digital content comprises at least one of a digital picture, a digital model, a digital person, a digital scene, and audio/video;
the second digital content comprises at least one of a digital person, a digital model, and a digital scene;
the first preset audio/video comprises music with dynamic light effects, or audio/video featuring digital person/digital model actions;
the second preset audio/video comprises audio/video with sleep-inducing sound effects.
In one embodiment, the method further comprises:
when in the display mode, receiving a mode setting instruction input by a user;
determining a corresponding display mode according to the mode setting instruction;
when the determined display mode is the 2D display mode, controlling the screens on each side to display images from different viewing angles;
and when the determined display mode is the 3D display mode, performing image distortion processing on the images displayed at screen splicing positions, curved-screen corners, or folding-screen bends, so as to display naked-eye 3D images.
In one embodiment, the method further comprises:
detecting a location of a user;
determining the viewing position of the user relative to the screen according to the position of the user;
and switching the display mode according to the viewing position of the user relative to the screen.
In one embodiment, the step of switching the display mode according to the viewing position of the user relative to the screen includes:
when the user directly faces a screen on one side, switching the display mode to the 2D display mode;
and when the user is located at a screen splicing position, a curved-screen corner, or a folding-screen bend, switching the display mode to the 3D display mode.
In one embodiment, the method further comprises:
when the device is in the display mode or the interaction mode, acquiring real-time information about the physical world where the user is located;
converting the real-time information into a keyword which can be identified by a program;
extracting corresponding display content according to the keywords;
and changing the display of the first digital content or the second digital content according to the extracted display content.
In one embodiment, the real-time information includes at least one of physical location, time information, and weather conditions.
In one embodiment, the step of changing the display of the first digital content or the second digital content according to the extracted display content includes:
changing the displayed digital person, digital scene, state of the digital person, and state of the digital scene in the first digital content or the second digital content according to the extracted display content.
In one embodiment, the method further comprises:
when in the display mode, upon receiving a wake-up instruction, switching from the display mode to the interaction mode and monitoring for a next instruction;
and if no next instruction is detected within a preset duration, switching from the interaction mode back to the display mode.
According to a second aspect of the embodiments of the present application, there is provided a data interaction apparatus, including:
the receiving module is used for receiving and identifying an interactive instruction input by a user;
the first output module is used for outputting first digital content when the interaction instruction is a first instruction indicating to enter a display mode, and the first digital content does not have an interaction attribute;
the second output module is used for outputting second digital content when the interaction instruction is a second instruction indicating to enter an interaction mode, and the second digital content has an interaction attribute;
the third output module is used for outputting first preset audio/video data when the interaction instruction is a third instruction indicating entry into the entertainment mode;
and the fourth output module is used for outputting second preset audio/video data when the interaction instruction is a fourth instruction indicating entry into the sleep mode.
According to a third aspect of embodiments of the present application, there is provided a smart device, including:
a screen for displaying the output content;
the data interaction method comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the data interaction method when executing the computer program.
According to a fourth aspect of embodiments of the present application, there is provided an intelligent system, including:
a user terminal and the smart device described above, communicatively connected with each other.
According to a fifth aspect of embodiments herein, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the data interaction method described above.
According to the data interaction method, apparatus, smart device, smart system, and computer-readable storage medium provided by this embodiment, when the smart device receives an interaction instruction input by a user, it identifies the instruction. When the instruction is identified as a first instruction indicating entry into the display mode, first digital content without interactive attributes is output and displayed, and the display mode is entered. When it is identified as a second instruction indicating entry into the interaction mode, second digital content with interactive attributes is output and displayed, and the interaction mode is entered; in this mode, the user can interact with the second digital content. When it is identified as a third instruction indicating entry into the entertainment mode, first preset audio/video data is output and displayed, and the entertainment mode is entered. When it is identified as a fourth instruction indicating entry into the sleep mode, second preset audio/video data is output and displayed, and the sleep mode is entered. In other words, the smart device has four working modes, and can enter the corresponding working mode according to the user's instruction and output the display content corresponding to that mode.
Drawings
Fig. 1 is an application scenario diagram of a data interaction method according to an embodiment of the present application;
fig. 2 is a flowchart of a data interaction method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a data interaction device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent device according to an embodiment of the present application.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
In this application, unless expressly stated or limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can include, for example, fixed connections, removable connections, or integral parts; mechanical or electrical connections; direct connections or indirect connections through intervening media; or internal communication between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows an application scenario of the data interaction method provided by this embodiment, involving a user, a user terminal, and a smart device. The user terminal may be a smart mobile terminal such as a mobile phone or a tablet computer; it maintains a communication connection with the smart device and has an APP matched with the smart device installed on it, through which the user can interact with the smart device. In addition, the smart device can be configured with an audio acquisition interface/touch screen, through which the user can interact with it directly. The smart device is provided with a display screen for displaying output content, and may be equipped with modules such as a camera and a laser detector: the camera can be used for face authentication or for acquiring the user's position, and the laser detector can be used to monitor the distance between the user and the smart device.
In one embodiment, a data interaction method is provided and executed by an intelligent device.
Referring to fig. 2, the data interaction method provided in this embodiment includes the following steps:
and step S100, receiving and identifying an interactive instruction input by a user.
The user can send out an interactive instruction through the intelligent mobile terminal, and also can directly send out a voice interactive instruction or a touch instruction to the intelligent equipment, and after the intelligent equipment receives the interactive instruction input by the user, the interactive instruction is identified, and then corresponding action is executed.
Step S200, when the interaction instruction is a first instruction indicating entry into the display mode, outputting and displaying first digital content, wherein the first digital content does not have interactive attributes.
The smart device provides a display mode, an interaction mode, an entertainment mode, and a sleep mode. When the interaction instruction is determined to be a first instruction indicating entry into the display mode, the smart device is controlled to enter the display mode, in which it can obtain first digital content without interactive attributes from an internal resource library and output and display it on the display screen. The first digital content may include at least one of a digital picture, a digital model, a digital person, a digital scene, and audio/video. Of course, in the display mode the smart device may also output and display digital content with interactive attributes, but since in this mode the user is assumed by default to have no need to interact with the digital content, first digital content without interactive attributes is generally displayed.
Step S300, when the interaction instruction is a second instruction indicating entry into the interaction mode, outputting and displaying second digital content, wherein the second digital content has interactive attributes.
When the interaction instruction is determined to be a second instruction indicating entry into the interaction mode, the smart device is controlled to enter the interaction mode, in which it acquires second digital content with interactive attributes from the internal resource library and outputs and displays it on the display screen. The second digital content may include at least one of a digital person, a digital model, and a digital scene, each of which can interact with the user.
Step S400, when the interaction instruction is a third instruction indicating entry into the entertainment mode, outputting and displaying first preset audio/video data.
When the interaction instruction is determined to be a third instruction indicating entry into the entertainment mode, the smart device is controlled to enter the entertainment mode, in which it can acquire first preset audio/video data from the internal resource library and output and display it on the display screen. The first preset audio/video data may include music with dynamic light effects or audio/video featuring digital person/digital model actions.
Step S500, when the interaction instruction is a fourth instruction indicating entry into the sleep mode, outputting and displaying second preset audio/video data.
When the interaction instruction is determined to be a fourth instruction indicating entry into the sleep mode, the smart device is controlled to enter the sleep mode, in which it can acquire second preset audio/video data from the internal resource library and output and display it on the display screen. The second preset audio/video data may include audio/video with sleep-inducing sound effects. In the sleep mode, the user can also set the duration of the mode, implementing a timed shutdown.
In this embodiment, the first and second preset audio/video data may be set automatically by the system or according to user requirements; for example, when listening to a pop song or a hypnotic track, the user may choose the displayed digital person/digital model according to personal preference.
In addition, the user can upload a custom favorite digital person/digital model to the smart device.
According to the data interaction method provided by this embodiment, the smart device has four working modes and can enter the corresponding mode according to the user's instruction, outputting the matching display content. Compared with the traditional single working mode, the working modes of the smart device provided by this embodiment are rich, the degree of intelligence is improved, and users' increasingly diverse needs are met.
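As a concrete illustration, the following minimal Python sketch shows the four-mode dispatch described above. The instruction names, the ResourceLibrary class, and its methods are assumptions made for this sketch only; the disclosure does not specify them.

from enum import Enum, auto

class Mode(Enum):
    DISPLAY = auto()        # first instruction
    INTERACTION = auto()    # second instruction
    ENTERTAINMENT = auto()  # third instruction
    SLEEP = auto()          # fourth instruction

class ResourceLibrary:
    """Illustrative stand-in for the device's internal resource library."""
    def first_digital_content(self):  return "digital picture (no interactive attribute)"
    def second_digital_content(self): return "digital person (interactive attribute)"
    def first_preset_av(self):        return "music with dynamic light effects"
    def second_preset_av(self):       return "audio/video with sleep sound effects"

# Hypothetical mapping from a recognized user instruction to a working mode.
INSTRUCTION_TO_MODE = {
    "enter_display": Mode.DISPLAY,
    "enter_interaction": Mode.INTERACTION,
    "enter_entertainment": Mode.ENTERTAINMENT,
    "enter_sleep": Mode.SLEEP,
}

def display(content: str) -> None:
    print("displaying:", content)   # stand-in for actual screen output

def handle_instruction(instruction: str, library: ResourceLibrary) -> None:
    """Enter the working mode matching the recognized instruction."""
    mode = INSTRUCTION_TO_MODE.get(instruction)
    if mode is Mode.DISPLAY:
        display(library.first_digital_content())
    elif mode is Mode.INTERACTION:
        display(library.second_digital_content())
    elif mode is Mode.ENTERTAINMENT:
        display(library.first_preset_av())
    elif mode is Mode.SLEEP:
        display(library.second_preset_av())

handle_instruction("enter_entertainment", ResourceLibrary())

A real device would drive the screen and speakers rather than printing, but the instruction-to-mode-to-content mapping is the core of steps S100 to S500.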
In one embodiment, the data interaction method provided in this embodiment further includes the following steps:
step S210, when the display mode is in the display mode, a mode setting instruction input by a user is received.
The intelligent device can be provided with screens in at least two side directions, can be used for arranging independent screens in different side directions and splicing adjacent screens, and can also be used for arranging a complete curved screen in a side surrounding manner or arranging a folding screen in a side surrounding manner. The intelligent device can have more than one display mode, when the intelligent device is in the display mode, a user can set the display mode according to requirements, namely, a mode setting instruction is sent to the intelligent device, and the intelligent device receives the mode setting instruction input by the user and carries out the next processing.
The intelligent device can be provided with an entity mode switching key, and a user can send a mode setting instruction by controlling the mode switching key; the user can also initiate a mode setting instruction through an APP on the intelligent mobile terminal.
Step S220, determining a corresponding display mode according to the mode setting instruction.
The intelligent device can determine the display mode corresponding to the mode setting instruction input by the user.
Step S230, when the determined display mode is the 2D display mode, controlling the screens on each side to display images from different viewing angles.
Step S240, when the determined display mode is the 3D display mode, performing image distortion processing on the images displayed at screen splicing positions, curved-screen corners, or folding-screen bends, so as to display naked-eye 3D images.
In the 2D display mode, the side screens display pictures from different viewing angles, achieving multi-view display. In the 3D display mode, distortion processing is applied to the images at screen seams, curved-screen corners, or folding-screen bends, so that a user viewing the fused image from an oblique angle of the device perceives a naked-eye 3D effect.
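The following sketch illustrates one way the two display modes could drive a multi-sided screen arrangement. Screen, render_viewpoint, and warp_seam are hypothetical stand-ins for the per-angle rendering and seam-distortion steps; the actual distortion algorithm is not specified in this disclosure.

from dataclasses import dataclass

@dataclass
class Screen:
    """Illustrative stand-in for one side screen of the device."""
    name: str
    def show(self, image: str) -> None:
        print(f"{self.name}: {image}")

def render_viewpoint(angle_deg: float) -> str:
    # Hypothetical per-angle renderer: one picture per viewing direction.
    return f"scene rendered at {angle_deg:.0f} degrees"

def warp_seam(left: Screen, right: Screen) -> str:
    # Hypothetical distortion step for the image spanning a splicing position,
    # curved-screen corner, or folding-screen bend.
    return f"distorted image across the {left.name}/{right.name} seam"

def apply_display_mode(mode: str, screens: list) -> None:
    if mode == "2D":
        # Each side screen independently shows the scene from its own angle.
        step = 360 / len(screens)
        for i, screen in enumerate(screens):
            screen.show(render_viewpoint(i * step))
    elif mode == "3D":
        # Distort the images where adjacent screens meet, so the fused
        # picture reads as naked-eye 3D from an oblique viewing angle.
        for left, right in zip(screens, screens[1:]):
            print(warp_seam(left, right))

apply_display_mode("2D", [Screen("front"), Screen("left"), Screen("back"), Screen("right")])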
In this embodiment, besides being switched through the physical mode-switching button or the APP on the smart mobile terminal, the display mode may also be switched automatically according to the user's position. Specifically, in one embodiment, the data interaction method provided in this embodiment further includes the following steps:
and step S250, detecting the position of the user.
The camera configured on the intelligent device can be used for collecting the image of the user, processing and analyzing the collected image and determining the position of the user.
And step S260, determining the viewing position of the user relative to the screen according to the position of the user.
And step S270, switching the display mode according to the watching position of the user relative to the screen.
After the position of the user is determined, the viewing position of the user relative to the screen can be determined, generally, the user may be located right in front of the screen in a certain direction, or the user may be located right in front of a splicing position of adjacent screens, a corner of a curved screen, or a bending position of a folded screen. Different viewing positions have different optimal display modes, for example, when a user is positioned right in front of the screen, the user is suitable for a 2D viewing mode but not suitable for a 3D display mode, and therefore, the current display mode can be switched to the optimal display mode according to the viewing position of the user relative to the screen, so as to ensure the optimal display effect.
In one embodiment, the step S270 of switching the display mode according to the viewing position of the user relative to the screen includes:
step S271, when the user is facing to the screen in a certain direction, the display mode is switched to the 2D display mode.
And step S272, when the user is positioned at the splicing position of the screen, the corner of the curved screen or the bending position of the folded screen, switching the display mode into the 3D display mode.
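A minimal sketch of such a rule follows, under the assumptions that the side screens are evenly spaced around the device and that a user within 15 degrees of a screen's facing direction counts as directly facing it; both the spacing and the threshold are illustrative choices, not values from the disclosure.

def choose_display_mode(user_angle_deg: float, screen_count: int = 4) -> str:
    """Pick 2D when the user directly faces a screen, 3D when they stand
    near a splicing position, curved-screen corner, or fold."""
    step = 360 / screen_count                  # angular spacing between screens
    offset = user_angle_deg % step             # angle from the nearest screen normal
    facing = min(offset, step - offset) <= 15  # within the assumed 15-degree cone
    return "2D" if facing else "3D"

print(choose_display_mode(3))    # nearly head-on to a screen -> 2D
print(choose_display_mode(45))   # midway between two screens -> 3D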
When a digital person on a traditional smart device is displayed or interacts with a user, the state of the digital person and the digital scene it inhabits may differ greatly from the user's physical-world surroundings, failing to give the user a sense of reality. To enhance the sense of reality of the virtual world, in one embodiment, the data interaction method provided by this embodiment further includes the following steps:
Step S310, when in the display mode or the interaction mode, acquiring real-time information about the physical world where the user is located.
When the smart device is in the display mode or the interaction mode, it can acquire real-time information about the physical world where the user is located, which may include at least one of the user's physical location, the current time, and the weather conditions.
step S320, converting the real-time information into a keyword for program recognition.
And step S330, extracting corresponding display contents according to the keywords.
After the keywords are determined according to the real-time information, the keywords can be matched with the existing content tags in the software content library, and after the corresponding content tags are matched, the corresponding display content is extracted according to the matched content tags. For example, if the weather condition of the physical world where the user is located is rainy weather, the keyword may be rain, whether a rain content tag exists is searched in the software content library, and if the rain content tag is matched with the rain content tag, the corresponding rain content special effect file is extracted from the weather special effect library according to the rain content tag, that is, the display content is extracted.
Step S340, changing the display of the first digital content or the second digital content according to the extracted display content.
After the display content is extracted, the first digital content in the display mode or the second digital content in the interaction mode can be changed accordingly, so that the displayed content corresponds to the user's real-world situation, improving the user's sense of reality.
In one embodiment, in step S340, changing the display of the first or second digital content according to the extracted display content includes: changing the displayed digital person, digital scene, state of the digital person, and state of the digital scene according to the extracted display content. For example, if the extracted content includes a "light rain" weather-effect file, the displayed digital scene may be changed to one with light rain, and the digital person's state changed to holding an umbrella.
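The pipeline from real-time information to changed display content could look like the following sketch. The content-library layout, tag names, and keyword rules are assumptions made for illustration; the rain example mirrors the one above.

# Hypothetical software content library: content tags mapped to effect files.
WEATHER_EFFECTS = {
    "rain": "light_rain_effect.fx",
    "snow": "snow_effect.fx",
}

def to_keyword(real_time_info: dict):
    """Convert raw physical-world information into a program-recognizable keyword."""
    weather = real_time_info.get("weather", "")
    for keyword in ("rain", "snow"):
        if keyword in weather:
            return keyword
    return None

def update_digital_content(real_time_info: dict, scene: dict) -> dict:
    keyword = to_keyword(real_time_info)
    effect = WEATHER_EFFECTS.get(keyword)   # match the keyword against content tags
    if effect is not None:
        scene["scene_effect"] = effect      # change the displayed digital scene
        if keyword == "rain":
            scene["digital_person"] = "holding an umbrella"
    return scene

scene = update_digital_content({"weather": "light rain"}, {"digital_person": "idle"})
print(scene)   # the scene gains a rain effect and the digital person holds an umbrella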
In one embodiment, the data interaction method provided by this embodiment further includes the following steps:
step S610, when the display mode is in, receiving the awakening instruction, switching the display mode to the interaction mode, and monitoring the next step instruction;
step S620, if no next instruction is monitored within the preset time, switching from the interactive mode to the display mode.
When the user wants to enter the interactive mode, a wake-up command can be output to the smart device, for example, a touch wake-up or a voice wake-up. And when the intelligent equipment receives the awakening instruction, the display mode is switched to the interactive mode. When the digital display device is in the interactive mode and the user does not interact with the digital person or the scene, the display mode can be switched to, in actual operation, a preset time length can be set according to actual requirements, and if the digital display device enters the preset time length of the interactive mode and does not receive the instruction of the user, the display mode is switched back.
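One way to realize this wake-and-timeout behavior is sketched below; wait_for_instruction is a hypothetical blocking listener supplied by the caller, and the 30-second default duration is an arbitrary example, not a value from the disclosure.

import time

def interaction_session(wait_for_instruction, preset_duration_s: float = 30.0) -> str:
    """After a wake-up instruction switches the device to the interaction mode,
    monitor for a next instruction; fall back to the display mode on timeout."""
    deadline = time.monotonic() + preset_duration_s
    while (remaining := deadline - time.monotonic()) > 0:
        instruction = wait_for_instruction(remaining)  # returns None if its wait expires
        if instruction is not None:
            return "interaction"   # a next instruction arrived: stay in interaction mode
    return "display"               # no instruction within the preset duration

# Stub listener that never hears an instruction: the session falls back quickly.
print(interaction_session(lambda remaining: None, preset_duration_s=0.01))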
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, they may be performed in other orders. Moreover, at least some of the steps may comprise multiple sub-steps or stages, which need not be completed at the same time and may be performed at different times, and whose order of execution need not be sequential but may alternate with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, another embodiment of the present application further provides a data interaction apparatus for implementing the above-mentioned data interaction method. The implementation scheme for solving the problem provided by the data interaction device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the data interaction device provided below can be referred to the limitations of the data interaction method in the foregoing, and details are not described here.
Referring to fig. 3, the data interaction apparatus provided in this embodiment includes a receiving module 100, a first output module 200, a second output module 300, a third output module 400, and a fourth output module 500. Wherein:
a receiving module 100, configured to receive and identify an interactive instruction input by a user;
a first output module 200, configured to output a first digital content when the interactive instruction is a first instruction indicating to enter a display mode, where the first digital content does not have an interactive attribute;
a second output module 300, configured to output a second digital content when the interactive instruction is a second instruction indicating to enter the interactive mode, where the second digital content has an interactive attribute;
a third output module 400, configured to output first preset audio and video data when the interactive instruction is a third instruction indicating to enter the entertainment mode;
and a fourth output module 500, configured to output second preset audio and video data when the interactive instruction is a fourth instruction indicating to enter the sleep mode.
The data interaction apparatus provided by this embodiment can control the smart device to enter the corresponding working mode according to the user's instruction and output the matching display content. Compared with the traditional single working mode, it offers rich working modes, improves the degree of intelligence, and meets users' increasingly diverse needs.
In one embodiment, the first digital content comprises at least one of a digital picture, a digital model, a digital person, a digital scene, and audio/video;
the second digital content comprises at least one of a digital person, a digital model, and a digital scene;
the first preset audio/video comprises music with dynamic light effects, or audio/video featuring digital person/digital model actions;
the second preset audio/video comprises audio/video with sleep-inducing sound effects.
In one embodiment, the receiving module is further configured to receive a mode setting instruction input by a user in the display mode;
the first output module is also used for determining a corresponding display mode according to the mode setting instruction;
when the determined display mode is the 2D display mode, controlling the screens on each side to display images from different viewing angles;
and when the determined display mode is the 3D display mode, performing image distortion processing on the images displayed at screen splicing positions, curved-screen corners, or folding-screen bends, so as to display naked-eye 3D images.
In one embodiment, the first output module is further configured to:
detecting a location of a user;
determining the viewing position of the user relative to the screen according to the position of the user;
and switching the display mode according to the viewing position of the user relative to the screen.
In one embodiment, the first output module is further configured to:
when the user directly faces a screen on one side, switching the display mode to the 2D display mode;
and when the user is located at a screen splicing position, a curved-screen corner, or a folding-screen bend, switching the display mode to the 3D display mode.
In one embodiment, the first output module and the second output module are further configured to:
acquiring real-time information of a physical world where a user is located;
converting the real-time information into a keyword which can be identified by a program;
extracting corresponding display content according to the keywords;
and changing the display of the first digital content or the second digital content according to the extracted display content.
In one embodiment, the real-time information includes at least one of physical location, time information, and weather conditions.
In one embodiment, the first output module and the second output module are configured to change the displayed digital person, digital scene, state of the digital person, and state of the digital scene in the first or second digital content according to the extracted display content.
In one embodiment, the receiving module is further configured to receive a wake-up command in the display mode;
the data interaction device provided by this embodiment further includes a switching module, where the switching module is configured to switch from the display mode to the interaction mode when the receiving module receives the wake-up instruction, and the receiving module monitors a next instruction; if the receiving module does not monitor the next step instruction within the preset time length, the switching module switches the working mode from the interactive mode to the display mode.
Each module in the data interaction apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the above method embodiments when the processor executes the computer program.
Fig. 4 is a schematic structural diagram of a smart device provided in an embodiment of the present application. The smart device includes a screen, a processor, a memory, and a network interface connected through a system bus. The screen is used to display output content, and the processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory: the non-volatile storage medium stores an operating system, a computer program, and a database, while the internal memory provides an environment in which the operating system and the computer program on the non-volatile storage medium can run. The database stores the various data involved in the data interaction method. The network interface is used to connect and communicate with an external terminal through a network. The computer program, when executed by the processor, implements a data interaction method.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is a block diagram of only a portion of the architecture associated with the subject application and does not constitute a limitation on the smart device to which the subject application applies, and that a particular smart device may include more or less components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a smart system is provided, comprising a user terminal and the smart device described above in communication connection. The user terminal may be a smart mobile terminal such as a mobile phone or a tablet computer, with an APP associated with the smart device installed on it; the user can control the smart device through the APP, or directly by voice or touch.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above-described method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several implementation modes of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the patent. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. A method for data interaction, comprising:
receiving and identifying an interactive instruction input by a user;
when the interaction instruction is a first instruction indicating to enter a display mode, outputting and displaying first digital content, wherein the first digital content does not have an interaction attribute;
when the interaction instruction is a second instruction indicating to enter an interaction mode, outputting and displaying second digital content, wherein the second digital content has an interaction attribute;
when the interaction instruction is a third instruction indicating entry into the entertainment mode, outputting and displaying first preset audio/video data;
and when the interaction instruction is a fourth instruction indicating entry into the sleep mode, outputting and displaying second preset audio/video data.
2. The data interaction method of claim 1, wherein the first digital content comprises at least one of a digital picture, a digital model, a digital person, a digital scene, and audio/video;
the second digital content comprises at least one of a digital person, a digital model, and a digital scene;
the first preset audio/video comprises music with dynamic light effects, or audio/video featuring digital person/digital model actions;
the second preset audio/video comprises audio/video with sleep-inducing sound effects.
3. The data interaction method of claim 1, wherein the method further comprises:
when in the display mode, receiving a mode setting instruction input by a user;
determining a corresponding display mode according to the mode setting instruction;
when the determined display mode is the 2D display mode, controlling the screens on each side to display images from different viewing angles;
and when the determined display mode is the 3D display mode, performing image distortion processing on the images displayed at screen splicing positions, curved-screen corners, or folding-screen bends, so as to display naked-eye 3D images.
4. The data interaction method of claim 3, wherein the method further comprises:
detecting a location of a user;
determining the viewing position of the user relative to the screen according to the position of the user;
and switching the display mode according to the viewing position of the user relative to the screen.
5. The data interaction method of claim 4, wherein the step of switching the display mode according to the viewing position of the user relative to the screen comprises:
when the user directly faces a screen on one side, switching the display mode to the 2D display mode;
and when the user is located at a screen splicing position, a curved-screen corner, or a folding-screen bend, switching the display mode to the 3D display mode.
6. The data interaction method of claim 1, wherein the method further comprises:
when in the display mode or the interaction mode, acquiring real-time information about the physical world where the user is located;
converting the real-time information into a keyword which can be identified by a program;
extracting corresponding display content according to the keywords;
and changing the display of the first digital content or the second digital content according to the extracted display content.
7. The data interaction method of claim 6, wherein the real-time information comprises at least one of physical location, time information, and weather conditions.
8. The data interaction method according to claim 6, wherein the step of changing the display of the first digital content or the second digital content according to the extracted display content comprises:
changing the displayed digital person, digital scene, state of the digital person, and state of the digital scene in the first digital content or the second digital content according to the extracted display content.
9. The data interaction method of claim 1, wherein the method further comprises:
when in the display mode, receiving a wake-up instruction, switching from the display mode to an interaction mode, and monitoring for a next instruction;
and if no next instruction is detected within a preset duration, switching from the interaction mode back to the display mode.
10. A data interaction device, comprising:
the receiving module is used for receiving and identifying an interactive instruction input by a user;
the first output module is used for outputting first digital content when the interaction instruction is a first instruction indicating to enter a display mode, and the first digital content does not have an interaction attribute;
the second output module is used for outputting second digital content when the interaction instruction is a second instruction for indicating to enter an interaction mode, and the second digital content has an interaction attribute;
the third output module is used for outputting first preset audio/video data when the interaction instruction is a third instruction indicating entry into the entertainment mode;
and the fourth output module is used for outputting second preset audio/video data when the interaction instruction is a fourth instruction indicating entry into the sleep mode.
11. A smart device, comprising:
a screen for displaying the output content;
a memory storing a computer program; and a processor that implements the data interaction method of any one of claims 1-9 when executing the computer program.
12. An intelligent system, comprising:
a user terminal; and the smart device of claim 11, in communication connection with the user terminal.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data interaction method of any one of claims 1 to 9.
Priority Application (1)

CN202211107369.3A, filed 2022-09-13, priority date 2022-09-13, status pending: Data interaction method and device, intelligent equipment, system and readable storage medium

Publication (1)

CN115291764A, published 2022-11-04

Family

ID=83833849, with one family application: CN202211107369.3A (pending)

Country Status (1)

CN: CN115291764A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party

US2004/0165006A1 * (priority 2002-07-19, published 2004-08-26), Timothy Kirby: Methods and apparatus for an interactive media display
CN109450840A * (priority 2018-08-31, published 2019-03-08): A scene mode switching method and storage device
CN111782312A * (priority 2020-05-14, published 2020-10-16): Mode switching method and device, robot and computer readable storage medium
CN112612358A * (priority 2019-10-03, published 2021-04-06): Multi-modal natural interaction method between a human and a large screen based on visual recognition and voice recognition
CN114302201A * (priority 2021-03-30, published 2022-04-08): Method for automatically switching the screen on and off in speaker mode, intelligent terminal and display device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party

Smart Home for Easy Life user: "小爱同学常用语音指令–娱乐篇" (Common Xiao Ai voice commands: entertainment), https://easylife.yt/smarthome/zh-cn/date/2020/07/ *
Smart Home for Easy Life user: "小爱同学常用语音指令–电器篇" (Common Xiao Ai voice commands: appliances), https://easylife.yt/smarthome/zh-cn/date/2020/07/ *
爱否科技 (FView): "小米小爱触屏音响上手:智能音箱需要屏幕吗?" (Hands-on with the Xiaomi Xiao Ai touchscreen speaker: does a smart speaker need a screen?), https://www.sohu.com/a/302679747_473286 *


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2022-11-04)