CN113359976A - Control method of intelligent device, server and storage medium - Google Patents

Control method of intelligent device, server and storage medium

Info

Publication number
CN113359976A
Authority
CN
China
Prior art keywords
target video
target
control node
action
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110565449.2A
Other languages
Chinese (zh)
Inventor
薛玉梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong General Hospital
Original Assignee
Guangdong General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong General Hospital filed Critical Guangdong General Hospital
Priority to CN202110565449.2A priority Critical patent/CN113359976A/en
Publication of CN113359976A publication Critical patent/CN113359976A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H7/00 - Devices for suction-kneading massage; Devices for massaging the skin by rubbing or brushing not otherwise provided for
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N - ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00 - Electrotherapy; Circuits therefor
    • A61N1/18 - Applying electric currents by contact electrodes
    • A61N1/32 - Applying electric currents by contact electrodes; alternating or intermittent currents
    • A61N1/36 - Applying electric currents by contact electrodes; alternating or intermittent currents for stimulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 - Indexing scheme relating to G06F3/01
    • G06F2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Rehabilitation Therapy (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Pain & Pain Management (AREA)
  • Epidemiology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Dermatology (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a control method of an intelligent device, the intelligent device, a server and a storage medium. The control method is applied to a video display terminal, the video display terminal is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information. The method comprises the following steps: acquiring current playing information of the target video; comparing the current playing information of the target video with the identification information of each control node; and, when the current playing information of the target video matches the identification information of any control node, sending an action signal to an execution device so that the execution device makes a target action, wherein the target action corresponds to that control node of the target video. While watching the video, the user can directly feel the sensory impact of the action corresponding to the target video, which effectively improves the experience of VR video users.

Description

Control method of intelligent device, server and storage medium
Technical Field
The present application relates to the field of intelligent device control, and in particular, to a control method for an intelligent device, a server, and a storage medium.
Background
Virtual Reality (VR) is a practical technology developed in the 20th century. Immersion is the most important feature of virtual reality technology: it makes the user become, and feel like, a part of the environment created by the computer system, and the immersive performance of the technology depends on the user's perception system.
However, at present the user's perception is often limited to vision alone, which results in a poor user experience, and this experience problem remains a bottleneck in the development of VR technology.
Disclosure of Invention
The application provides a control method of an intelligent device, the intelligent device, a server and a storage medium, and aims to solve the technical problem of how to improve the experience of VR video users.
In a first aspect, the present application provides a control method of an intelligent device, applied to a video display terminal, where the video display terminal is configured to play a target video to a user, the target video includes at least one control node, and the control node carries identification information,
the method comprises the following steps:
acquiring current playing information of the target video;
comparing the current playing information of the target video with the identification information of each control node;
when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to execution equipment so that the execution equipment can make a target action;
wherein the target action corresponds to the control node of the target video.
Optionally, the causing the execution device to perform the target action includes: causing the execution device to directly perform the target action on the user, or causing the execution device to control a third-party device to respond to the target action.
Optionally, the action signal includes one or more of: a signal indicating the number of times the target action is executed, a signal indicating the duration of the target action, and a signal indicating the intensity of the target action.
Optionally, the identification information carried by the control node includes: a first duration from the starting moment of the target video to the node moment of the control node in the target video;
the current playing information of the target video includes a second duration from the starting moment of the target video to the current playing moment of the target video;
the comparing the playing information of the target video with the identification information of each control node includes:
comparing each first duration with the second duration.
Optionally, the identification information carried by the control node includes: the content of the current frame of the target video corresponding to the control node;
the playing information of the target video comprises the content of the current frame of the target video;
the comparing the playing information of the target video with the identification information of the control node includes:
and comparing the content of the current frame of the target video corresponding to each control node with the content of the current frame of the target video.
Optionally, before sending the action signal to the execution device, the method further includes:
and acquiring the category of the target video, and acquiring the execution equipment corresponding to the type of the target video according to the category of the target video.
Optionally, an executing device corresponding to each of the control nodes and an action of the executing device corresponding to each of the control nodes are configured.
In a second aspect, the present application provides an intelligent device, which applies the control method of any one of the above first aspects, and the intelligent device includes: the system comprises a video display module, a main control module and an execution module;
the video display module is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information;
the main control module is used for acquiring the current playing information of the target video; comparing the current playing information of the target video with the identification information of each control node; when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to the execution module so that the execution module can make a target action;
the execution module is used for executing the target action corresponding to the control node of the target video.
In a third aspect, a server is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the control method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the control method according to any one of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the control method provided by the embodiment of the application, the current playing information of the target video is obtained; comparing the current playing information of the target video with the identification information of each control node; when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to execution equipment so that the execution equipment can make a target action; wherein the target action corresponds to the control node of the target video. Through the target action executed by the execution equipment, the user can directly feel the sensory impact caused by the action corresponding to the target video, and the physical sensory experience of the user is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a control method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a control method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a control method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the application, the control method of the intelligent device is applied to a video display terminal; the video display terminal is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information;
fig. 1 is a schematic flowchart of a control method for an intelligent device according to an embodiment of the present application. Referring to fig. 1, in an embodiment of the present application, the control method includes the steps of:
step 100: acquiring current playing information of a target video;
in the embodiment of the application, the video display terminal can play VR video or ordinary video; according to different requirements of users, the content of the target video is different;
step 200: comparing the current playing information of the target video with the identification information of each control node;
in the embodiment of the application, each control node is enabled to carry identification information in a preset mode, and the number of the control nodes in a target video is different according to the content of the target video and the requirements of users;
step 300: when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to the execution equipment so as to enable the execution equipment to make a target action; wherein the target action corresponds to a control node of the target video.
In the embodiment of the application, the execution device may be a wearable device, or any intelligent device capable of executing the target action; the execution device receives the action signal via Bluetooth, a wireless network, or the like.
In the embodiment of the application, the current playing information of the target video is obtained; comparing the current playing information of the target video with the identification information of each control node; when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to the execution equipment so as to enable the execution equipment to make a target action; wherein the target action corresponds to the control node of the target video. Through the target action executed by the execution equipment, the user can directly feel the sensory impact caused by the action corresponding to the target video, and the physical sensory experience of the user is improved.
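As a minimal sketch only (the patent text itself provides no code), the matching flow of steps 100 to 300 could be organized as follows in Python; the names ControlNode, find_matching_node, on_playback_tick and send_action_signal are illustrative assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass(frozen=True)
class ControlNode:
    node_id: str
    identification: object  # e.g. a time offset or a reference frame (assumed representation)
    action: str             # target action bound to this control node


def find_matching_node(playing_info: object,
                       nodes: Iterable[ControlNode],
                       matches: Callable[[object, object], bool]) -> Optional[ControlNode]:
    """Step 200: compare the current playing information with each node's identification information."""
    for node in nodes:
        if matches(playing_info, node.identification):
            return node
    return None


def on_playback_tick(playing_info: object,
                     nodes: Iterable[ControlNode],
                     matches: Callable[[object, object], bool],
                     send_action_signal: Callable[[str], None]) -> None:
    """Steps 100-300: on each playback update, send the action signal when a control node matches."""
    node = find_matching_node(playing_info, nodes, matches)
    if node is not None:
        send_action_signal(node.action)  # e.g. delivered over Bluetooth or a wireless network
```

The matches callback is deliberately left abstract, because the embodiments below describe both a time-based comparison and a frame-content-based comparison.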
In one application scene of the embodiment of the application, the video display terminal plays a rehabilitation physiotherapy VR video. When the video shows an arm massage being performed on the rehabilitee, the execution device receives the arm-massage signal corresponding to the video content and performs the massage action on the rehabilitee's arm. At this moment the rehabilitee not only sees the VR video content but also feels it physically, which improves the immersive experience and thereby greatly improves the rehabilitation effect.
In an embodiment of the present application, causing the execution device to make the target action includes: causing the execution device to directly perform the target action on the user, or causing the execution device to control a third-party device to respond to the target action.
In embodiments of the present application, the target action of the execution device may be performed directly on the user, such as the massage action described above; alternatively, a third-party device may be controlled to respond to the target action, where responding to the target action means making an action adapted to the target action according to the action signal. For example, when the VR video content is a food display, a preset third-party device may be controlled to emit the smell of the food, presenting the user with a scene adapted to the target video content so that the user feels personally present, which enhances the viewing experience.
In an embodiment of the application, the action signal includes one or more of: a signal indicating the number of times the target action is executed, a signal indicating the duration of the target action, and a signal indicating the intensity of the target action.
In the embodiment of the application, different control signals are sent for different control nodes. For example, at the first control node, the above arm-massage action is performed three times within a preset time period, and/or each repetition lasts ten seconds, and/or the massage intensity is medium; at the second control node, the target action becomes an arm-stretching action, which is performed once within a preset time period, and/or lasts five seconds each time, and/or has a slight stretching amplitude. The target action corresponding to a control node can thus be adjusted according to the user's state, further improving the user's experience and immersion; applied to rehabilitation VR videos in particular, this effectively improves the rehabilitee's recovery.
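A possible shape for such an action signal, using the example values above (three massage repetitions of ten seconds at medium intensity, one slight five-second stretch), is sketched below; the field and node names are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ActionSignal:
    action: str                # e.g. "arm_massage" or "arm_stretch"
    repetitions: int = 1       # number of times the target action is executed
    duration_s: float = 10.0   # duration of each execution, in seconds
    intensity: str = "medium"  # e.g. "slight", "medium", "strong"


# Bindings matching the two control nodes described above (values are illustrative).
NODE_ACTIONS = {
    "first_node": ActionSignal("arm_massage", repetitions=3, duration_s=10.0, intensity="medium"),
    "second_node": ActionSignal("arm_stretch", repetitions=1, duration_s=5.0, intensity="slight"),
}
```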
In an embodiment of the present application, the identification information carried by the control node includes: a first duration from the starting moment of the target video to the node moment of the control node in the target video; the current playing information of the target video includes a second duration from the starting moment of the target video to the current playing moment of the target video; and comparing the playing information of the target video with the identification information of each control node includes comparing each first duration with the second duration.
In the embodiment of the application, the identification information carried by the control nodes is preset as time information, namely the first duration from the starting moment of the target video to the node moment of each control node: for example, the first duration of control node A is five minutes, that of control node B is fifteen minutes, and that of control node C is twenty minutes. If the second duration, from the starting moment of the target video to the current playing moment, is fifteen minutes, the control node matching the second duration is B, and the target action made by the execution device is the action corresponding to control node B.
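Under the assumptions of this example (node A at five minutes, B at fifteen, C at twenty), the time-based comparison could be sketched as follows; the half-second tolerance is an illustrative assumption, not a value from the patent.

```python
from typing import Optional

# First durations carried by the control nodes, as in the example above.
NODE_OFFSETS_S = {"A": 5 * 60, "B": 15 * 60, "C": 20 * 60}


def match_by_time(second_duration_s: float, tolerance_s: float = 0.5) -> Optional[str]:
    """Return the control node whose first duration matches the current playback time."""
    for node_id, first_duration_s in NODE_OFFSETS_S.items():
        if abs(second_duration_s - first_duration_s) <= tolerance_s:
            return node_id
    return None


assert match_by_time(15 * 60) == "B"  # fifteen minutes into playback -> control node B
```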
In an embodiment of the present application, the identification information carried by the control node includes: the content of the current frame of the target video corresponding to the control node; the playing information of the target video includes the content of the current frame of the target video; and comparing the playing information of the target video with the identification information of the control nodes includes comparing the content of the current frame of the target video corresponding to each control node with the content of the current frame of the target video.
In the embodiment of the application, the identification information carried by the control nodes is preset as video content information, that is, the content of the current frame of the target video corresponding to each control node. The frame content may be compared by any existing image-recognition method.
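Since the patent leaves the image-recognition method open, the sketch below uses one possible stand-in, a simple average-hash comparison of the current frame against a control node's reference frame; the function names and the bit-distance threshold are assumptions.

```python
from PIL import Image  # Pillow


def average_hash(image_path: str, size: int = 8) -> int:
    """Compute a small average hash of a frame image (one possible recognition method)."""
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px >= mean else 0)
    return bits


def frames_match(current_frame_path: str, node_frame_path: str, max_distance: int = 5) -> bool:
    """Treat the frames as matching when their hashes differ in only a few bits."""
    distance = bin(average_hash(current_frame_path) ^ average_hash(node_frame_path)).count("1")
    return distance <= max_distance
```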
In an embodiment of the present application, before sending the action signal to the execution device, the control method further includes: acquiring the category of the target video, and acquiring, according to the category of the target video, the execution device corresponding to that category.
In the embodiment of the present application, if the category of the target video is, for example, a rehabilitation video, the execution devices corresponding to this category are a massager, a stretcher, an electrical stimulator, and the like. The execution devices can serve as interactive peripherals of the user's computer terminal: when the user starts video playback, the terminal shows the interactive peripherals corresponding to the video, obtains the control nodes as the video plays, and calls the interactive peripheral corresponding to each control node. Matching the execution devices to the category of the target video ensures, at the level of external hardware configuration, that the user experience is improved.
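A minimal sketch of the category-to-peripheral lookup described here might look as follows; the category keys and device names are illustrative assumptions based on the rehabilitation and food-display examples.

```python
from typing import List

# Video category -> execution devices (names assumed from the examples above).
DEVICES_BY_CATEGORY = {
    "rehabilitation": ["massager", "stretcher", "electrical_stimulator"],
    "food_display": ["scent_emitter"],
}


def devices_for(video_category: str) -> List[str]:
    """Look up the interactive peripherals to enable before sending any action signal."""
    return DEVICES_BY_CATEGORY.get(video_category, [])
```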
In the embodiment of the application, the execution device corresponding to each control node, and the action of the execution device corresponding to each control node, are configured.
In the embodiment of the application, a configuration information list corresponding to the control nodes can be generated in advance and called during video playback; alternatively, the execution device of a control node and its action can be configured dynamically according to the user's requirements while the video plays. For example, the execution device corresponding to control node A may be one, several or all of the massager, stretcher and electrical stimulator in the above embodiments, and the number of executions of the target action, the duration of the target action and the intensity of the target action for any execution device can be preset in the configuration information list corresponding to the control node. In addition, the user can feed his or her state back to the execution device through a feedback device during use, so that the execution device adjusts parameters such as the number, duration or intensity of the executed actions; this improves the accuracy and flexibility of control and makes the control more humanized.
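The pre-generated configuration information list and the feedback-driven adjustment could be sketched as below; node identifiers, field names and the intensity scale are assumptions for illustration.

```python
# Pre-generated configuration information list: control node -> device actions (values assumed).
NODE_CONFIG = {
    "A": [
        {"device": "massager", "repetitions": 3, "duration_s": 10.0, "intensity": 0.5},
        {"device": "electrical_stimulator", "repetitions": 1, "duration_s": 5.0, "intensity": 0.3},
    ],
}


def apply_user_feedback(node_id: str, intensity_scale: float) -> None:
    """Scale the configured intensity for a node according to input from the user's feedback device."""
    for entry in NODE_CONFIG.get(node_id, []):
        entry["intensity"] = max(0.0, min(1.0, entry["intensity"] * intensity_scale))
```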
Referring to fig. 2, in an embodiment of the present application, a control method of an intelligent device includes the steps of:
step 100: acquiring current playing information of a target video;
step 200A: acquiring, for each control node, the first duration from the starting moment of the target video to the node moment of the control node in the target video; the current playing information of the target video includes the second duration from the starting moment of the target video to the current playing moment of the target video; comparing each first duration with the second duration;
step 101: acquiring the category of the target video, and acquiring, according to the category of the target video, the execution device corresponding to that category;
step 300A: when a first duration matches the second duration, sending an action signal to the execution device so that the execution device makes the target action; wherein the target action corresponds to the matched control node of the target video.
Referring to fig. 3, the above steps 200A and 300A can also be implemented by the following steps 200B and 300B:
step 200B: acquiring the content of the current frame of the target video corresponding to each control node; the playing information of the target video includes the content of the current frame of the target video; comparing the playing information of the target video with the identification information of the control nodes, that is, comparing the content of the current frame of the target video corresponding to each control node with the content of the current frame of the target video;
step 300B: when the content of the current frame of the target video corresponding to a control node matches the content of the current frame of the target video, sending an action signal to the execution device so that the execution device makes the target action; wherein the target action corresponds to the matched control node of the target video.
Referring to fig. 4, in an embodiment of the present application, an intelligent device is provided, to which the above control method of the intelligent device is applied; the intelligent device includes: a video display module 10, a main control module 20 and an execution module 30;
the video display module 10 is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information;
the main control module 20 is configured to obtain current playing information of the target video; comparing the current playing information of the target video with the identification information of each control node; when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to the execution module so that the execution module can make a target action;
and the execution module 30 is used for executing the target action corresponding to the control node of the target video.
In an embodiment of the present application, the execution module 30 is configured to directly make the target action to the user, or control the third-party device to respond to the target action.
In the embodiment of the present application, the action signal sent by the main control module 20 to the execution module 30 includes one or more of: a signal indicating the number of times the target action is executed, a signal indicating the duration of the target action, and a signal indicating the intensity of the target action.
In an embodiment of the present application, the identification information carried by the control node includes: a first duration from the starting moment of the target video to the node moment of the control node in the target video; the current playing information of the target video includes a second duration from the starting moment of the target video to the current playing moment of the target video; the main control module 20 is further configured to compare each first duration with the second duration.
In an embodiment of the present application, the identification information carried by the control node includes: the content of the current frame of the target video corresponding to the control node; the playing information of the target video includes the content of the current frame of the target video; the main control module 20 is further configured to compare the content of the current frame of the target video corresponding to each control node with the content of the current frame of the target video.
In the embodiment of the present application, the main control module 20 is further configured to configure an execution device corresponding to each control node, and an action of the execution device corresponding to each control node.
In the embodiment of the present application, the main control module 20 is further configured to acquire the category of the target video and to acquire, according to the category of the target video, the execution device corresponding to that category.
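As an illustrative sketch only, the cooperation of the three modules described above might be wired as follows; the class and method names are assumptions, not part of the disclosure.

```python
class SmartDevice:
    """Sketch of how the three modules might cooperate; collaborators are duck-typed."""

    def __init__(self, video_display, main_control, executor):
        self.video_display = video_display  # plays the target video and yields playing information
        self.main_control = main_control    # matches playing information against the control nodes
        self.executor = executor            # performs the target action

    def run(self) -> None:
        for playing_info in self.video_display.play():       # step 100: current playing information
            signal = self.main_control.match(playing_info)   # step 200: compare with control nodes
            if signal is not None:
                self.executor.perform(signal)                 # step 300: make the target action
```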
Referring to fig. 5, an embodiment of the present application provides a server, including a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete communication with each other through the communication bus 114, and the memory 113 is used for storing a computer program;
in an embodiment of the present application, the processor 111 is configured to implement the control method of the smart device according to any one of the foregoing method embodiments when executing the program stored in the memory 113.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the intelligent device control method provided in any one of the foregoing method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A control method of an intelligent device, applied to a video display terminal, wherein the video display terminal is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information,
the method comprises the following steps:
acquiring current playing information of the target video;
comparing the current playing information of the target video with the identification information of each control node;
when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to execution equipment so that the execution equipment can make a target action;
wherein the target action corresponds to the control node of the target video.
2. The control method according to claim 1, wherein the causing the execution device to make the target action comprises:
causing the execution device to directly perform the target action on the user, or causing the execution device to control a third-party device to respond to the target action.
3. The control method according to claim 2, wherein the action signal comprises one or more of: a signal indicating the number of times the target action is executed, a signal indicating the duration of the target action, and a signal indicating the intensity of the target action.
4. The control method according to claim 1, wherein the identification information carried by the control node comprises: a first duration from the starting moment of the target video to the node moment of the control node in the target video;
the current playing information of the target video comprises a second duration from the starting moment of the target video to the current playing moment of the target video;
the comparing the playing information of the target video with the identification information of each control node comprises:
comparing each first duration with the second duration.
5. The control method according to claim 1, wherein the identification information carried by the control node comprises: the content of the current frame of the target video corresponding to the control node;
the playing information of the target video comprises the content of the current frame of the target video;
the comparing the playing information of the target video with the identification information of the control node includes:
and comparing the content of the current frame of the target video corresponding to each control node with the content of the current frame of the target video.
6. The control method of claim 1, wherein prior to issuing the action signal to the performing device, the method further comprises:
and acquiring the category of the target video, and acquiring the execution equipment corresponding to the type of the target video according to the category of the target video.
7. The control method according to any one of claims 1 to 6, characterized in that an execution device corresponding to each of the control nodes is configured, and an action of the execution device corresponding to each of the control nodes is configured.
8. An intelligent device, applying the control method of any one of claims 1 to 7, wherein the intelligent device comprises: a video display module, a main control module and an execution module;
the video display module is used for playing a target video to a user, the target video comprises at least one control node, and the control node carries identification information;
the main control module is used for acquiring the current playing information of the target video; comparing the current playing information of the target video with the identification information of each control node; when the current playing information of the target video is matched with the identification information of any control node, sending an action signal to the execution module so that the execution module can make a target action;
the execution module is used for executing the target action corresponding to the control node of the target video.
9. A server, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the control method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the control method according to any one of claims 1 to 7.
CN202110565449.2A 2021-05-24 2021-05-24 Control method of intelligent device, server and storage medium Pending CN113359976A (en)

Priority Applications (1)

Application Number: CN202110565449.2A (CN); Priority Date: 2021-05-24; Filing Date: 2021-05-24; Title: Control method of intelligent device, server and storage medium

Publications (1)

Publication Number Publication Date
CN113359976A (en) 2021-09-07

Family

ID=77527325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110565449.2A Pending CN113359976A (en) 2021-05-24 2021-05-24 Control method of intelligent device, server and storage medium

Country Status (1)

Country Link
CN (1) CN113359976A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105929941A (en) * 2016-04-13 2016-09-07 广东欧珀移动通信有限公司 Information processing method and device, and terminal device
CN107616896A (en) * 2016-07-14 2018-01-23 幸福在线(北京)网络技术有限公司 A kind of intelligent massaging system and intelligent massaging method
CN108744535A (en) * 2018-05-25 2018-11-06 数字王国空间(北京)传媒科技有限公司 VR control method for playing back, device, VR control terminals and readable storage medium storing program for executing
CN110446117A (en) * 2019-08-05 2019-11-12 北京卡路里信息技术有限公司 Video broadcasting method, apparatus and system
CN110471536A (en) * 2019-09-17 2019-11-19 河北陆航检测认证有限公司 Fire-fighting experiential method, device and terminal device based on VR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210907