CN109445575B - Method and apparatus for presenting projection information - Google Patents

Method and apparatus for presenting projection information

Info

Publication number
CN109445575B
CN109445575B (application CN201811154173.3A)
Authority
CN
China
Prior art keywords
scene
projection
user
information
futon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811154173.3A
Other languages
Chinese (zh)
Other versions
CN109445575A (en
Inventor
邓超
王春雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN201811154173.3A priority Critical patent/CN109445575B/en
Publication of CN109445575A publication Critical patent/CN109445575A/en
Application granted granted Critical
Publication of CN109445575B publication Critical patent/CN109445575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Abstract

The application aims to provide a method and a device for presenting projection information. In contrast to the prior art, an intelligent futon generates a scene projection instruction based on scene feature information and sends it to a projection device, and the projection device then presents scene image information matching at least one piece of scene feature information based on the instruction. In this way, corresponding projection information, including but not limited to sound information and virtual projection information, can be presented to a user seated on the intelligent futon, helping the user settle into stillness more easily and improving the efficiency of seated meditation.

Description

Method and apparatus for presenting projection information
Technical Field
The present application relates to the field of communications technologies, and in particular, to a technique for presenting projection information.
Background
Before the present invention, a traditional futon was merely a cushion for sitting and offered no further functions to help a practitioner settle the mind. A user practicing seated meditation generally wishes to enter a meditative state more quickly; clearly, a traditional futon cannot meet this need.
Disclosure of Invention
The application aims to provide a method and equipment for presenting projection information.
According to one aspect of the application, there is provided a method at an intelligent futon end for presenting projection information, wherein the method comprises:
acquiring at least one piece of scene feature information in response to a scene presenting instruction;
generating a scene projection instruction based on the at least one piece of scene feature information;
and sending the scene projection instruction to a projection device, so that the projection device projects and presents scene image information matching the at least one piece of scene feature information.
Further, wherein the method further comprises:
and sending a portrait projection instruction to the projection device in response to a scene determination instruction, so that the projection device acquires sitting image information of the user and associates it with the scene image information.
Further, wherein the method further comprises:
detecting a sitting posture of the user;
and when the sitting posture is not standard, reminding the user to correct the posture.
Further, wherein the detecting the sitting posture of the user comprises:
detecting whether the center of gravity of the user is shifted;
and determining that the sitting posture is not standard when the center of gravity of the user is offset.
Further, wherein the reminding the user to correct the posture comprises:
sending the user's sitting posture information to the projection device, so that the projection device generates second scene image information based on the user's sitting posture and prompts the user through the second scene image information that the current sitting posture is not standard.
Further, wherein the method further comprises:
acquiring at least one piece of scene feature information based on a scene switching instruction;
generating a scene projection instruction based on the at least one piece of scene feature information;
and sending the scene projection instruction to the projection device, so that the projection device projects and presents first scene image information matching the at least one piece of scene feature information.
According to another aspect of the present application, there is also provided a method at a projection device end for presenting projection information, where the method comprises:
receiving a scene projection instruction sent by an intelligent futon;
and presenting scene image information matching at least one piece of scene feature information based on the scene projection instruction.
Further, wherein the method further comprises:
receiving a portrait projection instruction sent by the intelligent futon;
and acquiring sitting image information of the user based on the portrait projection instruction, and associating the sitting image information with the scene image information.
Further, wherein the method further comprises:
receiving a scene projection instruction sent by the intelligent futon based on a scene switching instruction;
and presenting first scene image information matching at least one piece of scene feature information based on the scene projection instruction.
Further, wherein the method further comprises:
detecting a sitting posture of the user based on the sitting image information;
and when the sitting posture is not standard, reminding the user to correct the posture.
Further, wherein, when the sitting posture is not standard, reminding the user to correct the posture comprises:
generating second scene image information based on the sitting image information of the user, and prompting the user to correct the posture through the second scene image information.
Compared with the prior art, the intelligent futon generates a scene projection instruction based on scene feature information and sends it to the projection device, and the projection device then presents scene image information matching at least one piece of scene feature information based on the instruction. In this way, corresponding projection information, including but not limited to sound information and virtual projection information, can be presented to the user seated on the intelligent futon, helping the user settle into stillness and improving the efficiency of seated meditation.
In addition, the user's sitting posture can be detected, and when it is not standard the user can be reminded to correct it, so that the user maintains a correct sitting posture, which benefits the user's health and improves the user experience.
Drawings
Other features, objects, and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 illustrates a flow diagram of a method for presenting projection information in accordance with an aspect of the subject application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
To further illustrate the technical means and effects adopted by the present application, the following description clearly and completely describes the technical solution of the present application with reference to the accompanying drawings and preferred embodiments.
FIG. 1 illustrates a flow chart of a method for presenting projection information provided by an aspect of the subject application. The method of this embodiment may be implemented by the interaction of the smart futon 1 and the projection device 2. The method comprises the following steps:
s11, the intelligent futon 1 responds to the scene presenting instruction, and obtains at least one scene characteristic information;
s12, generating a scene projection instruction by the intelligent futon 1 based on at least one scene characteristic information;
s13, the intelligent futon 1 sends a scene projection instruction to a projection device 2, so that the projection device projects and presents scene image information matched with at least one piece of scene characteristic information; accordingly, the projection device receives the scene projection instruction;
s14 the projection device 2 presents scene image information matching with at least one scene feature information based on the scene projection instruction.
In this application, the intelligent futon can be used for seated meditation. The intelligent futon includes, but is not limited to, a device capable of interacting with the projection device; for example, it may include a communication module for interacting with the projection device, and may also be provided with corresponding operating buttons so that interaction with the projection device is triggered by the user. The projection device includes, but is not limited to, a device capable of interacting with other devices and playing projected information, for example, a device capable of projecting images or playing sound.
The smart futon and projection device described herein are merely exemplary; other existing or future smart futons and projection devices that may be suitable for use in the present application are also intended to fall within the scope of the present application and are incorporated herein by reference.
In this embodiment, in step S11, the intelligent futon 1 acquires at least one piece of scene feature information in response to a scene presenting instruction. The scene presenting instruction is used to trigger the intelligent futon to acquire scene feature information so that the projection device presents scene image information. The scene feature information represents feature information related to the scene to be projected and may be expressed as scene keywords, for example keywords such as lotus-pond side, bodhi tree, empty mountain valley, Buddhist story, and the like.
Specifically, the intelligent futon may be provided with a corresponding button; when the user sits down, the scene presenting instruction may be triggered directly through that button. Alternatively, the user may control the intelligent futon through a remote control device; for example, the scene presenting instruction may be sent to the intelligent futon via the remote control, and the intelligent futon then receives the instruction. Preferably, the intelligent futon may trigger the scene presenting instruction automatically; for example, when it detects that the user has sat down to meditate, it may trigger and fetch the instruction automatically. Here, the scene presenting instruction may correspond to the scene feature information.
The above manners of triggering the scene presenting instruction are only examples; other existing or future manners of triggering the scene presenting instruction that may be applicable to the present application should also fall within its scope of protection and are incorporated herein by reference.
Continuing in this embodiment, in step S12, the intelligent futon 1 generates a scene projection instruction based on the at least one piece of scene feature information. The scene projection instruction is used to instruct the projection device to present scene image information. Specifically, the intelligent futon 1 may generate the corresponding scene projection instruction based on a preset generation rule, and the instruction can then be sent to the corresponding projection device 2.
Continuing in this embodiment, in step S13, the intelligent futon 1 sends the scene projection instruction to the projection device 2, so that the projection device projects scene image information matching the at least one piece of scene feature information; accordingly, the projection device receives the scene projection instruction. Here, the intelligent futon and the projection device 2 are connected through a network, which includes but is not limited to Bluetooth or a wireless hotspot; the specific network is not limited in this application, as long as it enables communication between the intelligent futon 1 and the projection device 2.
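The "preset generation rule" of step S12 is not specified further in the description; one minimal way to sketch it is to pack the scene feature keywords into a small message that the projection device can parse on receipt. The JSON wire format and field names below are assumptions for illustration only, not part of the patent.

```python
import json

def build_scene_projection_instruction(feature_keywords):
    """Pack scene feature keywords into a scene projection instruction.

    The message format is hypothetical; the description only requires that
    the projection device can recover the scene feature information from it.
    """
    return json.dumps({
        "type": "scene_projection",
        "features": list(feature_keywords),
    })

def parse_scene_projection_instruction(message):
    """Inverse of the builder, as the projection device might run it."""
    payload = json.loads(message)
    return payload["features"]
```

Any transport that carries such a string between the futon and the projection device (Bluetooth, a wireless hotspot, and so on) would fit the embodiment.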
Continuing in this embodiment, in step S14, the projection device 2 presents scene image information matching the at least one piece of scene feature information based on the scene projection instruction. Specifically, when the projection device 2 receives a scene projection instruction, it obtains the corresponding scene feature information and presents the scene image information matching it. The scene image information includes, but is not limited to, at least one of sound information and virtual projection information; for example, it may simulate real scenes such as a forest, a small bridge, or a lotus pond. These scenes are virtual scene image information, yet they can present a specific visual and/or auditory effect to the user as if the user were in a real scene. Specifically, the projection device 2 stores in advance the scene image information corresponding to each piece of scene feature information, so that when the projection device 2 acquires scene feature information, it can match the corresponding scene image information in its database and perform projection and/or sound playback.
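The matching step described above can be sketched as a keyword lookup against a pre-stored library, which is all the description requires. The library contents, asset file names, and the default fallback are invented for illustration:

```python
# Hypothetical pre-stored library mapping scene feature keywords to
# projection/sound assets, as step S14 assumes exists on the device.
SCENE_LIBRARY = {
    "lotus pond": {"video": "lotus_pond.mp4", "audio": "water_ripples.ogg"},
    "bodhi tree": {"video": "bodhi_tree.mp4", "audio": "forest_birds.ogg"},
    "small bridge": {"video": "bridge.mp4", "audio": "stream.ogg"},
}

def match_scene(feature_keywords, default="lotus pond"):
    """Return the stored scene assets for the first matching keyword."""
    for keyword in feature_keywords:
        if keyword in SCENE_LIBRARY:
            return SCENE_LIBRARY[keyword]
    # Fall back to a default scene when no keyword matches the library.
    return SCENE_LIBRARY[default]
```

The device would then feed the returned assets to its projector and/or sound output.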
In another preferred embodiment, the intelligent futon 1 may actively acquire surrounding environment information as scene feature information. For example, the environment information may be determined by image analysis through a camera device carried by the intelligent futon 1 and then used as the scene feature information, after which a scene projection instruction is generated and sent to the projection device 2 so that the projection device 2 presents scene image information based on that scene feature information; in this way, different music may be played depending on whether the user is sitting by a river or in a forest. Alternatively, buttons corresponding to different scene feature information may be set directly on the intelligent futon 1; when the user presses a button, a scene projection instruction carrying the corresponding scene feature information is sent to the projection device 2. After the projection device 2 receives the instruction, it may query locally for the corresponding scene image information and present it.
In this application, the projection device 2 may present at least one of sound information and virtual projection information; that is, the projection device 2 may be a device integrating a sound device and a virtual projection device, or it may be a separate sound device or virtual projection device. When the projection device 2 integrates both, it may, after obtaining the corresponding play instruction, play the corresponding sound information and present the corresponding virtual projection information. Alternatively, multiple projection devices may be included in the present application, for example a sound device and a projection device; the smart futon may send play instructions to each, so that the sound device plays the corresponding sound information and the projection device presents the corresponding virtual projection information.
Preferably, the method further comprises: S15 (not shown), the intelligent futon 1 obtains at least one piece of scene feature information based on a scene switching instruction; S16 (not shown), the intelligent futon 1 generates a scene projection instruction based on the at least one piece of scene feature information; S17 (not shown), the intelligent futon 1 sends the scene projection instruction to the projection device 2; accordingly, the projection device 2 receives the instruction and presents first scene image information matching the at least one piece of scene feature information.
In this embodiment, in step S15, the intelligent futon 1 obtains a scene switching instruction, which is used to switch the currently presented scene image information. For example, after the user triggers presentation of scene image information through a corresponding button on the intelligent futon, the user may dislike the currently presented scene and wish to switch. A switching button may be set on the intelligent futon 1; the user triggers the switching instruction through that button, and the intelligent futon 1 then sends it to the projection device 2 so that the projection device 2 presents new scene image information based on the switching instruction. In this way, the user can select different scene image information according to personal preference.
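One simple realization of the switching behavior, consistent with a single switch button, is to cycle through the stored scenes in order; the scene list below is an assumption for illustration:

```python
# Hypothetical ordered scene list; each press of the switch button
# advances to the next scene, wrapping around at the end.
SCENES = ["lotus pond", "bodhi tree", "forest", "riverside"]

def next_scene(current):
    """Return the scene that follows `current`, wrapping at the end."""
    idx = SCENES.index(current)
    return SCENES[(idx + 1) % len(SCENES)]
```

The futon would embed the returned scene keyword in a new scene projection instruction sent to the projection device.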
Preferably, the method further comprises: the intelligent futon 1 sends a portrait projection instruction to the projection device 2 in response to a scene determination instruction; correspondingly, the projection device receives the instruction, acquires sitting image information of the user, and associates it with the scene image information.
In this embodiment, the user's sitting image may be projected into the scene image information, and the sitting image information may reflect the user's sitting posture. Specifically, when the user wishes to have the sitting image information projected, the user may trigger the scene determination instruction manually; the instruction may also be triggered automatically by the intelligent futon, for example when the user sits down on it, thereby triggering the projection device to project the sitting image information.
Preferably, the method further comprises: S18 (not shown), the intelligent futon 1 detects the sitting posture of the user; S19 (not shown), when the sitting posture is not standard, the intelligent futon 1 reminds the user to correct the posture.
In this embodiment, in step S18, the intelligent futon 1 detects the sitting posture of the user. Specifically, the intelligent futon 1 may detect whether the user is at its center by measuring the distances between the user's body boundaries on both sides and the corresponding boundary portions of the futon; when the user is not centered, the sitting posture may be non-standard. This detection method is only an example and is not limiting.
Preferably, wherein step S18 comprises: S181 (not shown), the intelligent futon 1 detects whether the user's center of gravity is shifted; S182 (not shown), when the user's center of gravity is shifted, the intelligent futon 1 determines that the sitting posture is not standard.
In this embodiment, in step S181, the intelligent futon may detect whether the user's center of gravity is shifted through a pressure sensor. For example, when the sitting posture is correct, the center of gravity generally lies at the center of the futon; when the user's center of gravity is shifted away from it, the sitting posture can be determined to be non-standard.
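A minimal sketch of this center-of-gravity check, assuming a grid of pressure sensors in the cushion reporting `(x, y, force)` readings; the sensor layout, coordinate convention, and tolerance are assumptions, not details from the patent:

```python
def center_of_pressure(readings):
    """Force-weighted average position over (x, y, force) sensor readings."""
    total = sum(f for _, _, f in readings)
    cx = sum(x * f for x, _, f in readings) / total
    cy = sum(y * f for _, y, f in readings) / total
    return cx, cy

def posture_is_standard(readings, center=(0.0, 0.0), tolerance=0.05):
    """Posture counts as standard if the center of pressure stays within
    `tolerance` of the cushion's geometric center."""
    cx, cy = center_of_pressure(readings)
    return abs(cx - center[0]) <= tolerance and abs(cy - center[1]) <= tolerance
```

With four sensors placed symmetrically, equal forces yield a centered result, while a leftward lean pulls the center of pressure off-center and trips the check.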
Continuing in this embodiment, in step S19, when the sitting posture is not standard, the intelligent futon reminds the user to correct the posture. Specifically, the intelligent futon 1 may integrate an intelligent voice system, so that the user is prompted to correct the posture through a voice prompt; or the intelligent futon may be equipped with a vibration device, so that when the user's posture is incorrect, a slight vibration prompts the user to correct it. These manners of reminding the user to correct the posture are only examples; other existing or future manners that may be applicable to the present application are also included within its scope and are incorporated herein by reference.
Preferably, wherein step S19 comprises: the intelligent futon 1 sends the user's sitting posture information to the projection device 2, so that the projection device 2 generates second scene image information based on the user's sitting posture and prompts the user through the second scene image information that the current sitting posture is not standard.
In this embodiment, when the user's sitting posture is not standard, the user may be prompted through the projection device 2, for example by generating second scene image information. The second scene image information may be preset image information that explicitly prompts the user, or image information showing the user's sitting posture after correction; the second scene image information is not limited here.
Preferably, the method further comprises: S20 (not shown), the projection device 2 detects the sitting posture of the user based on the sitting image information; S21 (not shown), when the sitting posture is not standard, the projection device 2 reminds the user to correct the posture.
In this embodiment, the sitting-posture projection information includes the sitter's own silhouette; that is, when performing virtual projection, the projection device may also cast the sitter's silhouette into the scene. In step S20, the projection device 2 detects the user's sitting posture; specifically, it may determine whether the sitting posture is standard by detecting whether the user's sitting-posture projection information is left-right symmetric, that is, whether the user's silhouette is upright.
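The left-right symmetry test can be sketched by comparing a binary silhouette mask against its horizontal mirror; the mask representation and the mismatch threshold are assumptions added for illustration:

```python
def silhouette_symmetric(mask, max_diff_ratio=0.1):
    """Check left-right symmetry of a silhouette.

    `mask` is a list of rows of 0/1 pixels (1 = silhouette). The silhouette
    counts as symmetric (upright) if the fraction of pixels differing from
    the horizontally mirrored image stays below `max_diff_ratio`.
    """
    diff = 0
    total = 0
    for row in mask:
        mirrored = row[::-1]
        diff += sum(a != b for a, b in zip(row, mirrored))
        total += len(row)
    return diff / total <= max_diff_ratio
```

A tilted sitter shifts silhouette pixels toward one side, so the mask no longer matches its mirror and the check flags the posture as non-standard.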
Continuing in this embodiment, in step S21, when the sitting posture is not standard, the projection device 2 reminds the user to correct the posture.
Specifically, the projection device 2 may remind the user directly by voice, or through the offset of a specific object projected in the scene. Preferably, step S21 comprises: generating second scene image information based on the user's sitting image information, and prompting the user to correct the posture through the second scene image information. The second scene image information is used to prompt the user and may be additional projection information, or projection information obtained by modifying the current projection.
For example, when the projected image includes a candle flame and the user leans to the left, the projected flame may stay tilted to the left, straightening only when the user holds a correct sitting posture. As another example, when the projected image includes a crystal ball containing a standing figure, the figure in the ball may tilt whenever the user's posture is off. The reminder may also be given through the directional strength of the scene's surround sound; for example, when the user's sitting posture leans to the left, the left channel of the surround sound may be amplified.
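The in-scene cues above amount to a mapping from the detected posture offset to a flame tilt and a surround-channel gain. A hedged sketch of such a mapping, where the offset range, maximum tilt, and gain values are all invented for illustration:

```python
def posture_feedback(offset_x, max_tilt_deg=30.0, max_gain_db=6.0):
    """Map a lateral posture offset to in-scene correction cues.

    `offset_x` is in [-1, 1]: negative = leaning left, positive = right.
    A leftward lean tilts the candle flame left and boosts the left
    surround channel, matching the examples in the description.
    """
    tilt = max_tilt_deg * offset_x                   # flame tilt angle
    left_gain = max_gain_db * max(-offset_x, 0.0)    # amplify left channel
    right_gain = max_gain_db * max(offset_x, 0.0)    # amplify right channel
    return {"flame_tilt_deg": tilt,
            "left_gain_db": left_gain,
            "right_gain_db": right_gain}
```

At zero offset all cues vanish, so a corrected posture immediately straightens the flame and rebalances the sound.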
The above-mentioned method for prompting the user to correct the posture is only an example, and other existing or future methods for prompting the user to correct the posture are also included in the scope of the present application, and are included herein by reference.
In this application, a corresponding virtual scene can be projected around the intelligent futon according to the user's preferences, with a variety of selectable scenes combined with matching music and sound effects, and the sitter's own silhouette can be cast into the scene. The system further assists by detecting the user's sitting posture and reminding the user in real time, within the scene, to correct a poor posture; together with Buddhist voice guidance and the like, this helps the user enter the configured "environment" more quickly and reach a state of stillness.
Compared with the prior art, the intelligent futon generates a scene projection instruction based on scene feature information and sends it to the projection device, and the projection device then presents scene image information matching at least one piece of scene feature information based on the instruction. In this way, corresponding projection information, including but not limited to sound information and virtual projection information, can be presented to the user seated on the intelligent futon, helping the user settle into stillness and improving the efficiency of seated meditation.
In addition, the user's sitting posture can be detected, and when it is not standard the user can be reminded to correct it, so that the user maintains a correct sitting posture, which benefits the user's health and improves the user experience.
Furthermore, an embodiment of the present application also provides a computer-readable medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a processor to implement the foregoing method.
An embodiment of the present application further provides an intelligent futon for presenting projection information, wherein the intelligent futon includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the foregoing method.
For example, the computer readable instructions, when executed, cause the one or more processors to: responding to a scene presenting instruction, and acquiring at least one scene characteristic information; generating a scene projection instruction based on the at least one scene feature information; and sending a scene projection instruction to projection equipment so as to enable the projection equipment to project and present scene image information matched with the at least one scene characteristic information.
In addition, an embodiment of the present application further provides a projection apparatus for presenting projection information, where the projection apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the foregoing method.
For example, the computer readable instructions, when executed, cause the one or more processors to: receiving a scene projection instruction sent by an intelligent futon; and presenting scene image information matched with at least one scene characteristic information based on the scene projection instruction.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (12)

1. A method at an intelligent futon end for presenting projection information, wherein the method comprises:
in response to a scene presentation instruction, acquiring at least one piece of scene feature information, wherein the intelligent futon can actively acquire surrounding environment information as the scene feature information, including determining the environment information through image analysis by a camera device carried by the intelligent futon and using the environment information as the scene feature information;
generating a scene projection instruction based on the at least one piece of scene feature information;
sending the scene projection instruction to a projection device so that the projection device projects and presents scene image information matching the at least one piece of scene feature information, wherein the scene image information simulates a real scene; and
in response to a scene determination instruction, sending a portrait projection instruction to the projection device so that the projection device acquires sitting image information of a user and associates the sitting image information with the scene image information.
2. The method of claim 1, wherein the method further comprises:
detecting a sitting posture of the user; and
when the sitting posture is not standard, reminding the user to correct the posture.
3. The method of claim 2, wherein the detecting the sitting posture of the user comprises:
detecting whether the center of gravity of the user is offset; and
determining that the sitting posture is not standard when the center of gravity of the user is offset.
4. The method of claim 3, wherein the reminding the user to correct the posture when the sitting posture is not standard comprises:
sending sitting posture information of the user to the projection device so that the projection device generates second scene image information based on the sitting posture of the user and prompts, through the second scene image information, the user that the current sitting posture is not standard.
5. The method of any of claims 1-4, wherein the method further comprises:
acquiring at least one piece of scene feature information based on a scene switching instruction;
generating a scene projection instruction based on the at least one piece of scene feature information; and
sending the scene projection instruction to the projection device so that the projection device projects and presents first scene image information matching the at least one piece of scene feature information.
6. A method at a projection device end for presenting projection information, wherein the method comprises:
receiving a scene projection instruction sent by an intelligent futon;
presenting, based on the scene projection instruction, scene image information matching at least one piece of scene feature information, wherein the scene image information simulates a real scene, and the intelligent futon can actively acquire surrounding environment information as the scene feature information, including determining the environment information through image analysis by a camera device carried by the intelligent futon and using the environment information as the scene feature information;
receiving a portrait projection instruction sent by the intelligent futon; and
acquiring sitting image information of a user based on the portrait projection instruction, and associating the sitting image information with the scene image information.
7. The method of claim 6, wherein the method further comprises:
receiving a scene projection instruction sent by the intelligent futon based on a scene switching instruction; and
presenting, based on the scene projection instruction, first scene image information matching at least one piece of scene feature information.
8. The method of claim 6, wherein the method further comprises:
detecting a sitting posture of the user based on the sitting image information; and
when the sitting posture is not standard, reminding the user to correct the posture.
9. The method of claim 8, wherein the reminding the user to correct the posture when the sitting posture is not standard comprises:
generating second scene image information based on the sitting image information of the user, and prompting the user, through the second scene image information, to correct the posture.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 9.
11. An intelligent futon for presenting projection information, wherein the intelligent futon comprises:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the one or more processors to perform the operations of the method of any of claims 1 to 5.
12. A projection device for presenting projection information, wherein the projection device comprises:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the one or more processors to perform the operations of the method of any of claims 6 to 9.
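Claims 2-4 turn on detecting an offset in the user's center of gravity and flagging the sitting posture as non-standard. The claims above could be sketched as follows; the pressure-sensor layout, the weighted-average computation, and the tolerance threshold are all illustrative assumptions not stated in the patent:

```python
# Illustrative sketch of the center-of-gravity check in claims 2-4.
# Sensor layout, units, and the tolerance value are assumptions.

def center_of_gravity(pressure_grid):
    # pressure_grid: list of (x, y, pressure) readings from mat sensors.
    # The center of gravity is the pressure-weighted mean position.
    total = sum(p for _, _, p in pressure_grid)
    cx = sum(x * p for x, _, p in pressure_grid) / total
    cy = sum(y * p for _, y, p in pressure_grid) / total
    return cx, cy

def posture_is_standard(pressure_grid, center=(0.0, 0.0), tolerance=0.1):
    # The posture counts as standard while the center of gravity stays
    # within 'tolerance' of the mat's center; an offset beyond it would
    # trigger the reminder of claims 2 and 4.
    cx, cy = center_of_gravity(pressure_grid)
    return abs(cx - center[0]) <= tolerance and abs(cy - center[1]) <= tolerance

balanced = [(-1, 0, 5.0), (1, 0, 5.0), (0, -1, 5.0), (0, 1, 5.0)]
leaning = [(-1, 0, 2.0), (1, 0, 8.0), (0, -1, 5.0), (0, 1, 5.0)]
print(posture_is_standard(balanced))  # → True
print(posture_is_standard(leaning))   # → False
```

In the leaning example the right-hand sensor carries most of the weight, pulling the center of gravity off-center by 0.3 units, which exceeds the assumed tolerance and would prompt the posture-correction reminder.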
CN201811154173.3A 2018-09-30 2018-09-30 Method and apparatus for presenting projection information Active CN109445575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154173.3A CN109445575B (en) 2018-09-30 2018-09-30 Method and apparatus for presenting projection information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811154173.3A CN109445575B (en) 2018-09-30 2018-09-30 Method and apparatus for presenting projection information

Publications (2)

Publication Number Publication Date
CN109445575A CN109445575A (en) 2019-03-08
CN109445575B true CN109445575B (en) 2022-04-12

Family

ID=65546029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154173.3A Active CN109445575B (en) 2018-09-30 2018-09-30 Method and apparatus for presenting projection information

Country Status (1)

Country Link
CN (1) CN109445575B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102462488A (en) * 2010-11-15 2012-05-23 绿谷(集团)有限公司 Method for detecting static determinacy degree of human body in meditation process and meditation device
CN204963876U (en) * 2015-09-17 2016-01-13 深圳市智歌科技有限公司 Navigation projector
CN105608467A (en) * 2015-12-16 2016-05-25 西北工业大学 Kinect-based non-contact type student physical fitness evaluation method
CN105786362A (en) * 2014-12-26 2016-07-20 中国移动通信集团公司 Reminding method and device based on posture information
CN106667127A (en) * 2017-03-09 2017-05-17 苏州大学 Intelligent cushion capable of correcting sitting posture
CN106861016A (en) * 2017-03-01 2017-06-20 华北理工大学 A kind of psychology hypnosis folding seat
CN106880270A (en) * 2017-02-28 2017-06-23 上海江村市隐智能科技有限公司 Buddhist mattress system
CN107844021A (en) * 2017-12-05 2018-03-27 北海华源电子有限公司 Projector equipment with electric massage chair

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013244274A (en) * 2012-05-28 2013-12-09 Sanesu Kogyo Kk Futon


Also Published As

Publication number Publication date
CN109445575A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
US9721587B2 (en) Visual feedback for speech recognition system
US20150249718A1 (en) Performing actions associated with individual presence
KR101513847B1 (en) Method and apparatus for playing pictures
CN105260360B (en) Name recognition methods and the device of entity
KR20150079796A (en) Recording method, playing method, device, terminal and system
CN108961679A (en) A kind of attention based reminding method, device and electronic equipment
CN111512370B (en) Voice tagging of video while recording
CN111359209B (en) Video playing method and device and terminal
US20110145444A1 (en) Electronic device and method of obtaining connection relationship between interfaces and peripheral devices
US9953630B1 (en) Language recognition for device settings
CN103888696A (en) Photo photographing and checking method and photographing device
US20190096405A1 (en) Interaction apparatus, interaction method, and server device
US20220283773A1 (en) Vehicle and Control Method Thereof
CN109445575B (en) Method and apparatus for presenting projection information
US20170034347A1 (en) Method and device for state notification and computer-readable storage medium
US20170264962A1 (en) Method, system and computer program product
CN111079495B (en) Click-to-read mode starting method and electronic equipment
CN109147776A (en) Display device and acoustic control opportunity indicating means with voice control function
CN111698600A (en) Processing execution method and device and readable medium
CN110875905A (en) Account management method and device and storage medium
CN111081090B (en) Information output method and learning device in point-to-read scene
JP6496220B2 (en) Information distribution apparatus and information distribution program
JP2017070370A (en) Hearing test device, hearing test method, and hearing test program
CN113873085B (en) Voice start-up white generation method and related device
CN111563514B (en) Three-dimensional character display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant