CN113194302B - Intelligent display device and starting control method - Google Patents


Info

Publication number
CN113194302B
Authority
CN
China
Prior art keywords
interface
control
display
controller
display screen
Prior art date
Legal status
Active
Application number
CN202110529750.8A
Other languages
Chinese (zh)
Other versions
CN113194302A (en)
Inventor
王光强
董鹏
李珑
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Publication of CN113194302A
Application granted
Publication of CN113194302B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3147Multi-projection systems
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21LIGHTING
    • F21VFUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
    • F21V33/00Structural combinations of lighting devices with other articles, not otherwise provided for
    • F21V33/0004Personal or domestic articles
    • F21V33/0052Audio or video equipment, e.g. televisions, telephones, cameras or computers; Remote control devices therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof

Abstract

The embodiments of the present application provide an intelligent display device and a start-up control method. The intelligent display device comprises: a first imaging mechanism for forming a first display screen; a second imaging mechanism for forming a second display screen; a camera for collecting an operation gesture input on the first display screen, so that the controller responds according to the operation gesture; and a controller configured to: receive an input start-up trigger signal; in response to the trigger signal, acquire position parameters of the interface controls, wherein the position parameters comprise a first position parameter and a second position parameter, and any interface control that needs to be operated by the operation gesture is configured with the first position parameter; control the interface controls corresponding to the first position parameter to form a first start page through the first imaging mechanism; and control the interface controls corresponding to the second position parameter to form a second start page through the second imaging mechanism. The application improves the user experience of the intelligent display device.

Description

Intelligent display device and starting control method
The present application claims priority to the Chinese patent application entitled "An Intelligent Projection Device", application No. 202010408081.4, filed with the Chinese Patent Office on May 15, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to the technical field of projection, in particular to an intelligent display device and a starting control method.
Background
A projection device is a device that projects images or video onto an object for display. Compared with display devices that present images or video directly on a screen, projection devices have gradually won the favor of more and more users thanks to advantages such as a large projection interface, flexible installation, and being easier on the eyes.
A conventional projection device obtains the content to be projected by connecting to a display device, and its projection interface generally coincides with the display interface of that display device. A display device usually has only one display screen, so all information on the display device is shown on that single screen; accordingly, after the projection device is connected to the display device, it also projects the display content onto a single virtual display screen. Watching this one virtual display screen for a long time makes the user tired and degrades the user experience of the projection device.
Disclosure of Invention
In order to solve the technical problem of poor projection effect, the application provides an intelligent display device and a starting control method.
In a first aspect, the present application provides an intelligent display device, comprising:
a first imaging mechanism for forming a first display screen on a first plane according to control of the controller;
a second imaging mechanism for forming a second display screen on a second plane according to the control of the controller;
a camera for collecting an operation gesture input on the first display screen, so that the controller responds according to the operation gesture;
a controller in communicative connection with the first imaging mechanism, the second imaging mechanism, and the camera, respectively, the controller configured to:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring position parameters of an interface control, wherein the position parameters comprise a first position parameter and a second position parameter, and the interface control needing to be operated by the operation gesture is configured as the first position parameter;
controlling the interface control corresponding to the first position parameter to form a first start page through the first imaging mechanism;
and controlling the interface control corresponding to the second position parameter to form a second start page through the second imaging mechanism.
In some embodiments, the controller is further configured to:
and starting the camera when controlling the interface control corresponding to the second position parameter to form the second start page through the second imaging mechanism.
In some embodiments, the interface control is further configured with layout data and display bit data;
controlling the interface control corresponding to the first position parameter to form the first start page through the first imaging mechanism includes: obtaining first page data according to the layout data and the display bit data in the interface control corresponding to the first position parameter, and controlling the first imaging mechanism to form the first start page according to the first page data;
controlling the interface control corresponding to the second position parameter to form the second start page through the second imaging mechanism includes: obtaining second page data according to the layout data and the display bit data in the interface control corresponding to the second position parameter, and controlling the second imaging mechanism to form the second start page according to the second page data.
In some embodiments, the interface control corresponding to the first position parameter includes a two-screen control configured to call up a control interface for operating the second display screen on the current interface in response to a trigger.
In a second aspect, the present application provides a smart display device, comprising:
a first imaging mechanism for forming a first display screen on a first plane according to control of the controller;
a second imaging mechanism for forming a second display screen on a second plane according to the control of the controller;
a camera for collecting an operation gesture input on the first display screen, so that the controller responds according to the operation gesture;
a controller in communicative connection with the first imaging mechanism, the second imaging mechanism, and the camera, respectively, the controller configured to:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring a first group of interface controls and a second group of interface controls, wherein the position parameters of the first group of interface controls are configured as first position parameters, and the position parameters of the second group of interface controls are configured as second position parameters;
generating first start page data according to the first group of interface controls, and generating second start page data according to the second group of interface controls, wherein the interface controls that need to be operated by the operation gesture are located in the first group of interface controls;
and forming the first start page data into a first start page through the first imaging mechanism, and forming the second start page data into a second start page through the second imaging mechanism.
In a third aspect, the present application provides a power-on control method for an intelligent display device, including:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring a first group of interface controls and a second group of interface controls, wherein the position parameters of the first group of interface controls are configured as first position parameters, and the position parameters of the second group of interface controls are configured as second position parameters;
generating first start page data according to the first group of interface controls, and generating second start page data according to the second group of interface controls, wherein the interface controls that need to be operated by the operation gesture are located in the first group of interface controls;
and forming the first start page data into a first start page through the first imaging mechanism, and forming the second start page data into a second start page through the second imaging mechanism.
The intelligent display device and the start-up control method provided by the present application have the following beneficial effects:
By setting position parameters in the start interface data of the intelligent display device, the intelligent display device can generate two start interfaces upon start-up according to the position parameters and display them on two display screens respectively. The user can thus obtain information through multiple display screens, which relieves the fatigue caused by obtaining information through only one display screen and improves the user experience.
Drawings
In order to more clearly explain the technical solution of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious to those skilled in the art that other drawings can be obtained according to the drawings without any creative effort.
FIG. 1 is a schematic diagram of an intelligent desk lamp according to some embodiments of the present disclosure;
FIG. 2 is a schematic illustration of a position where a virtual display screen is formed in some embodiments of the present application;
FIG. 3 is another schematic diagram of an intelligent desk lamp according to some embodiments of the present disclosure;
FIG. 4 is a diagram illustrating a hardware boot process during a boot process according to some embodiments of the present application;
FIG. 5 is a diagram illustrating software boot during boot in some embodiments of the present application;
FIG. 6 is a diagram illustrating a second launch page in some embodiments of the present application;
FIG. 7 is a schematic illustration of a first interface of a first launch page in some embodiments of the present application;
FIG. 8 is a schematic illustration of a second interface of a first launch page in some embodiments of the present application;
FIG. 9 is a schematic illustration of interface switching in some embodiments of the present application;
FIG. 10 is a schematic illustration of a time interface in accordance with some embodiments of the present application;
FIG. 11 is a diagram of a process management page in some embodiments of the present application;
fig. 12 is a schematic diagram of a display control interface D0 corresponding to the two-screen control D in some embodiments of the present application.
Detailed Description
To make the purpose and embodiments of the present application clearer, the following will clearly and completely describe the exemplary embodiments of the present application with reference to the attached drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
It should be noted that the brief descriptions of the terms in the present application are only for convenience of understanding of the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
A desk lamp is a lighting tool that assists people in reading, studying, and working. With the progress of technology, common household equipment is developing toward intelligence, and under this wave the functions of the desk lamp have become more and more abundant. In some embodiments, the desk lamp may be provided with a projection mechanism that can be connected to a display device to implement the projection function of a projector; such a desk lamp may be referred to as an intelligent desk lamp.
However, conventional projection technology directly projects the content of the display device. If the displayed content contains superimposed content, the projected picture also shows the superimposition; for example, in a video chat scene, a chat window is usually superimposed on the original interface, which blocks the content of the original interface and affects the viewing experience.
To solve this technical problem, in the embodiments of the present application, a plurality of projection mechanisms are arranged on the desk lamp, turning the desk lamp into an intelligent display device. A plurality of pictures can be obtained through projection, and the multiple interfaces of an application program can then be displayed separately on the projected pictures, so that the interfaces do not block one another.
Fig. 1 is a schematic structural diagram of an intelligent desk lamp provided in some embodiments of the present application, and as shown in fig. 1, the intelligent desk lamp includes: at least two imaging mechanisms, a controller 200, and a camera 300. The imaging mechanism may be a projection mechanism, and the controller 200 is connected to the at least two projection mechanisms and the camera 300, respectively, so that the controller 200 can control the operating states of the at least two projection mechanisms and acquire the content captured by the camera 300.
In some embodiments, the intelligent desk lamp further comprises a base, a support, and an illuminating bulb. The bulb, the projection mechanisms, and the camera may be arranged on the support, the support may be arranged on the base, and the controller 200 may be arranged inside the base.
In some embodiments, the controller 200 in the intelligent desk lamp is provided with a network communication function, so that the current intelligent desk lamp can communicate with other intelligent desk lamps, an intelligent terminal (e.g., a mobile phone) or a server (e.g., a network platform) to obtain the projection content.
In some embodiments, the controller 200 in the intelligent desk lamp may further include an operating system, so that the intelligent desk lamp can perform projection without being connected to a display device. Of course, the intelligent desk lamp including the operating system may also have a network communication function, so that it can communicate with a server and other devices to implement network functions such as upgrading the operating system, installing application programs, and interacting with other intelligent desk lamps.
Referring to fig. 1, the at least two projection mechanisms include at least a first imaging mechanism 110 and a second imaging mechanism 120. The first imaging mechanism 110 is used for projecting a first virtual display screen VS1; the second imaging mechanism 120 is used for projecting a second virtual display screen VS2; and the first virtual display screen VS1 and the second virtual display screen VS2 are formed at different positions.
For example, fig. 2 is a schematic diagram of the positions where the virtual display screens are formed in some embodiments of the present application. As shown in fig. 2, the first virtual display screen VS1 projected by the first imaging mechanism 110 may be formed on the desktop of the desk on which the intelligent desk lamp is placed, the desktop being a horizontal first plane; the second virtual display screen VS2 projected by the second imaging mechanism 120 may be formed on the wall surface against which the desk is placed, the wall surface being a vertical second plane. It can be understood that, in practical applications, the forming positions of the virtual display screens can be adjusted according to actual needs.
It can be understood that the specific display content of the first virtual display screen VS1 may be different from the specific display content of the second virtual display screen VS2, so that the two virtual display screens cooperate with each other to achieve the purpose of comprehensively displaying content with large capacity and high complexity.
After the at least two projection mechanisms respectively project to form the at least two virtual display screens, the camera 300 is configured to collect an operation gesture on the at least one virtual display screen, and send the operation gesture to the controller 200, where the operation gesture may specifically be operation click information of a user on display content on the virtual display screen, and the like.
For example, the camera 300 may capture only the operation gesture on the first virtual display screen VS1, may capture only the operation gesture on the second virtual display screen VS2, or may capture both the operation gestures on the first virtual display screen VS1 and the second virtual display screen VS 2.
In addition, the number of the cameras 300 can be set to be multiple based on the number of the virtual display screens needing to acquire the operation gestures, that is, a single camera acquires the operation gestures of a single virtual display screen.
In some embodiments, the camera 300 may be an infrared camera, so that infrared detection technology can ensure the accuracy of the collected operation gestures in poor lighting conditions, such as at night or on cloudy days.
In some embodiments, the camera 300 may collect user images in addition to the operation gestures, so as to realize functions of video call, photographing, and the like.
After the at least two projection mechanisms respectively project to form the at least two virtual display screens, the controller 200 is configured to control projection contents of the at least two projection mechanisms on the at least two virtual display screens, respectively, and adjust the projection contents of the at least two projection mechanisms based on an operation gesture on the at least one virtual display screen after receiving an operation gesture sent by the camera 300.
For example, the controller 200 may adjust only the projection content of the first imaging mechanism 110 on the first virtual display screen VS1 based on the operation gesture, may adjust only the projection content of the second imaging mechanism 120 on the second virtual display screen VS2 based on the operation gesture, and may adjust both the projection content of the first imaging mechanism 110 on the first virtual display screen VS1 and the projection content of the second imaging mechanism 120 on the second virtual display screen VS2 based on the operation gesture.
It is understood that the two projection mechanisms are only an exemplary illustration of the multi-screen projection performed by the intelligent desk lamp in the present application, and the at least two projection mechanisms may also be another number of projection mechanisms, for example, 3 or more than 3, and the present application does not specifically limit the number of projection mechanisms of the intelligent desk lamp. For convenience of explanation, each of the embodiments of the present application takes two projection mechanisms as an example, and the technical solution of the present application is explained.
In some embodiments, the number of the controllers 200 may be multiple, and may be specifically the same as the number of the projection mechanisms, so that a single controller may be provided to control the projection content of a single projection mechanism, and there is a communication connection between the controllers.
For example, for the case that the at least two projection mechanisms include at least the first imaging mechanism 110 and the second imaging mechanism 120, the controller 200 may specifically include a first controller and a second controller, wherein the first controller controls the projection content of the first imaging mechanism 110, the second controller controls the projection content of the second imaging mechanism 120, and the first controller and the second controller are in communication connection.
In some embodiments, the plurality of controllers may be centralized, that is, disposed together at the same designated location in the intelligent desk lamp; alternatively, the controllers may be distributed, each disposed with its corresponding projection mechanism, and the like.
Some embodiments provide an intelligent desk lamp that includes at least two projection mechanisms, i.e., an intelligent desk lamp with multi-screen projection. The virtual display screens projected by the respective projection mechanisms are formed at different positions, so that a plurality of virtual display screens can be formed at different positions and display content cooperatively, achieving the purpose of comprehensively displaying content of large volume and high complexity. Meanwhile, the operation gestures on the virtual display screens are obtained through the camera, and the projection content is adjusted according to the operation gestures, which can further enhance the interactivity among different users.
Fig. 3 is another schematic structural diagram of an intelligent desk lamp according to some embodiments of the present application. As shown in fig. 3, the first imaging mechanism 110 includes: a first light source 112, a first imaging unit 114, and a first lens 116. The first light source 112 is configured to emit light, the first imaging unit 114 is configured to form a pattern based on the light emitted by the first light source 112, and the first light source 112 and the first imaging unit 114 cooperate to form a first projection pattern. The first lens 116 is used for magnifying the first projection pattern, so that the first light source 112, the first imaging unit 114, and the first lens 116 cooperate to display the corresponding display content on the first virtual display screen VS1 corresponding to the first imaging mechanism 110.
In some embodiments, the first light source 112 includes at least one of a three-color light source, a white light source, and a blue-light wheel light source. The three-color light source and the blue-light wheel light source are used for emitting light of different colors, so that color content can be displayed on the first virtual display screen VS1. The white light source is used for emitting white light to realize the basic desk lamp lighting function.
In some embodiments, first light source 112 may include only a white light source, such that a basic lighting function may be achieved. The first light source 112 may comprise only a three-color light source or only a blue light wheel light source so that color content may be displayed on the first virtual display screen VS1 when projection is desired. The first light source 112 may include a white light source and a three-color light source, or a white light source and a blue light wheel light source, or a white light source, a three-color light source and a blue light wheel light source, so as to realize the basic illumination function and display the color content on the first virtual display screen VS 1.
Referring to fig. 3, the second imaging mechanism 120 includes: a second light source 122, a second imaging unit 124, and a second lens 126; the second light source 122 is configured to emit light, the second imaging unit 124 is configured to form a pattern based on the light emitted by the second light source 122, and the second light source 122 and the second imaging unit 124 are configured to cooperate to form a second projection pattern; the second lens 126 is used for magnifying the second projection pattern, so that the second light source 122, the second imaging unit 124 and the second lens 126 cooperate to display corresponding display contents on the second virtual display screen VS2 corresponding to the second imaging mechanism 120.
In some embodiments, the second light source 122 includes at least one of a three-color light source, a white light source, and a blue light wheel light source. The three-color light source and the blue-light wheel light source are used to emit light of different colors, so that color content can be displayed on the second virtual display screen VS 2. The white light source is used for emitting white light to realize the basic desk lamp lighting function.
In some embodiments, the second light source 122 may include only a white light source, such that a basic lighting function may be achieved. The second light source 122 may comprise only a three-color light source or only a blue light wheel light source so that color content may be displayed on the second virtual display screen VS2 when projection is desired. The second light source 122 may include a white light source and a three-color light source, or a white light source and a blue light wheel light source, or a white light source, a three-color light source and a blue light wheel light source, so as to realize the basic illumination function and display the color content on the second virtual display screen VS 2.
In some embodiments, the lens in the projection mechanism is a focus-adjustable lens, and the controller 200 can adjust the size of the projected image by adjusting the focus of the lens.
In some embodiments, the first light source 112 and the second light source 122 may be different light sources respectively providing light beams for different imaging units, or may be the same light source providing light beams for different imaging units through splitting.
In one embodiment, the intelligent desk lamp may include one or more of the following components: a storage component, a power component, an audio component, and a communication component.
The storage component is configured to store various types of data to support operation at the intelligent desk lamp. Examples of such data include student exercises, examination papers, electronic textbooks, exercise analysis and interpretation, etc. for projection display on the intelligent desk lamp, and types of data specifically include documents, pictures, audio, and video, etc. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply assembly provides power for various components of the intelligent table lamp. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the intelligent desk lamp.
The audio component is configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the smart desk lamp is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a storage component or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
The communication component is configured to facilitate wired or wireless communication between the intelligent desk lamp and other devices. The intelligent desk lamp can access a wireless network based on a communication standard such as WiFi, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In one embodiment, the principle of the camera 300 acquiring the operation gesture is explained.
The surface on which the image is actually formed may serve as a virtual display screen. In some embodiments, the virtual display screen may be a desktop, a wall, a dedicated projection screen, or another surface structure that presents a projected image, and the user's operations on the virtual display screen are identified from images captured by the camera or from position information transmitted by a position-sensing peripheral.
Some exemplary operational acquisition modes are as follows:
(I) motion track
After the controller 200 controls the projection mechanism to project on the virtual display screen, the camera 300 captures an image of the finger of the user on the virtual display screen in real time, and sends the image to the controller 200. The controller 200 recognizes the user's fingertip in the image by a fingertip tracking technique, and thus, an operation track of the user on the virtual display screen can be obtained based on the movement track of the fingertip.
In some embodiments, in the image acquired by the camera 300, if only a single finger is included, the operation trajectory of the user is determined based on the fingertip of the finger; if a plurality of fingers are included, the operation trajectory of the user is determined based on the fingertip of a specific finger, and the specific finger may be, for example, an index finger or the like, or the trajectories of a plurality of fingertips are determined.
(II) click operation
The camera 300 of the intelligent desk lamp is arranged above the user's finger. When the user performs a finger press-down click operation, the user's fingertip image changes to a certain extent, and the controller 200 can identify whether the user has performed a click operation according to the change in the fingertip image.
For example, with the position of the camera 300 fixed, when the user performs a finger press-down click operation, the distance between the fingertip and the camera 300 changes. In the images acquired by the camera 300, the size of the fingertip pattern before the finger presses down is larger than the size of the fingertip pattern after the finger presses down, so when the size of the fingertip pattern changes, the user can be considered to have performed a press-down operation.
For example, when some users click, the fingertips may bend downward, which may cause the image to have a deformed or incomplete fingertip pattern, and thus, when the fingertip pattern is deformed or displayed incompletely, the user may be considered to have performed a click operation.
It can be understood that when the fingertip image has just changed, the user can be considered to be in the fingertip-pressed state; after the fingertip image is restored, the user can be considered to be in the fingertip-lifted state. Thus, each change-and-restore cycle of the fingertip image can be regarded as one valid click operation.
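For illustration only, the press/lift state machine described above could be sketched as follows, assuming an upstream fingertip-detection step that reports the apparent size of the fingertip pattern in each frame (the class, the size metric, and the threshold ratio are hypothetical, not part of the patent):

```java
// Sketch: infer press/lift transitions from changes in apparent fingertip size.
public class FingertipPressDetector {
    private final double baselineSize;  // fingertip pattern size while hovering
    private final double pressedRatio;  // e.g. 0.8: pressed once size drops below 80% of baseline
    private boolean pressed = false;

    public FingertipPressDetector(double baselineSize, double pressedRatio) {
        this.baselineSize = baselineSize;
        this.pressedRatio = pressedRatio;
    }

    /** Returns "PRESS" or "LIFT" on a state transition, or null if the state is unchanged. */
    public String onFrame(double fingertipSize) {
        boolean nowPressed = fingertipSize < baselineSize * pressedRatio;
        if (nowPressed == pressed) return null;
        pressed = nowPressed;
        // One PRESS followed by one LIFT corresponds to one valid click operation.
        return pressed ? "PRESS" : "LIFT";
    }
}
```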
(III) Single click operation
When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point1 of the state and the time stamp are recorded.
When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point2 in the state are recorded.
If the distance between the position coordinates of position Point1 and position coordinates of position Point2 is smaller than a preset threshold, and the time difference between the timestamp of position Point1 and the timestamp of position Point2 is smaller than the preset threshold, it is considered that the user has performed a single click operation at position Point1 (same as Point 2).
(IV) double click operation
When the controller 200 confirms that the user has performed the first valid click operation, the position coordinates and the time stamp of the position Point3 of the click operation are recorded.
When it is confirmed that the user has performed the second valid click operation, the position coordinates and the time stamp of the position Point4 of the click operation are recorded.
If the distance between the position coordinates of Point3 and the position coordinates of Point4 is smaller than a preset threshold, and the time difference between the timestamp of Point3 and the timestamp of Point4 is smaller than a preset threshold, the click operations performed by the user at Point3 and Point4 are considered to constitute a valid double-click operation.
It is understood that the recognition principle of the multi-click operation is similar to that of the double-click operation, and the description thereof is omitted here.
(V) Long-pressing operation
When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point5 of the state and the time stamp are recorded.
When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point6 in the state are recorded.
If the distance between the position coordinates of position Point5 and position coordinates of position Point6 is smaller than a preset threshold, and the time difference between the timestamp of position Point5 and the timestamp of position Point6 is larger than the preset threshold, it is considered that the user has performed a long-press operation at position Point5 (same as Point 6).
(VI) sliding operation
When the controller 200 confirms that the user is in a state of fingertip pressing, the position coordinates of the position Point7 of the state and the time stamp are recorded.
When it is confirmed that the user is in a state where the fingertip is lifted, the position coordinates and the time stamp of the position Point8 in the state are recorded.
If the distance between the position coordinates of position Point7 and the position coordinates of position Point8 is greater than a preset threshold, and the time difference between the timestamp of position Point7 and the timestamp of position Point8 is greater than a preset threshold, it is considered that the user has performed a sliding operation from Point7 to Point8.
It is understood that the sliding operation may be a lateral sliding, such as a leftward sliding or a rightward sliding, a longitudinal sliding, such as an upward sliding or a downward sliding, or an oblique sliding, such as an upward leftward sliding or a downward rightward sliding, etc.
In some embodiments, the sliding distance and the sliding direction may be determined based on the position coordinates of Point7 and Point8 (in the default position coordinate system, the positive X-axis points to the right and the positive Y-axis points up).
For example, the sliding distance may be calculated by the following formula:
dis = √((x7 − x8)² + (y7 − y8)²)
where dis is the sliding distance, x7 and y7 are the position coordinates of position Point7, and x8 and y8 are the position coordinates of position Point8.
When x7 is equal to x8, or the difference between x7 and x8 is smaller than a preset threshold: if y7 > y8, the sliding direction is downward; if y7 < y8, the sliding direction is upward.
When y7 is equal to y8, or the difference between y7 and y8 is smaller than a preset threshold: if x7 > x8, the sliding direction is leftward; if x7 < x8, the sliding direction is rightward.
When x7 > x8: if y7 > y8, the sliding direction is down-left; if y7 < y8, the sliding direction is up-left.
When x7 < x8: if y7 > y8, the sliding direction is down-right; if y7 < y8, the sliding direction is up-right.
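Taken together, the single-click, long-press, and slide rules above amount to comparing the press-to-lift distance and duration against thresholds and, for slides, reading the direction off the coordinate differences. A minimal sketch, with illustrative threshold values and the equality-with-tolerance cases folded into plain comparisons:

```java
// Sketch: classify a press(x7,y7,t7) -> lift(x8,y8,t8) pair per the rules above.
public class GestureClassifier {
    static final double DIST_THRESHOLD = 20.0; // pixels, illustrative
    static final long TIME_THRESHOLD = 500;    // milliseconds, illustrative

    public static String classify(double x7, double y7, long t7,
                                  double x8, double y8, long t8) {
        double dis = Math.hypot(x7 - x8, y7 - y8); // the sliding-distance formula
        long dt = t8 - t7;
        if (dis < DIST_THRESHOLD) {
            return dt < TIME_THRESHOLD ? "CLICK" : "LONG_PRESS";
        }
        if (dt > TIME_THRESHOLD) {
            return "SLIDE_" + direction(x7, y7, x8, y8);
        }
        return "NONE";
    }

    // Direction in the default coordinate system: +X points right, +Y points up.
    static String direction(double x7, double y7, double x8, double y8) {
        String vert = y7 > y8 ? "DOWN" : (y7 < y8 ? "UP" : "");
        String horiz = x7 > x8 ? "LEFT" : (x7 < x8 ? "RIGHT" : "");
        if (horiz.isEmpty()) return vert;
        if (vert.isEmpty()) return horiz;
        return vert + "_" + horiz; // e.g. "DOWN_LEFT"
    }
}
```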
In one embodiment, the user's operations on the virtual display screen may also be simulated by other peripheral devices, such as an induction pen.
In some embodiments, the nib of the induction pen is provided with a position sensor, which sends the nib position to the controller 200 of the intelligent desk lamp in real time, so that the controller 200 obtains the user's operation trajectory on the virtual display screen from the reported position changes.
In addition, the nib of the induction pen is provided with a press-sensing structure (for example, a pressure sensor). When the user needs to perform a click operation, the user can touch the desktop with the induction pen, so that the press-sensing structure acquires a press signal and sends it to the controller 200 of the intelligent desk lamp, and the controller 200 can determine the position where the user clicked based on the current position of the nib and the press signal.
It is understood that the principle of other operations (such as double-click, long-press, etc.) performed by the user through the sensing pen is the same as that performed through the fingertip, and the detailed description thereof is omitted here.
For the sake of understanding, in the following embodiments, the smart desk lamp is described by taking an example that the smart desk lamp includes a single controller 200, two projection mechanisms (the first imaging mechanism 110 and the second imaging mechanism 120), and a single camera 300, where the camera 300 only captures an operation gesture on the first virtual display screen VS1, the first virtual display screen VS1 projected by the first imaging mechanism 110 is formed on a desktop of a desk on which the smart desk lamp is disposed, and the second virtual display screen VS2 projected by the second imaging mechanism 120 is formed on a wall surface on which the desk leans.
In some embodiments, the first plane may be a horizontal plane, such as a desktop.
In some embodiments, the second plane may be a vertical plane, such as a wall surface.
In one embodiment, a power-on control method of the intelligent desk lamp is explained.
In some embodiments, a power-on key is arranged on the base of the intelligent desk lamp. The power-on key may be a physical press-type structure or a touch structure. When the power-on key is a physical press-type structure, it can be considered to be in an active state if the user presses it; when the power-on key is a touch structure, it can be considered to be in an active state if a user's limb (e.g., a finger) is placed on its surface.
In some embodiments, the active state of the power-on key refers to a state in which the power-on key is pressed.
A traditional desk lamp is also provided with a power-on key: after the user presses or touches it, the light source of the desk lamp is powered on and emits light, realizing the lighting function. Compared with a traditional desk lamp, the intelligent desk lamp is further provided with a plurality of virtual display screens; after start-up, in addition to the lighting function, the virtual display screens can show predetermined projection pictures to realize the display function.
In some embodiments, the intelligent desk lamp may be provided with both a power-on key and a projection key. When the intelligent desk lamp is powered on, the virtual display screens do not display projection pictures, so the lamp can be used as an ordinary desk lamp after start-up. When the user needs projection display, the user can press or touch the projection key to make the virtual display screens display the projection pictures; pressing or touching the projection key again closes the display of the projection pictures, thereby saving energy.
In some embodiments, the function of the projection key may also be integrated into the power-on key; for example, the power-on key may be configured so that a long press by the user turns the lamp on or off, while a click turns the projection on or off.
The following describes the start-up process of the intelligent desk lamp in detail, taking as an example the case where projection is automatically turned on after the intelligent desk lamp is powered on.
In some embodiments, referring to fig. 4, a hardware start-up sequence diagram of the intelligent desk lamp start-up process according to some embodiments is shown.
As shown in fig. 4, after the user triggers the power-on key, the controller obtains a trigger signal of the power-on key, and then starts a power-on process according to the trigger signal, where the trigger signal may be referred to as a power-on trigger signal.
In some embodiments, user activation of the power-on key may be by clicking the power-on key.
In some embodiments, when the intelligent desk lamp is currently in a powered-off state and the power-on key is detected to be in the active state, the intelligent desk lamp is not turned on directly. Instead, the duration for which the power-on key remains in the active state is obtained: if it reaches a first preset duration T1 (for example, 3 seconds), the intelligent desk lamp is turned on; otherwise it is not. In other words, the user must press the power-on key for a long time to turn on the intelligent desk lamp, which avoids turning it on when the user touches the power-on key by mistake.
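A sketch of this key-handling logic, also folding in the click-to-toggle-projection behaviour mentioned earlier (the Lamp interface and method names are hypothetical; only the T1 = 3 seconds example comes from the text):

```java
// Sketch: a long press powers the lamp on; a short click while powered on toggles projection.
public class PowerKeyHandler {
    static final long T1_MS = 3000; // first preset duration T1, e.g. 3 seconds

    interface Lamp {                // hypothetical device-control interface
        boolean isPoweredOn();
        void powerOn();
        void toggleProjection();
    }

    private final Lamp lamp;
    private long pressedAt;

    public PowerKeyHandler(Lamp lamp) { this.lamp = lamp; }

    public void onKeyDown(long now) { pressedAt = now; }

    public void onKeyUp(long now) {
        long held = now - pressedAt;
        if (!lamp.isPoweredOn()) {
            // From the powered-off state only a press held for at least T1 powers on,
            // which filters out accidental touches of the power-on key.
            if (held >= T1_MS) lamp.powerOn();
        } else if (held < T1_MS) {
            lamp.toggleProjection();
        }
    }
}
```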
In some embodiments, the controller starts a boot process in response to the trigger signal of the power-on key. The boot process may include three stages: preparing the start interface, projecting the start interface, and starting the camera. In the stage of preparing the start interface, the controller determines the start interface that needs to be projected; at this stage, the controller can obtain a first start page to be projected on the desktop and a second start page to be projected on the wall surface. In the stage of projecting the start interface, if the controller obtained a first start page and a second start page in the previous stage, it controls the desktop projection mechanism to project the first start page and controls the wall projection mechanism to project the second start page. In the stage of starting the camera, the controller controls the camera to start its image-capturing function, so that the user's operations on the second start page and/or the first start page are monitored through the camera.
It should be noted that, in some embodiments, at the stage of preparing the start interface, the controller may only obtain the first start page, and at the stage of projecting the start interface, only the desktop projection mechanism needs to be controlled to project the first start page.
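The three stages could be sequenced roughly as follows; every type and method name here is a hypothetical placeholder for the controller's internals, including the desktop-only case noted above:

```java
// Sketch: the three boot stages (prepare -> project -> start camera).
public class BootSequence {
    interface Projector { void project(Object startPage); }
    interface Camera { void start(); }

    private final Projector desktopProjector, wallProjector;
    private final Camera camera;

    public BootSequence(Projector desktop, Projector wall, Camera camera) {
        this.desktopProjector = desktop;
        this.wallProjector = wall;
        this.camera = camera;
    }

    public void onPowerOnTrigger() {
        // Stage 1: prepare the start interface(s).
        Object firstStartPage = prepareFirstStartPage();
        Object secondStartPage = prepareSecondStartPage(); // may be null: desktop-only boot

        // Stage 2: project the start interface(s).
        desktopProjector.project(firstStartPage);
        if (secondStartPage != null) wallProjector.project(secondStartPage);

        // Stage 3: start the camera so gestures on the page(s) can be monitored.
        camera.start();
    }

    Object prepareFirstStartPage() { return new Object(); } // placeholder
    Object prepareSecondStartPage() { return null; }        // placeholder
}
```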
Illustratively, the process of preparing the launch interface is as follows:
in some embodiments, at the stage of preparing the start interface, the controller may obtain a pre-stored start interface from a preset storage path of the intelligent desk lamp. The start interface may be one downloaded from a server while the intelligent desk lamp was networked with the server, or a start interface customized by the user on the intelligent desk lamp.
In some embodiments, the controller may acquire interface data required for generating the start interface in a preset storage path of the intelligent desk lamp at a stage of preparing the start interface, and generate the start interface according to the interface data.
In some embodiments, the controller may network with the server to download the start interface, or the interface data required for generating the start interface, and generate the start interface according to the interface data.
To further introduce the process of obtaining the first start page and the second start page for the intelligent desk lamp, taking the example that the controller generates the first start page and the second start page after starting the startup process, fig. 5 shows a software startup timing diagram when the intelligent desk lamp is started according to some embodiments.
As shown in fig. 5, in some embodiments, an Android operating system is disposed in the smart desk lamp, and the software started by the Android operating system during the boot process includes boot start, system service start, and desktop launcher start.
In some embodiments, boot start takes place after the smart desk lamp is powered on: after the user triggers the power-on key, in response to the trigger signal and with the smart desk lamp currently in the powered-off state, the controller of the smart desk lamp starts to execute a system start-up program from a predefined location fixed in its Read-Only Memory (ROM), and the system start-up program is configured to load a BL (boot loader) program.
In some embodiments, the BL program is a small program that runs before the Android operating system starts; it may be configured to control the system kernel to start booting.
In some embodiments, the system kernel may be a Linux kernel that, when started, performs system setup, which may include setting up caches, protected memory, schedule lists, loading drivers, and so on. When the Linux kernel finishes system setup, it searches the system files for the init.rc file and starts the init process.
In some embodiments, the start of the init process includes: initializing and starting the property service, and starting the Zygote process.
In some embodiments, the start of the Zygote process includes: creating a Java virtual machine, registering JNI methods for the Java virtual machine, creating a server Socket, and starting the SystemServer process, i.e., triggering the start of system services.
In some embodiments, the start of the SystemServer process includes: starting the Binder thread pool and the SystemServiceManager, and starting various system services.
In some embodiments, the launch content of the Launcher (desktop launcher) includes: the AMS (Activity Manager Service) started by the SystemServer process starts the Launcher, and after starting, the desktop launcher can acquire start interface data from a preset storage path and generate the start interface according to the start interface data.
In some embodiments, the intelligent desk lamp is provided with two virtual display screens, and the desktop launcher can generate two start interfaces after starting, which are displayed on the two virtual display screens respectively; alternatively, the desktop launcher can generate two groups of start interfaces after starting, which are displayed on the two virtual display screens respectively.
In some embodiments, the intelligent desk lamp is provided with three or more virtual display screens, and the desktop launcher can generate a number of start interfaces, or groups of start interfaces, equal to the number of virtual display screens after starting, so that each virtual display screen displays at least one start interface or one group of start interfaces.
In some embodiments, the intelligent desk lamp is provided with two or more virtual display screens, and the desktop launcher may also generate only one start interface after starting, which is displayed on a default display screen, for example the virtual display screen on the desktop, while the other virtual display screens display nothing.
Illustratively, the process of the desktop launcher generating the launch interface according to the launch interface data is as follows:
in some embodiments, the launch interface may display a plurality of interface controls, each occupying a display position on the launch interface and displaying some text and/or pictures. For some interface controls, the controller responds when the user operates them, for example by clicking, double-clicking, or long-pressing; for other interface controls, the controller does not respond to user operations. Each interface control can be provided with interface control data, and the launch interface data can include the interface control data of the plurality of interface controls.
In some embodiments, the interface control data may include three dimensions of data: position parameters, layout data, and display bit data. The display bit data can include the content to be displayed by the interface control and some response data of the interface control, according to which the controller can respond to user operations. The layout data may include display parameters such as the coordinate position and display size of the interface control on a launch interface. The position parameters may include a first position parameter, indicating that the interface control belongs to the first start page, and a second position parameter, indicating that the interface control belongs to the second start page; the position parameter of any one interface control is either the first position parameter or the second position parameter.
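Purely for illustration, interface control data carrying the three dimensions described above might be modelled like this (the patent prescribes no concrete format; all field names are assumptions):

```java
// Illustrative model of one interface control's data.
public class InterfaceControlData {
    // Position parameter: 1 = belongs to the first start page, 2 = the second.
    public int location;

    // Layout data: coordinate position and display size on the start page.
    public double x, y, width, height;

    // Display bit data: the content to display, plus optional response data
    // that tells the controller how to react to clicks, double-clicks, etc.
    public String content;        // text and/or picture reference
    public String responseAction; // null for controls that ignore user operations
}
```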
In some embodiments, the location parameters may be pre-added to the interface control data by the content operation and maintenance personnel of the intelligent desk lamp.
In some embodiments, the interface control data may include data in two dimensions: the layout data and the display bit data, with position parameters set in the layout data and in the display bit data, respectively.
In some embodiments, the position parameter may be represented by location and may take two values, 1 and 2. When the position parameter value is 1, the layout data or the display bit data is used for generating the first start page; when the position parameter value is 2, the layout data or the display bit data is used for generating the second start page.
In some embodiments, the content operation and maintenance staff may determine the position parameter of an interface control according to the content type of its display bit data: if the content type is a first type that needs to be operated by an operation gesture of the user, such as an interaction type, the position parameter is set to 1; if the content type is a second type that does not need to be operated by the operation gesture, such as a presentation type, the position parameter is set to 2. For example, display bit data of the interaction type may include password input control data and account input control data used in the booting process, and display bit data of the presentation type may include picture control data such as weather data.
In some embodiments, the content operation and maintenance staff may also determine the position parameter of display bit data according to its importance. For example, some display bit data, such as password control data, is of higher importance, while other display bit data, such as weather data, is of lower importance; the value of the position parameter of the more important display bit data may be set to 1 and that of the less important display bit data to 2.
In some embodiments, according to the layouts of the first start page and the second start page, the content operation and maintenance staff may set the position parameter of layout data belonging to the first start page to 1 and the position parameter of layout data belonging to the second start page to 2.
Taking position parameter values of 1 and 2 as an example, after the desktop launcher acquires the start interface data, it can divide the data into two groups according to the position parameter: one group is the interface control data whose position parameter is 1, which may be called first page data; the other group is the interface control data whose position parameter is 2, which may be called second page data. The desktop launcher then generates two starting interfaces according to the two groups of data, as in the sketch below.
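A minimal sketch of this grouping, assuming a simplified Control type that carries only the position parameter and a name:

```kotlin
// Illustrative split of launch interface data into first page data and second page data
// by the position parameter; the Control type is an assumption for this sketch.
data class Control(val location: Int, val name: String)

fun splitByPosition(controls: List<Control>): Pair<List<Control>, List<Control>> =
    controls.partition { it.location == 1 }  // first page data vs. second page data

fun main() {
    val all = listOf(
        Control(1, "password input"),  // interaction type -> first start page
        Control(2, "weather display")  // presentation type -> second start page
    )
    val (firstPageData, secondPageData) = splitByPosition(all)
    println(firstPageData.map { it.name })   // [password input]
    println(secondPageData.map { it.name })  // [weather display]
}
```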
In some embodiments, the start interface data may directly include two sets of interface control data, the first set of interface control data being provided with a first position parameter and being interface control data of the first set of interface controls, and the second set of interface control data being provided with a second position parameter and being interface control data of the second set of interface controls. The desktop launcher may obtain a first set of interface controls according to the first set of interface control data, and obtain a second set of interface controls according to the second set of interface control data. Then, the desktop launcher may generate a first launch interface according to the first set of interface controls, and generate a second launch interface according to the second set of interface controls.
In some embodiments, the interface controls that need to be operated by the operation gesture are located in the first set of interface controls, and the interface controls that do not need to be operated by the operation gesture may be located in both the first set and the second set of interface controls.
In some embodiments, the interface controls that need to be operated by the operation gesture may also be located only in the first set of interface controls.
In some embodiments, the desktop launcher may set a display screen identifier for each launch interface. The display screen identifiers indicate the virtual display screens that the desk lamp can project and may be represented by a screen function. For example, the screen function may be screen{VS1, VS2}, meaning that the desk lamp can project two virtual display screens, VS1 and VS2, each of which is a display screen identifier. The screen function may be stored in an underlying program of the controller. The desktop launcher may call the screen function to obtain the number and names of the desk lamp's virtual display screens, then set the display screen identifier VS1 on the startup interface corresponding to the interface data with position parameter 1 and the display screen identifier VS2 on the startup interface corresponding to the interface data with position parameter 2. After the display screen identifiers are set, the desktop launcher can complete startup, and the controller can enter the next stage of the startup procedure, i.e., the stage of projecting the start interfaces. During that stage, the controller can control the projection position of each start interface according to the VS1 or VS2 identifier in the interface, as sketched below.
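As a hedged illustration, the identifier lookup might look like the following; the screen() function here is only a stand-in for the underlying screen function, and all names are assumptions rather than the device's actual program.

```kotlin
// Hypothetical mapping from position parameter to display screen identifier,
// mirroring the screen{VS1, VS2} function described above.
fun screen(): List<String> = listOf("VS1", "VS2")  // the desk lamp projects two virtual display screens

fun screenIdForPosition(location: Int): String = when (location) {
    1 -> screen()[0]  // interface data with position parameter 1 -> VS1 (desktop)
    2 -> screen()[1]  // interface data with position parameter 2 -> VS2 (wall)
    else -> error("unsupported position parameter: $location")
}

fun main() {
    println(screenIdForPosition(1))  // VS1
    println(screenIdForPosition(2))  // VS2
}
```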
In some embodiments, after the desktop launcher generates two start interfaces according to the start interface data, it may complete startup, and the controller may enter the next stage of the startup procedure, i.e., the stage of projecting the start interfaces. In this stage, the controller can control the projection position of each start interface according to the position parameters in the interface.
In some embodiments, if the desktop launcher only generates a set of interfaces provided with VS1 according to the start interface data, a time interface provided with VS2 may be generated according to the current time, and the time interface is projected onto the wall during the projection display stage.
In some embodiments, if the launch interfaces set by the desktop launcher include two groups of interfaces and each group includes a plurality of interfaces, a page parameter may be set in the launch interface data, the value of which indicates which interface within the group the interface control data belongs to. After the desktop launcher obtains the start interface data, the two groups of interfaces are generated according to the position parameters, and the multiple interfaces within each group are distinguished according to the page parameters, as sketched below.
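A hedged sketch of this two-level grouping, splitting first by position parameter and then by page parameter; the ControlData type and field names are assumptions for illustration.

```kotlin
// Sketch of the page parameter described above, which distinguishes the multiple
// interfaces inside one group of start pages; all names are illustrative.
data class ControlData(val location: Int, val page: Int, val content: String)

// Group controls first by position parameter (group), then by page parameter (interface).
fun groupIntoInterfaces(controls: List<ControlData>): Map<Int, Map<Int, List<ControlData>>> =
    controls.groupBy { it.location }
        .mapValues { (_, group) -> group.groupBy { it.page } }

fun main() {
    val controls = listOf(
        ControlData(location = 1, page = 1, content = "online teaching system"),
        ControlData(location = 1, page = 2, content = "music module"),
        ControlData(location = 2, page = 1, content = "weather")
    )
    println(groupIntoInterfaces(controls)[1]?.keys)  // [1, 2]: group 1 contains two interfaces
}
```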
In some embodiments, if the second startup page and/or the first startup page generated by the desktop launcher include multiple interfaces, the switching manner of the startup interfaces may be further configured before entering the projection startup interface stage.
In some embodiments, for a set of second start pages and/or a set of first start pages, the desktop launcher may configure the switching manner of the set of interfaces to switch to a next interface according to the direction of the sliding operation in response to receiving the sliding operation of the user.
In some embodiments, for a set of second start pages and/or a set of first start pages, the desktop launcher may configure the switching manner of the set of interfaces as follows: when switching from one interface to another, a layer containing a password input control is shown; after the user inputs the correct password, the layer is revoked and the interface is switched; if the user inputs an incorrect password, a password-error prompt is shown and the interface is not switched.
In some embodiments, the desktop launcher may also combine the two configurations, switching some interfaces directly through a sliding operation and gating others behind the password input control. For example, if a group of second launch pages includes a first interface and a second interface, it may be set that a password must be input when switching from the first interface to the second interface, while no password is needed when switching from the second interface to the first interface, as in the sketch below.
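A hedged sketch of such a combined switching configuration, where each directed transition is either direct or password-gated; the interface names and types are assumptions, not the patent's actual configuration format.

```kotlin
// Each directed transition between interfaces is either direct or password-gated.
enum class SwitchMode { DIRECT, PASSWORD }

data class Transition(val from: String, val to: String, val mode: SwitchMode)

val transitions = listOf(
    Transition("education", "entertainment", SwitchMode.PASSWORD),  // first -> second: password required
    Transition("entertainment", "education", SwitchMode.DIRECT)     // second -> first: no password
)

fun modeFor(from: String, to: String): SwitchMode =
    transitions.firstOrNull { it.from == from && it.to == to }?.mode ?: SwitchMode.DIRECT

fun main() {
    println(modeFor("education", "entertainment"))  // PASSWORD
    println(modeFor("entertainment", "education"))  // DIRECT
}
```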
After the switching manner of the start interfaces is configured, the desktop launcher can complete startup, and the controller can enter the next stage of the startup procedure, namely the stage of projecting the start interfaces.
For example, during the stage of projecting the start interface, the controller may control the projection mechanism to project the start interface.
In some embodiments, if the desktop launcher generates, in the stage of generating the start interfaces, a second start page provided with VS2, as in fig. 6, and a first start page provided with VS1, as in fig. 7, then the desktop projector can be controlled to project the first start page and the wall projector can be controlled to project the second start page.
In some embodiments, if the desktop launcher generates, in the stage of generating the start interfaces, a set of first start pages provided with VS1 and a set of second start pages provided with VS2, the desktop projector may be controlled to project the first interface of the set of first start pages, and the wall projector may be controlled to project the first interface of the set of second start pages. After receiving a sliding operation of the user, the desktop launcher may switch the corresponding start interface according to the switching methods in the above embodiments, where the target object of the user operation is the first virtual display screen or the second virtual display screen. For example, a set of second start pages provided with VS2 may include only fig. 6, while a set of first start pages provided with VS1 includes fig. 7 and fig. 8.
Referring to fig. 6, which is an interface schematic diagram of a second start page according to some embodiments, the second start page may not be provided with any control and may only show information that does not need the user's long-term attention, such as weather information and time information. Of course, in some embodiments, the second launch page may also be provided with a small number of controls; for example, "weather sunny" in fig. 6 may be an interface control, and the user may enter a weather detail interface by clicking it.
Referring to fig. 7, which is a schematic diagram of the first interface of a first start page according to some embodiments, the first interface may be an educational interface. As shown in fig. 7, the educational interface may be provided with a plurality of controls, such as "online teaching system", "teaching channel", "exercise practice", "test simulation", and "homework correction"; each control occupies a display position, and after the user clicks one of the controls, the interface corresponding to that control is entered. The first start page may also be provided with a two-screen control D, which is configured to call up, on the current interface, a control interface for operating the wall projection interface in response to being triggered.
Referring to fig. 8, which is a schematic diagram of the second interface of the first start page according to some embodiments, the second interface may be an entertainment interface. As shown in fig. 8, the entertainment interface includes a music module, a video module, a game module, other application management modules, and the like, and the game module includes a tabletop piano, chess games, and the like. In addition, the entertainment interface may also include other modules related to health and entertainment applications, which are not enumerated here.
Referring to fig. 9, which is a schematic diagram illustrating switching to the entertainment interface according to some embodiments, if the user performs a sliding operation on the education interface shown in fig. 7, a layer including a password input control may be displayed over the interface shown in fig. 7. After the user inputs the correct password, the layer is cancelled and the interface switches to fig. 8; if the user inputs an incorrect password, a password error is prompted and the interface is not switched. The user can preset the password for entering the entertainment interface; that is, when switching from the education interface to the entertainment interface, the correct password must be input for the switch to succeed, and otherwise the various entertainment functions of the entertainment interface cannot be used. This helps parents better control use of the entertainment interface.
In some embodiments, if the desktop launcher generates only one set of interfaces provided with VS1 according to the launch interface data in the stage of generating the launch interfaces, the generated interfaces may be displayed on the first virtual display screen VS1, and a time interface, as shown in fig. 10, may then be generated according to the current time and displayed on the second virtual display screen VS2.
After the controller projects the start interfaces, the startup procedure can enter its next stage, namely starting the camera; after the camera is started, the startup procedure can end.
According to the embodiment, after the smart desk lamp projects the start interfaces, the first virtual display screen VS1 enters a Launcher interface, and the second virtual display screen VS2 enters a second start page. The Launcher interface, i.e., a UI (User Interface), is the medium through which the user interacts with the smart desk lamp. The Launcher interface comprises at least a first interface, embodied as the educational interface shown in fig. 7, and a second interface, embodied as the entertainment interface shown in fig. 8. It is understood that the Launcher interface may also include other types of interfaces.
According to the embodiment, the smart desk lamp can detect the sliding operation of a user through the camera and determine the Launcher interface to be displayed according to the direction of the sliding operation and the current Launcher interface. If the Launcher interface to be displayed is configured with a password, that interface is displayed together with a floating layer, a password input control is displayed on the floating layer, and at this time the Launcher interface to be displayed cannot obtain focus. If the password received in the password input control is correct, the floating layer is cancelled and the Launcher interface to be displayed is set as able to acquire focus; if the password input is unsuccessful, the password input control remains displayed on the floating layer.
For example, when the camera detects that the user is switching the interface shown in fig. 7 to the interface shown in fig. 8, the controller may display a password input control, as shown in fig. 9, which may carry a prompt. If the user inputs the correct password, the password input control is cancelled and the interface shown in fig. 7 is switched to the interface shown in fig. 8. If the user has not set a password before, the prompt of the password input control in fig. 9 may be "please set a password"; after the user inputs a password, the desktop launcher cancels the password input control, stores the password, and switches the interface shown in fig. 7 to the interface shown in fig. 8. The next time the user switches from the interface shown in fig. 7 to the interface shown in fig. 8, the interface shown in fig. 9 pops up again; the password input by the user is compared with the previously stored password, confirmed as correct if they are consistent and as wrong otherwise. Only when the input password is correct is the password input control cancelled and the interface switched.
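A minimal sketch of this set-then-verify flow; the in-memory storage and class name are simplifying assumptions, not the patent's actual mechanism.

```kotlin
// First submission stores the password; later submissions are verified against it.
class PasswordGate {
    private var storedPassword: String? = null

    fun prompt(): String =
        if (storedPassword == null) "please set a password" else "please input the password"

    // Returns true when the layer may be revoked and the interface switched.
    fun submit(input: String): Boolean {
        val stored = storedPassword
        return if (stored == null) {
            storedPassword = input  // first use: store the password and allow the switch
            true
        } else {
            input == stored         // later uses: compare with the stored password
        }
    }
}

fun main() {
    val gate = PasswordGate()
    println(gate.prompt())        // please set a password
    println(gate.submit("1234"))  // true: first input is stored as the password
    println(gate.submit("0000"))  // false: does not match the stored password
    println(gate.submit("1234"))  // true: correct password, layer may be revoked
}
```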
According to the embodiment, when the Launcher interface to be displayed and the floating layer are shown, if a sliding operation of the user is received, the display switches to the next interface according to the direction of the sliding operation.
For example, the desktop launcher may further configure the interface shown in fig. 9 and the interface shown in fig. 7 to switch to the next interface according to the direction of the sliding operation in response to receiving the sliding operation of the user.
In some embodiments, it may be inconvenient for the user to directly operate the controls on the second launch page. The user may instead click the D control on the first launch page shown in fig. 7, thereby bringing up the control interface of the second launch page on the first virtual display screen and operating the second launch page from that control interface.
It should be noted that, referring to fig. 7, fig. 8 and fig. 9, in some embodiments, for the first virtual display screen VS1, the lower side of the Launcher interface includes 4 control keys: a return key, a home key, a process key, and a minimize key. The return key returns to the previous page, the home key returns directly to the corresponding Launcher interface, the process key displays all current processes for process management, and the minimize key minimizes the application running in the current foreground.
In one embodiment, the process control management of the first virtual display VS1 and the second virtual display VS2 is explained.
After a user clicks the process key on the first virtual display screen VS1, the camera 300 collects the operation gesture and sends it to the controller 200. The controller 200 controls the first imaging mechanism 110 to display a process management page on the first virtual display screen VS1, through which the user can manage currently running processes, for example by closing them or switching their display screens.
Fig. 11 is a schematic diagram of a process management page, and as shown in fig. 11, a plurality of currently running processes are displayed in a stacked manner, and a user may select a process to be managed by scrolling the processes up and down. For example, the currently managed process is process one, and by scrolling down, the currently managed process can be switched to process two.
In some embodiments, after the currently managed process is switched, if the user clicks and selects the currently managed process, the content of the currently managed process is directly displayed on the corresponding virtual display screen.
In addition, a label A is arranged in the area corresponding to each process, and the label A identifies the virtual display screen on which the process is currently displayed. After receiving an input instruction for process management, the running processes and the positions where they are presented are acquired, and for each process the position where it was presented before the instruction was received (the first virtual display screen VS1 and/or the second virtual display screen VS2) is shown in a label sub-control of the corresponding process control; the content displayed by the label sub-control differs according to where the process was presented. For example, label a1 corresponding to process one in fig. 11 is "one screen", meaning that process one is displayed on the first virtual display screen VS1; label a2 corresponding to process two is "two screens", meaning that process two is displayed on the second virtual display screen VS2; and label a3 corresponding to process three is "one screen + two screens", meaning that process three is displayed on the first virtual display screen VS1 and the second virtual display screen VS2 simultaneously. At this time, if the user clicks and selects process one, the content corresponding to process one is directly displayed on the first virtual display screen VS1; if the user clicks and selects process two, the content corresponding to process two is directly displayed on the second virtual display screen VS2; and if the user clicks and selects process three, the content corresponding to process three is directly displayed on the first virtual display screen VS1 and the second virtual display screen VS2.
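A hedged sketch of how the label content might be derived from the set of screens a process is displayed on; the function name and the string screen identifiers are assumptions for illustration.

```kotlin
// Derive the label sub-control content from the screens a process was displayed on
// before the process management instruction was received.
fun labelFor(screens: Set<String>): String = when (screens) {
    setOf("VS1") -> "one screen"
    setOf("VS2") -> "two screens"
    setOf("VS1", "VS2") -> "one screen + two screens"
    else -> "unknown"
}

fun main() {
    println(labelFor(setOf("VS1")))         // one screen
    println(labelFor(setOf("VS1", "VS2")))  // one screen + two screens
}
```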
It is to be understood that the multiple processes currently running may also be displayed in other forms, for example, in a form of multiple small windows, and the display form of the multiple processes is not limited in this application.
In addition, the expression form of the tag content is not limited in the application, the tag content can be any combination of numbers, letters and characters, and a user can clearly and intuitively know which virtual display screen the process is displayed on according to the tag content. For example, the tag content "one screen" in fig. 11 may also be "1", "two screens" may also be "2", and the like.
In some embodiments, referring to fig. 11, after the user opens the process management page, in addition to seeing which virtual display screen the process is displayed on, the user can also perform process shutdown management on the currently running process. In some embodiments, the user may close the running process through process close control B.
For example, the user can close process one by clicking on process close control B1 corresponding to process one, close process two by clicking on process close control B2 corresponding to process two, and close process three by clicking on process close control B3 corresponding to process three. Thus, the user can shut down the processes needing to be shut down in a targeted manner.
In some embodiments, referring to fig. 11, the process management page is further provided with a one-touch closing control B0. When many processes need to be closed, the user can close all currently running processes by clicking the one-touch closing control B0 instead of clicking the process closing controls one by one, which improves process closing efficiency.
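A minimal sketch of the per-process and one-touch close behaviour; the ProcessManager type and string-named processes are illustrative assumptions.

```kotlin
// Per-process close (controls B1/B2/B3) and one-touch close-all (control B0).
class ProcessManager(private val running: MutableList<String>) {
    fun close(name: String) { running.remove(name) }  // process close control for one process
    fun closeAll() { running.clear() }                // one-touch closing control B0
    fun snapshot(): List<String> = running.toList()
}

fun main() {
    val pm = ProcessManager(mutableListOf("process one", "process two", "process three"))
    pm.close("process two")  // e.g. clicking process close control B2
    println(pm.snapshot())   // [process one, process three]
    pm.closeAll()            // clicking one-touch closing control B0
    println(pm.snapshot())   // []
}
```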
After the user completes the process closing operation, the page before the process management can be returned by clicking the return key. In some embodiments, after completing the process shutdown operation, if the user clicks on a blank in the process management interface, an educational interface in the Launcher interface is returned.
In some embodiments, referring to fig. 11, after opening the process management page, the user may switch the virtual display screen corresponding to the currently running process, in addition to performing process closing management on the currently running process. In some embodiments, the user may perform a display screen switch on the process by applying the display screen switch control C.
In some embodiments, when the currently managed process is process one, the label a1 corresponding to process one is "one screen", i.e., process one was displayed on the first virtual display screen VS1 before the process management instruction was received. The user may then perform a left-swipe operation corresponding to control C1 to switch process one to the second virtual display screen VS2 for display; the first virtual display screen VS1 then no longer displays the content of process one, i.e., process one is displayed by switching between different screens.
In some embodiments, when the currently managed process is process one, the user may perform a right-slide operation corresponding to control C2, and the process one is still shown on one screen.
In some embodiments, when the currently managed process is process two, the label a2 corresponding to process two is "two screens", i.e., process two was displayed on the second virtual display screen VS2 before the process management instruction was received. The user may then perform a right-swipe operation corresponding to control C2 to switch process two to the first virtual display screen VS1 for display; the second virtual display screen VS2 then no longer displays the content of process two, i.e., process two is displayed by switching between different screens.
In some embodiments, when the currently managed process is process two, the user may perform a left-sliding operation corresponding to the control C1, and process two is still displayed on the two screens.
In some embodiments, when the currently managed process is process three, the label a3 corresponding to process three is "one screen + two screens", i.e., process three was displayed on the first virtual display screen VS1 and the second virtual display screen VS2 before the process management instruction was received. The user may then perform a left-swipe operation corresponding to control C1 to switch process three to the second virtual display screen VS2 for display; the first virtual display screen VS1 then no longer displays the content of process three, i.e., process three is switched from dual-screen display to single-screen display (the second virtual display screen VS2). At this time, the first virtual display screen VS1 may display a Launcher interface, for example the educational interface.
In some embodiments, the user may also perform a right-sliding operation corresponding to the control C2 to switch the process three to the first virtual display screen VS1 for displaying, at this time, the second virtual display screen VS2 no longer displays the content of the process three, that is, the process three is switched from the dual-screen display to the single-screen display (the first virtual display screen VS 1). At this time, the second virtual display screen VS2 may display a Launcher interface or a time interface.
In some embodiments, the controls C1 and C2 are conditionally hidden/displayed, i.e., the controls C1 and C2 are not always in a display state.
In some embodiments, control C1 is displayed only when the currently managed process is currently displayed on the first virtual display screen VS1 and can be switched to the second virtual display screen VS2 for display (for example, process one in fig. 11); at this time, control C2 is hidden.
When the currently managed process is currently displayed on the second virtual display VS2 and can be switched to the first virtual display VS1 for display (for example, the process two in fig. 11), the control C2 is displayed, and at this time, the control C1 is hidden.
When the currently managed process is currently displayed on the first virtual display VS1 and the second virtual display VS2, and can be switched to the first virtual display VS1 or the second virtual display VS2 for display (for example, process three in fig. 11), the control C1 and the control C2 are simultaneously displayed.
When the currently managed process is currently displayed on the first virtual display screen VS1 and the second virtual display screen VS2 and cannot be switched to a single display screen, the control C1 and the control C2 are both hidden.
Therefore, by setting the controls C1 and C2 as conditional hiding/displaying, it is possible to avoid causing interference to the user when the user switches the display.
In some embodiments, the user may also configure the controls C1 and C2 to be always displayed or always hidden in the settings.
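A hedged sketch of the conditional hide/display rules above; the switchable flag and the alwaysShow override are assumptions introduced to express the rules compactly.

```kotlin
// Visibility of C1 (left-swipe, switch toward VS2) and C2 (right-swipe, switch toward VS1),
// following the conditional rules described in the preceding paragraphs.
data class SwitchControlsVisibility(val showC1: Boolean, val showC2: Boolean)

fun visibilityFor(
    screens: Set<String>,        // screens the managed process is currently displayed on
    switchable: Boolean,         // whether the process can change display screens
    alwaysShow: Boolean = false  // user setting that overrides the conditional rules
): SwitchControlsVisibility = when {
    alwaysShow -> SwitchControlsVisibility(true, true)
    !switchable -> SwitchControlsVisibility(false, false)             // e.g. dual-screen, not switchable
    screens == setOf("VS1") -> SwitchControlsVisibility(true, false)  // only C1 shown
    screens == setOf("VS2") -> SwitchControlsVisibility(false, true)  // only C2 shown
    screens == setOf("VS1", "VS2") -> SwitchControlsVisibility(true, true)
    else -> SwitchControlsVisibility(false, false)
}

fun main() {
    println(visibilityFor(setOf("VS1"), switchable = true))          // showC1=true, showC2=false
    println(visibilityFor(setOf("VS1", "VS2"), switchable = false))  // both hidden
}
```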
In one embodiment, the screen display control of the second virtual display screen VS2 is explained.
Referring to fig. 8, a two-screen control D is arranged on a display page of the first virtual display screen VS1, and when a user clicks the two-screen control D, the first virtual display screen VS1 displays a display control interface of the second virtual display screen VS2 above the original interface.
Fig. 12 is a schematic diagram of a display control interface D0 corresponding to the two-screen control D, and as shown in fig. 12, the display control interface D0 includes a display area D1, a return-to-one-screen control D2, a close-two-screen control D3, a touch area D4, and an exit control D5.
The display area D1 is used to display the currently running process of the second virtual display screen VS 2.
The return-to-one-screen control D2 is used to switch the content displayed on the second virtual display screen VS2 to the first virtual display screen VS1 for display. For example, if a process is currently displayed on the second virtual display screen VS2 and the user clicks the return-to-one-screen control D2, the process switches to the first virtual display screen VS1 for display, and the second virtual display screen VS2 may then display a Launcher interface or a time interface.
The close-two-screen control D3 is used to switch the content displayed on the second virtual display screen VS2 to the first virtual display screen VS1 for display while turning off the second virtual display screen VS2. For example, if a process is currently displayed on the second virtual display screen VS2 and the user clicks the close-two-screen control D3, the process switches to the first virtual display screen VS1 for display, and the second virtual display screen VS2 enters the off-screen state, i.e., displays no content.
The touch area D4 is used to control the operation of the second virtual display screen VS2, in a manner similar to a notebook touchpad. Based on the user's operation in the touch area D4, as captured by the camera, and the preset mapping between positions in the touch area D4 and positions on the second virtual display screen VS2, the user's operation is mapped to the corresponding position on the second virtual display screen VS2, and the control that should execute the operation is then determined according to the positions of the controls on the second virtual display screen VS2. For example, the user can manipulate a screen pointer on the second virtual display screen VS2 through the touch area D4 to perform corresponding operations.
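A hedged sketch of one possible position mapping; the patent only states that a preset mapping is used, so the proportional (linear) mapping and the Rect type below are assumptions.

```kotlin
// Proportional mapping from a touch position inside touch area D4 to a position
// on the second virtual display screen VS2.
data class Rect(val x: Float, val y: Float, val w: Float, val h: Float)

fun mapTouchToScreen(tx: Float, ty: Float, touchArea: Rect, screen: Rect): Pair<Float, Float> {
    val nx = (tx - touchArea.x) / touchArea.w  // normalise inside D4
    val ny = (ty - touchArea.y) / touchArea.h
    return (screen.x + nx * screen.w) to (screen.y + ny * screen.h)
}

fun main() {
    val d4 = Rect(0f, 0f, 200f, 120f)     // touch area on the first display screen
    val vs2 = Rect(0f, 0f, 1920f, 1080f)  // second virtual display screen
    println(mapTouchToScreen(100f, 60f, d4, vs2))  // (960.0, 540.0): centre maps to centre
}
```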
The exit control D5 is used to collapse the display control interface D0. For example, the user may collapse the display control interface D0 by clicking the exit control D5, and at this time, an icon of the two-screen control D is displayed on the first virtual display screen VS 1.
In one embodiment, a plurality of intelligent desk lamps are connected through a network to form a communication system. The plurality of users corresponding to the plurality of intelligent desk lamps can perform information interaction through the communication system, and these users may have different identity types; for example, they may include a first number of first-identity users and a second number of second-identity users, and so on.
In some embodiments, the communication system may be an online teaching system, and the plurality of users corresponding to the plurality of intelligent desk lamps may specifically be one or more teachers, a plurality of students, and the like.
It can be understood that when using the multi-user communication function in the communication system, the contents displayed on the first virtual display VS1 and the second virtual display VS2 of the corresponding smart desk lamp can be different for users with different identities. In the present application, when the teacher and the student use the online teaching system, the projection display contents of the teacher's and the student's intelligent desk lamps may be different.
As can be seen from the above embodiments, position parameters are set in the start interface data of the intelligent display device in the embodiments of the present application, so that after startup the intelligent display device can generate a plurality of start interfaces according to the position parameters and display them on the plurality of virtual display screens respectively. The user can thus obtain information through multiple virtual display screens, which reduces the fatigue of obtaining information through only one virtual display screen and improves the user experience.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. An intelligent display device, comprising:
a first imaging mechanism for forming a first display screen on a first plane according to control of the controller;
a second imaging mechanism for forming a second display screen on the second plane according to the control of the controller;
the camera is used for collecting an operation gesture input on the first display picture and enabling the controller to respond according to the operation gesture;
a controller in communicative connection with the first imaging mechanism, the second imaging mechanism, and the camera, respectively, the controller configured to:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring position parameters of interface controls, wherein the position parameters comprise a first position parameter and a second position parameter, and the position parameter of an interface control that needs to be operated by the operation gesture is configured as the first position parameter;
obtaining first page data according to the layout data and the display bit data in the interface control corresponding to the first position parameter, and controlling the first imaging mechanism to form a first starting page according to the first page data;
obtaining second page data according to the layout data and the display bit data in the interface control corresponding to the second position parameter, and controlling the second imaging mechanism to form a second starting page according to the second page data;
the layout data comprises the coordinate position and the display size of the interface control, and the display bit data comprises the content to be displayed by the interface control.
2. The smart display device of claim 1, wherein the controller is further configured to:
and when the interface control corresponding to the second position parameter forms the second starting page through the second imaging mechanism, starting the camera.
3. The smart display device as claimed in claim 1, wherein the content type of the display bit data comprises a first type that needs to be operated by the operation gesture and a second type that does not need to be operated by the operation gesture, the content type of the display bit data of the interface control corresponding to the first position parameter comprises the first type, and the content type of the display bit data of the interface control corresponding to the second position parameter comprises the second type.
4. The intelligent display device according to claim 1, wherein the interface control corresponding to the first position parameter comprises a two-screen control configured to call up a control interface for operating the second display screen on the current interface in response to a trigger.
5. The smart display device of claim 4, wherein the control interface comprises:
the display area is used for displaying the currently running process of the second display picture;
returning to a screen control for switching the content displayed by the second display picture to the first display picture for displaying;
closing a second screen control, and switching the content displayed by the second display picture to the first display picture for displaying, and simultaneously turning off the second display picture;
the touch area is used for carrying out operation control on the second display picture;
and the exit control is used for retracting the control interface.
6. An intelligent display device, comprising:
a first imaging mechanism for forming a first display screen on the first plane according to the control of the controller;
a second imaging mechanism for forming a second display screen on a second plane according to the control of the controller;
the camera is used for collecting an operation gesture input on the second display picture and enabling the controller to respond according to the operation gesture;
a controller in communicative connection with the first imaging mechanism, the second imaging mechanism, and the camera, respectively, the controller configured to:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring a first group of interface controls and a second group of interface controls, wherein the position parameters of the first group of interface controls are configured as first position parameters, and the position parameters of the second group of interface controls are configured as second position parameters;
generating first starting page data according to the first group of interface controls, and generating second starting page data according to the second group of interface controls, wherein the interface controls needing to be operated by the operation gestures are positioned in the first group of interface controls;
forming a first start page from the first start page data through said first imaging mechanism, and forming a second start page from the second start page data through said second imaging mechanism.
7. The smart display device of claim 6 wherein interface controls that do not require manipulation by the manipulation gesture are located in a first set of interface controls and in a second set of interface controls.
8. The intelligent display device of claim 6, wherein generating first launch page data according to the first set of interface controls and generating second launch page data according to the second set of interface controls comprises:
and generating first starting page data with a first display screen identifier according to the first group of interface controls, and generating second starting page data with a second display screen identifier according to the second group of interface controls.
9. The smart display device of claim 6, wherein the first set of interface controls comprises a two-screen control configured to bring up a control interface for operating the second display on the current interface in response to a trigger.
10. A startup control method is used for intelligent display equipment and is characterized by comprising the following steps:
receiving an input starting trigger signal;
responding to the trigger signal, acquiring position parameters of interface controls, wherein the position parameters comprise a first position parameter and a second position parameter, and the position parameter of an interface control that needs to be operated by an operation gesture is configured as the first position parameter;
obtaining first page data according to the layout data and the display bit data in the interface control corresponding to the first position parameter, and controlling the first imaging mechanism to form a first starting page according to the first page data;
obtaining second page data according to the layout data and the display bit data in the interface control corresponding to the second position parameter, and controlling the second imaging mechanism to form a second starting page according to the second page data;
the layout data comprises the coordinate position and the display size of the interface control, and the display bit data comprises the content to be displayed by the interface control.
CN202110529750.8A 2020-05-14 2021-05-14 Intelligent display device and starting control method Active CN113194302B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020104080814 2020-05-14
CN202010408081 2020-05-14

Publications (2)

Publication Number Publication Date
CN113194302A CN113194302A (en) 2021-07-30
CN113194302B true CN113194302B (en) 2022-06-21

Family

ID=76929281

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202110360544.9A Active CN113676709B (en) 2020-05-14 2021-04-02 Intelligent projection equipment and multi-screen display method
CN202110522907.4A Active CN113178105B (en) 2020-05-14 2021-05-13 Intelligent display device and exercise record acquisition method
CN202110528461.6A Active CN113676710B (en) 2020-05-14 2021-05-14 Intelligent display device and application management method
CN202110529750.8A Active CN113194302B (en) 2020-05-14 2021-05-14 Intelligent display device and starting control method

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN202110360544.9A Active CN113676709B (en) 2020-05-14 2021-04-02 Intelligent projection equipment and multi-screen display method
CN202110522907.4A Active CN113178105B (en) 2020-05-14 2021-05-13 Intelligent display device and exercise record acquisition method
CN202110528461.6A Active CN113676710B (en) 2020-05-14 2021-05-14 Intelligent display device and application management method

Country Status (1)

Country Link
CN (4) CN113676709B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598161A (en) * 2017-01-03 2017-04-26 蒲婷 Modular portable optical computer
CN107197223A (en) * 2017-06-15 2017-09-22 北京有初科技有限公司 The gestural control method of micro-projection device and projector equipment
CN108769506A (en) * 2018-04-16 2018-11-06 Oppo广东移动通信有限公司 Image-pickup method, device, mobile terminal and computer-readable medium

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0959452A3 (en) * 1998-05-23 1999-12-22 Mannesmann VDO Aktiengesellschaft Method of displaying variable information
US7469381B2 (en) * 2007-01-07 2008-12-23 Apple Inc. List scrolling and document translation, scaling, and rotation on a touch-screen display
US10503342B2 (en) * 2006-08-04 2019-12-10 Apple Inc. User interface spaces
JP2007184002A (en) * 2007-04-02 2007-07-19 Fujitsu Ltd Multiprocess management device and computer-readable recording medium
US9292306B2 (en) * 2007-11-09 2016-03-22 Avro Computing, Inc. System, multi-tier interface and methods for management of operational structured data
CN102495711B (en) * 2011-11-15 2017-05-17 中兴通讯股份有限公司 Virtual multi-screen implementation method and device
JP2014170147A (en) * 2013-03-05 2014-09-18 Funai Electric Co Ltd Projector
US20140344765A1 (en) * 2013-05-17 2014-11-20 Barnesandnoble.Com Llc Touch Sensitive UI Pinch and Flick Techniques for Managing Active Applications
US9798355B2 (en) * 2014-12-02 2017-10-24 Lenovo (Beijing) Co., Ltd. Projection method and electronic device
CN104461001B (en) * 2014-12-02 2018-02-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
KR102350382B1 (en) * 2015-07-16 2022-01-13 삼성전자 주식회사 Display apparatus and control method thereof
US10209851B2 (en) * 2015-09-18 2019-02-19 Google Llc Management of inactive windows
CN106020796A (en) * 2016-05-09 2016-10-12 北京小米移动软件有限公司 Interface display method and device
CN106647934A (en) * 2016-08-04 2017-05-10 刘明涛 Projection microcomputer
CN112363657A (en) * 2016-10-26 2021-02-12 海信视像科技股份有限公司 Gesture erasing method and device
CN106716357B (en) * 2016-12-29 2019-11-01 深圳前海达闼云端智能科技有限公司 Control method, control device and the electronic equipment of multisystem mobile terminal
CN106782268B (en) * 2017-01-04 2020-07-24 京东方科技集团股份有限公司 Display system and driving method for display panel
CN107132981B (en) * 2017-03-27 2019-03-19 网易(杭州)网络有限公司 Display control method and device, storage medium, the electronic equipment of game picture
WO2018213451A1 (en) * 2017-05-16 2018-11-22 Apple Inc. Devices, methods, and graphical user interfaces for navigating between user interfaces and interacting with control objects
JP2019082649A (en) * 2017-10-31 2019-05-30 アルプスアルパイン株式会社 Video display system
CN108111758A (en) * 2017-12-22 2018-06-01 努比亚技术有限公司 A kind of shooting preview method, equipment and computer readable storage medium
CN108334229B (en) * 2018-01-31 2021-12-14 广州视源电子科技股份有限公司 Method, device and equipment for adjusting writing track and readable storage medium
CN108513070B (en) * 2018-04-04 2020-09-04 维沃移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108874342B (en) * 2018-06-13 2021-08-03 深圳市东向同人科技有限公司 Projection view switching method and terminal equipment
CN108874341B (en) * 2018-06-13 2021-09-14 深圳市东向同人科技有限公司 Screen projection method and terminal equipment
CN108920016B (en) * 2018-08-15 2021-12-28 京东方科技集团股份有限公司 Touch display device, touch display client and touch information processing device
CN110008011B (en) * 2019-02-28 2021-07-16 维沃移动通信有限公司 Task switching method and terminal equipment
CN110062288A (en) * 2019-05-21 2019-07-26 广州视源电子科技股份有限公司 Application management method, device, user terminal, multimedia terminal and storage medium
CN110347305A (en) * 2019-05-30 2019-10-18 华为技术有限公司 A kind of VR multi-display method and electronic equipment
CN110471639B (en) * 2019-07-23 2022-10-18 华为技术有限公司 Display method and related device
CN110941383B (en) * 2019-10-11 2021-08-10 广州视源电子科技股份有限公司 Double-screen display method, device, equipment and storage medium
CN110908574B (en) * 2019-12-04 2020-12-29 深圳市超时空探索科技有限公司 Display adjusting method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN113178105B (en) 2022-05-24
CN113194302A (en) 2021-07-30
CN113676710A (en) 2021-11-19
CN113178105A (en) 2021-07-27
CN113676709B (en) 2023-10-27
CN113676710B (en) 2024-03-29
CN113676709A (en) 2021-11-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant