WO2022226736A1 - Multi-screen interaction method, apparatus, terminal device and vehicle - Google Patents
Multi-screen interaction method, apparatus, terminal device and vehicle
- Publication number
- WO2022226736A1 (application PCT/CN2021/090009)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display screen
- image
- sub
- information
- gesture
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/21—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
- B60K35/22—Display screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1637—Details related to the display arrangement, including those related to the mounting of the display in the housing
- G06F1/1647—Details related to the display arrangement, including those related to the mounting of the display in the housing including at least an additional display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/11—Instrument graphical user interfaces or menu aspects
- B60K2360/119—Icons
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/146—Instrument input by gesture
- B60K2360/1464—3D-gesture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/16—Type of output information
- B60K2360/164—Infotainment
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/18—Information management
- B60K2360/182—Distributing information between displays
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/10—Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/80—Arrangements for controlling instruments
- B60K35/81—Arrangements for controlling instruments for controlling displays
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2380/00—Specific applications
- G09G2380/10—Automotive applications
Definitions
- the present application relates to the field of smart cars, and in particular, to a method, device, terminal device and vehicle for multi-screen interaction.
- one display is used to display the vehicle's speed, fuel consumption, mileage and other information
- one display is used to display the navigation route
- one display screen is used to display music, radio and other entertainment content, as well as other functions.
- the content displayed on each display screen is fixed, and a display screen cannot show content that is meant to be displayed by another display screen.
- with technologies such as screen projection and screen sharing, enabling one display screen to push its displayed content to other display screens has become an important user requirement and a research hotspot for manufacturers.
- the embodiments of the present application provide a method, apparatus, terminal device and vehicle for multi-screen interaction.
- the present application provides a method for multi-screen interaction, including: acquiring sensing information, where the sensing information includes gesture information; triggering a first display screen to display a first interface image according to the gesture information, where the first interface image includes a sub-image whose movement trend is associated with the gesture information; and triggering the sub-image to be displayed on a second display screen.
- in this solution, gesture information is obtained from the acquired sensing information, the display screen is then triggered to display an interface image according to the gesture information, and a sub-image is shown on that screen. By associating the sub-image with the gesture information, the sub-image moves as the gesture moves. When a trigger condition is met, for example when the movement distance of the gesture is greater than a set threshold, the sub-image is triggered to move to another display screen, thereby realizing the multi-screen interaction function.
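- As an illustration of the flow just described, the following is a minimal sketch; the screen objects, their method names and the distance threshold are assumptions made for illustration, not details taken from this application.

```python
import math

MOVE_THRESHOLD_M = 0.15  # assumed hand-travel distance needed to trigger the transfer

def multi_screen_step(gesture, grab_pos, current_pos, source_screen, target_screen):
    """One update of the interaction: show the sub-image while the grab gesture
    is held, and push it to the target screen once the hand has moved far enough."""
    if gesture != "grab":
        return
    source_screen.show_sub_image(follow=current_pos)          # sub-image tracks the gesture
    if math.dist(grab_pos, current_pos) > MOVE_THRESHOLD_M:    # trigger condition
        target_screen.display(source_screen.sub_image())       # multi-screen interaction
```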
- the content displayed by the sub-image is the entire content of the first interface image.
- when the gesture information includes a two-finger grasping gesture, the content displayed by the sub-image is the interface presented by a first application in the first interface image, where the first application is the application selected by the user or currently running.
- the first interface image further includes position information, where the position information is an identifier of at least one other display screen that can display the sub-image and orientation information of the at least one display screen relative to the first display screen.
- in this way, the user can intuitively see in which direction the subsequent gesture should move, which facilitates the subsequent multi-screen interaction operation.
- before the sub-image is triggered to be displayed on the second display screen, the method further includes: determining the second display screen, where the second display screen is determined according to first position information, second position information and the stored orientation information of at least one display screen relative to the first display screen; the first position information is the position information of the gesture when the first display screen is triggered to display the first interface image, the second position information is the position information of the gesture at the current moment, and the at least one display screen includes the second display screen.
- in this way, the target display screen for the interaction is determined in advance, so that when the sub-image is subsequently triggered to move to the target display screen it can do so quickly, reducing processing time and improving user experience.
- the determining of the second display screen includes: determining, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; comparing the first orientation information with the stored orientation information of at least one display screen relative to the first display screen; and determining the second display screen when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen.
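- A minimal sketch of this orientation comparison, assuming the stored orientations are direction vectors and that "the same orientation" is approximated by the best cosine similarity; the screen names, vectors and threshold are illustrative assumptions.

```python
import numpy as np

# Hypothetical stored orientations of other screens relative to the first (source) screen.
SCREEN_ORIENTATIONS = {
    "passenger_screen": np.array([1.0, 0.0, 0.0]),
    "rear_left_screen": np.array([-0.3, 0.0, -0.95]),
}

def determine_second_screen(first_pos, second_pos, min_similarity=0.8):
    """Compare the gesture's movement direction (first_pos -> second_pos) with each
    stored orientation and return the screen whose orientation matches best."""
    motion = np.asarray(second_pos, float) - np.asarray(first_pos, float)
    norm = np.linalg.norm(motion)
    if norm == 0:
        return None
    motion /= norm
    best = max(SCREEN_ORIENTATIONS, key=lambda s: float(motion @ SCREEN_ORIENTATIONS[s]))
    return best if float(motion @ SCREEN_ORIENTATIONS[best]) >= min_similarity else None
```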
- the triggering of the sub-image to be displayed on the second display screen includes: triggering the sub-image to be displayed on the second display screen when it is detected that the distance between the first position information and the second position information is greater than a set threshold.
- the triggering of the sub-image to be displayed on the second display screen includes: triggering the image indicated by the sub-image to be displayed on the second display screen.
- when the sub-image is an image (for example, an icon) of an application program, the application program is run directly without the user's operation, and the user does not need to actively operate the display screen to run the application.
- the size of the sub-image is smaller than the size of the first display screen, so as to prevent the sub-image from completely covering the original interface image and affecting the user's viewing of the original content.
- the position information is displayed on the edge of the first display screen, so that the user can more intuitively see the relative positions of other display screens.
- the method further includes: setting the resolution of the sub-image to the resolution of the second display screen, so as to prevent the sub-image from failing to be displayed on the target display screen due to a resolution mismatch.
- the method further includes: setting the size of the sub-image to the size of the second display screen, so as to prevent the sub-image from failing to be displayed on the target display screen due to a size mismatch.
- the method further includes: reducing the sub-image so that the size of the long side of the sub-image is the same as the size of the long side of the second display screen, or reducing the sub-image so that the size of the short side of the sub-image is the same as the size of the short side of the second display screen. In this way, either the long side or the short side of the sub-image is aligned with the target display screen, so that the sub-image can be displayed on that display screen.
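- The scaling rule can be sketched as follows; the resolutions used in the example are assumed values.

```python
def fit_sub_image(sub_w, sub_h, screen_w, screen_h, match="long"):
    """Reduce (or scale) a sub-image so that its long side, or its short side,
    equals the corresponding side of the second display screen, keeping the
    aspect ratio unchanged."""
    if match == "long":
        scale = max(screen_w, screen_h) / max(sub_w, sub_h)
    else:
        scale = min(screen_w, screen_h) / min(sub_w, sub_h)
    return round(sub_w * scale), round(sub_h * scale)

# e.g. a 1920x1080 sub-image pushed to a 1280x720 rear screen
print(fit_sub_image(1920, 1080, 1280, 720, match="long"))  # -> (1280, 720)
```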
- an embodiment of the present application provides a multi-screen interaction device, including: a transceiver unit configured to acquire sensing information, where the sensing information includes gesture information; and a processing unit configured to trigger a first display screen to display a first interface image according to the gesture information, where the first interface image includes a sub-image whose movement trend is associated with the gesture information, and to trigger the sub-image to be displayed on a second display screen.
- the content displayed by the sub-image is the entire content of the first interface image.
- when the gesture information includes a two-finger grasping gesture, the content displayed by the sub-image is the interface presented by a first application in the first interface image, where the first application is the application selected by the user or currently running.
- the first interface image further includes position information, where the position information is an identifier of at least one other display screen that can display the sub-image and orientation information of the at least one display screen relative to the first display screen.
- the processing unit is further configured to determine the second display screen, where the second display screen is determined according to first position information, second position information and the stored orientation information of at least one display screen relative to the first display screen; the first position information is the position information of the gesture when the first display screen is triggered to display the first interface image, the second position information is the position information of the gesture at the current moment, and the at least one display screen includes the second display screen.
- the processing unit is specifically configured to determine, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; compare the first orientation information with the stored orientation information of at least one display screen relative to the first display screen; and determine the second display screen when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen.
- the processing unit is specifically configured to trigger the sub-image to be displayed on the second display screen when it is detected that the distance between the first position information and the second position information is greater than a set threshold.
- the processing unit is specifically configured to trigger the image indicated by the sub-image to be displayed on the second display screen.
- the size of the sub-image is smaller than the size of the first display screen.
- the position information is displayed on the edge of the first display screen.
- the processing unit is further configured to set the resolution of the sub-image to the resolution of the second display screen.
- the processing unit is further configured to set the size of the sub-image to the size of the second display screen.
- the processing unit is further configured to reduce the sub-image so that the size of the long side of the sub-image is the same as the size of the long side of the second display screen, or reduce the sub-image so that the size of the short side of the sub-image is the same as the size of the short side of the second display screen.
- an embodiment of the present application provides a multi-screen interaction system, including: a processor and at least two display screens, configured to execute each possible implementation of the first aspect.
- an embodiment of the present application provides a vehicle, including: at least one camera, at least two display screens, at least one memory, and at least one processor for executing the various possible implementations of the first aspect.
- embodiments of the present application provide a computer-readable storage medium on which a computer program is stored.
- when the computer program is executed in a computer, the computer is caused to execute the various possible implementations of the first aspect.
- an embodiment of the present application provides a computing device, including a memory and a processor, where the memory stores executable code, and when the processor executes the executable code, the various possible implementations of the first aspect are implemented.
- in a seventh aspect, a computing device includes: a processor and an interface circuit, where the processor is coupled to a memory through the interface circuit, and the processor is configured to execute program code in the memory to implement the technical solution provided by any aspect, or any possible implementation, of the second aspect to the fourth aspect.
- with this application, when the driver wants to push the content displayed on one display screen to another display screen, the driver does not need to click a specific button on the display screen or drag the target content across the screen.
- this application only requires the driver to naturally place a hand next to the source display screen and open the palm.
- after the activation gesture is recognized, the source display screen enters the active fly-screen state as the fingers are gradually drawn together; the whole process is equivalent to grasping the current screen by hand. The hand then only needs to move in the corresponding direction according to the interface prompts.
- the prompts indicate in which direction to fly the screen; when the moving distance reaches the threshold, spreading the fingers performs the fly-screen operation.
- the whole process is equivalent to grabbing the source display screen and throwing it to the target display screen.
- the operation is simple and does not require much of the driver's attention, so the multi-screen interaction function can be realized while ensuring driving safety.
- FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
- Figure 2(a) is a schematic position diagram of a display screen and a camera set in a front seat of a vehicle according to an embodiment of the present application;
- FIG. 2(b) is a schematic position diagram of a display screen and a camera set in a rear seat of a vehicle according to an embodiment of the present application;
- FIG. 3 is a schematic flowchart for realizing a method for multi-screen interaction according to an embodiment of the present application
- FIG. 4( a ) is a schematic diagram of a gesture change for realizing five-finger grasping and realizing the screen projection function of a display screen according to an embodiment of the present application;
- FIG. 4(b) is a schematic diagram of a gesture change for realizing a two-finger grasping and application sharing function provided by an embodiment of the present application;
- FIG. 5 is a schematic diagram of recognizing a gesture in an image according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of the image displayed on the source display screen and the corresponding gesture when a sub-image is obtained in the process of five-finger grabbing to realize the screen projection function according to an embodiment of the present application;
- FIG. 7 is a schematic diagram of an image displayed on a source display screen and a corresponding gesture when a sub-image is acquired in the process of realizing two-finger grabbing and implementing an application sharing function according to an embodiment of the present application;
- FIG. 8 is a schematic diagram of the display effect after part of the sub-image has moved out of the upper part of the source display screen according to an embodiment of the present application;
- FIG. 9 is a schematic diagram of a display effect after a part of the sub-images is moved into a target display screen according to an embodiment of the present application.
- FIG. 10 is one of the schematic diagrams of the display effect of the sub-image displayed on the target display screen according to the embodiment of the present application.
- FIG. 11 is the second schematic diagram of the display effect of the sub-image displayed on the target display screen according to an embodiment of the present application.
- FIG. 12 is the third schematic diagram of the display effect of the sub-image displayed on the target display screen according to the embodiment of the application.
- FIG. 13 is a schematic diagram of a moving effect of two sub-images when a source display screen performs multi-screen sharing to two target display screens at the same time according to an embodiment of the present application;
- FIG. 14 is a schematic diagram of a gesture and a sub-image movement process of five-finger grasping to realize the screen projection function of a display screen according to an embodiment of the present application;
- 15 is a schematic structural diagram of a device for multi-screen interaction provided by an embodiment of the present application.
- FIG. 16 is a schematic structural diagram of another apparatus for multi-screen interaction provided by an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
- the vehicle 100 includes at least two display screens 101 (e.g., display screens 101-1 to 101-N shown in FIG. 1), at least one camera 102 (e.g., cameras 102-1 to 102-M shown in FIG. 1), a processor 103, a memory 104 and a bus 105.
- the display screen 101 , the camera 102 , the processor 103 and the memory 104 can establish a communication connection through the bus 105 .
- N is an integer greater than 1
- M is a positive integer.
- the type of the display screen 101 may include one or more of a touch-sensitive display screen or a non-touch-type display screen.
- the display screen 101 may be all touch-sensitive display screens, or all non-touch-type displays.
- the display screen 101 may include two types of display screens, a touch-sensitive display screen and a non-touch-sensitive display screen.
- the display screen 101 can be used to display instrument data such as fuel level, vehicle speed, and mileage, as well as data such as navigation routes, music, videos, and images (such as images of the surrounding environment of the vehicle).
- the non-touch display can be used to display instrument data such as fuel level, vehicle speed, and mileage
- the touch display can display navigation routes, music, videos, images, and other data.
- the touch-sensitive display screen can also be used to display instrument data such as fuel level, vehicle speed, and mileage.
- the positional arrangement of the display screen 101 may be set with reference to FIG. 2( a ) or FIG. 2( b ).
- in the front row of the vehicle 100 shown in FIG. 2(a), a display screen 101-1 can be set in the middle of the steering wheel and used to display buttons for playing music, such as volume buttons, play/pause buttons and answer/hang-up buttons, so that the driver can operate more conveniently while driving, reducing changes in the driver's line of sight and body posture and improving driving safety.
- a display screen 101-2 can be provided on the vehicle body below the windshield and above the steering wheel of the vehicle 100 to display data such as fuel level, vehicle speed and mileage; a display screen 101-3 can be provided between the driver's seat and the front passenger seat.
- a display screen 101-4 can be set on the vehicle body in front of the front passenger seat and can be used to display any content that the passenger in that seat wants to watch.
- a display screen 101-5 and a display screen 101-6 can be provided at the upper position behind each front seat; these two display screens can display any content that rear passengers want to watch, such as movies, navigation information, weather, etc.
- the quantity and positions of the display screens 101 in the embodiments of the present application are not limited to the quantity and positional relationship shown in FIG. 2(a) and FIG. 2(b); the figures are given here only as examples for the reader's understanding.
- the specific number of the display screens 101 and the positions provided on the vehicle 100 are determined according to the actual situation.
- Camera 102 may be used to capture still images or video. For example, an object is projected through a lens to generate an optical image onto a photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to a processor (such as an image processor) to convert it into a digital image signal.
- the digital image signal is output to a digital signal processor (DSP) for processing.
- the DSP converts the digital image signal into standard red green blue (red green blue, RGB), luminance bandwidth chrominance (luminance bandwidth chrominance, YUV) and other formats of image signals.
- the camera 102 may be disposed at different positions in the vehicle, and the camera 102 may be used in the vehicle to collect body information (eg, palm information) of the user in the vehicle 100 .
- the working mode of the camera 102 may be periodic photographing, or continuous photographing to obtain a video stream.
- the present application describes the technical solution of the present application by taking periodic photos as an example.
- the positions of the cameras 102 on the vehicle 100 are set as shown in FIG. 2( a ) and FIG. 2( b ).
- a camera 102-1 can be set on the rear-view mirror, and its shooting range covers the driver's seat and the passenger's seat, and is used to collect the palm information of the user in the driver's seat and the passenger's seat;
- a camera 102-3 and a camera 102-4 can be arranged above the display screen 101-5 and the display screen 101-6, respectively, and a camera 102-2 and a camera 102-5 are respectively located on the two rear door frames near the front seats; these cameras are used to collect palm information of users in the rear seats.
- the quantity and positions of the cameras 102 in the embodiments of the present application are not limited to those shown in FIG. 2(a) and FIG. 2(b); the figures are given here only as examples for the reader's understanding of the solution.
- the specific number of cameras 102 and the positions provided on the vehicle 100 are determined according to the actual situation.
- the palm information collected by the camera 102 mainly includes gestures presented by five fingers and position information of the palm relative to the camera 102 . If the present application adopts the binocular ranging principle to determine the position information of the palm, it is necessary to set two cameras at each position.
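- For the binocular case, the palm's depth can be recovered from the disparity between the two views; below is a minimal sketch assuming a rectified stereo pair with a known focal length and baseline (the numbers in the example are made up).

```python
def binocular_depth(focal_px, baseline_m, u_left, u_right):
    """Classic stereo-ranging relation depth = f * B / disparity, where the two
    cameras are assumed to be rectified and share the same focal length."""
    disparity = u_left - u_right          # horizontal pixel offset of the palm
    if disparity <= 0:
        raise ValueError("invalid disparity for a rectified stereo pair")
    return focal_px * baseline_m / disparity

# e.g. f = 1000 px, baseline = 6 cm, disparity = 40 px  ->  1.5 m to the palm
print(binocular_depth(1000, 0.06, 860, 820))
```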
- the processor 103 can be a vehicle-mounted central control unit, a central processing unit (CPU), a cloud server, etc., and is used to process the image collected by the camera 102, identify the gesture category presented by the user's palm in the image and the corresponding palm position information, and then control the content displayed on one display screen to move to another display screen.
- the memory 104 may include volatile memory (volatile memory), such as random-access memory (RAM); the memory 104 may also include non-volatile memory (non-volatile memory), such as read-only memory (read only memory) -only memory, ROM), flash memory, hard disk drive (HDD), or solid state drive (solid state drive, SSD); the memory 104 may also include a combination of the above-mentioned types of memory.
- the memory 104 stores not only the images collected by the camera 102 and a database of gesture categories, but also the various instructions and application programs corresponding to the method for multi-screen interaction.
- the multi-screen interaction method may be performed by a computing device, may be performed by a processing device applied in the computing device, or may be performed by the vehicle shown in FIG. 1.
- the computing device may be a terminal, for example, a car, an in-vehicle device (such as a car machine, an in-vehicle processor, an in-vehicle computer, etc.), or a cloud device such as a server.
- the processing device may be a chip, a processing circuit, a processor, or the like. For the convenience of description, this application will take the processor executing the multi-screen interaction method as an example for introduction. Referring to Figure 3, the multi-screen interaction method includes:
- S301 Acquire sensing information.
- the sensing information may include information obtained through sensors, for example, gesture information and environmental information obtained through one or more of camera sensors and radar sensors; environmental sound information, such as user instructions, may also be obtained through sound sensors.
- the above-mentioned sensing information may be information collected by a sensor, or may be information processed by one or more devices such as a sensor and a processor, for example, noise reduction processing is performed on image information by a sensor.
- the cockpit camera can detect specific body movements, such as gestures (which can also be understood as wake-up gestures), etc., to trigger the opening of the multi-screen interaction function.
- the multi-screen interaction function can also be woken up by other means, such as voice wake-up or clicking a virtual button on the display screen.
- gestures such as the wake-up gesture and operation gestures can be implemented in various ways, such as a left swipe gesture, a right swipe gesture, palm hovering, etc., and can be dynamic gestures (or dynamic trend gestures) or static gestures.
- the processor for gesture recognition can acquire sensor information through the interface circuit, and determine the gesture or action made by the current user.
- the present application introduces gestures in two types of solutions: "five-finger grasping to realize screen projection" and "two-finger grasping to realize application sharing". It should be understood that the present application is not limited to these two solutions.
- for the solution of five-finger grasping to realize screen projection, the gesture action is introduced first.
- when the user starts grasping, the user first spreads the five fingers and approaches the source display screen; when grasping, the five fingers are gradually drawn together; during the projection process, the closed five fingers move from the position of the source display screen to the position of the target display screen, then gradually approach the target display screen, and the five fingers are gradually spread apart.
- for the solution of two-finger grasping to realize application sharing, the gesture action is introduced next.
- when the user starts grasping, the user first spreads the five fingers (or spreads the thumb and index finger and bends the other fingers toward the palm) and then approaches the source display screen; when grasping, the thumb and index finger are gradually brought together while the other fingers are bent toward the palm; during the sharing process, the two fingers that have been brought together move from the source display screen position to the target display screen position, then approach the target display screen, and the thumb and index finger are gradually spread apart.
- the gesture for enabling the multi-screen interaction function in the embodiment of the present application may be the above-mentioned gesture of “spreading the five fingers”, or the gesture of “spreading the five fingers and gradually approaching the display screen”, or other gestures.
- the following takes the above gesture as an example to introduce.
- the gesture information can be determined from images (or a video stream) captured by the camera, for example by identifying the fingers of the person in the image and the gesture presented by the fingers through a gesture recognition algorithm; it can also be determined from radar information, for example by obtaining a 3D point cloud image, extracting the finger features of the person in the image through a neural network, and then determining the gesture presented by the fingers according to the finger features. Other information can also be used for the judgment, which is not limited in this application; below, the image or video stream captured by the camera is taken as an example to describe the technical solution of the present application.
- the sensor information of the corresponding position is obtained according to the position of the source display screen.
- the sensing information of the corresponding position may be information collected or processed by sensors such as cameras and radars whose collection range covers the control position of the source display screen.
- after the processor receives the instruction for enabling the interaction function sent by the display screen 101-2, the processor starts only the corresponding camera, so as to avoid turning on all cameras in the vehicle, saving the power consumption of the vehicle's power supply and at the same time protecting the privacy of passengers in other seats of the vehicle 100.
- after receiving the sensing information, the processor performs hand target detection and gesture recognition on it, so as to recognize the gesture presented by the user's palm.
- the processor receives an image including a palm collected by the camera, as shown in FIG. 5
- the collected image may be preprocessed first, where the preprocessing may include denoising and information enhancement on the image.
- the processor recognizes a specific gesture, such as a wake-up gesture or an operation gesture, in an acquired image or in a frame of a video stream.
- the camera continuously captures images or a video stream, and the processor also continuously acquires images or the video stream for processing. Therefore, to indicate the timing of the acquired images or of the frames in the video stream, this application numbers the acquired images or frames in chronological order and defines the image acquired at the current moment as the i-th image, or the j-th frame image in the video stream, where i and j are both positive integers greater than 0.
- the specific gesture is a dynamic trend gesture.
- the processor can compare the distances between the fingers in the image obtained at the current moment with those in the image obtained at the previous moment. If the distance between the five fingers gradually decreases, it indicates the gesture of "five fingers drawn together"; if the distance between the fingers gradually decreases and the middle finger, ring finger and little finger are bent, it indicates the gesture of "the thumb and index finger gradually drawn together, with the other fingers bent toward the palm".
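- A rough sketch of this frame-to-frame comparison, assuming fingertip and palm positions have already been extracted by the gesture recognition step; the coordinates in the example are made up.

```python
import numpy as np

def fingers_closing(prev_tips, curr_tips, palm_center):
    """Return True if every fingertip has moved closer to the palm center between
    two processed frames, i.e. the 'fingers gradually drawn together' trend.
    prev_tips and curr_tips are (N, 2) arrays of fingertip pixel coordinates;
    palm_center is the palm's pixel position."""
    prev_d = np.linalg.norm(np.asarray(prev_tips, float) - palm_center, axis=1)
    curr_d = np.linalg.norm(np.asarray(curr_tips, float) - palm_center, axis=1)
    return bool(np.all(curr_d < prev_d))

# e.g. five fingertips all moving toward the palm center at (320, 240)
print(fingers_closing(
    [[300, 100], [340, 95], [380, 105], [410, 130], [430, 160]],
    [[305, 120], [338, 115], [375, 125], [405, 145], [425, 175]],
    np.array([320.0, 240.0])))   # -> True
```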
- when the processor detects the specific gesture, it controls the source display screen to enter the multi-screen interaction state, so that the source display screen extracts the displayed interface, or the interface presented by a displayed application program, according to the change of the gesture.
- the processor can also use a hand three-dimensional (3D) positioning algorithm to calculate the spatial coordinates (x0, y0, z0) of the specific gesture relative to the camera according to the image obtained by the camera.
- when the camera is a time-of-flight (TOF) camera, a binocular camera or another camera that can measure depth, the processor directly calculates the spatial coordinates of the specific gesture relative to the optical center of the camera using the internal and external parameters of the camera.
- the TOF camera can continuously send light pulses to the target object, and use the sensor to receive the light returned from the target object, and obtain the distance of the target object based on the flight round-trip time of the detected light pulse.
- the processor can calculate the depth or distance of the gesture through a depth estimation method.
- several possible depth estimation methods are given below as examples, and this application is not limited to them. For example:
- a monocular depth estimation network model based on deep learning: after the image captured by the camera is acquired, the monocular depth estimation network model can directly estimate the depth for the monocular camera, thereby acting as a virtual 3D camera that makes up for the absence of a hardware 3D camera, so as to predict the depth information of the specific gesture in the next frame or frames of images in the acquired sensing information.
- the processor calculates the spatial coordinates of the specific gesture relative to the optical center of the camera according to the image obtained by the camera and the predicted depth information of the image, using the internal and external parameter data of the camera.
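- A minimal sketch of that back-projection under the standard pinhole camera model; the intrinsic parameters and pixel coordinates in the example are assumed values.

```python
import numpy as np

def pixel_to_camera_coords(u, v, depth_m, fx, fy, cx, cy):
    """Back-project an image point (u, v) with its estimated depth into 3D
    coordinates relative to the camera optical center, using the pinhole model
    and the camera intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# e.g. palm detected at pixel (900, 500), estimated 0.4 m from the camera
print(pixel_to_camera_coords(900, 500, 0.4, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0))
```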
- S302 Trigger the first display screen to display the first interface image according to the gesture information.
- the first display screen is the source display screen
- the first interface image is the interface image displayed on the source display screen
- the first interface image also includes a sub-image
- the sub-image may be a reduced version of the first interface image, or may be an icon of an APP, etc.
- the displayed content may be the entire screen displayed by the first interface image, or may be part of the image displayed by the first interface image.
- after the processor detects the specific gesture, it can convert part or all of the picture presented by the source display screen at the current moment into a sub-image smaller than the source display screen, or convert an APP icon displayed on the source display screen into a sub-image, and display it on the source display screen, for example in the middle or on the left of the source screen.
- the processor may regenerate a sub-image whose size is smaller than the size of the source display screen, and the displayed content is synchronized with the interface displayed on the original source display screen.
- a time bar may be set above the source display screen, and the middle part is the area where the video being played is located.
- the processor cuts the entire interface displayed on the source display screen at the current moment (including the time bar and the region where the video is located) into an image and displays it in the middle of the source display screen.
- the processor cuts the area where the video is displayed on the source display screen at the current moment into an image and displays it in the middle of the source display screen.
- Priorities can be set for applications, for example setting the navigation APP as the first priority, video, music, radio and other APPs as the second priority, and basic APPs such as time and weather forecast as the third priority.
- the content of the application with the highest priority displayed on the source display screen is converted into the sub-image, which ensures that the content displayed by the sub-image is what the user wants to push.
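- A minimal sketch of this priority-based selection; the priority table and application names are illustrative assumptions.

```python
# Example priority table following the text above (lower number = higher priority).
APP_PRIORITY = {"navigation": 1, "video": 2, "music": 2, "radio": 2, "clock": 3, "weather": 3}

def pick_app_for_sub_image(apps_on_screen):
    """Choose the application whose content is converted into the sub-image:
    the app with the highest configured priority currently shown on the source screen."""
    return min(apps_on_screen, key=lambda app: APP_PRIORITY.get(app, 99))

print(pick_app_for_sub_image(["clock", "video", "weather"]))  # -> "video"
```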
- the present application is not limited to the above two solutions, and other solutions are also possible.
- the size of the sub-image displayed on the source display screen can be a fixed value or a variable value.
- the processor continuously acquires the images collected by the camera in subsequent times, and if the distance between the fingers in the identified specific gesture continues to decrease, the size of the sub-image displayed on the source display screen can be controlled to also decrease continuously.
- the processor controls the source display screen to display not only the sub-image, but also position information such as the identifiers of the other display screens that can receive the sub-image and the orientation information of those display screens relative to the source display screen. Since the position of each display screen is fixed when the vehicle leaves the factory, the orientation information of the other display screens around each display screen is fixed, and the orientation information between display screens can be stored in the memory in advance.
- the orientation information between the display screens can be represented by a vector formed between the positions of the two display screens.
- for example, taking the center of the display screen 101-2 as the origin (X0, Y0, Z0) of a spatial coordinate system, the coordinate positions of the display screens are stored, the coordinates (X, Y, Z) of every other display screen in this coordinate system are determined, and the vector M from the origin to each of these coordinates is calculated; the calculated vector M is used as the orientation of each other display screen relative to the display screen 101-2.
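- A minimal sketch of how these orientation vectors could be precomputed; the screen-center coordinates are invented example values, since the real ones are fixed at the factory.

```python
import numpy as np

# Example screen-center coordinates in a common vehicle frame (meters).
SCREEN_CENTERS = {
    "101-2": np.array([0.00, 0.60, 0.40]),   # source screen above the steering wheel
    "101-3": np.array([0.45, 0.55, 0.35]),
    "101-4": np.array([0.90, 0.60, 0.40]),
}

def orientation_vectors(source="101-2"):
    """Vector M from the source screen's center (taken as the origin) to every
    other screen's center; these vectors can be precomputed and stored in memory."""
    origin = SCREEN_CENTERS[source]
    return {name: center - origin
            for name, center in SCREEN_CENTERS.items() if name != source}

print(orientation_vectors())   # orientations of 101-3 and 101-4 relative to 101-2
```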
- the processor can control the position information of other display screens to be displayed on the source display screen, so that the user can intuitively know how to move the sub-image to the target display screen.
- for example, a pattern representing each display screen can be virtualized on the source display screen, the orientation and name of each display screen relative to the source display screen can be shown as text, and the pattern corresponding to each display screen can be placed, according to the orientation relationship, on the edge of the source display screen nearest to the corresponding physical display screen.
- the processor controls the source display screen to generate the sub-image and present it in the middle of the source display screen, and a mapping relationship is established between the position of the sub-image on the source display screen at this moment and the spatial coordinates of the gesture obtained from the camera image at the corresponding moment, so that when the gesture subsequently moves, the sub-image moves from its current position in the corresponding direction and by a proportional distance, according to the direction and distance of the gesture movement.
- the processor may receive each image or video stream captured by the camera in real time, or may receive images or video streams captured by the camera at regular intervals, which is not limited in this application. Therefore, the "next image or next frame of image" mentioned below does not necessarily mean two consecutive images or frames captured by the camera; there may be several or dozens of images (frames) in between.
- after the processor detects the specific gesture, if the specific gesture is not continuously detected in the subsequent one or more frames of images, the processor can control the source display screen to stop displaying the sub-image and the position information. For example, the processor may not recognize the specific gesture in the next frame or frames after the specific gesture was first detected, perhaps because the user's gesture has changed or the hand has moved outside the shooting range of the camera; in that case the processor can send a control instruction to the source display screen to control it not to display the sub-image and the position information.
- the processor may determine to trigger the function corresponding to the specific gesture only after the specific gesture has been detected more than a certain number of times. For example, if the processor acquires the specific gesture multiple times in the sensing information within a preset time, it may consider that a valid gesture has been acquired and trigger the corresponding function. For another example, if the specific gesture is detected in multiple frames of the images acquired by the camera within a preset time and the number of such frames exceeds a preset threshold,
- the processor can send a control instruction to the source display screen so that the source display screen displays the sub-image and the position information; if the number of images containing the specific gesture is detected to be less than the preset threshold, the processor performs no subsequent operation, and the detection results are discarded or deleted.
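A possible sliding-window realisation of this debouncing rule is sketched below; the window length, hit count, and class name `GestureDebouncer` are illustrative assumptions rather than values prescribed by this application.

```python
import time
from collections import deque

class GestureDebouncer:
    """Treat the specific gesture as valid only if it is detected in at least
    `min_hits` frames within the last `window_s` seconds."""

    def __init__(self, window_s=0.5, min_hits=5):
        self.window_s = window_s
        self.min_hits = min_hits
        self.hits = deque()           # timestamps of frames containing the gesture

    def update(self, gesture_detected, now=None):
        now = time.monotonic() if now is None else now
        if gesture_detected:
            self.hits.append(now)
        # Drop detections that fell out of the sliding window.
        while self.hits and now - self.hits[0] > self.window_s:
            self.hits.popleft()
        return len(self.hits) >= self.min_hits

debouncer = GestureDebouncer()
for t in [0.0, 0.03, 0.06, 0.09, 0.12, 0.15]:   # gesture seen in six frames within 0.15 s
    triggered = debouncer.update(True, now=t)
print("trigger multi-screen UI:", triggered)
```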
- if the processor detects that the next frame or frames of images also contain the specific gesture and that, compared with the previous frame or with the image in which the specific gesture was detected for the first time (that is, the i-th image or the j-th frame of the video stream),
- the position of the specific gesture has changed, the movement can be calculated from the spatial coordinates of the specific gesture detected in the next frame or frames and the spatial coordinates (x0, y0, z0) of the specific gesture in the previous frame or in the image in which the specific gesture was detected for the first time.
- the sub-image can then be triggered to be displayed on the second display screen.
- the condition for triggering the display of the sub-image on the second display screen is that the distance between the detected first position information and the second position information is greater than a set threshold.
- the first position information refers to the spatial coordinates of the specific gesture relative to the camera in the image in which the specific gesture is detected for the first time (that is, the i-th image or the j-th frame of the video stream), and the second position information refers to
- the spatial coordinates of the specific gesture relative to the camera in a later image (that is, the (i+n)-th image or the (j+n)-th frame of the video stream, where n is a positive integer greater than zero).
- the processor calculates the movement vector m of the specific gesture between these images and, combined with the normal vector n of the source display screen plane, calculates the translation vector m1, which is the projection of the movement vector m onto the source display screen plane. The processor can then determine the moving direction of the sub-image on the source display screen from the direction of the translation vector m1, and determine how far the sub-image is moved from the modulus of m1 together with a preset proportional relationship between that modulus and the moving distance of the sub-image on the source screen. Finally, the processor can control the sub-image on the source display screen to move from the center of the screen along the direction of the translation vector m1 by a distance proportional to the modulus of m1, so that the sub-image moves with the movement of the user's gesture.
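The projection step can be written compactly. The sketch below assumes the movement vector m and the normal n of the source display screen are expressed in the same coordinate frame and uses the standard plane-projection formula m1 = m - (m·n)n; the `PIXELS_PER_METER` proportion is an illustrative assumption.

```python
import numpy as np

def project_onto_screen_plane(m, n):
    """Project the 3-D gesture movement vector m onto the plane whose
    normal is n (the source-screen plane): m1 = m - (m·n) n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)          # make the normal a unit vector
    m = np.asarray(m, dtype=float)
    return m - np.dot(m, n) * n

# Example: the gesture moved 12 cm, partly towards the screen.
m = np.array([0.10, 0.04, -0.05])      # movement vector (x-x0, y-y0, z-z0)
n = np.array([0.0, 0.0, 1.0])          # assumed screen-plane normal
m1 = project_onto_screen_plane(m, n)

PIXELS_PER_METER = 3000.0              # assumed proportion between gesture and image motion
direction = m1 / np.linalg.norm(m1)    # moving direction of the sub-image on the screen
distance_px = PIXELS_PER_METER * np.linalg.norm(m1)
print("in-plane direction:", direction, "move (px):", distance_px)
```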
- each time it receives an image processing result, the processor can send a control command to the source display screen so that the sub-image is displayed at a different position on the source display screen, which makes the sub-image move with the specific gesture. If the processor detects that, as the sub-image moves with the specific gesture, part or all of its interface is no longer displayed on the source display screen, the processor can at the same time send a control instruction to the target display screen so that the target display screen displays the part of the sub-image interface that is not shown on the source display screen.
- the processor can determine, according to the distance moved by the specific gesture, when the sub-image should be moved to and displayed on the target display screen; at that moment, the controller sends a control instruction to the target display screen, and the control instruction is used to make the target display screen display the APP icon, or to make the target display screen display the interface of the APP after it runs.
- this application provides two judgment conditions as examples, to which this application is not limited, for example:
- a threshold is preset in this application. After determining the target display screen according to the movement vector m, the processor can calculate the modulus of the translation vector m1 from the movement vector m. If the modulus is greater than the set threshold, the sub-image is moved to the target display screen; if the modulus is not greater than the set threshold, the sub-image keeps moving on the source display screen.
- alternatively, the judgment condition can take the moment when the center point of the sub-image moves out of the source display screen as the limit.
- the processor can detect the position of the sub-image on the source display screen in real time. If the center point of the sub-image lies on the border of the source display screen or outside the source display screen, this indicates that half of the sub-image has moved out of the source display screen, and the sub-image is then moved to the target display screen; if the detected center point of the sub-image is still on the source display screen, this indicates that no more than half of the sub-image has moved out, and the sub-image keeps moving on the source display screen.
- the present application can also use the area of the sub-image that has moved out, the size of the long side of the sub-image, and the like as the judgment condition, which is not limited here.
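The two example judgment conditions can be combined as in the following sketch; the threshold value and the function name `should_hand_over` are illustrative assumptions.

```python
def should_hand_over(m1_norm, center_xy, screen_w, screen_h,
                     distance_threshold=0.15):
    """Two example hand-over conditions from the description above:
    (a) the projected gesture displacement |m1| exceeds a preset threshold, or
    (b) the center point of the sub-image has left the source display screen."""
    moved_far_enough = m1_norm > distance_threshold
    cx, cy = center_xy
    center_off_screen = not (0 <= cx < screen_w and 0 <= cy < screen_h)
    return moved_far_enough or center_off_screen

print(should_hand_over(0.20, (960, 540), 1920, 1080))   # moved far enough -> True
print(should_hand_over(0.05, (2100, 540), 1920, 1080))  # center left the screen -> True
print(should_hand_over(0.05, (960, 540), 1920, 1080))   # keep moving on the source -> False
```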
- the effect displayed on the interface of the source display screen at this time is shown in FIG. 8 .
- the part of the sub-image that has moved off the source display screen may not be displayed, or that part may be displayed, as shown in Figure 9, so that the user can more intuitively see which display screen the sub-image being moved goes to and the effect of the movement.
- if the resolutions differ, the processor can increase or decrease the resolution of the sub-image to match the resolution of the target display screen and then send it to the target display screen for display, so as to prevent the sub-image from being displayed abnormally after it is moved to the target display screen.
- when the processor determines that the sub-image is to be moved to the target display screen, it can also compare the size or aspect ratio of the sub-image with that of the target display screen and adjust the size or aspect ratio of the sub-image so that the sub-image can be displayed normally on the target display screen.
- if the sub-image fits, the processor can keep its original size and display it on the target interface, as shown in Figure 10, or enlarge the sub-image so that it is displayed at the maximum size the target display screen allows, as shown in Figure 11; if the overall size, the long side, and the short side of the target display screen are all smaller than those of the source display screen or the sub-image, the processor can reduce the size of the sub-image so that it can be displayed normally on the target display screen; if the long side or the short side of the target display screen is smaller than that of the source display screen or the sub-image, the processor can shrink the sub-image until its long side matches the long side of the target display screen, or shrink it until its short side matches the short side of the target display screen, and then display it on the target display screen, as shown in Figure 12.
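A simple way to realise this size adaptation is a uniform shrink that makes the more constraining side of the sub-image match the target display screen, as sketched below; the function name `fit_sub_image` and the example dimensions are illustrative assumptions.

```python
def fit_sub_image(sub_w, sub_h, dst_w, dst_h):
    """Choose a display size for the sub-image on the target screen: keep the
    original size if it already fits, otherwise shrink it uniformly until its
    long side (or short side) matches the target display screen."""
    if sub_w <= dst_w and sub_h <= dst_h:
        return sub_w, sub_h                      # fits as-is (it could also be enlarged instead)
    scale = min(dst_w / sub_w, dst_h / sub_h)    # largest uniform shrink that still fits
    return int(sub_w * scale), int(sub_h * scale)

print(fit_sub_image(1600, 900, 1920, 1080))      # smaller than the target: unchanged
print(fit_sub_image(1920, 1080, 1280, 480))      # wide, short cluster screen: shrink to fit
```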
- after the sub-image has been moved to the target display screen, the camera images received by the processor should no longer contain the specific gesture; for the user, the gesture transitions from the specific gesture to a non-specific gesture such as "five fingers open" or "thumb and index finger open". For example, when the processor receives no camera image containing the specific gesture within a set period of time, or detects a gesture such as "five fingers spread and gradually approaching the target display screen", or receives three consecutive taps on the target display screen, the "multi-screen interaction" function can be turned off to save power consumption and computing cost.
- after the sub-image has been moved to the target display screen, if the camera image received by the processor still contains the specific gesture, the processor can assume by default that the user wants to share the content displayed on the source display screen with other screens a second time.
- the controller may perform the implementation process in steps S301 to S303 again.
- alternatively, the processor may first mark two specific gestures and then execute the implementation process of steps S301 to S303 for the marked specific gestures, so as to share the content displayed on the source display screen with multiple target display screens at the same time.
- when the processor shares the content displayed on the source display screen with two target display screens, the source display screen can simultaneously display the sub-images corresponding to the two specific gestures, together with information such as the moving direction of each sub-image, so that the user can intuitively see the moving process of the two sub-images.
- in summary, after the processor detects that the image captured by the camera contains the specific gesture, it can convert the content displayed on the source display screen into a sub-image and display, on the source display screen, the other display screens on which the sub-image can be shown, so that
- the user can intuitively see the movement direction of the subsequent gesture; the processor then determines the target display screen according to the movement direction of the specific gesture, controls the sub-image on the source display screen to move with the movement of the specific gesture, and, when it detects that the distance moved by the specific gesture is greater than the set threshold, moves the sub-image to the target display screen, thereby realizing the multi-screen interaction function.
- FIG. 14 is a schematic diagram of the gesture changes and the sub-image movement process in which a five-finger grab realizes the screen projection function of a display screen according to an embodiment of the present application.
- the changes and movements of the gesture are as follows:
- first, step S301 in the flowchart shown in Fig. 3 is executed to obtain the image captured by the camera and recognize it; at this time the source display screen displays no sub-image and no information such as the identifiers and orientation information of the other display screens; the target display screen displays no sub-image; and the user is approaching the source display screen with five fingers spread.
- next, step S302 in the flowchart shown in FIG. 3 is executed, and after it is detected that the image captured by the camera contains the specific gesture, the source display screen is controlled to display the sub-image and information such as the identifiers and orientation information of the other display screens; at this time the source display screen displays the sub-image shrunk to the center of the interface together with the identifiers and orientation information of the other display screens on the interface; the target display screen displays no sub-image; and the user's gesture is transitioning from five fingers spread to five fingers gradually closing together.
- then, step S302 in the flowchart shown in FIG. 3 continues to be executed: after the movement of the specific gesture is detected, the target display screen is determined according to the moving direction and distance, and the sub-image on the source display screen is controlled
- to move with the movement of the specific gesture; at this time the source display screen displays the sub-image, and the target display screen displays no sub-image (or displays part of the sub-image);
- for the user, the gesture is moving from the source display screen towards the target display screen.
- finally, step S303 in the flowchart shown in FIG. 3 is executed, and after it is detected that the distance moved by the specific gesture is greater than the set threshold, the sub-image is controlled to move to the target display screen;
- at this time the source display screen no longer displays the sub-image or information such as the identifiers and orientation information of the other display screens; the target display screen displays the sub-image; and the user's gesture has moved out of the camera's shooting area (or the fingers have been opened again).
- the processor will continue to identify the images captured by the camera.
- the first step is to run a hand target detector and then further classify the hands detected by the detector to determine whether a flying-screen trigger gesture is present (that is, the specific gesture mentioned above, here the "five fingers open" gesture). If a trigger gesture is present, a target tracking algorithm is run on it to track the position and state of the triggering hand, and a prompt that the flying screen can be activated is displayed on the source display screen; if the tracked hand switches to the activation gesture (that is, the specific gesture of "five fingers closed together" or "two fingers closed together"), the flying screen is activated according to the activation gesture, and step S302 in FIG. 3 is executed:
- the source display screen displays the target directions of the flying screen. If the activated hand then moves, the movement vector (x-x0, y-y0, z-z0) is calculated from the current hand coordinates (x, y, z), and the translation vector m1 of its projection onto the screen plane is then calculated according to the normal vector n of the screen plane;
- the source display screen displays the movement effect according to the direction and modulus of the vector m1. If the moving distance, that is, the modulus of m1, is greater than the threshold, the flying-screen action to the target display screen is performed; if the modulus of m1 is still less than the threshold when the activation gesture is abandoned, the current flying-screen operation can be cancelled.
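The flow just described can be summarised in the following skeleton, assuming external hand-detection, gesture-classification, and hand-tracking components are available as callables (`detector`, `classifier`, `tracker`); these names, the gesture labels, and the threshold value are illustrative assumptions, not components defined by this application.

```python
import numpy as np

def run_flying_screen(frames, detector, classifier, tracker,
                      screen_normal, threshold=0.15):
    """Skeleton of the flying-screen flow: detect a hand, classify the trigger
    gesture ("five fingers open"), track it, switch to the activation gesture
    ("five fingers closed"), then project the hand displacement onto the
    source-screen plane and hand over once the threshold is exceeded."""
    anchor = None                                  # (x0, y0, z0) stored at activation
    n = np.asarray(screen_normal, float)
    n /= np.linalg.norm(n)

    for frame in frames:
        hand = detector(frame)                     # None or a detected hand region
        if hand is None:
            continue
        gesture, xyz = classifier(hand), tracker(hand)
        if gesture == "five_fingers_closed":       # activation gesture
            if anchor is None:
                anchor = np.asarray(xyz, float)    # remember (x0, y0, z0)
                yield ("show_sub_image", None)
                continue
            m = np.asarray(xyz, float) - anchor    # movement vector (x-x0, y-y0, z-z0)
            m1 = m - np.dot(m, n) * n              # projection onto the screen plane
            if np.linalg.norm(m1) > threshold:
                yield ("fly_to_target", m1 / np.linalg.norm(m1))
                anchor = None                      # hand-over done
            else:
                yield ("move_sub_image", m1)
        else:
            if anchor is not None:                 # activation abandoned: cancel
                anchor = None
                yield ("cancel", None)
            if gesture == "five_fingers_open":     # trigger gesture: show the hint
                yield ("show_fly_screen_hint", None)
```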
- the device for multi-screen interaction may be a computing device or an electronic device (for example, a terminal), or a device in an electronic device (for example, an ISP or an SoC).
- the method for multi-screen interaction as shown in FIG. 3 to FIG. 14 and the above-mentioned optional embodiments can be implemented.
- the apparatus 1500 for multi-screen interaction includes: a transceiver unit 1501 and a processing unit 1502 .
- the specific implementation process of the apparatus 1500 for multi-screen interaction is as follows: the transceiver unit 1501 is used to acquire sensing information, where the sensing information includes gesture information; the processing unit 1502 is used to trigger, according to the gesture information, the first display screen to display the first interface image, where the first interface image includes a sub-image whose movement trend is associated with the gesture information, and to trigger the sub-image to be displayed on the second display screen.
- the transceiver unit 1501 is configured to perform S301 and any optional example in the above method for multi-screen interaction.
- the processing unit 1502 is configured to perform S302, S303 and any optional example in the above-mentioned multi-screen interaction method. For details, refer to the detailed description in the method example, which is not repeated here.
- the device for multi-screen interaction in the embodiments of the present application may be implemented by software, for example, a computer program or instruction having the above-mentioned functions, and the corresponding computer program or instruction may be stored in the internal memory of the terminal.
- the computer reads the corresponding computer program or instructions inside the memory to realize the above functions.
- the apparatus for multi-screen interaction in this embodiment of the present application may also be implemented by hardware.
- the processing unit 1502 is a processor (eg, a processor in an NPU, a GPU, or a system chip), and the transceiver unit 1501 is a transceiver circuit or an interface circuit.
- the apparatus for multi-screen interaction in this embodiment of the present application may also be implemented by a combination of a processor and a software module.
- FIG. 16 is a schematic structural diagram of another apparatus for multi-screen interaction provided by an embodiment of the present application.
- the apparatus for multi-screen interaction may be a computing device or an electronic device (for example, a terminal), or may be a device in an electronic device (for example, ISP or SoC).
- the method for multi-screen interaction as shown in FIG. 3 to FIG. 14 and the above-mentioned optional embodiments can be implemented.
- the apparatus 1600 for multi-screen interaction includes: a processor 1601, and an interface circuit 1602 coupled to the processor 1601. It should be understood that although only one processor and one interface circuit are shown in FIG. 16,
- the apparatus 1600 for multi-screen interaction may include other numbers of processors and interface circuits.
- the specific implementation process of the apparatus 1600 for multi-screen interaction is as follows: the interface circuit 1602 is used to obtain sensing information, where the sensing information includes gesture information; the processor 1601 is used to trigger, according to the gesture information, the first display screen to display the first interface image, where the first interface image includes a sub-image whose movement trend is associated with the gesture information, and to trigger the sub-image to be displayed on the second display screen.
- the interface circuit 1602 is used to communicate with other components of the terminal, such as memory or other processors.
- the processor 1601 is used for signal interaction with other components through the interface circuit 1602 .
- the interface circuit 1602 may be an input/output interface of the processor 1601 .
- the processor 1601 reads computer programs or instructions in a memory coupled thereto through the interface circuit 1602, and decodes and executes the computer programs or instructions.
- these computer programs or instructions may include the above-mentioned terminal function program, and may also include the above-mentioned function program applied to the device for multi-screen interaction in the terminal.
- the terminal or the device for multi-screen interaction in the terminal can be made to implement the solutions in the method for multi-screen interaction provided by the embodiments of the present application.
- these terminal function programs are stored in a memory external to the apparatus 1600 for multi-screen interaction.
- when the above-mentioned terminal function program is decoded and executed by the processor 1601, part or all of the terminal function program is temporarily stored in the memory.
- these terminal function programs are stored in a memory inside the apparatus 1600 for multi-screen interaction.
- the apparatus 1600 for multi-screen interaction may be set in the terminal of the embodiment of the present application.
- parts of the terminal function programs are stored in a memory outside the multi-screen interaction device 1600
- other parts of the terminal function programs are stored in a memory inside the multi-screen interaction device 1600 .
- the apparatuses for multi-screen interaction shown in any one of FIGS. 15 to 16 can be combined with each other, and for the apparatuses for multi-screen interaction shown in any of FIGS. 1 to 2 and FIGS. 21 to 22 and the related optional embodiments,
- the design details can refer to each other, and can also refer to the multi-screen interaction method shown in either FIG. 10 or FIG. 18 and the related design details of each optional embodiment, which are not repeated here.
- the multi-screen interaction method and the optional embodiments shown in FIG. 3 can be used not only to process videos or images during shooting, but also to process videos or images that have already been captured, which is not limited in this application.
- the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed in a computer, the computer is made to execute any one of the above methods.
- the present application provides a computing device, including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, any one of the foregoing methods is implemented.
- various aspects or features of the embodiments of the present application may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques.
- article of manufacture encompasses a computer program accessible from any computer readable device, carrier or medium.
- computer readable media may include, but are not limited to, magnetic storage devices (eg, hard disks, floppy disks, or magnetic tapes, etc.), optical disks (eg, compact discs (CDs), digital versatile discs (DVDs) etc.), smart cards and flash memory devices (eg, erasable programmable read-only memory (EPROM), card, stick or key drives, etc.).
- various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
- the term "machine-readable medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
- the multi-screen interaction apparatus 1500 in FIG. 15 may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
- when implemented in software, it can be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
- the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.
- the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- the disclosed systems, devices and methods may be implemented in other manners.
- the apparatus embodiments described above are only illustrative.
- the division of the units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
- in essence, the technical solutions of the embodiments of the present application, or the parts that contribute to the prior art, or parts of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, or an access network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
- the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
Claims (29)
- A multi-screen interaction method, characterized by comprising: acquiring sensing information, the sensing information including gesture information; triggering, according to the gesture information, a first display screen to display a first interface image, the first interface image including a sub-image, a movement trend of the sub-image being associated with the gesture information; and triggering the sub-image to be displayed on a second display screen.
- The method according to claim 1, wherein, when the gesture information includes a five-finger grab gesture, the content displayed by the sub-image is the entire content of the first interface image.
- The method according to claim 1, wherein, when the gesture information includes a two-finger grab gesture, the content displayed by the sub-image is an interface presented by a first application in the first interface image, the first application being an application selected by a user or an application that is running.
- The method according to any one of claims 1-3, wherein the first interface image further includes position information, the position information being an identifier of at least one other display screen capable of displaying the sub-image and orientation information of the at least one display screen relative to the first display screen.
- The method according to any one of claims 1-4, wherein, before the triggering the sub-image to be displayed on the second display screen, the method further comprises: determining the second display screen, the second display screen being determined according to first position information, second position information, and stored orientation information of at least one display screen relative to the first display screen, the first position information being position information of a gesture when the first display screen is triggered to display the first interface image, the second position information being position information of the gesture at a current moment, and the at least one display screen including the second display screen.
- The method according to any one of claims 1-5, wherein the determining the second display screen comprises: determining, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; comparing the first orientation information with the stored orientation information of the at least one display screen relative to the first display screen; and determining the second display screen when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen.
- The method according to claim 5 or 6, wherein the triggering the sub-image to be displayed on the second display screen comprises: triggering the sub-image to be displayed on the second display screen when it is detected that a distance between the first position information and the second position information is greater than a set threshold.
- The method according to any one of claims 1-7, wherein, when the sub-image is an application icon, the triggering the sub-image to be displayed on the second display screen comprises: triggering an image indicated by the sub-image to be displayed on the second display screen.
- The method according to any one of claims 1-8, wherein a size of the sub-image is smaller than a size of the first display screen.
- The method according to any one of claims 1-9, wherein the position information is displayed at an edge position of the first display screen.
- The method according to any one of claims 1-10, wherein, when a resolution of the first display screen is different from a resolution of the second display screen, the method further comprises: setting a resolution of the sub-image to the resolution of the second display screen.
- The method according to any one of claims 1-11, wherein, when a size of the first display screen is different from a size of the second display screen, the method further comprises: setting the size of the sub-image to the size of the second display screen.
- The method according to any one of claims 1-12, wherein, when a size of a long side or a size of a short side of the second display screen is smaller than that of the first display screen, the method further comprises: reducing the sub-image until a long-side size of the sub-image is the same as the size of the long side of the second display screen; or reducing the sub-image until a short-side size of the sub-image is the same as the size of the short side of the second display screen.
- A multi-screen interaction apparatus, characterized by comprising: a transceiver unit configured to acquire sensing information, the sensing information including gesture information; and a processing unit configured to trigger, according to the gesture information, a first display screen to display a first interface image, the first interface image including a sub-image, a movement trend of the sub-image being associated with the gesture information, and to trigger the sub-image to be displayed on a second display screen.
- The apparatus according to claim 14, wherein, when the gesture information includes a five-finger grab gesture, the content displayed by the sub-image is the entire content of the first interface image.
- The apparatus according to claim 14, wherein, when the gesture information includes a two-finger grab gesture, the content displayed by the sub-image is an interface presented by a first application in the first interface image, the first application being an application selected by a user or an application that is running.
- The apparatus according to any one of claims 14-16, wherein the first interface image further includes position information, the position information being an identifier of at least one other display screen capable of displaying the sub-image and orientation information of the at least one display screen relative to the first display screen.
- The apparatus according to any one of claims 1-17, wherein the processing unit is further configured to determine the second display screen, the second display screen being determined according to first position information, second position information, and stored orientation information of at least one display screen relative to the first display screen, the first position information being position information of a gesture when the first display screen is triggered to display the first interface image, the second position information being position information of the gesture at a current moment, and the at least one display screen including the second display screen.
- The apparatus according to claim 18, wherein the processing unit is specifically configured to: determine, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; compare the first orientation information with the stored orientation information of the at least one display screen relative to the first display screen; and determine the second display screen when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen.
- The apparatus according to claim 18 or 19, wherein the processing unit is specifically configured to trigger the sub-image to be displayed on the second display screen when it is detected that a distance between the first position information and the second position information is greater than a set threshold.
- The apparatus according to any one of claims 14-20, wherein, when the sub-image is an application icon, the processing unit is specifically configured to trigger an image indicated by the sub-image to be displayed on the second display screen.
- The apparatus according to any one of claims 14-21, wherein a size of the sub-image is smaller than a size of the first display screen.
- The apparatus according to any one of claims 14-22, wherein the position information is displayed at an edge position of the first display screen.
- The apparatus according to any one of claims 14-23, wherein, when a resolution of the first display screen is different from a resolution of the second display screen, the processing unit is further configured to set a resolution of the sub-image to the resolution of the second display screen.
- The apparatus according to any one of claims 14-24, wherein, when a size of the first display screen is different from a size of the second display screen, the processing unit is further configured to set the size of the sub-image to the size of the second display screen.
- The apparatus according to any one of claims 14-25, wherein, when a size of a long side or a size of a short side of the second display screen is smaller than that of the first display screen, the processing unit is further configured to reduce the sub-image until a long-side size of the sub-image is the same as the size of the long side of the second display screen, or to reduce the sub-image until a short-side size of the sub-image is the same as the size of the short side of the second display screen.
- A vehicle, characterized by comprising: at least one camera, at least two display screens, at least one memory, and at least one processor configured to perform the method according to any one of claims 1-13.
- A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed in a computer, the computer is caused to perform the method according to any one of claims 1-13.
- A computing device, comprising a memory and a processor, wherein executable code is stored in the memory, and when the processor executes the executable code, the method according to any one of claims 1-13 is implemented.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180001484.XA CN113330395B (zh) | 2021-04-26 | 2021-04-26 | 一种多屏交互的方法、装置、终端设备和车辆 |
CN202311246725.4A CN117492557A (zh) | 2021-04-26 | 2021-04-26 | 一种多屏交互的方法、装置、终端设备和车辆 |
EP21938227.2A EP4318186A4 (en) | 2021-04-26 | 2021-04-26 | METHOD AND DEVICE FOR INTERACTION BETWEEN MULTIPLE SCREENS AND TERMINAL DEVICE AND VEHICLE |
PCT/CN2021/090009 WO2022226736A1 (zh) | 2021-04-26 | 2021-04-26 | 一种多屏交互的方法、装置、终端设备和车辆 |
JP2023566400A JP2024518333A (ja) | 2021-04-26 | 2021-04-26 | マルチスクリーンインタラクション方法及び機器、端末装置、及び車両 |
US18/494,949 US20240051394A1 (en) | 2021-04-26 | 2023-10-26 | Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/090009 WO2022226736A1 (zh) | 2021-04-26 | 2021-04-26 | 一种多屏交互的方法、装置、终端设备和车辆 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/494,949 Continuation US20240051394A1 (en) | 2021-04-26 | 2023-10-26 | Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022226736A1 true WO2022226736A1 (zh) | 2022-11-03 |
Family
ID=77427052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/090009 WO2022226736A1 (zh) | 2021-04-26 | 2021-04-26 | 一种多屏交互的方法、装置、终端设备和车辆 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240051394A1 (zh) |
EP (1) | EP4318186A4 (zh) |
JP (1) | JP2024518333A (zh) |
CN (2) | CN117492557A (zh) |
WO (1) | WO2022226736A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113448469B (zh) * | 2021-09-01 | 2021-12-31 | 远峰科技股份有限公司 | 车载多屏显示多样化分享交互方法及装置 |
CN114138219A (zh) * | 2021-12-01 | 2022-03-04 | 展讯通信(上海)有限公司 | 一种多屏显示方法、多屏显示系统及存储介质 |
CN114546239A (zh) * | 2021-12-28 | 2022-05-27 | 浙江零跑科技股份有限公司 | 一种智能座舱副驾投屏手势操作方法 |
CN114647319A (zh) * | 2022-03-28 | 2022-06-21 | 重庆长安汽车股份有限公司 | 一种用户车内屏幕显示信息流转的方法、系统及存储介质 |
CN115097929A (zh) * | 2022-03-31 | 2022-09-23 | Oppo广东移动通信有限公司 | 车载投屏方法、装置、电子设备、存储介质和程序产品 |
CN115097970A (zh) * | 2022-06-30 | 2022-09-23 | 阿波罗智联(北京)科技有限公司 | 展示控制方法、装置、电子设备、存储介质及车辆 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140258942A1 (en) * | 2013-03-05 | 2014-09-11 | Intel Corporation | Interaction of multiple perceptual sensing inputs |
KR102091028B1 (ko) * | 2013-03-14 | 2020-04-14 | 삼성전자 주식회사 | 사용자 기기의 오브젝트 운용 방법 및 장치 |
DE102016108885A1 (de) * | 2016-05-13 | 2017-11-16 | Visteon Global Technologies, Inc. | Verfahren zum berührungslosen Verschieben von visuellen Informationen |
US20190073040A1 (en) * | 2017-09-05 | 2019-03-07 | Future Mobility Corporation Limited | Gesture and motion based control of user interfaces |
-
2021
- 2021-04-26 JP JP2023566400A patent/JP2024518333A/ja active Pending
- 2021-04-26 CN CN202311246725.4A patent/CN117492557A/zh active Pending
- 2021-04-26 CN CN202180001484.XA patent/CN113330395B/zh active Active
- 2021-04-26 WO PCT/CN2021/090009 patent/WO2022226736A1/zh active Application Filing
- 2021-04-26 EP EP21938227.2A patent/EP4318186A4/en active Pending
-
2023
- 2023-10-26 US US18/494,949 patent/US20240051394A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109491558A (zh) * | 2017-09-11 | 2019-03-19 | 上海博泰悦臻网络技术服务有限公司 | 车载系统的屏间应用交互方法及装置、存储介质和车机 |
CN108556740A (zh) * | 2018-04-17 | 2018-09-21 | 上海商泰汽车信息系统有限公司 | 多屏共享设备及方法、计算机可读介质、车载设备 |
CN109992193A (zh) * | 2019-03-29 | 2019-07-09 | 佛吉亚好帮手电子科技有限公司 | 一种车内触控屏飞屏互动方法 |
US20200406752A1 (en) * | 2019-06-25 | 2020-12-31 | Hyundai Mobis Co., Ltd. | Control system and method using in-vehicle gesture input |
CN112513787A (zh) * | 2020-07-03 | 2021-03-16 | 华为技术有限公司 | 车内隔空手势的交互方法、电子装置及系统 |
CN112486363A (zh) * | 2020-10-30 | 2021-03-12 | 华为技术有限公司 | 一种跨设备的内容分享方法、电子设备及系统 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4318186A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117156189A (zh) * | 2023-02-27 | 2023-12-01 | 荣耀终端有限公司 | 投屏显示方法及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN113330395B (zh) | 2023-10-20 |
US20240051394A1 (en) | 2024-02-15 |
EP4318186A1 (en) | 2024-02-07 |
EP4318186A4 (en) | 2024-05-01 |
CN113330395A (zh) | 2021-08-31 |
CN117492557A (zh) | 2024-02-02 |
JP2024518333A (ja) | 2024-05-01 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21938227; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2023566400; Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 202347074753; Country of ref document: IN; Ref document number: 2021938227; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2021938227; Country of ref document: EP; Effective date: 20231102 |
NENP | Non-entry into the national phase | Ref country code: DE |