WO2022226736A1 - Multi-screen interaction method, apparatus, terminal device, and vehicle - Google Patents

Multi-screen interaction method, apparatus, terminal device, and vehicle

Info

Publication number
WO2022226736A1
WO2022226736A1 (Application PCT/CN2021/090009, CN2021090009W)
Authority
WO
WIPO (PCT)
Prior art keywords
display screen
image
sub
information
gesture
Prior art date
Application number
PCT/CN2021/090009
Other languages
English (en)
French (fr)
Inventor
马瑞
郭子衡
施嘉
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN202180001484.XA priority Critical patent/CN113330395B/zh
Priority to CN202311246725.4A priority patent/CN117492557A/zh
Priority to EP21938227.2A priority patent/EP4318186A4/en
Priority to PCT/CN2021/090009 priority patent/WO2022226736A1/zh
Priority to JP2023566400A priority patent/JP2024518333A/ja
Publication of WO2022226736A1 publication Critical patent/WO2022226736A1/zh
Priority to US18/494,949 priority patent/US20240051394A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
    • B60K35/22Display screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637Details related to the display arrangement, including those related to the mounting of the display in the housing
    • G06F1/1647Details related to the display arrangement, including those related to the mounting of the display in the housing including at least an additional display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/11Instrument graphical user interfaces or menu aspects
    • B60K2360/119Icons
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/146Instrument input by gesture
    • B60K2360/14643D-gesture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/16Type of output information
    • B60K2360/164Infotainment
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/18Information management
    • B60K2360/182Distributing information between displays
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/10Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/80Arrangements for controlling instruments
    • B60K35/81Arrangements for controlling instruments for controlling displays
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00Specific applications
    • G09G2380/10Automotive applications

Definitions

  • the present application relates to the field of smart cars, and in particular, to a method, device, terminal device and vehicle for multi-screen interaction.
  • one display is used to display the vehicle's speed, fuel consumption, mileage and other information
  • one display is used to display the navigation route
  • one display is used to display music, radio, videos and other entertainment content, as well as other functions.
  • the content displayed on each display screen is fixed, and one display screen cannot display the content intended for another display screen.
  • with technologies such as screen projection and screen sharing, enabling one display screen to push its displayed content to other display screens for shared display has become an important requirement for users and a research hotspot for manufacturers.
  • the embodiments of the present application provide a method, apparatus, terminal device and vehicle for multi-screen interaction.
  • the present application provides a method for multi-screen interaction, including: acquiring sensing information, where the sensing information includes gesture information; triggering a first display screen to display a first interface image according to the gesture information, where the first interface image includes a sub-image and the movement trend of the sub-image is associated with the gesture information; and triggering the sub-image to be displayed on a second display screen.
  • in this way, gesture information is obtained from the sensing information, the display screen is triggered to display an interface image containing a sub-image according to the gesture information, and, because the sub-image is associated with the gesture information, the sub-image moves as the gesture moves. When a certain condition is met, for example the movement distance of the gesture is greater than a set threshold, the sub-image is triggered to move to another display screen, thereby realizing the multi-screen interaction function.
  • the content displayed by the sub-image is the entire content of the first interface image.
  • when the gesture information includes a two-finger grasping gesture, the content displayed by the sub-image is the interface presented by a first application in the first interface image, where the first application is the application selected by the user or currently running.
  • the first interface image further includes position information, where the position information is an identifier of at least one other display screen that can display the sub-image and orientation information of that at least one display screen relative to the first display screen.
  • in this way, the user can intuitively see in which direction to move the subsequent gesture, which facilitates the subsequent multi-screen interaction operation.
  • before the sub-image is triggered to be displayed on the second display screen, the method further includes: determining the second display screen according to first position information, second position information and stored orientation information of at least one display screen relative to the first display screen, where the first position information is the position of the gesture when the first display screen is triggered to display the first interface image, the second position information is the position of the gesture at the current moment, and the at least one display screen includes the second display screen.
  • in this way, the target display screen for interaction is determined in advance, so that when the sub-image is subsequently triggered to move to the target display screen it moves there quickly, which reduces processing time and improves user experience.
  • the determining of the second display screen includes: determining, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; comparing the first orientation information with the stored orientation information of at least one display screen relative to the first display screen; and, when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen, determining the second display screen.
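A minimal sketch (not part of the original disclosure) of one way this comparison could be implemented, assuming the stored orientation information is kept as unit direction vectors in the plane of the first display screen and using cosine similarity as the "same orientation" test; all names and values are illustrative:

```python
import math

# Stored orientation of each candidate display relative to the first (source) display,
# expressed as unit direction vectors in the source display's plane (illustrative values).
STORED_ORIENTATIONS = {
    "display_101_3": (1.0, 0.0),   # e.g. to the right of the source display
    "display_101_4": (0.0, 1.0),   # e.g. above the source display
}

def pick_second_display(first_pos, second_pos, min_cos=0.7):
    """Return the display whose stored orientation best matches the first orientation
    information (direction from first_pos to second_pos), or None if nothing matches."""
    dx, dy = second_pos[0] - first_pos[0], second_pos[1] - first_pos[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return None
    direction = (dx / norm, dy / norm)
    best_name, best_cos = None, min_cos
    for name, (ox, oy) in STORED_ORIENTATIONS.items():
        cos_sim = direction[0] * ox + direction[1] * oy
        if cos_sim > best_cos:
            best_name, best_cos = name, cos_sim
    return best_name
```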
  • the triggering of the sub-image to be displayed on the second display screen includes: triggering the sub-image to be displayed on the second display screen when it is detected that the distance between the first position information and the second position information is greater than a set threshold.
  • the triggering of the sub-image to be displayed on the second display screen includes: triggering the image indicated by the sub-image to be displayed on the second display screen.
  • when the sub-image is an image of an application program, the application program is run directly without the user's operation, and the user is not required to actively operate the display screen to run the application.
  • the size of the sub-image is smaller than the size of the first display screen, so as to prevent the sub-image from completely covering the original interface image and affecting the user's viewing of the content of the original interface image.
  • the position information is displayed on the edge of the first display screen, so that the user can more intuitively see the relative positions of other display screens.
  • the method further includes: setting the resolution of the sub-image to the resolution of the second display screen, which prevents the sub-image from failing to display on the target display screen because of a resolution mismatch.
  • the method further includes: setting the size of the sub-image to the size of the second display screen, which prevents the sub-image from failing to display on the target display screen because of a size mismatch.
  • the method further includes: reducing the sub-image so that the size of its long side is the same as the size of the long side of the second display screen, or reducing the sub-image so that the size of its short side is the same as the size of the short side of the second display screen. In this way, the long side or the short side of the sub-image can be aligned with the target display screen, so that the sub-image can be displayed on that display screen.
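A minimal sketch of such a reduction, assuming the aspect ratio of the sub-image is preserved and the smaller of the two candidate scale factors is chosen so that the whole sub-image fits on the second display screen; the function and parameter names are illustrative:

```python
def fit_sub_image(sub_w, sub_h, screen_w, screen_h):
    """Scale the sub-image so that its long (or short) side matches the corresponding
    side of the target display screen, keeping the aspect ratio unchanged."""
    scale_by_long = max(screen_w, screen_h) / max(sub_w, sub_h)
    scale_by_short = min(screen_w, screen_h) / min(sub_w, sub_h)
    # Take the smaller factor so the entire sub-image stays within the target screen.
    scale = min(scale_by_long, scale_by_short)
    return round(sub_w * scale), round(sub_h * scale)

# e.g. a 1920x720 sub-image shrunk for a 1280x720 target display
print(fit_sub_image(1920, 720, 1280, 720))
```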
  • an embodiment of the present application provides a multi-screen interaction device, including: a transceiver unit for acquiring sensing information, where the sensing information includes gesture information; and a processing unit for triggering, according to the gesture information, the first display screen to display a first interface image, where the first interface image includes a sub-image and the movement trend of the sub-image is associated with the gesture information, and for triggering the sub-image to be displayed on the second display screen.
  • the content displayed by the sub-image is the entire content of the first interface image.
  • when the gesture information includes a two-finger grasping gesture, the content displayed by the sub-image is the interface presented by a first application in the first interface image, where the first application is the application selected by the user or currently running.
  • the first interface image further includes position information, where the position information is an identifier of at least one other display screen that can display the sub-image and orientation information of that at least one display screen relative to the first display screen.
  • the processing unit is further configured to determine the second display screen, where the second display screen is determined according to first position information, second position information and stored orientation information of at least one display screen relative to the first display screen; the first position information is the position of the gesture when the first display screen is triggered to display the first interface image, the second position information is the position of the gesture at the current moment, and the at least one display screen includes the second display screen.
  • the processing unit is specifically configured to determine, according to the first position information and the second position information, first orientation information of the second position information relative to the first position information; to compare the first orientation information with the stored orientation information of at least one display screen relative to the first display screen; and, when the first orientation information is the same as the orientation information of the second display screen relative to the first display screen, to determine the second display screen.
  • the processing unit is specifically configured to trigger the sub-image to be displayed on the second display screen when it is detected that the distance between the first position information and the second position information is greater than a set threshold.
  • the processing unit is specifically configured to trigger the image indicated by the sub-image to be displayed on the second display screen.
  • the size of the sub-image is smaller than the size of the first display screen.
  • the location information is displayed on the edge of the first display screen.
  • the processing unit is further configured to set the resolution of the sub-image to the resolution of the second display screen.
  • the processing unit is further configured to set the size of the sub-image to the size of the second display screen.
  • the processing unit is further configured to reduce the sub-image so that the size of its long side is the same as the size of the long side of the second display screen, or to reduce the sub-image so that the size of its short side is the same as the size of the short side of the second display screen.
  • an embodiment of the present application provides a multi-screen interaction system, including: a processor and at least two display screens, configured to execute each possible implementation of the first aspect.
  • an embodiment of the present application provides a vehicle, including: at least one camera, at least two display screens, at least one memory, and at least one processor for executing the various possible implementations of the first aspect.
  • embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute the various possible implementations of the first aspect.
  • an embodiment of the present application provides a computing device, including a memory and a processor, where the memory stores executable code, and when the processor executes the executable code, the various possible implementations of the first aspect are carried out.
  • in a seventh aspect, a computing device includes: a processor and an interface circuit, where the processor is coupled to a memory through the interface circuit, and the processor is configured to execute program code in the memory to implement the technical solution provided by any aspect, or any possible implementation, of the second aspect to the fourth aspect.
  • in this application, when the driver wants to push the content displayed on one display screen to another display screen, the driver is not required to click a specific button on the display screen or drag the target content across the display screen.
  • instead, the driver only needs to put a hand next to the source display screen quite naturally and open the palm.
  • after the activation gesture is recognized, the source display screen enters the active fly-screen state as the fingers are gradually drawn together; the whole process is equivalent to grasping the current screen by hand. The hand then only needs to move in the direction indicated by the interface prompts, towards the display screen to which the content should fly, and when the moving distance reaches the threshold the fingers are opened to perform the fly-screen operation.
  • the whole process is equivalent to grabbing the source display screen and throwing it to the target display screen.
  • the operation is simple and does not require much of the driver's attention, so the multi-screen interaction function can be realized while ensuring the safety of the vehicle during driving.
  • FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
  • Figure 2(a) is a schematic position diagram of a display screen and a camera set in a front seat of a vehicle according to an embodiment of the present application;
  • FIG. 2(b) is a schematic position diagram of a display screen and a camera set in a rear seat of a vehicle according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart for realizing a method for multi-screen interaction according to an embodiment of the present application
  • FIG. 4( a ) is a schematic diagram of a gesture change for realizing five-finger grasping and realizing the screen projection function of a display screen according to an embodiment of the present application;
  • FIG. 4(b) is a schematic diagram of a gesture change for realizing a two-finger grasping and application sharing function provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of recognizing a gesture in an image according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a source display screen display image and a corresponding gesture diagram when a sub-image is obtained in the process of realizing five-finger grabbing and realizing the screen projection function of the display screen provided by the embodiment of the application;
  • FIG. 7 is a schematic diagram of an image displayed on a source display screen and a corresponding gesture when a sub-image is acquired in the process of realizing two-finger grabbing and implementing an application sharing function according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a display effect after part of the sub-image is moved out of the upper part of the source display screen according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a display effect after a part of the sub-images is moved into a target display screen according to an embodiment of the present application.
  • FIG. 10 is one of the schematic diagrams of the display effect of the sub-image displayed on the target display screen according to the embodiment of the present application.
  • 11 is the second schematic diagram of the display effect of the sub-image displayed on the target display screen according to the embodiment of the application.
  • FIG. 12 is the third schematic diagram of the display effect of the sub-image displayed on the target display screen according to the embodiment of the application.
  • FIG. 13 is a schematic diagram of a moving effect of two sub-images when a source display screen performs multi-screen sharing to two target display screens at the same time according to an embodiment of the present application;
  • FIG. 14 is a schematic diagram of a gesture and a sub-image movement process of five-finger grasping to realize the screen projection function of a display screen according to an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of a device for multi-screen interaction provided by an embodiment of the present application;
  • FIG. 16 is a schematic structural diagram of another apparatus for multi-screen interaction provided by an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
  • the vehicle 100 includes at least two display screens 101 (e.g., display screens 101-1 to 101-N shown in FIG. 1), at least one camera 102 (e.g., cameras 102-1 to 102-M shown in FIG. 1), a processor 103, a memory 104 and a bus 105.
  • the display screen 101 , the camera 102 , the processor 103 and the memory 104 can establish a communication connection through the bus 105 .
  • where N is an integer greater than 1 and M is a positive integer.
  • the type of the display screen 101 may include one or more of a touch-sensitive display screen or a non-touch-type display screen.
  • the display screen 101 may be all touch-sensitive display screens, or all non-touch-type displays.
  • the display screen 101 may include two types of display screens, a touch-sensitive display screen and a non-touch-sensitive display screen.
  • the display screen 101 can be used to display instrument data such as fuel level, vehicle speed, and mileage, as well as data such as navigation routes, music, videos, and images (such as images of the surrounding environment of the vehicle).
  • the non-touch display can be used to display instrument data such as fuel level, vehicle speed, and mileage
  • the touch display can display navigation routes, music, videos, images, and other data.
  • the touch-sensitive display screen can also be used to display instrument data such as fuel level, vehicle speed, and mileage.
  • the positional arrangement of the display screen 101 may be set with reference to FIG. 2( a ) or FIG. 2( b ).
  • in the front row of the vehicle 100 shown in FIG. 2(a), a display screen 101-1 can be set in the middle of the steering wheel and used to display buttons such as volume buttons, play/pause buttons and answer/hang-up buttons for playing music and handling calls, so that the driver can operate more conveniently while driving, switching of the driver's line of sight and body angle is reduced, and driving safety is improved.
  • a display screen 101-2 is provided on the vehicle body under the windshield and above the steering wheel of the vehicle 100 to display data such as fuel quantity, vehicle speed and mileage; a display screen 101-3 is provided between the driver's seat and the passenger seat.
  • a display screen 101-4 can be set on the car body in front of the passenger seat and used to display any content that the passenger in that seat wants to watch.
  • a display screen 101-5 and a display screen 101-6 can be provided at the upper position behind each front seat, and these two display screens can display any content that rear passengers want to watch, such as movies, navigation information, weather, etc.
  • the quantity and positions of the display screens 101 in the embodiments of the present application are not limited to the quantity and positional relationship shown in FIG. 2(a) and FIG. 2(b); the figures are given here only as examples for the reader's understanding.
  • the specific number of the display screens 101 and the positions provided on the vehicle 100 are determined according to the actual situation.
  • Camera 102 may be used to capture still images or video. For example, an object is projected through a lens to generate an optical image onto a photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to a processor (such as an image processor) to convert it into a digital image signal.
  • the digital image signal is output to a digital signal processor (DSP) for processing.
  • the DSP converts the digital image signal into image signals in standard formats such as red-green-blue (RGB) or luminance-chrominance (YUV).
  • the camera 102 may be disposed at different positions in the vehicle, and the camera 102 may be used in the vehicle to collect body information (eg, palm information) of the user in the vehicle 100 .
  • the working mode of the camera 102 may be periodic photographing, or continuous photographing to obtain a video stream.
  • the present application describes the technical solution of the present application by taking periodic photos as an example.
  • the positions of the cameras 102 on the vehicle 100 are set as shown in FIG. 2( a ) and FIG. 2( b ).
  • a camera 102-1 can be set on the rear-view mirror, and its shooting range covers the driver's seat and the passenger's seat, and is used to collect the palm information of the user in the driver's seat and the passenger's seat;
  • a camera 102-3 and a camera 102-4 can be arranged above the display screen 101-5 and the display screen 101-6, respectively, and a camera 102-2 and a camera 102-5 are respectively located on the two rear door frames near the front seats; these cameras are used to collect palm information of users in the rear seats.
  • the number and positions of the cameras 102 in the embodiments of the present application are not limited to those shown in FIG. 2(a) and FIG. 2(b); the figures are given here only as examples for the reader's understanding.
  • the specific number of cameras 102 and the positions provided on the vehicle 100 are determined according to the actual situation.
  • the palm information collected by the camera 102 mainly includes the gesture presented by the five fingers and the position information of the palm relative to the camera 102. If this application adopts the binocular ranging principle to determine the position information of the palm, two cameras need to be set at each position.
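For reference, the binocular ranging principle mentioned here reduces to the standard stereo relation depth = f · B / d. A minimal illustrative sketch, assuming camera calibration and stereo matching are handled elsewhere (all numbers are examples):

```python
def binocular_depth(focal_px, baseline_m, disparity_px):
    """Estimate the palm's depth from two cameras using depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. focal length 900 px, 6 cm baseline, 120 px disparity -> depth = 0.45 m
print(binocular_depth(900.0, 0.06, 120.0))
```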
  • the processor 103 can be a vehicle-mounted central control unit, a central processing unit (CPU), a cloud server, or the like, and is used to process the image collected by the camera 102, identify the gesture category of the user's palm in the image and the corresponding palm position information, and then control the content displayed on one display screen to move to another display screen.
  • the memory 104 may include volatile memory, such as random-access memory (RAM); the memory 104 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD) or a solid state drive (SSD); the memory 104 may also include a combination of the above-mentioned types of memory.
  • the memory 104 stores not only the images collected by the camera 102, the database of gesture categories, and the like, but also the various instructions and application programs corresponding to the multi-screen interaction method.
  • the multi-screen interaction method may be performed by a computing device, by a processing device applied in the computing device, or by the vehicle shown in FIG. 1.
  • the computing device may be a terminal, for example, a car, an in-vehicle device (such as a car machine, an in-vehicle processor, an in-vehicle computer, etc.), or a cloud device such as a server.
  • the processing device may be a chip, a processing circuit, a processor, or the like. For the convenience of description, this application takes the processor executing the multi-screen interaction method as an example. Referring to FIG. 3, the multi-screen interaction method includes:
  • S301: Acquire sensing information.
  • the sensing information may include information obtained through sensors, for example, gesture information, environmental information, etc. obtained through one or more of camera sensors and radar sensors; environmental sound information, such as user voice commands, may also be obtained through sound sensors.
  • the above-mentioned sensing information may be information collected by a sensor, or may be information processed by one or more devices such as a sensor and a processor, for example, noise reduction processing is performed on image information by a sensor.
  • the cockpit camera can detect specific body movements, such as gestures (which can also be understood as wake-up gestures), etc., to trigger the opening of the multi-screen interaction function.
  • the multi-screen interaction function can also be triggered by means of voice wake-up, clicking the virtual button on the display screen, etc. to wake up.
  • gestures such as wake-up gesture and operation gesture can be implemented in various ways, such as left swipe gesture, right swipe gesture, palm hovering, etc., which can be dynamic gestures (or dynamic trend gestures) or static gestures .
  • the processor for gesture recognition can acquire sensor information through the interface circuit, and determine the gesture or action made by the current user.
  • the present application introduces gestures in two types of solutions: "five-finger grasping to realize screen projection" and "two-finger grasping to realize application sharing". It should be understood that the present application is not limited to these two solutions.
  • for five-finger grasping to realize screen projection of the display screen, the gesture action is introduced as follows.
  • when starting to grasp, the user first spreads the five fingers and approaches the source display screen; when grasping, the five fingers are gradually drawn together; during the projection process, the five fingers held together move from the position of the source display screen towards the position of the target display screen, then gradually approach the target display screen, and the five fingers are gradually spread again.
  • for two-finger grasping to realize application sharing, the gesture action is introduced as follows.
  • when starting to grasp, the user first spreads the five fingers (or spreads the thumb and index finger while bending the other fingers towards the palm) and approaches the source display screen; when grasping, the thumb and index finger are gradually brought together and the other fingers are bent towards the palm; during the projection process, the two fingers held together move from the position of the source display screen to the position of the target display screen, then approach the target display screen, and the thumb and index finger are gradually opened again.
  • the gesture for enabling the multi-screen interaction function in the embodiment of the present application may be the above-mentioned gesture of “spreading the five fingers”, or the gesture of “spreading the five fingers and gradually approaching the display screen”, or other gestures.
  • the following takes the above gesture as an example to introduce.
  • the gesture information can be determined from images (or a video stream) from the camera, for example by identifying the user's fingers in the image and the gesture presented by the fingers through a gesture recognition algorithm; it can also be obtained from radar information, for example by obtaining a 3D point cloud image, extracting the finger features of the person in the image through a neural network, and then determining the gesture presented by the fingers according to the finger features. Other information can also be used for the determination, which is not limited in this application; the image or video stream captured by the camera is used as an example below to describe the technical solution of the present application.
  • the sensor information of the corresponding position is obtained according to the position of the source display screen.
  • the sensing information of the corresponding position may be information collected or processed by sensors such as cameras and radars whose collection range covers the control position of the source display screen.
  • after the processor receives the instruction for enabling the interaction function sent by the display screen 101-2, the processor starts only the corresponding camera, which avoids turning on all the cameras in the vehicle, reduces the power consumption of the vehicle, and at the same time protects the privacy of passengers in other seats of the vehicle 100.
  • the processor After receiving the sensor information, the processor performs hand target detection and gesture recognition on it, so as to recognize the gesture presented by the user's palm.
  • the processor receives an image including a palm collected by the camera, as shown in FIG. 5
  • the collected image may be preprocessed first, where the preprocessing may include denoising and information enhancement on the image.
  • the processor recognizes a specific gesture such as a wake-up gesture or an operation gesture in an acquired image or a frame of an image in a video stream.
  • the camera continuously captures images or a video stream, and the processor also continuously acquires images or the video stream for processing. Therefore, to indicate the timing of the acquired images or of the images in the video stream, in this description the processor numbers the acquired images in time order, and the image acquired at this moment is defined as the ith image, or the jth frame image in the video stream, where i and j are both positive integers greater than 0.
  • the specific gesture is a dynamic trend gesture.
  • for example, the processor can compare the distances between the fingers in the image obtained at the current moment with those in the image obtained at the previous moment: if the distance between the five fingers gradually decreases, this indicates the "five fingers drawing together" gesture; if the distance between the thumb and index finger gradually decreases while the middle finger, ring finger and little finger are bent, this indicates the "thumb and index finger gradually drawing together with the other fingers bent towards the palm" gesture.
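A minimal sketch of such a frame-to-frame comparison, assuming a hand-keypoint model already supplies the fingertip spread and which fingers are bent towards the palm; the inputs and names are illustrative, not part of the disclosure:

```python
def classify_grab_gesture(prev_spread, curr_spread, bent_fingers):
    """Classify the dynamic grab gesture from two consecutive frames.

    prev_spread / curr_spread: average distance between fingertips in the previous
    and current frames; bent_fingers: set of finger names detected as bent towards
    the palm in the current frame.
    """
    closing = curr_spread < prev_spread
    if closing and {"middle", "ring", "little"} <= bent_fingers:
        # thumb and index drawing together, other fingers bent: share one application
        return "two_finger_grab"
    if closing:
        # all five fingers drawing together: project the whole screen
        return "five_finger_grab"
    return None

print(classify_grab_gesture(0.12, 0.08, {"middle", "ring", "little"}))  # two_finger_grab
```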
  • the processor When the processor detects a specific gesture, it controls the source display screen to enter a multi-screen interaction state, so that the source display screen will extract the displayed interface or the interface presented by the displayed application program according to the change of the gesture.
  • the processor can also use a hand three-dimensional (3D) positioning algorithm to calculate the spatial coordinates (x0, y0, z0) of the specific gesture relative to the camera according to the image obtained by the camera.
  • when the camera is a time-of-flight (TOF) camera, a binocular camera or another camera capable of measuring depth, the processor uses the internal and external parameters of the camera to directly calculate the spatial coordinates of the specific gesture relative to the optical center of the camera.
  • the TOF camera can continuously send light pulses to the target object, and use the sensor to receive the light returned from the target object, and obtain the distance of the target object based on the flight round-trip time of the detected light pulse.
  • the processor can calculate the depth or distance of the gesture through a depth estimation method.
  • several possible depth estimation methods are given below as examples, which are not limited in this application, for example:
  • a monocular depth estimation network model based on deep learning: after the image captured by the camera is acquired, the deep-learning monocular depth estimation network model can directly estimate depth for the monocular camera, thereby acting as a virtual 3D camera that makes up for the absence of a hardware 3D camera, so as to predict the depth information of the specific gesture in the next frame or frames of images in the sensing information.
  • the processor calculates the spatial coordinates of the specific gesture relative to the optical center of the camera according to the image obtained by the camera and the predicted depth information of the image, using the internal and external parameter data of the camera.
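A minimal sketch of this back-projection under the usual pinhole camera model, assuming the gesture's pixel position and an estimated depth are available; the intrinsic parameter values below are illustrative:

```python
def pixel_to_camera_coords(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with estimated depth into coordinates (x0, y0, z0)
    relative to the camera's optical center, using the camera intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# e.g. a gesture detected at pixel (640, 360) with an estimated depth of 0.45 m
x0, y0, z0 = pixel_to_camera_coords(640, 360, 0.45, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
```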
  • S302: Trigger the first display screen to display the first interface image according to the gesture information.
  • the first display screen is the source display screen
  • the first interface image is the interface image displayed on the source display screen
  • the first interface image also includes a sub-image
  • the sub-image may be a reduced version of the first interface image, or may be an icon of an APP, etc.
  • the displayed content may be the entire screen displayed by the first interface image, or may be part of the image displayed by the first interface image.
  • after the processor detects a specific gesture, it can convert part or all of the picture presented by the source display screen at the current moment into a sub-image smaller than the size of the source display screen, or convert an APP icon displayed on the source display screen into a sub-image, which is then displayed on the source display screen, for example in the middle position, left position, etc. of the source screen.
  • the processor may regenerate a sub-image whose size is smaller than the size of the source display screen, and the displayed content is synchronized with the interface displayed on the original source display screen.
  • a time bar may be set above the source display screen, and the middle part is the area where the video being played is located.
  • the processor cuts the entire interface displayed on the source display screen at the current moment (including the time bar and the region where the video is located) into an image and displays it in the middle of the source display screen.
  • the processor cuts the area where the video is displayed on the source display screen at the current moment into an image and displays it in the middle of the source display screen.
  • priorities can be set for applications, for example setting the navigation APP as the first priority, video, music, radio and other APPs as the second priority, and basic APPs such as time and weather forecast as the third priority.
  • the content of the highest-priority application displayed on the source display screen is converted into the sub-image, to ensure that the content displayed by the sub-image is what the user wants to push.
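A minimal sketch of such a priority-based selection; the priority table and application names are illustrative only:

```python
# Smaller number = higher priority (illustrative values).
APP_PRIORITY = {"navigation": 1, "video": 2, "music": 2, "radio": 2, "time": 3, "weather": 3}

def pick_app_for_sub_image(running_apps):
    """Pick the highest-priority application currently shown on the source display
    screen; its content is the one converted into the sub-image."""
    return min(running_apps, key=lambda app: APP_PRIORITY.get(app, 99))

# e.g. with music and navigation both on screen, the navigation APP is pushed
assert pick_app_for_sub_image(["music", "navigation"]) == "navigation"
```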
  • the present application is not limited to the above two solutions, and other solutions are also possible.
  • the size of the sub-image displayed on the source display screen can be a fixed value or a variable value.
  • the processor continuously acquires the images collected by the camera in subsequent times, and if the distance between the fingers in the identified specific gesture continues to decrease, the size of the sub-image displayed on the source display screen can be controlled to also decrease continuously.
  • the processor controls the source display screen to display not only the sub-image but also position information, such as the identifiers of other display screens that can receive the sub-image and the orientation information of those display screens relative to the source display screen. Since the position of each display screen is fixed when the vehicle leaves the factory, the orientation information of the other display screens around each display screen is fixed, and the orientation information between display screens can be stored in the memory in advance.
  • the orientation information between the display screens can be represented by a vector formed between the positions of the two display screens.
  • for example, the center of the display screen 101-2 is taken as the origin (X0, Y0, Z0) of the spatial coordinate system; the coordinates (X, Y, Z) of every other display screen in this coordinate system are determined and stored, and the vector M from the origin to each other display screen is then calculated; the calculated vector M is used as the orientation of that display screen relative to the display screen 101-2.
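A minimal sketch of how such an orientation table might be pre-computed and stored, assuming the display centers are known in a common vehicle coordinate frame; the coordinate values below are illustrative:

```python
import numpy as np

# Display centers in a common vehicle coordinate frame (illustrative values, in meters).
DISPLAY_CENTERS = {
    "101-2": np.array([0.0, 0.0, 0.0]),   # source display, taken as the origin
    "101-3": np.array([0.4, -0.1, 0.0]),
    "101-4": np.array([0.9, -0.1, 0.0]),
}

def orientation_table(source="101-2"):
    """For the source display, compute the unit vector M pointing towards every other
    display; the table can be stored in memory when the vehicle leaves the factory."""
    origin = DISPLAY_CENTERS[source]
    table = {}
    for name, center in DISPLAY_CENTERS.items():
        if name == source:
            continue
        m = center - origin
        table[name] = m / np.linalg.norm(m)
    return table

print(orientation_table())
```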
  • the processor can control the position information of other display screens to be displayed on the source display screen, so that the user can intuitively know how to move the sub-image to the target display screen.
  • for example, a pattern representing each display screen can be virtualized on the source display screen, the orientation and name of each display screen relative to the source display screen can be displayed in text at the same time, and the pattern corresponding to each display screen can be set on the edge of the source display screen, close to the corresponding physical display screen, according to the orientation relationship.
  • after the processor controls the source display screen to generate the sub-image and present it in the middle of the source display screen, a mapping relationship is established between the position of the sub-image on the source display screen at this moment and the spatial coordinates of the gesture obtained from the camera image at the corresponding moment, so that when the gesture subsequently moves, the sub-image moves in the corresponding direction, and by a proportional distance, relative to the position of the sub-image at the current moment, according to the direction and distance of the gesture movement.
  • the processor may receive every image or the video stream captured by the camera in real time, or may receive images or video streams captured by the camera at regular intervals, which is not limited in this application. Therefore, the "next image or next frame of image" mentioned below does not necessarily mean two consecutive images or frames captured by the camera; there may be several or dozens of images (frames) in between.
  • after the processor detects the specific gesture, if the specific gesture is not continuously detected in the subsequent one or more frames of images, the processor can control the source display screen to stop displaying the sub-image and the position information. For example, the processor may not recognize the specific gesture in the next frame or frames after the specific gesture is detected for the first time, perhaps because the user's gesture has changed or the gesture has moved outside the shooting range of the camera; if there is no specific gesture in the next frame or frames, the processor can send a control instruction to the source display screen to control the source display screen not to display the sub-image and position information.
  • in some embodiments, the processor determines to trigger the function corresponding to the specific gesture only after detecting the specific gesture more than a certain number of times. For example, if the processor acquires the specific gesture multiple times in the sensing information within a preset time, the processor may consider that a valid gesture has been acquired and trigger the corresponding function. For another example, if the specific gesture is detected in multiple frames of the images acquired by the camera within a preset time and the count exceeds a preset threshold, the processor sends control instructions to the source display screen so that the source display screen displays the sub-image and position information; if the number of images including the specific gesture is detected to be less than the preset threshold, the processor does not perform subsequent operations, and the detection results are discarded or deleted.
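A minimal sketch of such gesture debouncing over a sliding time window; the frame count and window length are illustrative:

```python
from collections import deque
import time

class GestureDebouncer:
    """Report a valid gesture only after it has been detected in at least `min_hits`
    frames within the last `window_s` seconds."""

    def __init__(self, min_hits=5, window_s=1.0):
        self.min_hits = min_hits
        self.window_s = window_s
        self.hits = deque()

    def update(self, gesture_detected, now=None):
        now = time.monotonic() if now is None else now
        if gesture_detected:
            self.hits.append(now)
        # Drop detections that have fallen outside the sliding window.
        while self.hits and now - self.hits[0] > self.window_s:
            self.hits.popleft()
        return len(self.hits) >= self.min_hits
```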
  • if the processor detects that the next frame or subsequent frames of images also include the specific gesture, and the position of the specific gesture has changed compared with the previous frame or the image in which the specific gesture was detected for the first time (that is, the ith image or the jth frame of the video stream), the movement can be calculated from the spatial coordinates (x0, y0, z0) of the specific gesture in that earlier image and the spatial coordinates of the specific gesture calculated from the next frame or subsequent frames.
  • the sub-image is then triggered to be displayed on the second display screen, where the condition for triggering the display of the sub-image on the second display screen is that the detected distance between the first position information and the second position information is greater than a set threshold.
  • the first position information refers to the spatial coordinates of the specific gesture relative to the camera in the image in which the specific gesture was first detected (that is, the i-th image or the j-th frame of the video stream), and the second position information refers to the spatial coordinates of the specific gesture relative to the camera in an image one or more frames later (that is, the (i+n)-th image or the (j+n)-th frame of the video stream, where n is a positive integer greater than zero).
  • for example, after the processor calculates the movement vector m of the specific gesture between the first-detection image and the later image captured by the camera, it combines the normal vector n of the source display screen plane to calculate the translation vector m1, which is the projection of the movement vector m onto the source display screen plane. The processor can then determine the moving direction of the sub-image on the source display screen from the direction of the translation vector m1, and can determine the moving distance of the sub-image from the modulus of m1 together with a preset proportional relationship between that modulus and the moving distance on the source display screen. Finally, the processor can control the sub-image on the source display screen to move from the center of the plane along the direction of the translation vector m1 by a distance proportional to the modulus of m1, so that the sub-image moves with the movement of the user's gesture.
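As a sketch, the projection can be computed as m1 = m - (m·n_hat)·n_hat, where n_hat is the unit normal of the source display screen plane; the helper names and the pixels-per-meter factor below are assumptions for illustration:

```python
import numpy as np

def project_to_screen_plane(m: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Project the 3-D movement vector m onto the screen plane with normal n."""
    n_hat = n / np.linalg.norm(n)
    return m - np.dot(m, n_hat) * n_hat        # remove the out-of-plane component

def sub_image_step(m: np.ndarray, n: np.ndarray, pixels_per_meter: float = 800.0):
    """Return (unit direction, distance in pixels) for the sub-image on the source screen."""
    m1 = project_to_screen_plane(m, n)
    length = np.linalg.norm(m1)
    if length == 0.0:
        return np.zeros(3), 0.0
    return m1 / length, pixels_per_meter * length

# Example: the hand moved 10 cm to the left and slightly toward the screen.
direction, distance_px = sub_image_step(np.array([-0.10, 0.0, 0.02]),
                                        np.array([0.0, 0.0, 1.0]))
```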
  • as the user's specific gesture keeps moving, the processor can send a control instruction to the source display screen for each received image processing result, so that the sub-image is displayed at successive positions on the source display screen and thus moves with the specific gesture. If the processor detects that, as the sub-image moves with the specific gesture, part or all of its interface is no longer displayed on the source display screen, the processor can at the same time send a control instruction to the target display screen so that the target display screen displays the part of the sub-image interface that is not shown on the source display screen.
  • optionally, if the sub-image is an APP icon, the processor can determine, according to the distance moved by the specific gesture, when the sub-image should be moved to the target display screen for display; the processor then sends a control instruction to the target display screen, and the control instruction is used to make the target display screen display the APP icon, or to make the target display screen display the interface of the APP after it runs.
  • as for the condition used to decide that the sub-image should be moved from the source display screen to the target display screen, this application provides two judgment conditions as examples, which are not limiting, for example:
  • 1. The modulus of the translation vector m1. A threshold is preset in this application. After determining the target display screen according to the movement vector m, the processor can calculate the modulus of the translation vector m1; if the modulus is greater than the set threshold, the sub-image is moved to the target display screen; if the modulus is not greater than the set threshold, the sub-image continues to move on the source display screen.
  • 2. The center point of the sub-image moves out of the source display screen. This application takes the center point of the sub-image leaving the source display screen as the boundary: as the sub-image moves with the specific gesture, the processor can detect the position of the sub-image on the source display screen in real time. If the center point of the sub-image is detected to be on the border of the source display screen or no longer on the source display screen, half of the sub-image's area has already moved off the source display screen, and the sub-image is moved to the target display screen; if the center point of the sub-image is detected to still be on the source display screen, less than half of the sub-image has moved off, and the sub-image continues to move on the source display screen.
  • of course, the present application can also use the area of the sub-image that has moved out, the size of the long side of the sub-image, and so on as the judgment condition, which is not limited here.
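The two example judgment conditions can be sketched as follows; the threshold value and the screen-coordinate representation are assumptions for illustration:

```python
import numpy as np

MOVE_THRESHOLD_M = 0.15   # assumed threshold on the modulus of m1, in meters

def should_fly_by_modulus(m1: np.ndarray, threshold: float = MOVE_THRESHOLD_M) -> bool:
    """Condition 1: the projected movement exceeds a preset modulus threshold."""
    return float(np.linalg.norm(m1)) > threshold

def should_fly_by_center(sub_center_x: float, sub_center_y: float,
                         screen_w: int, screen_h: int) -> bool:
    """Condition 2: the sub-image center reaches the border of, or leaves, the source screen."""
    return (sub_center_x <= 0 or sub_center_x >= screen_w or
            sub_center_y <= 0 or sub_center_y >= screen_h)
```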
  • when the sub-image is moving on the source display screen and its center point has not yet moved off the source display screen, the effect displayed on the interface of the source display screen at this time is shown in FIG. 8.
  • on the target display screen, the part of the sub-image that has moved off the source display screen may be hidden, or it may be displayed, as shown in FIG. 9, so that the user can see more intuitively which display screen the sub-image is being moved to and the effect of the movement.
  • if the resolution of the target display screen differs from that of the source display screen, after the sub-image moves to the target display screen the processor can increase or decrease the resolution of the sub-image to the resolution of the target display screen and then send it to the target display screen for display, so that the sub-image is not displayed abnormally after moving to the target display screen.
  • if the size or aspect ratio of the sub-image differs from that of the target display screen, or the size or aspect ratio of the source display screen differs from that of the target display screen, when the processor determines that the sub-image is to be moved to the target display screen it can compare the sizes or aspect ratios of the two and adjust the size or aspect ratio of the sub-image so that the sub-image can be displayed normally on the target display screen.
  • for example, if the overall size, long-side size, and short-side size of the target display screen are all larger than those of the source display screen, the processor can keep the original size of the sub-image and display it on the target interface, as shown in FIG. 10, or enlarge the sub-image so that it is displayed on the target display screen at its maximum size, as shown in FIG. 11; if the overall size, long-side size, and short-side size of the target display screen are all smaller than those of the source display screen or the sub-image, the processor can reduce the size of the sub-image so that it can be displayed normally on the target display screen; if only one of the long-side and short-side sizes of the target display screen is smaller than that of the source display screen or the sub-image, the processor can reduce the sub-image until its long-side size matches the long-side size of the target display screen, or reduce it until its short-side size matches the short-side size of the target display screen, and then display it on the target display screen, as shown in FIG. 12.
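A minimal aspect-preserving fit in the spirit of the adjustments above might look like this; it assumes the sub-image keeps its aspect ratio, which is only one of the policies described:

```python
def fit_sub_image(sub_w: int, sub_h: int, dst_w: int, dst_h: int) -> tuple:
    """Scale (sub_w, sub_h) so it fits inside (dst_w, dst_h), preserving aspect ratio."""
    if sub_w <= dst_w and sub_h <= dst_h:
        return sub_w, sub_h                    # target is large enough: keep the original size
    scale = min(dst_w / sub_w, dst_h / sub_h)  # shrink until the limiting side matches
    return round(sub_w * scale), round(sub_h * scale)

# Example: a 1920x720 sub-image moved to a 1280x720 target display.
print(fit_sub_image(1920, 720, 1280, 720))     # -> (1280, 480)
```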
  • after the sub-image has moved to the target display screen, the camera images received by the processor should no longer include the specific gesture; for the user, the gesture should change from the specific gesture to a non-specific gesture such as "spread five fingers" or "spread the thumb and index finger". Exemplarily, if the processor does not receive a camera image including the specific gesture within a set period of time, or detects a gesture such as "spread five fingers and gradually approach the target display screen", or receives three consecutive taps on the target display screen, the "multi-screen interaction" function can be turned off to save power consumption and computing cost.
  • after the sub-image has moved to the target display screen, if a camera image received by the processor still includes the specific gesture, it can be taken by default as the user sharing the content displayed on the source display screen across screens a second time, and the processor may perform the process of steps S301 to S303 again.
  • if, while performing steps S301 to S303, the processor detects two or more specific gestures in the images captured by the camera, it can consider that the user is sharing the content displayed on the source display screen to two or more target display screens; the processor may first mark the specific gestures and then execute the process of steps S301 to S303 for the marked gestures, so that the content displayed on the source display screen is shared to multiple target display screens at the same time.
  • exemplarily, as shown in FIG. 13, when the processor shares the content displayed on the source display screen to two target display screens, the source display screen can simultaneously display the sub-images corresponding to the two specific gestures, together with information such as the directions in which the sub-images move, so that the user can intuitively see the moving process of the two sub-images.
  • in the embodiments of this application, after the processor detects that an image captured by the camera includes the specific gesture, it can convert the content displayed on the source display screen at that moment into a sub-image, and display on the source display screen the identifiers of the other display screens that can display the sub-image and their orientation information relative to the source display screen, so that the user can intuitively see the movement direction for the subsequent gesture. The processor then determines the target display screen according to the movement direction of the specific gesture and controls the sub-image on the source display screen to move with the movement of the specific gesture; when it detects that the distance moved by the specific gesture is greater than the set threshold, it moves the sub-image to the target display screen, thereby implementing the multi-screen interaction function.
  • FIG. 14 is a schematic diagram of the gesture and the sub-image moving process when five-finger grasping is used to implement the screen projection function of a display screen according to an embodiment of the present application.
  • as shown in FIG. 14, in one process in which the driver pushes the content displayed on the source display screen to the target display screen, the changes and movements of the gesture are as follows:
  • in the state shown in FIG. 14(a), the processor executes step S301 in the flowchart shown in FIG. 3 to obtain the image captured by the camera and recognize it; the source display screen does not yet display the sub-image or information such as the identifiers and orientation information of the other display screens; the target display screen does not display the sub-image either; and the user is spreading five fingers and approaching the source display screen.
  • in the state shown in FIG. 14(b), the processor executes step S302 in the flowchart shown in FIG. 3, and after detecting that the image captured by the camera includes the specific gesture, it controls the source display screen to display the sub-image and information such as the identifiers and orientation information of the other display screens; the source display screen now shows the sub-image reduced to the center of the interface, together with the identifiers and orientation information of the other display screens on the interface; the target display screen does not display the sub-image yet; and the user's gesture is transitioning from spread fingers to five fingers gradually brought together.
  • in the state shown in FIG. 14(c), the processor executes step S302 in the flowchart shown in FIG. 3, and after detecting the movement of the specific gesture, it determines the target display screen according to the moving direction and distance and controls the sub-image on the source display screen to move with the movement of the specific gesture; the source display screen shows the sub-image moving; the target display screen does not display the sub-image (or displays part of it); and the user's gesture moves from the source display screen toward the target display screen.
  • in the state shown in FIG. 14(d), the processor executes step S303 in the flowchart shown in FIG. 3, and after detecting that the distance moved by the specific gesture is greater than the set threshold, it controls the sub-image to move to the target display screen; the source display screen no longer displays the sub-image or information such as the identifiers and orientation information of the other display screens; the target display screen now displays the sub-image; and the user's gesture moves out of the camera's shooting area (or the fingers are spread open).
  • in terms of the processor's internal execution, after the multi-screen interaction function is enabled, the processor continuously recognizes the images captured by the camera.
  • first, a hand target detector is run, and the hands detected by the target detector are further classified to determine whether a fly-screen trigger gesture exists (that is, the specific gesture mentioned above, the "five fingers open" gesture). If a trigger gesture exists, a target tracking algorithm is run on it to track the position and state of the triggering hand, and a prompt that the fly-screen can be activated is displayed on the source display screen. If the tracked hand switches to the activation gesture (that is, the specific gesture mentioned above, "five fingers close together" or "two fingers close together"), the fly-screen is activated according to the activation gesture, and the flow enters step S302 in FIG. 3: the source display screen displays the target directions available for the fly-screen. If the activating hand then moves, the movement vector (x-x0, y-y0, z-z0) is calculated from the current hand coordinates (x, y, z), and the translation vector m1 of the projection onto the screen plane is calculated from the normal vector n of the screen plane; the source display screen shows the movement effect according to the direction and modulus of the m1 vector. If the moving distance, that is, the modulus of m1, is greater than the threshold, the fly-screen action toward the target display screen is performed; if the activation gesture is abandoned while the distance is still below the threshold, the current fly-screen operation can be canceled.
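Putting the stages above together, a simplified state machine for the fly-screen flow might look as follows; the state names, gesture labels, and threshold are assumptions for illustration, and the hand detector, gesture classifier, and tracker are outside the scope of this sketch:

```python
import numpy as np

IDLE, TRIGGERED, ACTIVATED = "idle", "triggered", "activated"
FLY_THRESHOLD_M = 0.15            # assumed threshold on the modulus of m1, in meters

class FlyScreenController:
    """Simplified trigger -> activate -> move -> fly/cancel flow for one hand."""

    def __init__(self, screen_normal):
        self.state = IDLE
        self.n = np.asarray(screen_normal, dtype=float)
        self.anchor = None        # hand coordinates (x0, y0, z0) captured at activation

    def step(self, gesture: str, hand_xyz):
        """gesture is one of 'open_palm', 'fist', 'pinch', 'none'; hand_xyz is a 3-D point."""
        hand = np.asarray(hand_xyz, dtype=float)
        if self.state == IDLE and gesture == "open_palm":
            self.state = TRIGGERED                      # show the "fly-screen available" prompt
        elif self.state == TRIGGERED and gesture == "none":
            self.state = IDLE                           # hand lost: hide the prompt again
        elif self.state == TRIGGERED and gesture in ("fist", "pinch"):
            self.state, self.anchor = ACTIVATED, hand   # show the available target directions
        elif self.state == ACTIVATED:
            if gesture not in ("fist", "pinch"):        # activation gesture abandoned
                self.state, self.anchor = IDLE, None
                return "cancel"
            m = hand - self.anchor                      # movement vector (x-x0, y-y0, z-z0)
            n_hat = self.n / np.linalg.norm(self.n)
            m1 = m - np.dot(m, n_hat) * n_hat           # projection onto the screen plane
            if np.linalg.norm(m1) > FLY_THRESHOLD_M:
                self.state, self.anchor = IDLE, None
                return "fly"                            # move the sub-image to the target display
            return m1                                   # keep animating on the source display
        return None
```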
  • the apparatus for multi-screen interaction may be a computing device or an electronic device (for example, a terminal), or may be a device in an electronic device (for example, an ISP or an SoC).
  • the method for multi-screen interaction as shown in FIG. 3 to FIG. 14 and the above-mentioned optional embodiments can be implemented.
  • the apparatus 1500 for multi-screen interaction includes: a transceiver unit 1501 and a processing unit 1502 .
  • the specific implementation process of the apparatus 1500 for multi-screen interaction is as follows: the transceiver unit 1501 is configured to acquire sensing information, where the sensing information includes gesture information; the processing unit 1502 is configured to trigger, according to the gesture information, the first display screen to display a first interface image, where the first interface image includes a sub-image and the movement trend of the sub-image is associated with the gesture information, and to trigger the sub-image to be displayed on the second display screen.
  • the transceiver unit 1501 is configured to perform S301 and any optional example in the above method for multi-screen interaction.
  • the processing unit 1502 is configured to perform S302, S303 and any optional example in the above-mentioned multi-screen interaction method. For details, refer to the detailed description in the method example, which is not repeated here.
  • it should be understood that the apparatus for multi-screen interaction in the embodiments of the present application may be implemented by software, for example, by a computer program or instructions having the above-mentioned functions; the corresponding computer program or instructions may be stored in the internal memory of the terminal, and the processor reads the corresponding computer program or instructions from the memory to implement the above functions.
  • the apparatus for multi-screen interaction in this embodiment of the present application may also be implemented by hardware.
  • in this case, the processing unit 1502 is a processor (e.g., an NPU, a GPU, or a processor in a system chip), and the transceiver unit 1501 is a transceiver circuit or an interface circuit.
  • the apparatus for multi-screen interaction in this embodiment of the present application may also be implemented by a combination of a processor and a software module.
  • FIG. 16 is a schematic structural diagram of another apparatus for multi-screen interaction provided by an embodiment of the present application.
  • the apparatus for multi-screen interaction may be a computing device or an electronic device (for example, a terminal), or may be a device in an electronic device (for example, ISP or SoC).
  • the method for multi-screen interaction as shown in FIG. 3 to FIG. 14 and the above-mentioned optional embodiments can be implemented.
  • the apparatus 1600 for multi-screen interaction includes a processor 1601 and an interface circuit 1602 coupled to the processor 1601. It should be understood that, although only one processor and one interface circuit are shown in FIG. 16, the apparatus 1600 for multi-screen interaction may include other numbers of processors and interface circuits.
  • the specific implementation process of the apparatus 1600 for multi-screen interaction is as follows: the interface circuit 1602 is configured to obtain sensing information, where the sensing information includes gesture information; the processor 1601 is configured to trigger, according to the gesture information, the first display screen to display a first interface image, where the first interface image includes a sub-image and the movement trend of the sub-image is associated with the gesture information, and to trigger the sub-image to be displayed on the second display screen.
  • the interface circuit 1602 is used to communicate with other components of the terminal, such as memory or other processors.
  • the processor 1601 is used for signal interaction with other components through the interface circuit 1602 .
  • the interface circuit 1602 may be an input/output interface of the processor 1601 .
  • the processor 1601 reads computer programs or instructions in a memory coupled thereto through the interface circuit 1602, and decodes and executes the computer programs or instructions.
  • these computer programs or instructions may include the above-mentioned terminal function program, and may also include the above-mentioned function program applied to the device for multi-screen interaction in the terminal.
  • when the corresponding function programs are decoded and executed by the processor 1601, the terminal, or the apparatus for multi-screen interaction in the terminal, can implement the solutions in the method for multi-screen interaction provided by the embodiments of the present application.
  • these terminal function programs are stored in a memory external to the apparatus 1600 for multi-screen interaction.
  • when the above-mentioned terminal function programs are decoded and executed by the processor 1601, part or all of their content is temporarily stored in the memory.
  • optionally, these terminal function programs are stored in a memory inside the apparatus 1600 for multi-screen interaction; when the memory inside the apparatus 1600 for multi-screen interaction stores the terminal function programs, the apparatus 1600 for multi-screen interaction may be provided in the terminal of the embodiments of the present application.
  • parts of the terminal function programs are stored in a memory outside the multi-screen interaction device 1600
  • other parts of the terminal function programs are stored in a memory inside the multi-screen interaction device 1600 .
  • it should be understood that the apparatuses for multi-screen interaction shown in either of FIG. 15 and FIG. 16 can be combined with each other, that the related design details of the apparatuses for multi-screen interaction shown in any of FIG. 1 to FIG. 2 and FIG. 21 to FIG. 22 and of the various optional embodiments can be referred to mutually, and that reference can also be made to the method for multi-screen interaction shown in either FIG. 10 or FIG. 18 and to the related design details of the various optional embodiments. Details are not repeated here.
  • the method for multi-screen interaction shown in FIG. 3 and the various optional embodiments can be used not only to process videos or images during shooting but also to process videos or images that have already been captured, which is not limited in this application.
  • the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed in a computer, the computer is made to execute any one of the above methods.
  • the present application provides a computing device, including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, any one of the foregoing methods is implemented.
  • various aspects or features of the embodiments of the present application may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques.
  • article of manufacture encompasses a computer program accessible from any computer readable device, carrier or medium.
  • computer readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tapes), optical discs (e.g., compact discs (CDs) or digital versatile discs (DVDs)), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
  • various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
  • the term "machine-readable medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • the multi-screen interaction apparatus 1500 in FIG. 15 may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented in software, it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, by infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, or magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., a solid state disk (SSD)), and the like.
  • it should be understood that, in the various embodiments of this application, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of this application.
  • the disclosed systems, devices and methods may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, an access network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

A multi-screen interaction method and apparatus, a terminal device, and a vehicle, relating to the field of intelligent vehicle technologies. After detecting that an image captured by a camera includes a specific gesture, the method converts the content currently displayed on a first display screen into a sub-image, and displays on the first display screen the identifiers of other display screens that can display the sub-image and their orientation information relative to the first display screen, so that the user can intuitively see the movement direction for the subsequent gesture; a second display screen is then determined according to the movement direction of the specific gesture, and the sub-image is controlled to move on the first display screen with the movement of the specific gesture; when the detected distance moved by the specific gesture is greater than a set threshold, the sub-image is moved to the second display screen, thereby implementing a multi-screen interaction function.

Description

一种多屏交互的方法、装置、终端设备和车辆 技术领域
本申请涉及智能汽车领域,尤其涉及一种多屏交互的方法、装置、终端设备和车辆。
背景技术
随着智能化的发展,车辆的智能化也越来越高,智能座舱的功能也越来越受到消费者的关注。为了提升驾驶员的体验,车辆上有多个显示屏,一个显示屏用于显示车辆的速度、油耗、里程等信息,一个显示屏用于显示导航路线,一个显示屏用于显示音乐、无线广播等娱乐视频,以及其它功能的显示屏。一般情况下,每个显示屏显示的内容都是固定的,无法显示其它显示屏所要显示的内容。随着的投屏、屏幕共享等技术的发展,实现一个显示屏将其显示的内容推送到其它显示屏上显示,以共享显示内容,成为了用户的重要需求和厂商的研究热点。
对于配置多个显示屏的车型,在现有技术中,如果驾驶员想让一个显示屏上显示的内容推送到另一个显示屏上,则需要驾驶员在显示屏上点击特定按钮或在显示屏上拖动目标内容进行滑动的操作,这种需要驾驶员在显示屏进行操作的方式,分散了驾驶员精力,使得车辆行驶存在安全性问题。
发明内容
为了解决上述的问题,本申请的实施例提供了一种多屏交互的方法、装置、终端设备和车辆。
第一方面,本申请提供一种多屏交互的方法,包括:获取传感信息,所述传感信息包括手势信息;根据所述手势信息,触发第一显示屏显示第一界面图像,所述第一界面图像包括子图像,所述子图像的移动趋势与所述手势信息相关联;触发所述子图像显示于第二显示屏。
在该实施方式中,通过获取传感信息,以获取手势信息,然后根据手势信息,来触发显示屏显示界面图像,并显示屏一个子图像,通过将子图像与手势信息相关联,实现子图像随手势的移动而移动,通过触发一定的条件,如手势的移动距离大于设定阈值,触发子图像移动到另一个显示屏上,从而实现多屏交互功能。
在一种实施方式中,当所述手势信息包括五指抓取手势时,所述子图像显示的内容为所述第一界面图像的全部内容。
在一种实施方式中,当所述手势信息包括双指抓取手势时,所述子图像显示的内容为所述第一界面图像中的第一应用程序呈现的界面,所述第一应用程序为用户选定的应用程序或正在运行的应用程序。
在一种实施方式中,所述第一界面图像还包括位置信息,所述位置信息为其它可显示所述子图像的至少一个显示屏标识,以及所述至少一个显示屏相对于所述第一显示屏的方位信息。
在该实施方式中,通过在显示屏上显示其它可以显示子图像的显示屏的标识和相对于第一显示屏的方位信息,用户可以直观的看到后续手势的移动方向,方便后续多屏交互的操作。
在一种实施方式中,在所述触发所述子图像显示于第二显示屏之前,还包括:确定所述第二显示屏,所述第二显示屏根据第一位置信息、第二位置信息和存储的至少一个显示屏相对于所述第一显示屏的方位信息确定,所述第一位置信息为触发所述第一显示屏显示所述第一界面图像时手势的位置信息,所述第二位置信息为当前时刻手势的位置信息,所述至少一个显示屏包括所述第二显示屏。
在该实施方式中,通过预先确定进行交互的目标显示屏,以便后续子图像触发移动到目标显示屏时,快速的移动到目标显示屏上,减少处理时间,提高用户体验。
在一种实施方式中,所述确定所述第二显示屏,包括:根据所述第一位置信息和所述第二位置信息,确定所述第二位置信息相对于所述第一位置信息的第一方位信息;将所述第一方位信息与所述存储的至少一个显示屏相对于所述第一显示屏的方位信息进行比对;当所述第一方位信息与所述第二显示屏相对于所述第一显示屏的方位信息相同时,确定所述第二显示屏。
在该实施方式中,通过将手势的方位信息与显示屏之间的方位信息相关联,用户只需要简单的在空间上移动,产生方位变化,即可实现多屏交互,使得实现该功能的操作简便,易实现。
在一种实施方式中,所述触发所述子图像显示于第二显示屏,包括:检测出所述第一位置信息与第二位置信息之间的距离大于设定阈值时,触发所述子图像显示于第二显示屏。
在该实施方式中,通过将手势移动的距离与多屏交互相关联,用户只需要简单的在空间上移动,产生较大的距离差,即可实现多屏交互,使得实现该功能的操作简便,易实现。
在一种实施方式中,所述子图像为应用程序图标时,其特征在于,所述触发所述子图像显示于第二显示屏,包括:触发所述子图像指示的图像显示于所述第二显示屏。
在该实施方式中,如果子图像为一个应用程序的图像,在将子图像移动到目标显示屏上后,为了更加方便用户,不需要用户进行操作,直接运行该应用程序,不要用户主动在显示屏操作来运行该应用程序。
在一种实施方式中,所述子图像的尺寸小于所述第一显示屏的尺寸,避免子图像全部覆盖原界面图像,影响用户观看原界面图像内容。
在一种实施方式中,所述位置信息显示在所述第一显示屏边缘位置上,以便用户更加直观的看到其它显示屏相对位置。
在一种实施方式中,当所述第一显示屏的分辨率与所述第二显示屏的分辨率不相同时,所述方法还包括:将所述子图像的分辨率设置为所述第二显示屏的分辨率,避免子图像因像素问题不能在目标显示屏上无法显示。
在一种实施方式中,当所述第一显示屏的尺寸与所述第二显示屏的尺寸不相同时,所述方法还包括:将所述子图像的尺寸设置为所述第二显示屏的尺寸,避免子图像因尺寸问题不能在目标显示屏上无法显示。
在一种实施方式中,当所述第二显示屏的长边的尺寸或短边的尺寸小于所述第一显示屏时,所述方法还包括:将所述子图像缩小至所述子图像的长边尺寸与所述第二显示屏的 长边的尺寸相同;或将所述子图像缩小至所述子图像的短边尺寸与所述第二显示屏的短边的尺寸相同。
在该实施方式中,如果子图像长宽比和目标显示屏的长宽比不同时,可以让子图像的长边与目标显示屏对齐,或让子图像的短边与目标显示屏对齐,以便子图像能在显示屏上显示。
第二方面,本申请实施例提供了一种多屏交互的装置,包括:收发单元,用于获取传感信息,所述传感信息包括手势信息;处理单元,用于根据所述手势信息,触发第一显示屏显示第一界面图像,所述第一界面图像包括子图像,所述子图像的移动趋势与所述手势信息相关联;触发所述子图像显示于第二显示屏。
在一种实施方式中,当所述手势信息包括五指抓取手势时,所述子图像显示的内容为所述第一界面图像的全部内容。
在一种实施方式中,当所述手势信息包括双指抓取手势时,所述子图像显示的内容为所述第一界面图像中的第一应用程序呈现的界面,所述第一应用程序为用户选定的应用程序或正在运行的应用程序。
在一种实施方式中,所述第一界面图像还包括位置信息,所述位置信息为其它可显示所述子图像的至少一个显示屏标识,以及所述至少一个显示屏相对于所述第一显示屏的方位信息。
在一种实施方式中,所述处理单元,还用于确定所述第二显示屏,所述第二显示屏根据第一位置信息、第二位置信息和存储的至少一个显示屏相对于所述第一显示屏的方位信息确定,所述第一位置信息为触发所述第一显示屏显示所述第一界面图像时手势的位置信息,所述第二位置信息为当前时刻手势的位置信息,所述至少一个显示屏包括所述第二显示屏。
在一种实施方式中,所述处理单元,具体用于根据所述第一位置信息和所述第二位置信息,确定所述第二位置信息相对于所述第一位置信息的第一方位信息;将所述第一方位信息与所述存储的至少一个显示屏相对于所述第一显示屏的方位信息进行比对;当所述第一方位信息与所述第二显示屏相对于所述第一显示屏的方位信息相同时,确定所述第二显示屏。
在一种实施方式中,所述处理单元,具体用于检测出所述第一位置信息与第二位置信息之间的距离大于设定阈值时,触发所述子图像显示于第二显示屏。
在一种实施方式中,所述子图像为应用程序图标时,所述处理单元,具体用于触发所述子图像指示的图像显示于所述第二显示屏。
在一种实施方式中,所述子图像的尺寸小于所述第一显示屏的尺寸。
在一种实施方式中,所述位置信息显示在所述第一显示屏边缘位置上。
在一种实施方式中,当所述第一显示屏的分辨率与所述第二显示屏的分辨率不相同时,所述处理单元,还用于将所述子图像的分辨率设置为所述第二显示屏的分辨率。
在一种实施方式中,当所述第一显示屏的尺寸与所述第二显示屏的尺寸不相同时,所述处理单元,还用于将所述子图像的尺寸设置为所述第二显示屏的尺寸。
在一种实施方式中,当所述第二显示屏的长边的尺寸或短边的尺寸小于所述第一显示屏时,所述处理单元,还用于将所述子图像缩小至所述子图像的长边尺寸与所述第二显示 屏的长边的尺寸相同;或将所述子图像缩小至所述子图像的短边尺寸与所述第二显示屏的短边的尺寸相同。
第三方面,本申请实施例提供了一种多屏交互的系统,包括:处理器和至少两个显示屏,用于执行如第一方面各个可能实现的实施例。
第四方面,本申请实施例提供了一种车辆,包括:至少一个摄像头,至少两个显示屏,至少一个存储器,至少一个处理器,用于执行如第一方面各个可能实现的实施例。
第五方面,本申请实施例提供了一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序在计算机中执行时,令计算机执行如第一方面各个可能实现的实施例。
第六方面,本申请实施例提供了一种计算设备,包括存储器和处理器,其特征在于,所述存储器中存储有可执行代码,所述处理器执行所述可执行代码时,实现如第一方面各个可能实现的实施例。
第七方面,提供了一种计算设备,该计算设备置包括:处理器以及接口电路;其中,该处理器通过该接口电路与存储器耦合,该处理器用于执行该存储器中的程序代码,以实现如第二方面至第四方面任一方面或任一可能的实施方式所提供的技术方案。
本申请实施例中,当驾驶员想让一个显示屏上显示的内容推送到另一个显示屏上显示时,为了避免驾驶员在显示屏上点击特定按钮、在显示屏上拖动目标内容进行滑动等操作方式带来的安全隐患,本申请只需要驾驶员非常自然在将手放在源显示屏旁边,做张开手掌的操作,待识别到激活手势后,等手指逐渐并拢时,源显示屏就进入飞屏激活状态,整个过程相当于手进行了当前屏幕的抓取操作;然后只需要根据界面提示,向对应方向进行移动,在移动过程中,也可以即时在界面看到源显示屏向哪个方向飞去,在移动距离达到阈值后手指张开,就进行飞屏操作,整个过程相当于把源显示屏抓取并扔到目标显示屏,操作简单,不需要驾驶员投放较多的精力,即可实现多屏交互功能,从而保证车辆行驶过程中的安全性。
附图说明
下面对实施例或现有技术描述中所需使用的附图作简单地介绍。
图1为本申请实施例提供的一种车辆的结构示意图;
图2(a)为本申请实施例提供的一种车辆前排座位的显示屏和摄像头设置的位置示意图;
图2(b)为本申请实施例提供的一种车辆后排座位的显示屏和摄像头设置的位置示意图;
图3为本申请实施例提供的一种多屏交互的方法实现流程示意图;
图4(a)为本申请实施例提供的一种实现五指抓取实现显示屏投屏功能的手势变化示意图;
图4(b)为本申请实施例提供的一种实现两指抓取实现应用分享功能的手势变化示意图;
图5为本申请实施例提供的一种识别出图像中手势的示意图;
图6为本申请实施例提供的一种实现五指抓取实现显示屏投屏功能过程中获取子图像 时的源显示屏显示图像和对应的手势示意图;
图7为本申请实施例提供的一种实现两指抓取实现应用分享功能过程中获取子图像时的源显示屏显示图像和对应的手势示意图;
图8为本申请实施例提供的一种源显示屏上部分子图像移出后的显示效果示意图;
图9为本申请实施例提供的一种部分子图像移入目标显示屏后的显示效果示意图;
图10为本申请实施例提供的目标显示屏显示子图像的显示效果示意图之一;
图11为本申请实施例提供的目标显示屏显示子图像的显示效果示意图之二;
图12为本申请实施例提供的目标显示屏显示子图像的显示效果示意图之三;
图13为本申请实施例提供的一种源显示屏同时向两个目标显示屏进行多屏共享时两个子图像移动效果示意图;
图14为本申请实施例提供的一种五指抓取实现显示屏投屏功能的手势和子图像移动过程的示意图;
图15为本申请实施例提供的一种多屏交互的装置结构示意图;
图16为本申请实施例提供的另一种多屏交互的装置的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
图1为本申请实施例提供的一种车辆的结构示意图。如图1所示,该车辆100包括至少两个显示屏101(例如,图1中所示的显示屏101-1至显示屏101-N)、至少一个摄像头102(例如,图1中所示的摄像头102-1至摄像头102-M)、处理器103、存储器104和总线105。其中,显示屏101、摄像头102、处理器103和存储器104可以通过总线105建立通信连接。其中,N为大于1的整数,M为正整数。
显示屏101的类型可以包括触控式显示屏或非触控式显示屏中的一种或多种,例如,显示屏101可以全部为可触控式显示屏,或全部为非触控式显示屏,或者,显示屏101可以包括触控式显示屏和非触控式显示屏这两种类型的显示屏。其中,显示屏101可以用于显示油量、车速、里程等仪表数据,以及导航路线、音乐、视频、图像(如车辆周围环境图像)等数据。为了节省费用,非触控式显示屏可以用于显示油量、车速、里程等仪表数据,触控式显示屏可以显示导航路线、音乐、视频、图像等数据。应理解的是,触控式显示屏也可以用于显示油量、车速、里程等仪表数据,上述显示内容仅为示例,本申请不对显示屏显示的内容进行限定。
示例性的,显示屏101的位置布置可以参照图2(a)或图2(b)进行设置。其中,图2(a)所示的车辆100的前排,可以在方向盘的中间的位置上设置显示屏101-1,用于显示如播放音乐的音量按键、播放/暂停按键、接听/挂断按键等,以便于驾驶员在驾驶时进行更便捷的操作,减少驾驶员视线和身体角度的切换,提高驾驶安全。在车辆100的挡风玻璃之下和方向盘之上的车体上设置显示屏101-2,用于显示油量、车速、里程等数据;在驾驶位和副驾驶位之间设置显示屏101-3,用于显示音乐视频、导航路线等数据;对于副驾驶位,可以在位于副驾驶位前方的车体上设置显示屏101-4,可以用于显示任何副驾驶位的乘客想要观看的内容。对于图2(b)所示的车辆100的后排,可以在每个前排座椅的背后的上方位置设置显示屏101-5和显示屏101-6,这两个显示屏可以显示任何后排乘客想要观看的内 容,例如,电影、导航信息、天气等。
本申请实施例中的显示屏101的数量和设置的位置不仅限于如图2(a)和如图2(b)所示的数量和位置关系,本申请在此仅仅举例说明,以便读者对方案的理解。具体显示屏101的数量和设置在车辆100上的位置根据实际情况来确定。
摄像头102可以用于捕获静态图像或视频。例如,物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给处理器(如图像处理器)转换成数字图像信号。将数字图像信号输出到数字信号处理器(digital signal processor,DSP)加工处理。DSP将数字图像信号转换成标准的红绿蓝(red green blue,RGB)、亮度带宽色度(luminance bandwidth chrominance,YUV)等格式的图像信号。摄像头102可以设置在车内的不同位置上,摄像头102在车内可以用于采集车辆100内用户的肢体信息(如,手掌信息)。摄像头102工作的方式可以为周期性拍照,也可以持续性拍摄得到视频流。本申请后续以周期性拍照为例讲述本申请技术方案。
在本申请中,车辆100上的各个摄像头102设置的位置如图2(a)和如图2(b)所示。对于前排位置,可以在后视镜上设置一个摄像头102-1,其拍摄范围覆盖驾驶位和副驾驶位,用于采集驾驶位和副驾驶位上的用户的手掌信息;对于后排位置,可以分别在显示屏101-5和显示屏101-6的上方设置摄像头102-3和摄像头102-4,以及在两个后门框的靠近前排座椅处分别摄像头102-2和摄像头102-5,用于采集后排座位上的用户的手掌信息。
同理,本申请实施例中的摄像头102的数量和设置的位置不仅限于如图2(a)和如图2(b)所示的数量和位置关系,本申请在此仅仅举例说明,以便读者对方案的理解。具体摄像头102的数量和设置在车辆100上的位置根据实际情况来确定。
其中,摄像头102采集的手掌信息主要包括五指呈现的手势和手掌相对于摄像头102的位置信息。如果本申请采用双目测距原理确定手掌的位置信息时,则需要在每个位置上设置两个摄像头。
处理器103可以为车载中控单元、中央处理器102(central processing unit,CPU)、云服务器等等,用于对摄像头102采集的图像进行处理,识别出图像中用户的手掌对应的手势类别和手掌的位置信息,然后控制一个显示屏上显示的内容移动到另一个显示屏上。
存储器104可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器104也可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM)、快闪存储器、硬盘(hard disk drive,HDD)或固态硬盘(solid state drive,SSD);存储器104还可以包括上述种类的存储器的组合。存储器104存储的数据不仅摄像头102采集的图像、手势类别的数据库等等,还存储用于执行多屏交互的方法对应的各种指令、应用程序等等。
图3为本申请实施例提供的一种多屏交互方法的流程示意图,该多屏交互方法可以由计算设备执行,也可以是由应用在计算设备内的处理装置执行,还可以由图1所示的系统来执行。其中,计算设备可以是终端,例如,车、车载装置(如车机、车载处理器、车载电脑等),还可以是服务器等云端装置。该处理装置可以是芯片、处理电路、处理器等。 为了便于描述,本申请将以处理器执行该多屏交互方法为例,展开介绍。参考图3所示,该多屏交互方法包括:
S301:获取传感信息。
其中,传感信息可以包括通过传感器获取的信息,例如,可以通过摄像头传感器和雷达传感器中的一个或多个获取的手势信息、环境信息等;还可以通过声音传感器获取环境声音信息,例如用户指令等。其中,上述传感信息可以是传感器采集到的信息,也可以是经过传感器和处理器等器件中的一个或多个器件处理过的信息,例如,通过传感器对图像信息进行降噪处理。
在具体实现过程中,当用户需要将一个显示屏(后续将该显示屏称为“源显示屏”)上显示的用户界面(user interface,UI)或应用程序(application,APP)呈现的界面投影到另一个显示屏(后续将该显示屏称为“目标显示屏”)时,可以通过座舱摄像头检测特定的肢体动作,例如手势(也可以理解为唤醒手势)等,触发开启多屏交互功能。也可以通过语音唤醒、点击显示屏上虚拟按键唤醒等方式,触发开启多屏交互功能。
在具体实现过程中,唤醒手势和操作手势等手势可以有多种实现方式,例如左划手势、右划手势、手掌悬停等,可以是动态手势(或称动态趋势手势),也可是静态手势。用于手势识别的处理器可以通过接口电路获取传感信息,确定当前用户所比划的手势或动作。下面,本申请以“五指抓取实现显示屏投屏”和“两指抓取实现应用分享”这两类方案对手势展开介绍,应理解的是,本申请并不仅限于这两种方案。
以“五指抓取实现显示屏投屏”方案为例对手势动作展开介绍,如图4(a)所示,用户在开始抓取时,先张开五指,靠近源显示屏;抓取时,五指逐渐并拢;在投屏过程中,并拢的五指从源显示屏位置移动到目标显示屏位置,然后逐渐靠近目标显示屏,并逐渐张开五指。
以“两指抓取实现应用分享”方案为例对手势动作展开介绍,如图4(b)所示,用户在开始抓取时,先张开五指(或拇指和食指张开并让其它手指弯曲靠向掌心),然后靠近源显示屏;抓取时,拇指和食指逐渐并拢,其它手指弯曲靠向掌心;在投屏过程中,并拢的两指从源显示屏位置移动到目标显示屏位置,然后逐渐靠近目标显示屏,并逐渐张开拇指和食指。
应理解,本申请实施例中的开启多屏交互功能的手势可以是上述“五指张开”的手势,或“五指张开,并逐渐靠近显示屏”的手势,也可以是其他手势,为了便于阐述,下面以上述手势为例展开介绍。
手势信息的判定,可以通过摄像头的图像(或视频流)获取,如通过手势识别算法识别出图像中人的手指、以及各个手指展示的手势;也可以通过雷达信息获取,如得到三维点云图像后,通过神经网络提取出图像中人的手指特征,再根据手指特征确定出手指呈现的手势;还可以通过其它信息进行判定,本申请在此不作限定,下面以摄像头拍摄的图像或视频流为例,来讲述本申请技术方案。
在开启多屏交互功能后,根据源显示屏所处的位置,获取对应位置的传感信息。其中,对应位置的传感信息可以是采集范围覆盖源显示屏操控位置的摄像头、雷达等传感器所采集到的信息或处理过的信息。例如,如图2(a)所示,处理器接收到显示屏101-2发送的开启交互功能的指令后,开启摄像头工作,以避免开启车辆内所有摄像头进行工作,节省车 辆电量的功耗,同时保护车辆100内其它座位上的乘客隐私。
处理器在接收到传感信息后,对其进行手部目标检测和手势识别,以识别出用户手掌呈现的手势。示例性地,处理器接收到摄像头采集到一张包括手掌的图像,如图5所示,可以先对采集的图像进行预处理,其中,预处理可以包括对图片进行去噪和信息增强。然后可以利用手部目标检测算法获取图像中的目标手势,接着对手部的21自由度(degree od freedoms,DOFS)或者26 DOFS(或者更多)的二维(2 dimension,2D)坐标进行关键点估计,得到当前手势的分类和描述。最后通过手势识别算法对目标手势进行识别,识别出图像中手掌展现的手势。
在本申请中,处理器在获取的某一张图像或视频流中某一帧图像中识别出唤醒手势或操作手势等特定手势。其中,在开启交互功能后,摄像头不断的拍摄图像或视频流,处理器也不断的获取图像或视频流进行处理,所以为了展现获取的图像或视频流中的图像的时序,本申请在描述时,让处理器对获取的图像或视频流中的图像按照时间的顺序进行编号,将此时获取的图像定义为第i张图像或视频流中第j帧图像,i和j均为大于0的正整数。
结合图4(a)和图4(b)所示,特定手势为一个动态趋势手势,处理器在确定特定手势过程中,可以将当前时刻得到图像与上一时刻得到的图像中的手指间的距离进行比对,如果五指之间的距离逐渐减小,则表明“五指并拢”手势;如果五指之间的距离逐渐减小,且中指、无名指和小拇指弯曲,则表明“拇指和食指逐渐并拢,其它手指弯曲靠向掌心”手势。
当处理器检测到特定手势时,控制源显示屏进入多屏交互状态,以便源显示屏将根据手势的变化,对显示的界面或显示的应用程序呈现的界面进行提取。处理器还可以根据摄像头获取的图像,利用手部3维(3 dimension,3D)定位算法,计算出特定手势相对于摄像头的空间坐标(x0,y0,z0)。
如图2(a)所示,如果摄像头为飞行时间(time of flight,TOF)摄像头、双目摄像头等可以测量深度的摄像头时,处理器获取摄像头拍摄的图像和摄像头提供的深度信息后,利用摄像头的内参和外参数据,直接计算出特定手势相对于摄像头的光心的空间坐标。其中,TOF摄像头可以通过向目标物体连续发送光脉冲,并利用传感器接收从目标物体返回的光,基于探测光脉冲的飞行往返时间来得到目标物体距离。
如果摄像头为不可测量深度的单目摄像头时,处理器可以通过深度估计方法来计算手势的深度或距离。这里,给出几种可能的深度估计方法作为示例,本申请对此不做限定,例如:
(1)基于多点透视(pespective-n-point,PNP)算法。使用基于手部模型先验,检测手掌处的关键点,结合摄像头模型,基于PNP算法粗略地估计手部位置,因为只需要对握拳手势进行位置估计,因此关键点检测模型可以对握拳手势进行深度优化。尽管个体差异使得这样得到的关键点3D位置绝对精度不高,但由于手势的方向判断所需要的位置精度并不严格,所以该方法也可以应用于判断方向移动的相对量。
(2)基于深度学习的单目深度估计网络模型。获取摄像头拍摄的图像后,通过深度学习的单目深度估计网络模型可以直接对单目摄像头进行深度估计,从而产生一个虚拟3D摄像头,以弥补硬件3D摄像头的缺失,从而预测出特定手势在获取的传感信息中下一帧或多帧图像中的深度信息。
最后,处理器根据摄像头获取的图像和该图像的预测深度信息,利用摄像头的内参和外参数据,计算出特定手势相对于摄像头的光心的空间坐标。
S302:根据手势信息,触发第一显示屏显示第一界面图像。其中,第一显示屏为源显示屏,第一界面图像为源显示屏显示的界面图像,第一界面图像中还包括一个子图像,子图像可以为缩小版的第一界面图像,也可以为某个APP的图标等等,显示的内容可以为第一界面图像显示的全部画面,也可以为第一界面图像显示的部分图像。
处理器在检测到特定手势后,可以将当前时刻源显示屏呈现的部分或全部画面转化为一个小于源显示屏尺寸的子图像,或将源显示屏上显示的某个APP图标转化为子图像,呈现在源显示屏上,例如在源显示屏的中间位置、左侧位置等位置上显示。可选地,处理器在检测到特定手势后,也可以重新生成一个子图像,其尺寸小于源显示屏的尺寸,显示的内容与原先源显示屏上显示的界面同步。
例如,如图6所示,源显示屏上方可以设置有时间栏,中间部分为播放的视频所在区域。当特定手势为“五指并拢”手势时,处理器将当前时刻源显示屏上显示的整个界面(包括时间栏和视频所在区域)截成一张图像显示在源显示屏的中间位置。
如图7所示,当特定手势为“拇指和食指逐渐并拢,其它手指弯曲靠向掌心”手势时,处理器将当前时刻源显示屏上显示视频所在区域截成一张图像显示在源显示屏的中间位置。
对于图7中如何决策将时间栏转换为子图像还是将视频播放区域转换为子图像,在具体实现过程中有多种实现方式,为了便于理解,下面介绍几种可能的实现方式作为示例,本申请对此不做限定。例如:
对应用程序设定优先级,如将导航APP设置为第一优先级,将视频、音乐、无线广播等APP设置为第二优先级,将时间、天气预报等基础APP设置为第三优先级,以此类推,通过对不同的应用程序设置不同的优先级,在转换子图像时,将源显示屏上显示的优先级最高的应用程序的内容转换为子图像,以保证子图像显示的内容为用户想要推送的内容。
再例如:根据特定手势的位置靠近源显示屏上各个区域之间的距离确定,如果特定手势距离视频播放的位置比较近,则将视频播放区域转换为子图像,如果特定手势距离时间栏所处的位置比较近,则将时间栏转换为子图像。本申请不仅限于上述两种方案,也可以为其它方案。
源显示屏上显示的子图像的尺寸,可以为固定值,也可以变化值。可选地,处理器在后续不断获取摄像头采集的图像,如果识别出的特定手势中手指之间的距离不断减小,可以控制源显示屏上显示的子图像的尺寸也不断减小。
另外,处理器控制源显示屏上不仅呈现子图像,还可以显示能接收子图像的其它显示屏的标识、其它显示屏相对自身的方位信息等位置信息。由于车辆在出厂时,各个显示屏的位置已经固定好了,所以对于每一个显示屏来说,其它显示屏处在自身周围的方位信息都是固定的,可以将显示屏与显示屏之间的方位信息提前存储在存储器中。
可选地,显示屏与显示屏之间的方位信息可以通过两个显示屏的位置之间形成的向量来表示。以图2(a)为例,对于显示屏101-2来说,以显示屏101-2的中心为空间坐标原点(X0,Y0,Z0),由于每个显示屏位置是固定的,可以预先存储各个显示屏之间的坐标位置,在确定各个其它显示屏以显示屏101-2的中心为空间坐标原点坐标系下的坐标(X,Y,Z),然后计算出各个其它显示屏相对于显示屏101-2的向量M,将计算出的向量M作为各个 其它显示屏相对于显示屏101-2的方位。
处理器可以控制源显示屏上显示其它显示屏的位置信息,以便用户直观地了解到如何将子图像移动到目标显示屏上。以图6为例,并结合图2(a),由于源显示屏是要将播放的视频分享到其它显示屏上,而车辆上的其它显示屏均可以显示该视频,所以在源显示屏上可以虚拟出一个显示屏的图案,同时以文字的方式显示每个显示屏的相对于源显示屏的方位和名称,并将每个显示屏对应的图案按照方位关系,设置在源显示屏边缘的靠近实体显示屏处。
处理器控制源显示屏生成子图像呈现在源显示屏中间位置时,可以通过获取相应时刻获取的摄像头图像中的手势的空间坐标,将此时子图像处在源显示屏中的位置与手势的空间坐标建立映射关系,使得后续当手势移动后,子图像根据手势移动的方位和距离,相对于当前时刻子图像位置,作对应方位和等比例距离的移动。
此处需要说明的是,处理器可以实时地接收摄像头拍摄的每一张图像或每一帧视频流,也可以每隔一段时间接收一次摄像头拍摄的图像或视频流,本申请在此不作限定。所以下面提到“下一张图像或下一帧图像”并不代表摄像头拍摄的连续两张图像或两帧图像,可以为中间间隔了几张(帧)、几十张(帧)图像。
处理器在检测到特定手势后,若在后续的一帧或多帧图像中,持续没有检测到该特定手势,可以控制源显示屏不再显示子图像和位置信息。例如,处理器在首次检测到特定手势之后的下一帧或多帧图像中,没有识别出特定手势,可能是用户的手势发生变化、或手势移动至摄像头拍摄范围以外等情况,导致下一帧或多帧图像中没有特定手势,则处理器可以向源显示屏发送控制指令,控制源显示屏不显示子图像和位置信息。
可选地,为了避免交互功能的误触发,处理器在检测到特定手势超过一定次数后,确定触发该特定手势对应的功能。例如,处理器在预设时间内的传感信息中多次获取到该特定手势,则处理器可以认为获取了有效手势,并触发对应的功能。再例如,在预设时间内摄像头获取到的图像中的多帧都检测到了特定手势,并超过预设阈值。处理器可以向源显示屏发送控制指令,用于让源显示屏显示子图像和位置信息;如果在检测到包括特定手势的图像数量不足预设阈值时,处理器则不执行后续操作,将此次检测到的结果丢弃或删除。
如果处理器检测到下一帧或多帧图像中还包括特定手势,且相比较上一帧图像或首次检测到特定手势的图像(也即第i张图像或视频流中第j帧图像),特定手势的位置发生变化时,可以根据检测到下一帧或多帧图像计算出的特定手势的空间坐标(x0,y0,z0)、上一帧图像或首次检测到特定手势的图像计算出的特定手势的空间坐标(x,y,z),计算出特定手势的移动矢量m=(x-x0,y-y0,z-z0),然后将移动向量m与方位向量M进行比对,根据比对的结果,将与向量m平行或夹角最小的方位向量M对应的显示屏,确定为目标显示屏。
S303:触发子图像显示于第二显示屏。其中,触发子图像在第二显示屏上显示的条件为:检测第一位置信息与第二位置信息之间的距离大于设定阈值。
第一位置信息是指首次检测到特征手势的图像(也即第i张图像或视频流中第j帧图像)中特定手势相对于摄像头的空间坐标,第二位置信息是指下一帧或多帧之后的图像(也即第i+n张图像或视频流中第j+n帧图像,n为大于零的正整数)中特定手势相对于摄像头的空间坐标。具体实现过程为:
例如:处理器计算出特定手势在摄像头拍摄下一帧或多帧之后的图像中的移动向量m 后,结合源显示屏平面的法向量n,计算出移动向量m投影到源显示屏平面的平移向量m1。然后,处理器可以根据平移向量m1的方向,确定源显示屏上的子图像移动的方向,可以根据平移向量m1的模长,以及预先设定的模长与源显示屏上子图像移动距离的比例关系,确定源显示屏上的子图像移动的距离。最后,处理器可以控制源显示屏上的子图像从平面中心沿平移向量m1的方向移动一定比例关系的平移向量m1的模长,从而实现子图像跟随用户手势的移动而移动。
随着用户的特定手势不断移动,处理器可以根据每次接收到的图像处理结果,都向源显示屏发送一次控制指令,让子图像显示在源显示屏上的不同位置,从而实现子图像随特定手势移动而移动。如果处理器检测到子图像随着特定手势移动而移动后,有部分或全部的界面不在源显示屏上显示时,则处理器可以同时向目标显示屏发送控制指令,让目标显示屏显示子图像未在源显示屏上显示的界面。
可选地,如果子图像为APP图标,处理器可以根据特定手势移动的距离,确定子图像移动到目标显示屏上显示时,控制器向目标显示屏发送控制指令,该控制指令用于让目标显示屏显示该APP图标,或者让目标显示屏显示该APP运行后的界面。
对于如何确定将子图像从源显示屏移动至目标显示屏的判断条件,本申请提供了两种判断条件作为示例,本申请对此不做限定,例如:
1、平移向量m1的模长。本申请预先设定一个阈值,处理器可以根据移动矢量m确定出目标显示屏后,再根据移动矢量m计算出平移向量m1的模长,如果该模长大于设定的阈值,则将子图像移动至目标显示屏;如果该模长不大于设定阈值,则子图像仍在源显示屏上移动。
2、子图像的中心点移出源显示屏。本申请以子图像的中心点移出源显示屏为界限,子图像随着特定手势的移动过程中,处理器可以实时检测源显示屏上子图像的位置,如果检测到子图像的中心点在源显示屏上的边框或不在源显示屏上,表明此时子图像已经有一半的面积移出源显示屏上,则将子图像移动至目标显示屏;如果检测子图像的中心点在源显示屏上,表明此时子图像没有一半的面积移出源显示屏上,则子图像仍在源显示屏上移动。
当然,本申请还可以根据子图像移出的面积、子图像的长边的尺寸等方式作为判断条件,在此不做限定。
对于子图像在源显示屏上移动且子图像的中心点未移出源显示屏这种情况,此时的源显示屏的界面上显示的效果如图8所示。而目标显示屏上,可以不显示源显示屏上移出的子图像的部分,也可以显示源显示屏上移出的子图像的部分,如图9所示,以便用户更加直观的看到自己移动子图像到哪个显示屏和移动的效果。
如果目标显示屏的分辨率和源显示屏的分辨率不相同,子图像移动到目标显示屏后,处理器可以将子图像的分辨率增加或减少至目标显示屏的分辨率,然后发送到目标显示屏上显示,以免子图像移动到目标显示屏上显示异常。
如果子图像的尺寸或长宽比与目标显示屏的尺寸或长宽比不相同,或源显示屏的尺寸或长宽比与目标显示屏的尺寸或长宽比不相同,处理器在确定子图像移动到目标显示屏时,可以根据两者的尺寸或长宽比进行比较,调整子图像的尺寸或长宽比,使子图像可以在目标显示屏上正常显示。例如:如果目标显示屏的尺寸、长边尺寸和短边尺寸都大于源显示 屏,处理器可以让子图像保持原先的尺寸在目标界面上显示,如图10所示,也可以增大子图像的尺寸,以便子图像以最大尺寸在目标显示屏上显示,如图11所示;如果目标显示屏的尺寸、长边尺寸和短边尺寸都小于源显示屏或子图像的尺寸,处理器可以缩小子图像的尺寸,以便子图像可以正常的在目标显示屏上显示;如果目标显示屏的长边尺寸和短边尺寸的一个尺寸小于源显示屏或子图像,处理器可以缩小子图像的尺寸至长边尺寸符合目标显示屏的长边尺寸,或缩小子图像的尺寸至短边尺寸符合目标显示屏的短边尺寸,然后在目标显示屏上显示,如图12所示。
在子图像移动到目标显示屏后,如果处理器接收到摄像头采集的摄像头图像中应该不包括特定手势。此时对于用户来说,手势的转变应该从特定手势转换为“张开五指”、“张开拇指和食指”的非特定手势。示例性地,处理器在设定时间段内没有接收到包括特定手势的摄像头图像,或检测到特定手势如“张开五指,并逐渐靠近目标显示屏”、或接收到连续三次点击目标显示屏等方式,则可以关闭“多屏交互”功能,以节省功耗和计算成本。
在子图像移动到目标显示屏后,如果处理器接收到摄像头采集的摄像头图像中还包括特定手势,则可以默认为用户第二次将源显示屏上显示的内容进行多屏共享,此时处理器可以再次执行步骤S301至步骤S303中实现过程。
处理器在执行上述步骤S301至步骤S303过程中,如果检测到摄像头拍摄的图像中有两个或两个以上的特定手势,则可以认为用户将源显示屏上显示的内容分享至两个或两个以上的目标显示屏上,处理器可以先对两个特定手势进行标记,然后根据标记后的多个特定手势,执行步骤S301至步骤S303中实现过程,从而实现同时源显示屏上显示的内容向多个目标显示屏进行多屏共享。示例性地,如图13所示,处理器可以将源显示屏上显示的内容向两个目标显示屏进行共享时,源显示屏上可以同时显示两个特定手势对应的子图像、子图像移动的方向等信息,以便用户可以直观的看到两个子图像移动的过程。
本申请实施例中,处理器检测摄像头拍摄的图像中包括特定手势后,可以将此时源显示屏上显示的内容转化为子图像,并在源显示屏上显示其它可以显示子图像的显示屏的标识和相对于源显示屏的方位信息,用户可以直观的看到后续手势的移动方向,然后根据特定手势的移动方向,确定目标显示屏,并控制子图像在源显示屏上随特定手势的移动而移动,当检测到特定手势移动的距离大于设定阈值时,将子图像移动至目标显示屏上,从而实现多屏交互功能。
图14为本申请实施例提供的五指抓取实现显示屏投屏功能的手势和子图像移动过程的示意图。如图14所示,驾驶员在实现一次将源显示屏上显示的内容推送到目标显示屏过程中,手势的变化和移动过程为:
先将一只手放置在源显示屏前方,且手的五指处于张开状态,如图14(a)所示;然后五个手指逐渐并拢(和远离源显示屏方向移动),激活源显示屏开启飞屏模式,并将当前界面显示的内容转换成子图像,如图14(b)所示;接着并拢的手指向左(或向目标显示屏方向)移动,此时手指呈现的手势保持不变,源显示屏上的子图像随着手指的移动而移动,且方向相同,如图14(c)所示;最后当子图像移动的距离满足设定条件后,自动在目标显示屏上显示,如图14(d)所示此时的手势可以仍为如图14(c)所示的状态,也可以为并拢的五指逐渐张开(和靠近目标显示屏方向移动)。
而显示屏投屏功能实现过程如下:
图14(a)所示的状态,对于处理器来说,执行图3所示的流程图中步骤S301,获取摄像头采集的图像,并进行识别;对于源显示屏,此时没有显示子图像、其它显示屏标识和方位信息等信息;对于目标显示屏,此时也没有显示子图像;对于用户来说,用户正张开五指靠近源显示屏。
图14(b)所示的状态,对于处理器来说,执行图3所示的流程图中步骤S302,检测到摄像头拍摄的图像中包括特定手势后,控制源显示屏上显示子图像、其它显示屏标识和方位信息等信息;对于源显示屏,此时显示缩小到界面中心的子图像,位于界面的其它显示屏标识和方位信息等信息;对于目标显示屏,此时没有显示子图像;对于用户来说,用户的手势正从张开的五指逐渐并拢的过程转换。
图14(c)所示的状态,对于处理器来说,执行图3所示的流程图中步骤S302,检测到特定手势移动后,根据移动方向和距离,确定目标显示屏和控制源显示屏上子图像的随特定手势的移动而移动;对于源显示屏,此时显示子图像在移动;对于目标显示屏,此时没有显示子图像(或显示部分子图像);对于用户来说,用户的手势从源显示屏处移动到目标显示屏处。
图14(d)所示的状态,对于处理器来说,执行图3所示的流程图中步骤S303,检测到特定手势移动的距离大于设定阈值后,控制子图像移动至目标显示屏上;对于源显示屏,此时不显示子图像、其它显示屏标识和方位信息等信息;对于目标显示屏,此时显示子图像;对于用户来说,用户的手势移出摄像头拍摄区域(或将并拢的五指张开)。
从处理器内部执行来说,开启多屏交互功能后,处理器会持续对摄像头捕获的图像进行识别。首先是会运行一个手部目标检测器,然后对目标检测器检测到的手进行进一步分类,判断是否存在飞屏触发手势(也即上述提到的特定手势,为“五指张开”的手势),若存在触发手势,则对其运行目标跟踪算法,跟踪触发手的位置和状态,并在源显示屏上显示可激活飞屏的提示;若跟踪手切换到了激活手势(也即上述提到的特定手势,为“五指并拢”或“两指并拢”),则根据激活手势,则飞屏激活,此时进入到图3中的步骤S302,源显示屏上显示可进行飞屏的目标方向,此时若激活手发生移动,根据当前的手部坐标(x,y,z)计算移动矢量(x-x0,y-y0,z-z0),再根据屏幕平面的法向量n,计算出屏幕投影平面的平移向量m1,源显示屏根据m1向量的距离和模显示移动效果,若移动距离即m1的模大于阈值,则向目标显示屏执行飞屏动作,若小于阈值内放弃激活手势,则可取消当前飞屏操作。
图15为本申请实施例提供的一种多屏交互的装置的结构示意图,该多屏交互的装置可以计算设备或电子设备(例如,终端),也可以是电子设备内的装置(例如,ISP或SoC)。并且可以实现如图3至图14所示的多屏交互的方法以及上述各可选实施例。如图15所示,多屏交互的装置1500包括:收发单元1501和处理单元1502。
本申请中,多屏交互的装置1500具体实现过程为:收发单元1501用于获取传感信息,该传感信息包括手势信息;处理单元1502用于根据手势信息,触发第一显示屏显示第一界面图像,该第一界面图像包括子图像,该子图像的移动趋势与手势信息相关联;以及触发该子图像显示于第二显示屏。
收发单元1501用于执行上述多屏交互的方法中S301以及其中任一可选的示例。处理单 元1502,用于执行上述多屏交互的方法中S302、S303以及其中任一可选的示例。具体参见方法示例中的详细描述,此处不做赘述。
应理解的是,本申请实施例中的多屏交互的装置可以由软件实现,例如,具有上述功能的计算机程序或指令来实现,相应计算机程序或指令可以存储在终端内部的存储器中,通过处理器读取该存储器内部的相应计算机程序或指令来实现上述功能。或者,本申请实施例中的多屏交互的装置还可以由硬件来实现。其中处理单元1502为处理器(如NPU、GPU、系统芯片中的处理器),收发单元1501为收发电路或接口电路。或者,本申请实施例中的多屏交互的装置还可以由处理器和软件模块的结合实现。
应理解,本申请实施例中的装置处理细节可以参考图4-图14示意的相关内容,本申请实施例将不再重复赘述。
图16为本申请实施例提供的另一种多屏交互的装置的结构示意图,该多屏交互的装置可以计算设备或电子设备(例如,终端),也可以是电子设备内的装置(例如,ISP或SoC)。并且可以实现如图3至图14所示的多屏交互的方法以及上述各可选实施例。如图16所示,多屏交互的装置1600包括:处理器1601,与处理器1601耦合的接口电路1602。应理解,虽然图16中仅示出了一个处理器和一个接口电路。多屏交互的装置1600可以包括其他数目的处理器和接口电路。
本申请中,多屏交互的装置1600具体实现过程为:接口电路1602用于获取传感信息,该传感信息包括手势信息;处理器1601用于根据手势信息,触发第一显示屏显示第一界面图像,该第一界面图像包括子图像,该子图像的移动趋势与手势信息相关联;以及触发该子图像显示于第二显示屏。
其中,接口电路1602用于与终端的其他组件连通,例如存储器或其他处理器。处理器1601用于通过接口电路1602与其他组件进行信号交互。接口电路1602可以是处理器1601的输入/输出接口。
例如,处理器1601通过接口电路1602读取与之耦合的存储器中的计算机程序或指令,并译码和执行这些计算机程序或指令。应理解,这些计算机程序或指令可包括上述终端功能程序,也可以包括上述应用在终端内的多屏交互的装置的功能程序。当相应功能程序被处理器1601译码并执行时,可以使得终端或在终端内的多屏交互的装置实现本申请实施例所提供的多屏交互的方法中的方案。
可选的,这些终端功能程序存储在多屏交互的装置1600外部的存储器中。当上述终端功能程序被处理器1601译码并执行时,存储器中临时存放上述终端功能程序的部分或全部内容。
可选的,这些终端功能程序存储在多屏交互的装置1600内部的存储器中。当多屏交互的装置1600内部的存储器中存储有终端功能程序时,多屏交互的装置1600可被设置在本申请实施例的终端中。
可选的,这些终端功能程序的部分内容存储在多屏交互的装置1600外部的存储器中,这些终端功能程序的其他部分内容存储在多屏交互的装置1600内部的存储器中。
应理解,图15至图16任一所示的多屏交互的装置可以互相结合,图1至图2、图21至图22任一所示的多屏交互的装置以及各可选实施例相关设计细节可互相参考,也可以参考图10或图18任一所示的多屏交互的方法以及各可选实施例相关设计细节。此处不再重复赘述。
应理解,图3所示的多屏交互的方法以及各可选实施例,图15至图16任一所示的多屏交互的装置以及各可选实施例,不仅可以用于在拍摄中处理视频或图像,还可以用于处理已经拍摄完成的视频或图像。本申请不做限定。
本申请提供一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序在计算机中执行时,令计算机执行上述任一项方法。
本申请提供一种计算设备,包括存储器和处理器,所述存储器中存储有可执行代码,所述处理器执行所述可执行代码时,实现上述任一项方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
此外,本申请实施例的各个方面或特征可以实现成方法、装置或使用标准编程和/或工程技术的制品。本申请中使用的术语“制品”涵盖可从任何计算机可读器件、载体或介质访问的计算机程序。例如,计算机可读介质可以包括,但不限于:磁存储器件(例如,硬盘、软盘或磁带等),光盘(例如,压缩盘(compact disc,CD)、数字通用盘(digital versatile disc,DVD)等),智能卡和闪存器件(例如,可擦写可编程只读存储器(erasable programmable read-only memory,EPROM)、卡、棒或钥匙驱动器等)。另外,本文描述的各种存储介质可代表用于存储信息的一个或多个设备和/或其它机器可读介质。术语“机器可读介质”可包括但不限于,无线信道和能够存储、包含和/或承载指令和/或数据的各种其它介质。
在上述实施例中,图15中多屏交互装置1500可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
应当理解的是,在本申请实施例的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通 过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者接入网设备等)执行本申请实施例各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请实施例的保护范围之内。

Claims (29)

  1. 一种多屏交互的方法,其特征在于,包括:
    获取传感信息,所述传感信息包括手势信息;
    根据所述手势信息,触发第一显示屏显示第一界面图像,所述第一界面图像包括子图像,所述子图像的移动趋势与所述手势信息相关联;
    触发所述子图像显示于第二显示屏。
  2. 根据权利要求1所述的方法,当所述手势信息包括五指抓取手势时,其特征在于,所述子图像显示的内容为所述第一界面图像的全部内容。
  3. 根据权利要求1所述的方法,当所述手势信息包括双指抓取手势时,其特征在于,所述子图像显示的内容为所述第一界面图像中的第一应用程序呈现的界面,所述第一应用程序为用户选定的应用程序或正在运行的应用程序。
  4. 根据权利要求1-3任意一项所述的方法,其特征在于,所述第一界面图像还包括位置信息,所述位置信息为其它可显示所述子图像的至少一个显示屏标识,以及所述至少一个显示屏相对于所述第一显示屏的方位信息。
  5. 根据权利要求1-4任意一项所述的方法,其特征在于,在所述触发所述子图像显示于第二显示屏之前,还包括:
    确定所述第二显示屏,所述第二显示屏根据第一位置信息、第二位置信息和存储的至少一个显示屏相对于所述第一显示屏的方位信息确定,所述第一位置信息为触发所述第一显示屏显示所述第一界面图像时手势的位置信息,所述第二位置信息为当前时刻手势的位置信息,所述至少一个显示屏包括所述第二显示屏。
  6. 根据权利要求1-5任意一项所述的方法,其特征在于,所述确定所述第二显示屏,包括:
    根据所述第一位置信息和所述第二位置信息,确定所述第二位置信息相对于所述第一位置信息的第一方位信息;
    将所述第一方位信息与所述存储的至少一个显示屏相对于所述第一显示屏的方位信息进行比对;
    当所述第一方位信息与所述第二显示屏相对于所述第一显示屏的方位信息相同时,确定所述第二显示屏。
  7. 根据权利要求5或6所述的方法,其特征在于,所述触发所述子图像显示于第二显示屏,包括:
    检测出所述第一位置信息与第二位置信息之间的距离大于设定阈值时,触发所述子图像显示于第二显示屏。
  8. 根据权利要求1-7任意一项所述的方法,所述子图像为应用程序图标时,其特征在于,所述触发所述子图像显示于第二显示屏,包括:
    触发所述子图像指示的图像显示于所述第二显示屏。
  9. 根据权利要求1-8任意一项所述的方法,其特征在于,所述子图像的尺寸小于所述第一显示屏的尺寸。
  10. 根据权利要求1-9任意一项所述的方法,其特征在于,所述位置信息显示在所述第一显示屏边缘位置上。
  11. 根据权利要求1-10任意一项所述的方法,其特征在于,当所述第一显示屏的分辨率与所述第二显示屏的分辨率不相同时,所述方法还包括:
    将所述子图像的分辨率设置为所述第二显示屏的分辨率。
  12. 根据权利要求1-11任意一项所述的方法,其特征在于,当所述第一显示屏的尺寸与所述第二显示屏的尺寸不相同时,所述方法还包括:
    将所述子图像的尺寸设置为所述第二显示屏的尺寸。
  13. 根据权利要求1-12任意一项所述的方法,其特征在于,当所述第二显示屏的长边的尺寸或短边的尺寸小于所述第一显示屏时,所述方法还包括:
    将所述子图像缩小至所述子图像的长边尺寸与所述第二显示屏的长边的尺寸相同;或
    将所述子图像缩小至所述子图像的短边尺寸与所述第二显示屏的短边的尺寸相同。
  14. 一种多屏交互的装置,其特征在于,包括:
    收发单元,用于获取传感信息,所述传感信息包括手势信息;
    处理单元,用于根据所述手势信息,触发第一显示屏显示第一界面图像,所述第一界面图像包括子图像,所述子图像的移动趋势与所述手势信息相关联;
    触发所述子图像显示于第二显示屏。
  15. 根据权利要求14所述的装置,当所述手势信息包括五指抓取手势时,其特征在于,所述子图像显示的内容为所述第一界面图像的全部内容。
  16. 根据权利要求14所述的装置,当所述手势信息包括双指抓取手势时,其特征在于,所述子图像显示的内容为所述第一界面图像中的第一应用程序呈现的界面,所述第一应用程序为用户选定的应用程序或正在运行的应用程序。
  17. 根据权利要求14-16任意一项所述的装置,其特征在于,所述第一界面图像还包括位置信息,所述位置信息为其它可显示所述子图像的至少一个显示屏标识,以及所述至少一个显示屏相对于所述第一显示屏的方位信息。
  18. 根据权利要求1-17任意一项所述的装置,其特征在于,所述处理单元,还用于
    确定所述第二显示屏,所述第二显示屏根据第一位置信息、第二位置信息和存储的至少一个显示屏相对于所述第一显示屏的方位信息确定,所述第一位置信息为触发所述第一显示屏显示所述第一界面图像时手势的位置信息,所述第二位置信息为当前时刻手势的位置信息,所述至少一个显示屏包括所述第二显示屏。
  19. 根据权利要求18所述的装置,其特征在于,所述处理单元,具体用于
    根据所述第一位置信息和所述第二位置信息,确定所述第二位置信息相对于所述第一位置信息的第一方位信息;
    将所述第一方位信息与所述存储的至少一个显示屏相对于所述第一显示屏的方位信息进行比对;
    当所述第一方位信息与所述第二显示屏相对于所述第一显示屏的方位信息相同时,确定所述第二显示屏。
  20. 根据权利要求18或19所述的装置,其特征在于,所述处理单元,具体用于
    检测出所述第一位置信息与第二位置信息之间的距离大于设定阈值时,触发所述子图像显示于第二显示屏。
  21. 根据权利要求14-20任意一项所述的装置,所述子图像为应用程序图标时,其特征 在于,所述处理单元,具体用于
    触发所述子图像指示的图像显示于所述第二显示屏。
  22. 根据权利要求14-21任意一项所述的装置,其特征在于,所述子图像的尺寸小于所述第一显示屏的尺寸。
  23. 根据权利要求14-22任意一项所述的装置,其特征在于,所述位置信息显示在所述第一显示屏边缘位置上。
  24. 根据权利要求14-23任意一项所述的装置,其特征在于,当所述第一显示屏的分辨率与所述第二显示屏的分辨率不相同时,所述处理单元,还用于
    将所述子图像的分辨率设置为所述第二显示屏的分辨率。
  25. 根据权利要求14-24任意一项所述的装置,其特征在于,当所述第一显示屏的尺寸与所述第二显示屏的尺寸不相同时,所述处理单元,还用于
    将所述子图像的尺寸设置为所述第二显示屏的尺寸。
  26. 根据权利要求14-25任意一项所述的装置,其特征在于,当所述第二显示屏的长边的尺寸或短边的尺寸小于所述第一显示屏时,所述处理单元,还用于
    将所述子图像缩小至所述子图像的长边尺寸与所述第二显示屏的长边的尺寸相同;或
    将所述子图像缩小至所述子图像的短边尺寸与所述第二显示屏的短边的尺寸相同。
  27. 一种车辆,其特征在于,包括:
    至少一个摄像头,
    至少两个显示屏,
    至少一个存储器,
    至少一个处理器,用于执行如权利要求1-13中的任一项所述的方法。
  28. 一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序在计算机中执行时,令计算机执行权利要求1-13中任一项的所述的方法。
  29. 一种计算设备,包括存储器和处理器,其特征在于,所述存储器中存储有可执行代码,所述处理器执行所述可执行代码时,实现权利要求1-13中任一项所述的方法。
PCT/CN2021/090009 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆 WO2022226736A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202180001484.XA CN113330395B (zh) 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆
CN202311246725.4A CN117492557A (zh) 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆
EP21938227.2A EP4318186A4 (en) 2021-04-26 2021-04-26 METHOD AND DEVICE FOR INTERACTION BETWEEN MULTIPLE SCREENS AND TERMINAL DEVICE AND VEHICLE
PCT/CN2021/090009 WO2022226736A1 (zh) 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆
JP2023566400A JP2024518333A (ja) 2021-04-26 2021-04-26 マルチスクリーンインタラクション方法及び機器、端末装置、及び車両
US18/494,949 US20240051394A1 (en) 2021-04-26 2023-10-26 Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/090009 WO2022226736A1 (zh) 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/494,949 Continuation US20240051394A1 (en) 2021-04-26 2023-10-26 Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle

Publications (1)

Publication Number Publication Date
WO2022226736A1 true WO2022226736A1 (zh) 2022-11-03

Family

ID=77427052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090009 WO2022226736A1 (zh) 2021-04-26 2021-04-26 一种多屏交互的方法、装置、终端设备和车辆

Country Status (5)

Country Link
US (1) US20240051394A1 (zh)
EP (1) EP4318186A4 (zh)
JP (1) JP2024518333A (zh)
CN (2) CN117492557A (zh)
WO (1) WO2022226736A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117156189A (zh) * 2023-02-27 2023-12-01 荣耀终端有限公司 投屏显示方法及电子设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113448469B (zh) * 2021-09-01 2021-12-31 远峰科技股份有限公司 车载多屏显示多样化分享交互方法及装置
CN114138219A (zh) * 2021-12-01 2022-03-04 展讯通信(上海)有限公司 一种多屏显示方法、多屏显示系统及存储介质
CN114546239A (zh) * 2021-12-28 2022-05-27 浙江零跑科技股份有限公司 一种智能座舱副驾投屏手势操作方法
CN114647319A (zh) * 2022-03-28 2022-06-21 重庆长安汽车股份有限公司 一种用户车内屏幕显示信息流转的方法、系统及存储介质
CN115097929A (zh) * 2022-03-31 2022-09-23 Oppo广东移动通信有限公司 车载投屏方法、装置、电子设备、存储介质和程序产品
CN115097970A (zh) * 2022-06-30 2022-09-23 阿波罗智联(北京)科技有限公司 展示控制方法、装置、电子设备、存储介质及车辆

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108556740A (zh) * 2018-04-17 2018-09-21 上海商泰汽车信息系统有限公司 多屏共享设备及方法、计算机可读介质、车载设备
CN109491558A (zh) * 2017-09-11 2019-03-19 上海博泰悦臻网络技术服务有限公司 车载系统的屏间应用交互方法及装置、存储介质和车机
CN109992193A (zh) * 2019-03-29 2019-07-09 佛吉亚好帮手电子科技有限公司 一种车内触控屏飞屏互动方法
US20200406752A1 (en) * 2019-06-25 2020-12-31 Hyundai Mobis Co., Ltd. Control system and method using in-vehicle gesture input
CN112486363A (zh) * 2020-10-30 2021-03-12 华为技术有限公司 一种跨设备的内容分享方法、电子设备及系统
CN112513787A (zh) * 2020-07-03 2021-03-16 华为技术有限公司 车内隔空手势的交互方法、电子装置及系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140258942A1 (en) * 2013-03-05 2014-09-11 Intel Corporation Interaction of multiple perceptual sensing inputs
KR102091028B1 (ko) * 2013-03-14 2020-04-14 삼성전자 주식회사 사용자 기기의 오브젝트 운용 방법 및 장치
DE102016108885A1 (de) * 2016-05-13 2017-11-16 Visteon Global Technologies, Inc. Verfahren zum berührungslosen Verschieben von visuellen Informationen
US20190073040A1 (en) * 2017-09-05 2019-03-07 Future Mobility Corporation Limited Gesture and motion based control of user interfaces

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109491558A (zh) * 2017-09-11 2019-03-19 上海博泰悦臻网络技术服务有限公司 车载系统的屏间应用交互方法及装置、存储介质和车机
CN108556740A (zh) * 2018-04-17 2018-09-21 上海商泰汽车信息系统有限公司 多屏共享设备及方法、计算机可读介质、车载设备
CN109992193A (zh) * 2019-03-29 2019-07-09 佛吉亚好帮手电子科技有限公司 一种车内触控屏飞屏互动方法
US20200406752A1 (en) * 2019-06-25 2020-12-31 Hyundai Mobis Co., Ltd. Control system and method using in-vehicle gesture input
CN112513787A (zh) * 2020-07-03 2021-03-16 华为技术有限公司 车内隔空手势的交互方法、电子装置及系统
CN112486363A (zh) * 2020-10-30 2021-03-12 华为技术有限公司 一种跨设备的内容分享方法、电子设备及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4318186A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117156189A (zh) * 2023-02-27 2023-12-01 荣耀终端有限公司 投屏显示方法及电子设备

Also Published As

Publication number Publication date
CN113330395B (zh) 2023-10-20
US20240051394A1 (en) 2024-02-15
EP4318186A1 (en) 2024-02-07
EP4318186A4 (en) 2024-05-01
CN113330395A (zh) 2021-08-31
CN117492557A (zh) 2024-02-02
JP2024518333A (ja) 2024-05-01

Similar Documents

Publication Publication Date Title
WO2022226736A1 (zh) 一种多屏交互的方法、装置、终端设备和车辆
US11093045B2 (en) Systems and methods to augment user interaction with the environment outside of a vehicle
US10761610B2 (en) Vehicle systems and methods for interaction detection
CN103870802B (zh) 使用指谷操作车辆内的用户界面的系统和方法
US10120454B2 (en) Gesture recognition control device
US20180224948A1 (en) Controlling a computing-based device using gestures
EP2743799B1 (en) Control apparatus, vehicle, and portable terminal using hand information for command generation
Garber Gestural technology: Moving interfaces in a new direction [technology news]
US9898090B2 (en) Apparatus, method and recording medium for controlling user interface using input image
US10885322B2 (en) Hand-over-face input sensing for interaction with a device having a built-in camera
JP2016520946A (ja) 人間対コンピュータの自然な3次元ハンドジェスチャベースのナビゲーション方法
US20140198030A1 (en) Image projection device, image projection system, and control method
US10108334B2 (en) Gesture device, operation method for same, and vehicle comprising same
US20200142495A1 (en) Gesture recognition control device
US9639167B2 (en) Control method of electronic apparatus having non-contact gesture sensitive region
US20140223374A1 (en) Method of displaying menu based on depth information and space gesture of user
US20220019288A1 (en) Information processing apparatus, information processing method, and program
US20150123901A1 (en) Gesture disambiguation using orientation information
KR20240072170A (ko) 원격 디바이스들과의 사용자 상호작용들
CN105759955B (zh) 输入装置
CN116501167A (zh) 一种基于手势操作的车内交互系统及车辆
CN114923418A (zh) 基于点选择的测量
TW201545050A (zh) 電子裝置的控制方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938227

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023566400

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 202347074753

Country of ref document: IN

Ref document number: 2021938227

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021938227

Country of ref document: EP

Effective date: 20231102

NENP Non-entry into the national phase

Ref country code: DE