CN114860119A - Screen interaction method, device, equipment and medium


Info

Publication number
CN114860119A
Authority
CN
China
Prior art keywords: target object, screen, information, visual element, target
Legal status (assumed, not a legal conclusion): Pending
Application number: CN202210322067.1A
Other languages: Chinese (zh)
Inventors: 邵昌旭, 许亮, 李轲
Current and Original Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority application: CN202210322067.1A
Publication: CN114860119A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition


Abstract

The embodiments of the disclosure provide a screen interaction method, apparatus, device, and medium, wherein the method comprises: acquiring a target image inside a vehicle cabin; identifying behavior information of a target object inside the vehicle cabin based on the target image; and, when the picture displayed by a display device inside the vehicle cabin is a screen saver picture, performing display control on at least one visual element in the screen saver picture according to the behavior information of the target object. The method enables the target object to interact with the screen in a more personalized way and makes riding in a vehicle more enjoyable and entertaining.

Description

Screen interaction method, device, equipment and medium
Technical Field
The embodiment of the disclosure relates to the technical field of visual algorithms, in particular to a screen interaction method, a screen interaction device, screen interaction equipment and a screen interaction medium.
Background
With the popularization of electronic products, more and more intelligent human-computer interaction modes have appeared to meet people's needs for entertainment and relaxation. For example, a user performs click operations on the touch screen of a mobile phone to achieve human-computer interaction. With the development of intelligent cabins, the Internet of Vehicles, and related technologies, people also have higher expectations for human-computer interaction in vehicle scenarios.
Disclosure of Invention
In view of this, the disclosed embodiments provide at least one screen interaction method, apparatus, device, and medium.
Specifically, the embodiment of the present disclosure is implemented by the following technical solutions:
in a first aspect, a method for screen interaction is provided, the method comprising:
acquiring a target image in a vehicle cabin;
identifying behavior information of a target object inside the vehicle cabin based on the target image;
and under the condition that the picture displayed by the display device in the vehicle cabin is a screen saver picture, performing display control on at least one visual element in the screen saver picture according to the behavior information of the target object.
In some optional embodiments, in a case that the screen displayed by the display device inside the cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object includes: when the picture displayed by a target display device among a plurality of available display devices at different positions inside the vehicle cabin is a screen saver picture, performing display control on at least one visual element in the screen saver picture displayed by the target display device according to the behavior information of the target object.
In some optional embodiments, before the performing display control on at least one visual element in a screen saver screen displayed by the target display device according to the behavior information of the target object, the method further includes: identifying position information of the target object inside the vehicle cabin based on the target image; and determining an available display device matched with the target object from a plurality of available display devices at different positions in the vehicle cabin as a target display device according to the position information.
In some optional embodiments, the determining, as the target display device, an available display device matching the target object from among available display devices at a plurality of different positions inside the vehicle cabin according to the position information includes: determining an available display device closest to the position of the target object from among available display devices at a plurality of different positions inside the vehicle cabin as a target display device.
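The nearest-device selection described above can be sketched as follows; the display identifiers and cabin-frame coordinates are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch: choose the target display device as the available
# display closest to the detected target object's position.
import math

# Assumed cabin-frame coordinates (metres) of the available displays.
DISPLAYS = {
    "center_console": (0.0, 0.5),
    "rear_left": (-0.4, -0.8),
    "rear_right": (0.4, -0.8),
}

def nearest_display(person_xy, displays=DISPLAYS):
    """Return the id of the available display closest to the person."""
    return min(displays, key=lambda d: math.dist(person_xy, displays[d]))
```

For example, a person detected near the right rear seat would be matched to the rear-right display, whose screen saver is then the one controlled.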
In some optional embodiments, before the performing display control on at least one visual element in a screen saver screen displayed by the target display device according to the behavior information of the target object, the method further includes: acquiring position calibration information of a camera for acquiring the target image; and selecting an available display device corresponding to the position of the camera from a plurality of available display devices at preset positions in the vehicle cabin as a target display device according to the position calibration information of the camera.
In some optional embodiments, the performing, according to the behavior information of the target object, display control on at least one visual element in the screen saver screen includes: and transforming the presentation form of at least one visual element in the screen saver picture according to the behavior information of the target object.
In some optional embodiments, the transforming the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object includes: and transforming the presentation form of at least one visual element to a corresponding contracted or expanded state according to the action of contraction or expansion of the preset body part of the target object represented by the behavior information of the target object.
In some optional embodiments, the transforming the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object includes: reading first state information representing the current state of at least one visual element in the screen saver picture; determining second state information of the at least one visual element after display control according to the behavior information of the target object and the first state information; controlling the at least one visual element to present a target state characterized by the second state information.
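The read-first-state / derive-second-state / apply pattern above could look like the following minimal sketch, where the state fields and behavior labels are invented for illustration:

```python
# Illustrative sketch of: read current (first) state of a visual element,
# combine it with the recognized behavior to get the second state, then
# present that state. Field names and behaviors are assumptions.

def next_state(first_state, behavior):
    """Derive the post-control (second) state from the current state."""
    state = dict(first_state)                 # don't mutate the input
    if behavior == "mouth_open_close":        # e.g. take one "bite"
        state["bites"] = state.get("bites", 0) + 1
    elif behavior == "hand_wave":             # e.g. switch the object shown
        state["object"] = {"apple": "pear", "pear": "apple"}[state["object"]]
    return state
```

The renderer would then simply draw whatever `next_state` returns, so the element presents the target state characterized by the second state information.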
In some optional embodiments, the performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object includes: and according to the behavior information of the target object, converting the display position of at least one visual element in the screen saver picture.
In some optional embodiments, the transforming a display position of at least one visual element in the screen saver screen according to the behavior information of the target object includes: and moving the display position of the at least one visual element in the screen saver picture to a corresponding moving direction according to the moving direction of the preset body part of the target object represented by the behavior information of the target object.
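As a minimal sketch of moving a visual element along the movement direction of a tracked body part (the function name and the gain factor are assumptions, not from the disclosure):

```python
# Shift an element's display position by the body part's frame-to-frame
# displacement, e.g. a feather following a fingertip's movement track.

def move_element(element_pos, part_prev, part_now, gain=1.0):
    """Translate the element along the body part's movement direction."""
    dx = (part_now[0] - part_prev[0]) * gain
    dy = (part_now[1] - part_prev[1]) * gain
    return (element_pos[0] + dx, element_pos[1] + dy)
```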
In some optional embodiments, the behavior information comprises at least one of: limb action information, face action information; the limb motion information comprises at least one of: head action information, hand action information, and torso action information; the facial motion information includes at least one of: expression information, mouth motion information, and eye motion information.
In some optional embodiments, the screen saver screen includes a background screen, a dynamic object visual element superimposed on the background screen, and a display effect visual element superimposed on the background screen; the display control of at least one visual element in the screen saver picture according to the behavior information of the target object comprises: and performing display control on at least one dynamic object visual element or display effect visual element in the screen saver picture according to the behavior information of the target object.
In some optional embodiments, the method further comprises: collecting sound information of the target object; and controlling at least one visual element in the screen saver picture according to the sound information of the target object.
In a second aspect, there is provided a screen interaction device, the device comprising:
the image acquisition module is used for acquiring a target image in the vehicle cabin;
the image recognition module is used for recognizing the behavior information of the target object in the vehicle cabin based on the target image;
and the display control module is used for performing display control on at least one visual element in the screen saver picture according to the behavior information of the target object under the condition that the picture displayed by the display device in the vehicle cabin is the screen saver picture.
In some optional embodiments, the display control module is specifically configured to: when the picture displayed by a target display device among a plurality of available display devices at different positions inside the vehicle cabin is a screen saver picture, perform display control on at least one visual element in the screen saver picture displayed by the target display device according to the behavior information of the target object.
In some optional embodiments, the image recognition module is further configured to: identifying position information of the target object inside the vehicle cabin based on the target image; and determining an available display device matched with the target object from a plurality of available display devices at different positions in the vehicle cabin as a target display device according to the position information.
In some optional embodiments, the image recognition module, when configured to determine, from the available display devices at the plurality of different locations inside the cabin, an available display device that matches the target object as the target display device according to the location information, is specifically configured to: determining an available display device closest to the position of the target object from among available display devices at a plurality of different positions inside the vehicle cabin as a target display device.
In some optional embodiments, the apparatus further comprises a location determination module to: acquiring position calibration information of a camera for acquiring the target image; and selecting an available display device corresponding to the position of the camera from a plurality of available display devices at preset positions in the vehicle cabin as a target display device according to the position calibration information of the camera.
In some optional embodiments, the display control module is specifically configured to: and transforming the presentation form of at least one visual element in the screen saver picture according to the behavior information of the target object.
In some optional embodiments, the display control module is specifically configured to: and transforming the presentation form of at least one visual element to a corresponding contracted or expanded state according to the action of contraction or expansion of the preset body part of the target object represented by the behavior information of the target object.
In some optional embodiments, the display control module is specifically configured to: reading first state information representing the current state of at least one visual element in the screen saver picture; determining second state information of the at least one visual element after display control according to the behavior information of the target object and the first state information; controlling the at least one visual element to present a target state characterized by the second state information.
In some optional embodiments, the display control module is specifically configured to: and according to the behavior information of the target object, converting the display position of at least one visual element in the screen saver picture.
In some optional embodiments, the display control module, when configured to transform a display position of at least one visual element in the screen saver screen according to the behavior information of the target object, is specifically configured to: and moving the display position of the at least one visual element in the screen saver picture to a corresponding moving direction according to the moving direction of the preset body part of the target object represented by the behavior information of the target object.
In some optional embodiments, the behavior information comprises at least one of: limb action information, face action information; the limb motion information comprises at least one of: head action information, hand action information, and torso action information; the facial motion information includes at least one of: expression information, mouth motion information, and eye motion information.
In some optional embodiments, the screen saver screen includes a background screen, a dynamic object visual element superimposed on the background screen, and a display effect visual element superimposed on the background screen; the display control module is specifically configured to: and performing display control on at least one dynamic object visual element or display effect visual element in the screen saver picture according to the behavior information of the target object.
In some optional embodiments, the apparatus further comprises a sound control module to: collecting sound information of the target object; and controlling at least one visual element in the screen saver picture according to the sound information of the target object.
In a third aspect, an electronic device is provided, which includes a memory for storing computer instructions executable on a processor, and the processor is configured to implement the screen interaction method according to any embodiment of the present disclosure when executing the computer instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the screen interaction method according to any one of the embodiments of the present disclosure.
According to the screen interaction method provided by the technical solutions of the embodiments of the present disclosure, at least one visual element in a screen saver picture of a vehicle-mounted display device is controlled contactlessly through the behavior information of a target object, so that the visual effect of the screen saver picture changes and the target object can complete human-machine interaction with the screen in a more personalized way.
Drawings
In order to more clearly illustrate one or more embodiments of the present disclosure or technical solutions in the related art, the drawings used in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below show only some of the embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method of screen interaction in accordance with at least one embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating another screen interaction method in accordance with at least one embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating yet another screen interaction method in accordance with at least one embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating yet another screen interaction method in accordance with at least one embodiment of the present disclosure;
FIG. 5 is a diagram of a background screen shown in at least one embodiment of the present disclosure;
FIG. 6 is a diagram of a dynamic object visual element, shown in at least one embodiment of the present disclosure;
FIG. 7 is a diagram of a display effect visual element, shown in at least one embodiment of the present disclosure;
FIG. 8 is a block diagram of a screen interaction device, according to at least one embodiment of the present disclosure;
FIG. 9 is a block diagram of another screen interaction device shown in at least one embodiment of the present disclosure;
FIG. 10 is a block diagram of yet another screen interaction device, shown in at least one embodiment of the present disclosure;
fig. 11 is a hardware structure diagram of an electronic device according to at least one embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of this specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various pieces of information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of this specification. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
With the popularization of private cars and the rapid development of intelligent vehicle cabins, people have more demands and expectations for the driving and riding experience. In addition to the traditional center-control screen, screens are now also mounted for the front passenger and the rear rows, but the interfaces of most of these screens follow the application model of mobile phones and tablets: the user must perform touch operations on the screen, which makes the human-computer interaction mediocre and boring and brings no novel experience to the user. Moreover, most conventional screen savers of in-vehicle center-control screens are static, for example, a piece of wallpaper displaying information such as the date, time, and temperature, because the center-control screen must avoid distracting the driver too much. However, with the popularization of automated driving and the mounting of front-passenger and rear-row screens, existing static screen saver pages no longer match the application trend of today's intelligent cockpit and Internet of Vehicles.
As shown in fig. 1, fig. 1 is a flowchart illustrating a screen interaction method according to at least one embodiment of the present disclosure, which may include the following steps:
in step 102, an image of a target inside a vehicle cabin is acquired.
The vehicle in this embodiment may be a private car, a bus, a high-speed rail, a subway, or any other vehicle. In other examples, the method of the present embodiment may also be applied to various types of vehicles such as airplanes, spaceships, ships, and the like.
The target image is an image containing the environment and occupant information inside the vehicle cabin, and may include at least one target object inside the vehicle cabin. The target object is a person inside the vehicle cabin, and may be a driver, a passenger, a security officer, or the like. The target image is collected by a camera installed on the vehicle; one or more cameras may be installed in the vehicle to collect images of the cabin interior. For example, a camera may be installed at the interior rear-view mirror to acquire images of the occupants of the whole cabin, may be installed on the ceiling of the vehicle, or a camera may be arranged in front of each seat.
In step 104, behavior information of a target object inside the vehicle cabin is identified based on the target image.
The behavior information is a description of the target object's own activity during the ride. It may describe a concrete action, such as opening the mouth, closing the mouth, blinking, moving a finger, making a gesture, or leaning the upper body forward; or an abstract behavior, such as being sad, being happy, smiling, talking, making a phone call, or dancing. In this embodiment, the behavior information may describe a contactless ("air") behavior, that is, the behavior of the target object does not involve touching the display screen of a display device inside the vehicle cabin.
In this step, target object detection may be performed on at least one frame of target image, and the behavior of the detected target object is identified to obtain behavior information of the target object. For example, when there are a plurality of target objects in the target image, behavior information of each target object may be output separately.
The present embodiment does not limit the manner of the target object detection and recognition processing, for example, a face detection method based on a neural network may be used to detect a face to obtain a detection result of the target object, and a pre-trained behavior recognition neural network may be used to recognize the behavior of the detected target object.
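A hedged sketch of this two-stage detect-then-recognize flow, with trivial stand-ins for the face-detection and behavior-recognition neural networks mentioned above (all names and the frame representation are assumptions):

```python
# Two-stage pipeline: detect target objects in a frame, then classify
# each detection's behavior. Real systems would run neural networks here;
# these stand-ins just read pre-computed fields from a dict "frame".

def detect_targets(frame):
    """Stand-in for a face/person detector: returns per-object records."""
    return frame.get("detections", [])

def recognize_behavior(detection):
    """Stand-in for the behavior-recognition network."""
    return detection.get("behavior", "none")

def behaviors_in_frame(frame):
    """One behavior label per detected target object in the frame."""
    return [recognize_behavior(d) for d in detect_targets(frame)]
```

When several target objects appear in the target image, this structure naturally yields one behavior result per object, as the text describes.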
In step 106, in a case that a screen displayed by the display device inside the cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object.
For example, the visual elements may be basic elements constituting a screen, such as a background, an object, and a dynamic particle in the screen, and may be elements related to screen display, such as brightness, color, contrast, and a font size of the screen.
The display device inside the vehicle cabin may be various display screens, such as a photoelectric glass display screen, a liquid crystal display screen, an LED (Light-Emitting Diode) display screen, and the like.
The display device inside the vehicle compartment may have one or more display devices, and may be installed at any position inside the vehicle compartment, for example, at a steering wheel, behind a seat, in a door, in a ceiling, and the like, or may be combined with a windshield of a window to use a windshield having a display function as the display device.
The present embodiment does not limit the manner in which the display control is performed on the visual elements. The following examples illustrate examples of controlling the display of at least one visual element in a screen saver screen according to behavior information of a target object, but it is understood that the following examples are not intended to limit the present invention:
in one example, a new visual element may be generated, for example, the screen saver screen is originally blank, and as the target object continuously generates behavior information, the content in the screen saver screen is continuously enriched.
In one example, the original visual elements may be eliminated, for example, there are many leaves in the screen saver screen initially, and as the target object continues to generate behavior information, the leaves in the screen saver screen also continue to decrease.
In one example, the display form of a visual element may be transformed. For example, there is a virtual character in the screen saver picture, and when the target object produces different behavior information, the virtual character correspondingly performs the behavior indicated by that behavior information.
In one example, the display value corresponding to the visual element may be adjusted, for example, when the visual element is the brightness of the screen, the brightness value of the screen may be adjusted through the behavior information of the target object.
In yet another example, the position of the visual element may be moved, for example, there is a feather in the screen saver screen, and when the behavior information of the target object is to move a finger, the feather in the screen saver screen may move along the movement track of the finger.
In this step, when the behavior information of the target object meets the control trigger condition, the visual element may be controlled according to a mapping relationship between the preset behavior information and the visual element.
For example, in an exemplary application scenario, the visual elements in the screen saver picture may initially comprise one complete apple, and the preset control trigger conditions may include: the mouth opening and closing, the hand swinging left and right, a specific gesture, and so on. The preset mapping may be: the mouth opening and closing changes the shape of the apple; the hand swinging left and right switches the current fruit to another fruit; a specific gesture controls the number of fruits. When the behavior information of the target object is the mouth opening and closing, the complete apple in the screen saver picture changes into an apple with one bite taken, and when the behavior information is again the mouth opening and closing, the apple with one bite becomes an apple with two bites. When the behavior information is the hand swinging left and right, the complete apple is switched to a complete pear, and when the behavior information is then the mouth opening and closing, the complete pear becomes a pear with a bite taken. When the behavior information is stretching out two fingers, such as a one-handed "V" sign, the one complete apple changes into two complete apples, and when the behavior information is stretching out five fingers, the two complete apples change into five complete apples.
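The preset behavior-to-visual-element mapping in the fruit example could be organized as a dispatch table; all handler names and the element model below are illustrative assumptions, not the patent's implementation:

```python
# Dispatch table mapping recognized behaviors to visual-element updates,
# modelled on the apple/pear example: mouth opening-and-closing takes a
# bite, a hand wave switches the fruit, an n-finger gesture sets the count.

def on_mouth_open_close(el):
    el["bites"] += 1                 # take another "bite" out of the fruit
    return el

def on_hand_wave(el):
    el["fruit"] = "pear" if el["fruit"] == "apple" else "apple"
    el["bites"] = 0                  # a freshly switched fruit is complete
    return el

def on_show_fingers(el, count):
    el["count"] = count              # extended fingers -> number of fruits
    return el

BEHAVIOR_MAP = {
    "mouth_open_close": on_mouth_open_close,
    "hand_wave": on_hand_wave,
    "show_fingers": on_show_fingers,
}

def apply_behavior(element, behavior, *args):
    """Apply the preset mapping; behaviors outside it leave the element as-is."""
    handler = BEHAVIOR_MAP.get(behavior)
    return handler(element, *args) if handler else element
```

Keeping the mapping in a table makes the trigger conditions and their effects easy to extend without touching the recognition code.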
According to the screen interaction method provided by the technical solutions of the embodiments of the present disclosure, at least one visual element in a screen saver picture of a vehicle-mounted display device is controlled contactlessly through the behavior information of a target object, so that the visual effect of the screen saver picture changes and the target object can complete human-machine interaction with the screen in a more personalized way. Moreover, when the display device is in a standby state and its screen displays the screen saver picture, the target object can interact with the screen saver casually, rather than deliberately unlocking the screen and tapping particular interface functions. This gives the target object a frictionless experience, provides a channel for emotional communication, and improves the enjoyment and interest of riding and traveling.
Fig. 2 provides a screen interaction method according to another embodiment of the present disclosure, which may include the following processes, wherein the same steps as those of the flowchart of fig. 1 will not be described in detail.
In step 202, an image of a target inside the vehicle cabin is acquired.
The target image is acquired by a camera installed on the vehicle; it is an image containing information about the environment and occupants inside the vehicle cabin, and includes at least one target object inside the vehicle cabin.
In step 204, behavior information of a target object inside the vehicle cabin is identified based on the target image, and position information of the target object inside the vehicle cabin is identified based on the target image.
In this step, the position information of the target object inside the vehicle cabin may be determined by performing target object detection on the target image. The position of the target object may be localized, for example, by face detection, and the seating position of the target object is further determined based on the position of the target object face region in the image. After the target objects in the target image are detected, the behavior information of each target object can be obtained when the behavior of the detected target object is identified.
For example, in a four-seat private car, the position information of the target object may be seat information of the target object, such as a main driver seat, a sub-driver seat, a rear right seat, and a rear left seat. In a bus, the position information of the target object may include seat information and station information of the target object.
For example, the position information of the target object in the vehicle cabin may be obtained by comparing the surroundings of the target object in the target image with an image of the vehicle cabin when no person is present.
For another example, a neural network for detecting a target object and a neural network for recognizing a behavior of the target object may be trained in advance, and a target image may be input to the neural network for detecting a target object to obtain position information of each target object in the cabin. And then, inputting the image of each target object into a neural network for identifying the behavior of the target object to obtain the behavior information of the target object.
Alternatively, an end-to-end neural network for detecting target objects and identifying target object behavior may be trained. And inputting the target image into the neural network, so that the detection result and the behavior recognition result of the target object can be obtained. In addition, when the position information of the target object in the vehicle cabin is obtained, the position and the shooting angle of a camera for collecting the target image can be combined for judgment. For example, when the camera is located at the ceiling in the vehicle and performs image acquisition toward the rear row of the vehicle, the target object in the obtained target image is located at the rear row of the vehicle, and it may be further determined whether the target object is located at the right position of the rear row, the middle position of the rear row, or the left position of the rear row by processing the target image.
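The two-stage pipeline described above (detect the target objects, then recognize each one's behavior) can be sketched as follows. The detector and classifier are stubs standing in for trained neural networks, and the seat labels and function names are assumptions for illustration only.

```python
# Illustrative two-stage pipeline: a detector proposes target-object boxes,
# then a behavior classifier runs on each cropped person region.
def detect_objects(image):
    # Stub detector: returns (bounding box, seat label) pairs.
    # A real system would run a trained detection network here.
    return [((0, 0, 50, 50), "rear_left"), ((60, 0, 110, 50), "rear_right")]

def classify_behavior(crop):
    # Stub classifier: returns a behavior label for one person crop.
    return "mouth_open_close"

def recognize(image):
    results = []
    for box, seat in detect_objects(image):
        x1, y1, x2, y2 = box
        crop = [row[x1:x2] for row in image[y1:y2]]  # cut out the person region
        results.append({"seat": seat, "behavior": classify_behavior(crop)})
    return results
```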
In step 206, an available display device matching the target object is determined as the target display device from a plurality of available display devices at different positions inside the vehicle cabin, based on the position information.
In the case where a plurality of available display devices at different positions are included in the vehicle cabin interior, an available display device matching a target object may be selected from among the available display devices according to the position information of the target object as a target display device to be subjected to display control. The target display device matched with a certain target object may be one or more.
When one matched target display device is to be determined for the target object, the matching rule may be that the target display device is the one closest to the target object, or that the target display device lies at the optimal viewing angle of the target object; alternatively, a mapping relationship between position information and available display devices may be preset, and the one target display device matched with the target object is determined according to the position information and the mapping relationship.

When a plurality of matched target display devices need to be determined for the target object, the matching rule may be that the distance between a target display device and the target object is within a set range, or that the target display device is located within the viewing angle of the target object; alternatively, a mapping relationship between position information and available display devices may be preset, and the plurality of target display devices matched with the target object are determined according to the position information and the mapping relationship.
In one example, an available display device closest to the position of the target object may be determined as the target display device from among available display devices at a plurality of different positions inside the vehicle cabin.
For example, when the position information of the target object indicates that the target object is located in the rear row of the vehicle, the available display device closest to the target object is the one on the back of the seat in front of it, and that device is determined as the target display device matched with the target object. For another example, when the position information of the target object indicates that the target object is located in the front passenger seat, the display device closest to the target object is the one on the window in front of it, and the next closest is the available display device at its side door; if the display device on the front window is unavailable due to a failure, the available display device at the side door is determined as the target display device matched with the target object.
In yet another example, an available display device in the same row as the position of the target object may be determined as the target display device from among available display devices at a plurality of different positions inside the vehicle cabin.
For example, there are a plurality of rows of seats in the vehicle, each row being provided with a display device, and when the position information of the target object indicates that the target object is located in the third row, the available display device in the third row may be taken as the target display device.
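The closest-available-display matching rule can be sketched with a simple nearest-neighbor selection. The cabin coordinates and device names below are made-up example values, not part of the embodiment.

```python
import math

def pick_target_display(subject_pos, displays):
    """displays: {name: ((x, y), available)}; returns the nearest available name."""
    usable = {n: p for n, (p, ok) in displays.items() if ok}  # drop faulty devices
    return min(usable, key=lambda n: math.dist(subject_pos, usable[n]))

displays = {
    "front_window": ((0.5, 0.0), False),   # unavailable due to a failure
    "side_door":    ((1.0, 0.3), True),
    "seat_back":    ((0.5, 1.2), True),
}
print(pick_target_display((0.8, 0.2), displays))  # side_door
```

This reproduces the fallback in the example above: the front-window device is closest but faulty, so the side-door device is matched instead.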
In step 208, in a case that a screen displayed by a target display device of the plurality of available display devices at different positions inside the cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen displayed by the target display device according to the behavior information of the target object.
For example, when there is one target display device matching the target object and the screen displayed by the target display device is a screen saver screen, the display of at least one visual element in the screen saver screen of the target display device may be controlled according to the behavior information of the target object. When there are a plurality of target display devices matching the target object and the screen displayed by the plurality of target display devices is a screen saver screen, different display controls or the same display control may be performed on the visual elements in the screen of different target display devices according to the behavior information of the target object.
According to the screen interaction method provided by the technical solution of the embodiment of the present disclosure, the target display device matched with the target object is determined according to the position information of the target object in the vehicle cabin, and at least one visual element in the screen saver screen of the target display device is controlled contactlessly through the behavior of the target object. This changes the visual effect of the screen saver screen, makes it convenient for the target object to view the visual effect changes brought about by the behavior information, and lets the target object complete human-machine interaction with the screen in a more personalized way. Moreover, when the display device is in a standby state and displays the screen saver screen, the target object can interact with the screen saver screen of the vehicle-mounted display device casually, instead of deliberately unlocking the screen and clicking certain interface functions; this provides the target object with a friction-free experience, offers a channel for emotional communication, and improves the enjoyment, interest and comfort of riding and traveling.
Fig. 3 provides a screen interaction method according to another embodiment of the present disclosure, which may include the following processes, wherein the same steps as those of the flowchart of fig. 1 will not be described in detail.
In step 302, an image of a target inside a vehicle cabin is acquired.
The target image is acquired by a camera mounted on the vehicle, and the target image comprises at least one target object in the vehicle cabin of the vehicle.
In step 304, behavior information of the target object is identified based on the target image.
In step 306, position calibration information of a camera used for acquiring the target image is acquired.
The position calibration information of the camera may include the number of the camera, or information such as the position and angle of the camera. For example, when four cameras are installed inside the vehicle cabin, they may be numbered camera 1 through camera 4, where camera 1 is located in front of the driver's seat and captures images toward the driver's seat, camera 2 is located in front of the front passenger seat and captures images toward it, camera 3 is located on the back of the driver's seat and captures images toward the rear-left seat, and camera 4 is located on the back of the front passenger seat and captures images toward the rear-right seat.
In step 308, according to the position calibration information of the camera, an available display device corresponding to the position of the camera is selected from a plurality of available display devices at predetermined positions in the vehicle cabin as a target display device.
For example, when the position calibration information of the camera is the number of the camera, the available display device corresponding to the number of the camera may be selected as the target display device matched with the target object from the available display devices at the plurality of predetermined positions in the vehicle cabin based on the mapping relationship between the number of the camera and the available display devices at the plurality of predetermined positions. Illustratively, the camera numbered 1 may be mapped with the available display device numbered 1 in advance, or the camera numbered 1 may be mapped with the available display device numbered 2 and the available display device numbered 4 in advance.
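The number-based mapping can be sketched as a lookup table. The camera and display numbers below follow the illustrative mapping in the text (camera 1 to display 1, camera 1 alternatively to displays 2 and 4), but the names and the availability filter are assumptions.

```python
# Hypothetical preset mapping from camera numbers to the display device(s)
# they serve; one camera may map to one or several displays.
CAMERA_TO_DISPLAYS = {
    1: ["display_1"],
    2: ["display_2", "display_4"],
}

def target_displays(camera_id, available):
    # Keep only the mapped displays that are currently available.
    return [d for d in CAMERA_TO_DISPLAYS.get(camera_id, []) if d in available]
```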
For example, when the position calibration information of the camera is information such as the position and angle of the camera, the target display device corresponding to the position of the camera may be selected from a plurality of available display devices at different positions in the cabin based on the information such as the position and angle of the camera. For example, for a camera located on the seat back of the driving seat and facing the left position of the rear row for image acquisition, an available display device near the camera may be selected as the target display device, for example, an available display device also located on the seat back of the driving seat may be selected as the target display device, so as to facilitate the target object of the left position of the rear row to show behavior information and view a screen saver picture on the screen.
In step 310, in the case that a screen displayed by a target display device of the plurality of available display devices at different positions inside the cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object.
According to the screen interaction method provided by the technical solution of this embodiment, the target display device corresponding to the target object is determined from a plurality of available display devices according to the position calibration information of the camera that collects the target image containing the target object, and at least one visual element in the screen saver screen of that display device is controlled contactlessly through the behavior of the target object. This changes the visual effect of the screen saver screen, makes it convenient for the target object to view the visual effect changes brought about by the behavior information, and lets the target object complete human-machine interaction with the screen in a more personalized way. Moreover, when the display device is in a standby state and displays the screen saver screen, the target object can interact with the screen saver screen of the vehicle-mounted display device casually, instead of deliberately unlocking the screen and clicking certain interface functions; this provides the target object with a friction-free experience, offers a channel for emotional communication, and improves the enjoyment, interest and comfort of riding and traveling.
Fig. 4 is another screen interaction method provided in at least one embodiment of the present disclosure, where a screen saver screen in this embodiment includes a background screen, a dynamic object visual element superimposed on the background screen, and a display effect visual element superimposed on the background screen. The visual element may be any one or more of a background screen, a dynamic object visual element, and a display effect visual element, and may also be an element related to screen display such as brightness, color, contrast, and size of a screen font of a screen saver screen.
As an example, when the in-vehicle display device in the present embodiment is in standby, the presented initial screen saver screen can be obtained in the following way. Optionally, an image or video is used as the background picture in the screen saver screen; the background picture fills the entire interface of the screen of the display device and is played repeatedly, for example, as shown in fig. 5, a solid-color image used as the background. Optionally, a 2D or 3D dynamic object visual element is superimposed on the background picture as an intermediate layer; the dynamic object visual element may be a person, an object, or the like, for example a leaf, a flower, a butterfly, a lady wearing a skirt, or an apple as shown in fig. 6. Optionally, display effect visual elements are superimposed on the background picture; these may be visual elements such as a naked-eye 3D effect, a glare effect, a light-emitting effect, and a 3D particle effect, where the 3D particle visual element is shown in fig. 7. When rendering the screen, AR (Augmented Reality) techniques may be used. Through the above processing, a rich visual effect can be rendered on the screen saver screen of the standby screen.
The method of the present embodiment is explained below, wherein the steps identical to the flow of the above embodiment will not be described in detail.
In step 402, an image of a target inside a vehicle cabin is acquired.
In step 404, behavior information of the target object is identified based on the target image.
The behavior information in this embodiment may be behavior information in which the target object is not in contact with the touch screen of the display device. Wherein the behavior information includes at least one of: limb movement information, face movement information. The limb movement information includes at least one of: head movement information, hand movement information, and torso movement information, such as gestures, body recline, head shake, etc.; the facial motion information includes at least one of: the expression information, mouth movement information, and eye movement information may be, for example, a surprise expression, sneezing, eyeball rolling, or the like.
In step 406, position information of the target object is determined based on the target image and a position of a camera that acquired the target image.
For example, if the target object is located in the middle area of the target image and the camera that captured the target image faces the front passenger seat, it can be determined that the target object is located in the front passenger seat.
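Combining the face position in the image with the camera's facing direction, as in the examples here and in the rear-row case earlier, might look like the following toy function. The thresholds, labels, and the three-way split of the image are assumptions for illustration only.

```python
# Toy seat inference: a camera that covers a single seat decides directly;
# a camera covering a whole row splits the image into thirds.
def seat_from_image(face_center_x, image_width, camera_facing):
    if camera_facing == "copilot":
        return "copilot"                    # this camera covers one seat only
    third = image_width / 3
    if face_center_x < third:
        return camera_facing + "_left"
    if face_center_x < 2 * third:
        return camera_facing + "_middle"
    return camera_facing + "_right"
```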
In step 408, an available display device matching the target object is determined as a target display device from among available display devices at a plurality of different positions inside the vehicle cabin, based on the position information.
For example, the available display apparatus closest to the target object may be selected as the target display apparatus based on the position information.
Continuing with the above example, when the target object B is located in the front passenger seat, the available display device closest to the target object is the display device in front of that seat, which is determined to be the target display device matching the target object B.
In step 410, in the case that the screen displayed by the target display device is a screen saver screen, controlling at least one visual element in the screen of the target display device according to the behavior information of the target object.
In this step, the presentation form of the at least one visual element may be transformed according to the behavior information of the target object.
Specifically, the presentation form of at least one visual element may be transformed to a corresponding contracted or expanded state according to the action of contraction or expansion of the preset body part of the target object represented by the behavior information of the target object. The preset body parts can be body parts such as mouth, eyes, double arms, hands and the like. For example, when the hand of the target object is in a fist making motion, the opened flowers in the screen saver screen are slowly closed to be closed, and when the hand of the target object is in an opening motion, the closed flowers in the screen saver screen are slowly opened to be opened.
First state information representing the current state of at least one visual element in the screen saver screen may also be read; second state information of the at least one visual element after display control is then determined according to the behavior information of the target object and the first state information; and the at least one visual element is controlled to present the target state characterized by the second state information. For example, when the visual element is the brightness of the screen, the first state information is the current brightness value. The brightness can then be adjusted through the behavior information of the target object, such as a lifting motion of the hand: the current brightness value is read, the displacement of the lifted hand is identified, and the brightness value is increased according to this displacement to obtain a new brightness value; that is, the second state information after display control adjusts the brightness of the screen saver screen to the new value.
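The read-first-state, compute-second-state, apply flow for the brightness example can be sketched as follows. The gain constant and the 0-255 clamping range are assumptions; the text only says the brightness is increased according to the hand's displacement.

```python
# Sketch: current brightness (first state) plus the hand's vertical
# displacement yields the new brightness (second state).
def next_brightness(current, hand_displacement, gain=100.0):
    """Map an upward hand displacement (assumed in metres) to a brightness change."""
    new = current + gain * hand_displacement
    return max(0, min(255, round(new)))  # clamp to an assumed displayable range
```

For instance, lifting the hand by 0.5 raises a brightness of 120 to 170, while a result outside the range is clamped.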
In this step, according to the behavior information of the target object, the display position of at least one visual element in the screen can be further changed.
Specifically, the display position of the at least one visual element in the screen saver screen can be moved in the corresponding movement direction according to the movement direction of the preset body part of the target object represented by the behavior information of the target object. The preset body part may be any body part of the target subject. For example, when the eyes of the target object look to the left, the display position of the visual element in the screen saver screen can be moved to the left; when the target object's eyes look to the right, the display position of the visual element in the screen saver screen can be moved to the right.
And the display position of at least one visual element in the screen saver picture can be moved along the corresponding movement track according to the movement track of the preset body part of the target object represented by the behavior information of the target object. For example, when the behavior information of the target object is to move a finger, a feather in the screen saver screen may move along the movement track of the finger.
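The feather-follows-finger behavior can be sketched as a smoothed trajectory follower. The smoothing factor is an assumption, since the text only says the element's display position moves along the body part's movement track.

```python
# Illustrative follower: the element's display position tracks the finger's
# trajectory point by point with exponential smoothing.
def follow(element_pos, finger_track, alpha=0.5):
    """Move element_pos toward each successive finger point; returns the path."""
    x, y = element_pos
    path = []
    for fx, fy in finger_track:
        x += alpha * (fx - x)   # step a fraction of the way toward the finger
        y += alpha * (fy - y)
        path.append((round(x, 3), round(y, 3)))
    return path
```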
In this step, at least one dynamic object visual element or display effect visual element in the screen saver screen can be controlled according to the behavior information of the target object. In one example, the screen saver screen is a bush, the controllable dynamic object visual elements include flowers and butterflies, and the display effect visual element is a streamer effect. The behavior information may be concrete behavior action information or abstract behavior action information. When the emotion of the target object is sensed to be happy, the flowers are controlled to change from a closed presentation form to an open one; when behavior information of the target object (such as a "smiling" expression) continues to indicate a happy state, more and more flowers gradually open. Once the flowers are open, hand actions of the target object can control the butterfly to change its display position: for example, when the palm swings upward, the butterfly in the screen saver screen correspondingly flies upward. It may also be arranged that the streamer effect appears when the upper body of the target object leans forward and disappears when the upper body leans backward.
For example, based on behavior information including a happy smile exhibited by the target object, the flowers in the screen saver screen of the target display device in front of the front passenger seat may be controlled to turn from closed to fully open. The target object B can then face the palm toward the screen while the camera at the interior rear-view mirror continuously collects target images containing the waving palm. These target images are processed through the above steps: the hand position of target object B is perceived by a perception algorithm, and the butterfly in the screen saver screen is controlled according to the movement of the hand position, presenting a visual effect in which the butterfly keeps dancing along with the moving hand of target object B.
In one embodiment, the method further comprises: collecting sound information of the target object; and controlling at least one visual element in the screen saver picture according to the sound information of the target object. The control can be performed according to the behavior information and the sound information respectively, or the control can be performed by combining the behavior information and the sound information. For example, in addition to the control using the behavior information obtained by performing the recognition processing on the target image, a control instruction of the target object may be obtained by analyzing the meaning of the sound information, and the visual element in the screen saver screen may be controlled according to the control instruction of the target object. For another example, the emotion of the target object can be more comprehensively perceived through the combination of the sound information and the behavior information, and the visual elements in the screen saver screen can be controlled according to the emotion of the target object.
According to the screen interaction method provided by this embodiment, the action of the target object is sensed through the camera, so that the target object can interact with the screen saver screen of the vehicle-mounted display device casually, without clicking the touch screen, instead of deliberately unlocking the screen and clicking certain interface functions. This provides a friction-free experience for the target object, offers a channel for emotional communication, makes human-vehicle interaction more contextual, reconstructs the relationship and emotional connection between person and vehicle, and improves the interest and enjoyment of riding.
As shown in fig. 8, fig. 8 is a block diagram of a screen interaction device according to at least one embodiment of the present disclosure, where the device includes:
and the image acquisition module 81 is used for acquiring a target image in the vehicle cabin.
And the image recognition module 82 is used for recognizing the behavior information of the target object in the vehicle cabin based on the target image.
And the display control module 83 is configured to, when a screen displayed by the display device in the cabin is a screen saver screen, perform display control on at least one visual element in the screen saver screen according to the behavior information of the target object.
In an example, the display control module 83 is specifically configured to: and under the condition that the pictures displayed by the target display device in the plurality of available display devices at different positions in the vehicle cabin are screen protection pictures, performing display control on at least one visual element in the screen protection pictures displayed by the target display device according to the behavior information of the target object.
In one example, the image recognition module 82 is further configured to: identifying position information of the target object inside the vehicle cabin based on the target image; and determining an available display device matched with the target object from a plurality of available display devices at different positions in the vehicle cabin as a target display device according to the position information.
In one example, the image recognition module 82, when configured to determine, from the available display devices at the plurality of different positions inside the cabin, an available display device that matches the target object as the target display device according to the position information, is specifically configured to: determining an available display device closest to the position of the target object from among available display devices at a plurality of different positions inside the vehicle cabin as a target display device.
In an example, the display control module 83 is specifically configured to: and transforming the presentation form of at least one visual element in the screen saver picture according to the behavior information of the target object.
In an example, when the display control module 83 is configured to transform the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object, specifically, the display control module is configured to: and transforming the presentation form of at least one visual element to a corresponding contracted or expanded state according to the action of contraction or expansion of the preset body part of the target object represented by the behavior information of the target object.
In an example, when the display control module 83 is configured to transform the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object, specifically, the display control module is configured to: reading first state information representing the current state of at least one visual element in the screen saver picture; determining second state information of the at least one visual element after display control according to the behavior information of the target object and the first state information; controlling the at least one visual element to present a target state characterized by the second state information.
In an example, the display control module 83 is specifically configured to: and according to the behavior information of the target object, converting the display position of at least one visual element in the screen saver picture.
In an example, the display control module 83, when configured to transform a display position of at least one visual element in the screen saver screen according to the behavior information of the target object, is specifically configured to: and moving the display position of the at least one visual element in the screen saver picture to a corresponding moving direction according to the moving direction of the preset body part of the target object represented by the behavior information of the target object.
In one example, the behavior information includes at least one of: limb action information, face action information; the limb motion information comprises at least one of: head action information, hand action information, and torso action information; the facial motion information includes at least one of: expression information, mouth motion information, and eye motion information.
In one example, the screen saver screen includes a background screen, a dynamic object visual element superimposed on the background screen, and a display effect visual element superimposed on the background screen; the display control module 83 is specifically configured to: and performing display control on at least one dynamic object visual element or display effect visual element in the screen saver picture according to the behavior information of the target object.
As shown in fig. 9, the apparatus further includes a position determination module 84.
In one example, the position determination module 84 is configured to: acquiring position calibration information of a camera for acquiring the target image; and selecting an available display device corresponding to the position of the camera from a plurality of available display devices at preset positions in the vehicle cabin as a target display device according to the position calibration information of the camera.
As shown in fig. 10, the apparatus further includes a sound control module 85.
In one example, the voice control module 85 is configured to: collecting sound information of the target object; and controlling at least one visual element in the screen saver picture according to the sound information of the target object.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 11, where the electronic device includes a memory 11 and a processor 12, the memory 11 is configured to store computer instructions executable on the processor, and the processor 12 is configured to implement the screen interaction method according to any embodiment of the present disclosure when executing the computer instructions.
The embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the screen interaction method according to any embodiment of the present disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method for screen interaction according to any embodiment of the present disclosure is implemented.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this specification, which those of ordinary skill in the art can understand and implement without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (16)

1. A method for screen interaction, the method comprising:
acquiring a target image in a vehicle cabin;
identifying behavior information of a target object inside the vehicle cabin based on the target image;
and in a case where a screen displayed by a display device inside the vehicle cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object.
2. The method according to claim 1, wherein in a case where the screen displayed by the display device inside the vehicle cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object includes:
in a case where a screen displayed by a target display device among available display devices at a plurality of different positions inside the vehicle cabin is a screen saver screen, performing display control on at least one visual element in the screen saver screen displayed by the target display device according to the behavior information of the target object.
3. The method of claim 2, wherein before the performing display control of at least one visual element in a screen saver screen displayed by the target display device according to the behavior information of the target object, the method further comprises:
identifying position information of the target object inside the vehicle cabin based on the target image;
and determining, according to the position information, an available display device matching the target object from among the available display devices at the plurality of different positions inside the vehicle cabin as the target display device.
4. The method according to claim 3, wherein the determining an available display device matching the target object from among available display devices at a plurality of different positions inside the vehicle cabin as a target display device according to the position information comprises:
determining an available display device closest to the position of the target object from among the available display devices at the plurality of different positions inside the vehicle cabin as the target display device.
5. The method of claim 2, wherein before the performing display control of at least one visual element in a screen saver screen displayed by the target display device according to the behavior information of the target object, the method further comprises:
acquiring position calibration information of a camera for acquiring the target image;
and selecting, according to the position calibration information of the camera, an available display device corresponding to the position of the camera from among available display devices at a plurality of preset positions inside the vehicle cabin as the target display device.
6. The method according to claim 1, wherein the performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object comprises:
transforming a presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object.
7. The method according to claim 6, wherein transforming the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object comprises:
transforming the presentation form of the at least one visual element into a correspondingly contracted or expanded state according to a contraction or expansion action of a preset body part of the target object represented by the behavior information of the target object.
8. The method according to claim 6, wherein transforming the presentation form of at least one visual element in the screen saver screen according to the behavior information of the target object comprises:
reading first state information representing a current state of at least one visual element in the screen saver screen;
determining second state information of the at least one visual element after display control according to the behavior information of the target object and the first state information;
and controlling the at least one visual element to present a target state characterized by the second state information.
9. The method according to any one of claims 1 to 8, wherein the performing display control on at least one visual element in the screen saver screen according to the behavior information of the target object comprises:
transforming a display position of at least one visual element in the screen saver screen according to the behavior information of the target object.
10. The method of claim 9, wherein transforming a display position of at least one visual element in the screen saver screen according to the behavior information of the target object comprises:
moving the display position of the at least one visual element in the screen saver screen in a direction corresponding to a movement direction of a preset body part of the target object represented by the behavior information of the target object.
11. The method according to any one of claims 1 to 10,
the behavior information includes at least one of: limb action information and facial action information;
the limb action information includes at least one of: head action information, hand action information, and torso action information;
the facial action information includes at least one of: expression information, mouth action information, and eye action information.
12. The method according to any one of claims 1 to 11, wherein the screen saver screen comprises a background screen, a dynamic object visual element superimposed on the background screen, and a display effect visual element superimposed on the background screen;
the display control of at least one visual element in the screen saver picture according to the behavior information of the target object comprises:
and performing display control on at least one dynamic object visual element or display effect visual element in the screen saver picture according to the behavior information of the target object.
13. The method according to any one of claims 1 to 12, further comprising:
collecting sound information of the target object;
and controlling at least one visual element in the screen saver screen according to the sound information of the target object.
14. A screen interaction device, the device comprising:
an image acquisition module configured to acquire a target image inside a vehicle cabin;
an image recognition module configured to recognize behavior information of a target object inside the vehicle cabin based on the target image;
and a display control module configured to, in a case where a screen displayed by a display device inside the vehicle cabin is a screen saver screen, perform display control on at least one visual element in the screen saver screen according to the behavior information of the target object.
15. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 13 when executing the computer instructions.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 13.
CN202210322067.1A 2022-03-29 2022-03-29 Screen interaction method, device, equipment and medium Pending CN114860119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210322067.1A CN114860119A (en) 2022-03-29 2022-03-29 Screen interaction method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210322067.1A CN114860119A (en) 2022-03-29 2022-03-29 Screen interaction method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114860119A true CN114860119A (en) 2022-08-05

Family

ID=82629694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210322067.1A Pending CN114860119A (en) 2022-03-29 2022-03-29 Screen interaction method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114860119A (en)

Similar Documents

Publication Publication Date Title
US11726577B2 (en) Systems and methods for triggering actions based on touch-free gesture detection
US20210295025A1 (en) Classifying facial expressions using eye-tracking cameras
US20220189093A1 (en) Interaction based on in-vehicle digital persons
US11861873B2 (en) Event camera-based gaze tracking using neural networks
JP7011578B2 (en) Methods and systems for monitoring driving behavior
KR20210011416A (en) Shared environment for vehicle occupants and remote users
US20220203996A1 (en) Systems and methods to limit operating a mobile phone while driving
CN112034977B (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
TW202036465A (en) Method, device and electronic equipment for monitoring driver's attention
JP2021517312A (en) Motion recognition, driving motion analysis methods and devices, and electronic devices
JP2022530605A (en) Child state detection method and device, electronic device, storage medium
CN103488299A (en) Intelligent terminal man-machine interaction method fusing human face and gestures
US11645823B2 (en) Neutral avatars
Katrolia et al. Ticam: A time-of-flight in-car cabin monitoring dataset
KR20180118669A (en) Intelligent chat based on digital communication network
CN114860119A (en) Screen interaction method, device, equipment and medium
US20230347903A1 (en) Sensor-based in-vehicle dynamic driver gaze tracking
CN112297842A (en) Autonomous vehicle with multiple display modes
WO2021196751A1 (en) Digital human-based vehicle cabin interaction method, apparatus and vehicle
CN111736700A (en) Digital person-based vehicle cabin interaction method and device and vehicle
CN113874238A (en) Display system for a motor vehicle
US20230215070A1 (en) Facial activity detection for virtual reality systems and methods
KR20200053163A (en) Apparatus and method for providing virtual reality contents without glasses
US11948261B2 (en) Populating a graphical environment
WO2023133149A1 (en) Facial activity detection for virtual reality systems and methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination