CN114840092A - Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment


Info

Publication number
CN114840092A
Authority
CN
China
Prior art keywords
vehicle
hand
movable element
screen saver
screen
Prior art date
Legal status
Pending
Application number
CN202210613139.8A
Other languages
Chinese (zh)
Inventor
隋玉坤
毛宁元
许亮
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202210613139.8A
Publication of CN114840092A
Priority to PCT/CN2023/091195 (WO2023231664A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm
    • G06V 40/11: Hand-related biometrics; Hand pose recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for interacting with vehicle-mounted display equipment, wherein the method comprises the following steps: acquiring a hand image of an in-vehicle person corresponding to the vehicle-mounted display equipment under the condition that the vehicle-mounted display equipment works in a screen saver mode, wherein the vehicle-mounted display equipment displays a screen saver picture containing movable elements in the screen saver mode; determining, based on first position information of the hand in the hand image, a target movable element matched with the position of the hand among the movable elements of the screen saver picture; determining a dynamic interaction effect of the target movable element; and displaying the dynamic interaction effect of the target movable element in the screen saver picture.

Description

Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for interacting with vehicle-mounted display equipment.
Background
With the development of intelligent automobile cabins, the hardware configuration in the cabin has improved, and the display screen of at least one vehicle-mounted display device in a vehicle can be operated by the owner or a passenger; when no one is operating it, the vehicle-mounted display device enters a screen saver mode. In the related art, the screen saver mode includes static wallpaper, dynamic wallpaper, video, and the like, so the screen saver has a single presentation mode.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for interacting with an in-vehicle display device.
In a first aspect, an embodiment of the present application provides a method for interacting with an in-vehicle display device, where the method includes: under the condition that the vehicle-mounted display equipment works in a screen saver mode, acquiring a hand image of an in-vehicle person corresponding to the vehicle-mounted display equipment, wherein the vehicle-mounted display equipment displays a screen saver picture containing movable elements in the screen saver mode; determining a target movable element matched with the position of the hand in the movable elements of the screen saver picture based on first position information of the hand in the hand image; determining a dynamic interaction effect of the target movable element; and displaying the dynamic interaction effect of the target movable element in the screen saver picture.
In a second aspect, an embodiment of the present application provides an apparatus for interacting with an in-vehicle display device, where the apparatus includes: a first acquisition module, configured to acquire a hand image of an in-vehicle person corresponding to the vehicle-mounted display device when the vehicle-mounted display device works in a screen saver mode, where the vehicle-mounted display device displays a screen saver picture containing movable elements in the screen saver mode; a first determining module, configured to determine, based on first position information of a hand in the hand image, a target movable element that matches a position of the hand among the movable elements of the screen saver picture; a second determining module, configured to determine a dynamic interaction effect of the target movable element; and a display module, configured to display the dynamic interaction effect of the target movable element in the screen saver picture.
In a third aspect, an embodiment of the present application provides an on-vehicle display device, where the device includes: a memory storing a computer program operable on a processor and a processor implementing the steps of the method when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the method.
In the embodiment of the application, firstly, when the vehicle-mounted display device works in a screen saver mode, a hand image of an in-vehicle person corresponding to the vehicle-mounted display device is acquired; then, based on the first position information of the hand in the hand image, a target movable element matched with the position of the hand is determined among the movable elements of the screen saver picture, so that a dynamic interaction effect between the hand and the target movable element can subsequently be formed according to the position of the hand; finally, the dynamic interaction effect of the target movable element is determined and displayed in the screen saver picture. In this way, the target movable element matched with the position of the hand is determined on the basis of the screen saver picture of the vehicle-mounted display device and its dynamic interaction effect is determined, which on one hand enriches the screen saver picture of the vehicle-mounted display device and on the other hand increases the interest of the interaction between the in-vehicle person and the screen saver picture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the technical aspects of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1A is a schematic architecture diagram of an execution system of a method for interacting with an in-vehicle display device according to an embodiment of the present application;
fig. 1B is a schematic implementation flowchart of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
fig. 2 is a schematic implementation flowchart of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating an implementation of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating an implementation of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating an implementation of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an implementation interface of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another implementation interface of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram illustrating an apparatus for interacting with a vehicle-mounted display device according to an embodiment of the present disclosure;
fig. 9 is a hardware entity diagram of an in-vehicle display device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is further elaborated below with reference to the drawings and the embodiments. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
It should be noted that the terms "first/second/third" referred to in the embodiments of the present application are only used for distinguishing similar objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
(1) Vehicle-mounted screen: a new information carrier into which information can be input by a computer, by remote control, or in the form of a mobile phone short message. The diversification of information input for the vehicle-mounted screen brings great convenience to users.
(2) Gesture recognition: a method for recognizing human gestures through algorithms, allowing a user to control or interact with a device using simple gestures and enabling a computer to understand human behavior. The core technologies of gesture recognition include gesture segmentation, gesture analysis and gesture recognition.
The embodiment of the application provides an interaction method, which can be executed by a vehicle-mounted display device, wherein the vehicle-mounted display device can be implemented as various terminals with display screens, such as a vehicle event data recorder, a vehicle-mounted computer, a vehicle audio system, a navigation system, a vehicle information system, a vehicle-mounted household appliance and the like. In some embodiments, the interaction method provided by the embodiment of the present application may be applied to a client application platform of a vehicle-mounted display device. The client application platform may be a network (Web) application platform or an applet. In some embodiments, the interaction method provided in the embodiments of the present application may also be applied to an application program of a vehicle-mounted display device.
Fig. 1A is a schematic diagram of an alternative architecture of an execution system 10 of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application, as shown in fig. 1A, a vehicle-mounted display device 300 is connected to an image capturing apparatus 100 through a network 200. The network 200 may be a wide area network or a local area network, or a combination of both. The in-vehicle display device 300 and the image capture apparatus 100 may be physically separate or integrated. The image capture apparatus 100 may store the acquired hand image to the in-vehicle display device 300 through the network 200. The vehicle-mounted display device 300 acquires a hand image of an in-vehicle person corresponding to the vehicle-mounted display device when the vehicle-mounted display device works in a screen saver mode; determining a target movable element matched with the position of the hand in the movable elements of the screen saver picture based on first position information of the hand in the hand image; determining a dynamic interaction effect of the target movable element; displaying the dynamic interaction effect of the target movable element in the screen saver picture; therefore, according to the position of the hand, the target movable element matched with the position of the hand is determined, the dynamic interaction effect of the target movable element is determined, interaction between a screen saver picture and people in the vehicle is formed, and the screen saver picture of the vehicle-mounted display device is enriched.
In some embodiments, the image capturing device 100 is an electronic device capable of capturing images and sharing captured data, for example, the image capturing device may be a camera above a rear view mirror in a vehicle cabin, a camera mounted on an instrument panel, or one or more cameras mounted at any position in the vehicle cabin according to requirements, and further, the image capturing device may also be a video camera, a mobile phone with a function of capturing or recording video, a digital camera, or the like.
In the following, the method for interacting with the vehicle-mounted display device provided by the embodiment of the present application will be described in conjunction with the exemplary application and implementation of the vehicle-mounted display device provided by the embodiment of the present application.
Fig. 1B is a schematic flow chart illustrating an implementation process of a method for interacting with a vehicle-mounted display device according to an embodiment of the present application, where as shown in fig. 1B, the method includes the following steps:
step S101, acquiring a hand image of an in-vehicle person corresponding to an in-vehicle display device under the condition that the in-vehicle display device works in a screen saver mode;
here, the in-vehicle display apparatus displays a screen saver screen including the movable element in a screen saver mode; the movable elements refer to elements which can move through triggering in the screen saver picture in a certain interactive scene, and the movable elements in the screen saver picture are different in different interactive scenes, for example, the movable elements can be different types of animals in a forest park under the condition that the interactive scene is the forest park; in the case of the interactive scene being the subsea world, the mobile elements may be fish, algae, etc.
In some embodiments, in the case that the dynamic interaction effect is not triggered, the screen saver screen and the movable elements contained in the screen saver screen are statically displayed; and under the condition that the dynamic interaction effect is triggered, the movable element is changed from a static state to a dynamic state, and the dynamic interaction effect of the movable element and the hand of the person in the vehicle is displayed in the screen saver picture.
In some embodiments, the vehicle may be any type of vehicle, such as a car, electric car, truck, bus, off-road vehicle, electric bicycle, tricycle, or any other vehicle capable of mounting an on-board display device. The vehicle-mounted display device may be a display screen of a vehicle-mounted device installed in a vehicle, where the vehicle-mounted display device may be any type of terminal having a display screen, such as a driving recorder, a vehicle-mounted computer, a vehicle-mounted sound system, a navigation system, and the like. The vehicle may comprise a plurality of vehicle-mounted display screens, where the position of each vehicle-mounted display screen is different and the in-vehicle person corresponding to each vehicle-mounted display screen is different; in addition, the in-vehicle person and the vehicle-mounted display screen are within a certain distance of each other, so that the vehicle-mounted display screen and the in-vehicle person can conveniently interact. For example, if the vehicle-mounted display screen is located in front of the front passenger seat, the in-vehicle person corresponding to the vehicle-mounted display screen is the front passenger, and thus the screen saver screen on the vehicle-mounted display screen interacts with the hand of the front passenger. Alternatively, if the vehicle-mounted display screen is positioned in front of (or above, etc.) the rear seat, the in-vehicle person corresponding to the vehicle-mounted display screen is a rear passenger, and thus the screen saver screen on the vehicle-mounted display screen interacts with the hand of the rear passenger.
In some possible implementations, in a case where the in-vehicle display device is not turned off and the in-vehicle display device is not operated for a long time, the in-vehicle display device enters a screen saver mode, and a screen saver screen is displayed on the in-vehicle display device, such as a display screen. Firstly, determining the idle time length of the vehicle-mounted display equipment which does not receive an operation instruction; and then, under the condition that the idle time reaches the preset time, adjusting the working mode of the vehicle-mounted display equipment into a screen saver mode. After the vehicle-mounted display equipment of the vehicle enters a screen saver mode, starting image acquisition equipment, acquiring an in-vehicle image, and obtaining a hand image according to the in-vehicle image; or, when the vehicle-mounted display device of the vehicle enters the screen saver mode, the hand image of the person transmitted by the other device is received. Here, the hand of the person is located within the acquisition range of the image acquisition device and the hand of the person may have a distance to the on-board display device.
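As a rough illustration of the entry logic described above, the following Python sketch tracks the idle duration since the last operation instruction and switches the device into screen saver mode (and starts image acquisition) once a preset duration is reached. The class, method names and the preset value are hypothetical and only illustrate the described flow; they are not part of the patented implementation.

```python
import time

PRESET_IDLE_SECONDS = 30.0  # hypothetical preset duration before the screen saver starts

class ScreenSaverController:
    def __init__(self, camera):
        self.camera = camera              # image acquisition device (hypothetical interface)
        self.last_operation_ts = time.time()
        self.mode = "normal"

    def on_operation(self):
        """Called whenever the display receives an operation instruction."""
        self.last_operation_ts = time.time()
        if self.mode == "screen_saver":
            self.mode = "normal"          # leave screen saver mode when the screen is operated again
            self.camera.stop()

    def tick(self):
        """Called periodically; enters screen saver mode after the idle duration elapses."""
        idle = time.time() - self.last_operation_ts
        if self.mode == "normal" and idle >= PRESET_IDLE_SECONDS:
            self.mode = "screen_saver"
            self.camera.start()           # begin acquiring in-vehicle images for hand detection
```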
In some possible implementations, the acquired hand images of the in-vehicle person are extracted from in-vehicle images acquired by the image acquisition device, where an in-vehicle image may have higher or lower image quality. Higher image quality may mean higher image definition and higher integrity of the hand in the image; for example, the in-vehicle image contains the whole hand region, complete and clear. Lower image quality may mean lower image definition and lower integrity of the hand in the image; for example, the in-vehicle image is blurred, part of the hand is missing from the image, or some region of the hand in the image cannot be recognized.
In some embodiments, the number of hand images may be one or more. In this case, the image capturing device may obtain a plurality of in-vehicle images from the video frames, and a plurality of corresponding hand images are extracted from the plurality of in-vehicle images respectively, where the plurality of hand images can show, over a specific time period, the motion trajectory of the hand of the in-vehicle person and the change of the gesture relative to the position of the vehicle-mounted display screen.
In some possible implementation manners, after the in-vehicle image is acquired, the image quality of the image is detected; and if the picture quality is detected to be low, discarding the image, continuously acquiring the in-vehicle image until the acquired in-vehicle image meeting the quality requirement is acquired, and detecting and segmenting the hand image of the person from the acquired in-vehicle image.
Step S102, determining a target movable element matched with the position of the hand in the movable elements of the screen saver picture based on first position information of the hand in the hand image;
here, the first position information is position information of the hand with respect to the image coordinate system, and the position information may be expressed by coordinates of the hand in the image coordinate system. Illustratively, an image coordinate system is established with the center of the image as the origin; in this image coordinate system, the first position information may be the position at coordinates (x₀, y₀). The first position information may also be represented by the current hand orientation and the distance between the hand and a fixed reference, e.g. a point in the image. For example, in the case where the current hand is located due east of the reference and the distance between the hand and the right frame of the vehicle-mounted display device in the image is 3 cm, the first position information may be represented as: the azimuth is due east and the distance is 3 cm.
Here, the target movable element is a movable element that can make a state transition following the hand position on the screen saver screen. For example, a puppy originally in a static state exists in the screen saver screen, and the puppy changes from the static state to a jumping state when the hand moves to the body part of the puppy; for another example, a hedgehog originally in a static state exists in the screen saver screen, and the hedgehog rolls up its body when the hand moves to the body part of the hedgehog.
Step S103, determining the dynamic interaction effect of the target movable element;
here, the interactive effect may be an effect for any one or more elements in the screen saver screen, and may be an animation effect, an image effect, a video effect, or the like. Illustratively, the screen saver picture may be a forest picture in which animals such as wild boars, lions, and birds are included, and the interactive effect may be that the animals in the forest interact with the hands of the people in the vehicle and display the interactive picture on the display device. For example, in the case where the hand of the person in the vehicle moves to a body part of a small animal, the small animal in a static state may move, such as a bird may fly away, a lion may run, or the like; when the hand of the person in the vehicle leaves the body part of the small animal, the small animal in the moving state returns to the static state. For another example, the screen saver screen may be a picture of a marine world including animals such as whales and dolphins and plants such as water plants, and the interaction effect may be that a living being in the marine world interacts with a gesture and displays the interaction screen on the display device. For example, in the case of an upward motion when the hands move to whales, the whales follow the motion trajectory of the hands upstream; with the hands off the whale, the whale swims back to the original position and remains stationary.
In some possible implementations, the dynamic interactive effect of the target movable element that matches the position of the hand may be selected from a preset effect library. Firstly, acquiring a preset effect library; the effect library comprises a plurality of preset interactive effects of the screen saver picture and incidence relations between the preset interactive effects and hand positions and between target movable elements in the picture; then, in a preset effect library, a dynamic interaction effect with the target movable element in the screen is selected based on the association relationship and the hand position.
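A minimal sketch of such an effect-library lookup is shown below, assuming the library is a simple mapping from a (movable element type, hand state) pair to a preset effect; the keys, effect names and function name are illustrative only and not taken from the source.

```python
# Hypothetical effect library: maps (movable element type, hand state) to a preset
# dynamic interaction effect. Keys and values are illustrative only.
EFFECT_LIBRARY = {
    ("bird", "hover"): "fly_off_branch",
    ("lion", "hover"): "run_away",
    ("whale", "move_up"): "swim_upward_following_hand",
}

def select_effect(element_type: str, hand_state: str, default: str = "idle") -> str:
    """Select the dynamic interaction effect associated with the target movable
    element and the current hand position/state."""
    return EFFECT_LIBRARY.get((element_type, hand_state), default)

# Example: a hand hovering over a bird selects the "fly_off_branch" animation.
print(select_effect("bird", "hover"))
```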
And step S104, displaying the dynamic interaction effect of the target movable element in the screen saver picture.
In some embodiments, the interactive effect is displayed in an overlapping manner on the basis of the screen saver picture, so that a picture overlapping the interactive effect is presented for the people in the vehicle. In some possible implementations, in the screen saver screen, for a target movable element, the interaction effect of the target movable element is superimposed in an image area of the screen saver screen where the target movable element is located. For example, when the hand of the person in the vehicle moves to the body part of the target movable element, if the target movable element is a bird stopped on a branch, the interaction effect between the bird and the hand is that the bird flies off the branch when the hand moves to the body of the bird, and at this time, the animation of the bird flying can be superimposed on the screen saver screen in the form of animation; for another example, if the target movable element is a sun in the sky, the interaction effect of the sun and the hand is that when the hand moves to the position of the sun, the sun moves from the current position to the position behind a dark cloud nearest to the sun, and at this time, a video of the movement of the sun can be played in the form of a video on the screen saver screen.
In the embodiment of the application, firstly, when the vehicle-mounted display device works in a screen saver mode, a hand image of an in-vehicle person corresponding to the vehicle-mounted display device is acquired; then, based on the first position information of the hand in the hand image, a target movable element matched with the position of the hand is determined among the movable elements of the screen saver picture, so that a dynamic interaction effect between the hand and the target movable element can subsequently be formed according to the position of the hand; finally, the dynamic interaction effect of the target movable element is determined and displayed in the screen saver picture. In this way, the target movable element matched with the position of the hand is determined on the basis of the screen saver picture of the vehicle-mounted display device and its dynamic interaction effect is determined, which on one hand enriches the screen saver picture of the vehicle-mounted display device and on the other hand increases the interest of the interaction between the in-vehicle person and the screen saver picture.
In some embodiments, the target movable element is determined by determining second position information in the screen saver screen that matches the first position information, i.e. as shown in fig. 2, step S102 includes the steps of:
step S201, determining second position information matched with the first position information in the screen saver picture;
here, the second position information is position information of the hand in the screen saver screen, and the position information can be represented by coordinates of the hand in a first coordinate system, wherein the first coordinate system is a rectangular coordinate system established by taking a vertex at the lower left corner of a display screen of the vehicle-mounted display device as an origin, taking the right side of the display screen as a positive x-axis direction, and taking the upper side as a positive y-axis direction; the distance between the current hand and a fixed reference object in the display screen, for example, the frame of the display screen, may also be used for representation. For example, in the first coordinate system, the second position information may be a coordinate of (x) 1 ,y 1 ) (ii) a For another example, the second position information may be 5cm from the upper frame of the display device and 10cm from the right frame of the display device.
In some possible implementation manners, firstly, determining coordinates of position calibration points in the vehicle-mounted display device under an image coordinate system, wherein the position calibration points may be four vertexes of the vehicle-mounted display device or a central point of the vehicle-mounted display device, and the like; and then determining the relative position relation between the position calibration point and the position corresponding to the first position information according to the coordinates of the position calibration point and the first position information represented by the coordinates, and further determining second position information matched with the first position information on a display screen of the vehicle-mounted display equipment.
In another possible implementation manner, first, according to the first position information of the hand in the hand image and the position information of the vehicle-mounted display device in the image coordinate system, the hand in the hand image is directly projected onto the display screen of the vehicle-mounted display device, and second position information, which is second position information of the hand on the display screen, is obtained, wherein the second position information is matched with the first position information.
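The mapping from first position information (image coordinates) to second position information (screen coordinates) described in the two implementations above could, in the simplest case, be a linear projection using known calibration points of the screen in the image. The sketch below assumes an axis-aligned linear mapping with hypothetical parameter names; a real system might instead use a full homography or camera model.

```python
def image_to_screen(hand_xy, screen_top_left, screen_bottom_right, screen_size_px):
    """Map the hand's first position information (image coordinates) to second
    position information (screen coordinates), assuming the screen's top-left and
    bottom-right calibration points are known in the same image coordinate system."""
    x, y = hand_xy
    (x0, y0), (x1, y1) = screen_top_left, screen_bottom_right
    w_px, h_px = screen_size_px

    # Normalize the hand position relative to the screen region in the image,
    # then scale to screen pixels; clamp so the result stays on the screen.
    u = min(max((x - x0) / (x1 - x0), 0.0), 1.0)
    v = min(max((y - y0) / (y1 - y0), 0.0), 1.0)
    return u * w_px, v * h_px

# Example: hand at (320, 240) in the image, screen occupying (200, 100)-(600, 400)
# in the image, 1920x720 screen -> screen coordinates of the matched second position.
print(image_to_screen((320, 240), (200, 100), (600, 400), (1920, 720)))
```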
And step S202, determining the target movable element based on the second position information.
In some embodiments, each movable element in the screen saver screen has a preset position coordinate, and the movable element at the position indicated by the second position information is the target movable element. Illustratively, if there is a fox at the position corresponding to the second position information, the fox is the target movable element; if there is a lion at the position corresponding to the second position information, the lion is the target movable element.
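The selection of the element at the indicated position amounts to a hit test against the preset coordinates of the movable elements. A minimal sketch follows, with a hypothetical hit radius per element introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class MovableElement:
    name: str
    x: float       # preset position of the element in screen coordinates
    y: float
    radius: float  # hypothetical hit radius around the element

def find_target_element(elements, hand_screen_xy):
    """Return the movable element whose preset position matches the hand's
    second position information, or None if the hand is not over any element."""
    hx, hy = hand_screen_xy
    for e in elements:
        if (hx - e.x) ** 2 + (hy - e.y) ** 2 <= e.radius ** 2:
            return e
    return None

elements = [MovableElement("fox", 400, 300, 80), MovableElement("lion", 1200, 500, 120)]
print(find_target_element(elements, (420, 310)))   # -> the fox is the target movable element
```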
More specifically, after the dynamic effect is triggered, the state of the target movable element may change, for example, the state of the target movable element changes from static to dynamic.
In steps S201 to S202, first, second position information matching the first position information of the hand is determined in the screen saver screen, and then the target movable element is determined based on the second position information, so that the target movable element matching the position of the hand can be determined in the screen saver screen in a targeted manner, and a dynamic interaction effect between the hand and the target movable element can be formed subsequently.
In some embodiments, after determining the second location information, the determining of the target movable element may further comprise the steps of:
step S203, generating a virtual hand pattern at a position indicated by the second position information in the screen saver picture;
in some embodiments, the virtual hand pattern is a virtual figure of a gesture generated on the vehicle-mounted display device according to a real hand, wherein the virtual hand pattern may be a two-dimensional figure, a three-dimensional figure, a figure with an animation effect, and the like, the virtual hand pattern may be a gray scale figure, a figure after rendering and adding a special effect, for example, the virtual hand pattern may be a two-dimensional hand-shaped figure, a three-dimensional gesture with a particle special effect, a three-dimensional hand-shaped model capable of rotating or jumping, and the like.
In some embodiments, the virtual hand pattern may also be a particle effect presentation of the hand pattern. And the virtual hand pattern displayed by the particle effect can move in the screen saver picture along with the motion of the real hand, so that the passenger of the vehicle can check the position and the effect of the control of the spaced hand through the screen saver picture.
In some possible implementations, on the screen area of the vehicle-mounted display device where the second position information is located, a virtual hand pattern is generated according to the hand information in the hand image, such as the size of the current hand, the posture of the current hand, and the like, by direct projection, and the ratio of the virtual hand pattern to the current real hand may be a preset ratio, such as 1: 1 or 1: 2, etc. In other possible implementations, first, motion trajectory parameters of a virtual hand are generated on a screen area of the vehicle-mounted display device according to hand information in the multiple frames of hand images, for example, a motion trajectory of a current hand formed by the multiple frames of hand images, and then a virtual hand pattern is added to the motion trajectory formed by the motion trajectory parameters or a special effect is added to the hand pattern.
And step S204, determining the position movable elements of the virtual hand patterns as target movable elements.
In some embodiments, when the hand of the person in the vehicle moves in the vehicle cabin, the second position information of the real hand of the person in the vehicle in the display device changes, and at the same time, the virtual hand pattern moves synchronously in the display device along with the real hand, and at this time, the state of the target movable element in the screen saver picture is triggered to change, that is, the person in the vehicle interacts with the screen saver picture through a gesture, so that an interaction effect matched with the screen saver picture in the display device can be obtained.
After determining second position information of the hand in the screen saver screen in steps S203 to S204, generating a virtual hand pattern at a position indicated by the second position information; then determining the movable element matched with the position of the virtual hand pattern as a target movable element; therefore, on one hand, when the display position of the virtual hand pattern is overlapped with the position of the target movable element, the dynamic interaction effect of the target movable element is triggered; on the other hand, when the virtual hand pattern is displayed and moved in the screen saver picture, the person in the vehicle can more intuitively see the movement of the hand position in the vehicle-mounted display device, and further perceive the trigger scene of the dynamic interaction effect.
In some embodiments, after generating the virtual hand pattern, the method further comprises:
and moving the position of the virtual hand pattern in the screen saver picture based on the position change information of the hand in the hand images of the plurality of frames.
Here, when the hand position of the person in the vehicle changes, the change information of the hand position may be collected by the multi-frame hand image, and the change of the real hand position may be corresponded by moving the position of the virtual hand pattern in the screen saver screen.
In some embodiments, the change in the position of the hand in the hand image is visually demonstrated by moving the position of the virtual hand pattern in the screen saver screen, thereby enabling the demonstration of a dynamic interaction effect between the hand and the target movable element in the screen saver screen.
In some embodiments, the step S103 of determining movement information of the target movable element based on the position change information of the hand in the at least two frames of hand images and generating a dynamic interaction effect to move the target movable element according to the movement information includes the following steps:
step S301, determining position change information of the hand in at least two frames of hand images;
in some embodiments, the hand position of the vehicle occupant is in a dynamically changing state, and therefore, the position change information of the hand is to be determined based on at least the position of the hand in the two-frame hand image.
Step S302, determining the movement information of the target movable element in the screen saver screen based on the position change information of the hand;
here, since the movement of the target movable element is matched with the position change information of the hand, the movement information of the target movable element can be determined from the position change information of the hand. Illustratively, in a case where the position change information of the hand is from bottom to top, the movement information of the target movable element is determined to be rising from a relatively low position in the screen saver screen to a relatively high position in the screen saver screen; for example, when the change information of the position of the hand is from left to right, the movement information of the target movable element is determined to be a movement from a relatively left position on the screen saver screen to a relatively right position on the screen saver screen.
Step S303, generating a dynamic interaction effect for moving the target movable element according to the movement information.
In some embodiments, a dynamic interactive effect that moves according to the movement information is generated in the screen saver screen based on the movement information of the target movable element. Illustratively, the target movable element is a shark, the movement information of the shark is from left side to right side of the screen saver screen, and a dynamic interaction effect of the shark according to the movement from the left side to the right side is generated in the screen saver screen.
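As a sketch of steps S301 to S303, the hand's position change across frames can be reduced to a displacement vector, and the target movable element can be moved along that vector. The gain factor and function names below are hypothetical illustrations, not part of the source.

```python
def hand_displacement(positions):
    """Position change information from at least two frames of hand positions
    (screen coordinates): displacement vector from the first to the last frame."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    return x1 - x0, y1 - y0

def move_element(element_xy, displacement, gain=1.0):
    """Movement information of the target movable element: here the element is
    simply moved along the hand's displacement, scaled by a hypothetical gain."""
    dx, dy = displacement
    x, y = element_xy
    return x + gain * dx, y + gain * dy

# Example: the hand moves from left to right across two frames, so the shark
# (the target movable element) moves from the left side toward the right side.
d = hand_displacement([(100, 360), (900, 360)])
print(move_element((200, 400), d))
```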
Determining movement information of the target movable element in the screen saver screen based on the position change information of the hand in steps S301 to S303; and a dynamic interaction effect for moving the target movable element according to the movement information is generated, so that the dynamic interaction effect of matching the hand with the target movable element can be generated in the screen saver picture, and the interestingness of interaction between the personnel in the vehicle and the screen saver picture is increased.
In some embodiments, in a case that the displacement distance represented by the position change information is greater than a first threshold and/or the movement distance represented by the movement information is greater than a second threshold, the dynamic interaction effect of the target movable element is stopped from being displayed in the screen saver screen.
In some possible implementations, the vehicle-mounted display device may send the third position information and the fourth position information of the hand to the server through the network, the server calculates a displacement distance between the third position information and the fourth position information, and then the server sends the displacement distance to the vehicle-mounted display device through the network. In some embodiments, the server may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The vehicle-mounted display device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
In some embodiments, in the case that the displacement distance is greater than the first threshold, triggering the target movable element displayed in the screen saver screen to stop the current dynamic interaction effect.
Here, the first threshold value is set according to an empirical value of a person skilled in the art or according to a design requirement of a user. For example, the first threshold is 0.5cm, and the displacement distance is 0.8cm, so when the displacement distance is greater than the first threshold, it is considered that the hand of the vehicle occupant is away from the body part of the target movable element, and at this time, the target movable element displayed on the screen saver screen is triggered to stop the current dynamic interaction effect.
In some embodiments, in a case that the movement information of the target movable element is greater than the second threshold, triggering the target movable element displayed in the screen saver screen to stop the current dynamic interaction effect.
Here, the second threshold value may be understood with reference to the first threshold value.
In the embodiment of the application, under the condition that the displacement distance represented by the position change information is greater than the first threshold value and/or the movement distance represented by the movement information is greater than the second threshold value, the dynamic interaction effect of the target movable element is stopped being displayed in the screen saver picture, so that the interaction effect of interaction with personnel can be displayed in time, and the power consumption of the display equipment can be reduced in time.
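The stop condition described above can be expressed as a simple threshold check. The sketch below uses the 0.5 cm first threshold mentioned as an example in the text; the second threshold value and the function name are hypothetical.

```python
import math

FIRST_THRESHOLD_CM = 0.5   # example value from the text for the hand displacement
SECOND_THRESHOLD_CM = 0.5  # hypothetical value for the element's movement distance

def should_stop_effect(hand_positions_cm, element_move_cm):
    """Stop displaying the dynamic interaction effect when the hand displacement
    exceeds the first threshold and/or the element's movement distance exceeds
    the second threshold."""
    (x0, y0), (x1, y1) = hand_positions_cm
    displacement = math.hypot(x1 - x0, y1 - y0)
    return displacement > FIRST_THRESHOLD_CM or element_move_cm > SECOND_THRESHOLD_CM

# Example from the text: a 0.8 cm hand displacement exceeds the 0.5 cm first
# threshold, so the current dynamic interaction effect is stopped.
print(should_stop_effect([(0.0, 0.0), (0.8, 0.0)], 0.0))
```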
In some embodiments, under the condition that the playing time of the dynamic interaction effect reaches a preset time, triggering the presentation effect of the target movable element to return to a screen saver picture in a screen saver mode from the current interaction effect;
here, the preset time duration may be set according to the time duration of the interactive video of the target movable element, for example, if the time duration of the interactive video of the target movable element is 20 seconds(s), the preset time duration is 20 s; or may be set according to the requirements of the user, for example, if the user requests that the interactive video after triggering the target movable element is played 2 times each time, the preset time duration is 40 s.
In some embodiments, if the playing duration of the interactive effect reaches the preset duration, the playing of the interactive effect is stopped, and the screen saver screen is returned to the screen saver screen in the screen saver mode from the currently played interactive effect. Therefore, when the playing time of the interactive effect reaches the preset time, the interactive effect is automatically stopped from being played and the original screen saver picture is returned, so that the interactive effect interacted with the personnel can be displayed in time, and the power consumption of the display equipment can be reduced.
In some embodiments, the movable element in the hidden state is determined to be a target movable element, and the movement information of the target movable element is determined based on the position change information of the hand, so as to generate a dynamic interaction effect for moving the target movable element according to the movement information, that is, in the case that the movable element in the hidden state is the target movable element, the method includes the following steps:
step S41, determining that the display state of the target movable element is changed from a hidden state to a visible state, and determining position change information of the hand in at least two frames of the hand image;
here, in a case where the second position information is located within the preset interaction region of the screen saver screen, the movable element in the hidden state is determined as the target movable element in the screen saver screen.
Here, the preset interactive area refers to an area in which a dynamic interactive effect of the target movable element can be formed in the screen saver screen when the virtual hand pattern moves to the area. For example, in the case where the virtual hand pattern moves to the preset interaction area, the target movable element in the screen saver screen may change state, e.g., from a hidden state to a visible state, etc. Illustratively, in a scene of a forest park, the target movable element is a bird, the bird is initially hidden in the forest, and in the case that the second position information is located within the preset interaction area of the screen saver screen, the hidden bird is triggered to be displayed from the forest, or the hidden bird stands on a visible trunk. In the scene of the ocean world, the target movable element is the shrimp, the shrimp is initially hidden in the coral, and under the condition that the second position information is located in the preset interaction area of the screen saver screen, the hidden shrimp is triggered to swim out of the coral from the coral.
In some embodiments, the size, the position, and the like of the preset interaction region may be set according to experience of a person skilled in the art, or may be set according to design requirements of a user.
A step S42 of determining movement information of the target movable element based on the position change information of the hand;
step S43, generating a dynamic interactive effect for moving the target movable element in accordance with the movement information.
Here, step S42 and step S43 can be understood with reference to step S302 and step S303.
In steps S41 to S43, when the second position information is located within the preset interaction area of the screen saver screen, the mobile element in the hidden state is determined as the target mobile element, the movement information of the target mobile element is determined based on the position change information of the hand, and finally the dynamic interaction effect of the target mobile element is obtained, so that the dynamic interaction effect in which the hand matches the target mobile element can be generated in the screen saver screen, and the interest in interaction between the vehicle occupant and the screen saver screen is increased.
In some embodiments, in case it is detected that the second location information moves from inside the preset interaction area to outside the preset interaction area, at least one of the following is performed:
(1) controlling the target movable element to stop moving;
illustratively, the target movable element is a monkey in the jungle, the monkey is climbing a tree, and in the case that the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the monkey stops climbing the tree and stops at the current position. For another example, the target movable element is a rabbit in the farm, the rabbit eating grass, and in the case that the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the rabbit stops eating grass and stops at the current position.
(2) Controlling the target movable element to move to a position corresponding to the hidden state;
in some embodiments, since the target movable element is changed from the hidden state to the motion state in a case where the second position information of the hand moves within the preset interaction region, in a case where it is detected that the second position information of the hand moves from within the preset interaction region to outside of the preset interaction region, the target movable element moves from the current position to a position corresponding to the hidden state. Illustratively, the target movable element is a monkey in a jungle, the monkey is climbing a tree, and when the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the monkey returns to the position corresponding to the originally hidden state from the current position and is in a static state at the position. For another example, the target movable element is a rabbit in the farm, the rabbit is eating grass, and when the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the rabbit returns to the position corresponding to the originally hidden state from the current position and is in a static state at the position.
(3) And controlling the target movable element to return to the hidden state from the display state.
Illustratively, the target movable element is a panda in a jungle, the panda is sunning the sun, and when the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the panda disappears from the current position, that is, the panda returns to the hidden state from the display state. In another example, the target movable element is a horse in the farm, the horse is eating grass, and when the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the horse disappears from the current position, namely, the horse returns to the hidden state from the display state.
In the embodiment of the application, under the condition that the second position information of the hand is detected to move from the preset interaction area to the outside of the preset interaction area, the motion state of the target movable element is triggered to change, so that the integrity of the interaction effect when the personnel in the vehicle interact with the screen picture is improved, and the interestingness of the interaction between the personnel in the vehicle and the screen saver picture is increased.
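The hidden-state behaviour above can be viewed as a small state machine: the element becomes visible and follows the hand while the hand stays inside the preset interaction region, and on exit it either stops, returns to its original position, or hides again. The class below is a minimal illustrative sketch under that assumption; all names are hypothetical.

```python
class HiddenElement:
    """Minimal state machine for a movable element that is hidden until the hand
    enters the preset interaction region; the behaviour on exit (stop, return
    home, or re-hide) matches the three options described above."""

    def __init__(self, home_xy, on_exit="return_home"):
        self.state = "hidden"
        self.home_xy = home_xy
        self.xy = home_xy
        self.on_exit = on_exit   # one of: "stop", "return_home", "hide"

    def update(self, hand_in_region, hand_xy):
        if hand_in_region:
            self.state = "visible"       # hidden -> visible when the hand enters the region
            self.xy = hand_xy            # follow the hand while it stays in the region
        elif self.state == "visible":
            if self.on_exit == "stop":
                pass                     # (1) stop moving, stay at the current position
            elif self.on_exit == "return_home":
                self.xy = self.home_xy   # (2) move back to the position of the hidden state
            else:
                self.xy = self.home_xy   # (3) return from the display state to the hidden state
                self.state = "hidden"
```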
In some embodiments, the method further comprises: performing gesture recognition on the hand image to obtain a gesture recognition result;
in some embodiments, the gesture recognition method comprises at least one of:
(1) template matching method: the method comprises the steps of regarding the motion of a gesture as a gesture template sequence, and then comparing the gesture template sequence to be recognized with a known gesture template sequence so as to recognize the gesture. During implementation, the gesture actions in each acquired hand image are connected in series according to the sequence of the acquisition time to obtain a gesture template sequence, and then the sequence is compared with the known gesture template sequence, so that the gesture is recognized.
(2) Hidden Markov Model (HMM): a Markov process is described that includes hidden unknown parameters, or in other words, a process in which hidden parameters of the Markov process are determined from observable parameters and then used for gesture recognition or analysis. During implementation, observable parameters are firstly extracted from the acquired hand images, then hidden parameters of the Markov process are determined by using the observable parameters, and finally gesture recognition is carried out by using the hidden parameters.
(3) A neural network method: when the method is implemented, the hand image is input into a network model for gesture recognition, and the network model can be obtained by training a large number of sample images marked with gestures. Obtaining confidence degrees of various gestures in the hand image through the network model, and taking the gesture with the maximum confidence degree as the recognition result; thus, the accuracy of gesture recognition of the hand image can be improved.
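For the neural network method, only the final decision step (taking the gesture with the maximum confidence as the recognition result) is sketched below; the trained network itself is assumed to exist, and the label set is illustrative.

```python
import numpy as np

GESTURES = ["fist", "five_fingers", "one_finger", "ok", "thumb_up"]  # illustrative labels

def recognize_gesture(confidences):
    """Pick the gesture with the maximum confidence from a trained network's
    output, as in the neural network method described above."""
    confidences = np.asarray(confidences)
    idx = int(np.argmax(confidences))
    return GESTURES[idx], float(confidences[idx])

# Example: the network is most confident that the hand image shows a fist.
print(recognize_gesture([0.72, 0.10, 0.08, 0.06, 0.04]))
```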
Here, the gesture recognition result includes a hand gesture, a hand orientation, a hand motion trajectory, and the like, where the hand gesture includes at least one of: making a fist, stretching five fingers, stretching one finger, stretching three fingers, an OK sign, stretching the thumb, an orchid-finger gesture, and the like; the hand orientation includes the palm facing away from the person, the palm facing the camera, and the like. Therefore, the gesture recognition result of the in-vehicle person can be obtained by performing gesture recognition on the hand image, which makes it convenient to determine, according to the gesture recognition result, the interaction effect matched with the screen saver picture in the display device.
In some embodiments, in the case of obtaining a gesture recognition result, a dynamic interaction effect of the target movable element corresponding to the hand motion characterized by the gesture recognition result is determined.
In some embodiments, the dynamic interaction effect of the target movable element corresponding to the hand action is displayed in the screen saver picture according to the gesture recognition result; therefore, for the same screen saver picture, the interaction effects matched with different gesture recognition results are different; or, for the same gesture recognition result, when the screen contents in the screen saver screen are different, the matched interaction effects are also different. The interaction effect may be an animation effect, an image effect, a video effect, or the like. Illustratively, the screen saver screen may be a forest screen in which the forest includes wild boars, lions, birds and other animals, and the interaction effect may be that the animals in the forest interact with the gesture and the interaction picture is displayed in the screen saver screen. For example, when the gesture is a grabbing motion and the hand moves to the body part of a small animal, the small animal that was originally in a hidden state is displayed on the screen saver screen along with the grabbing motion; in other words, the small animal hidden in the screen saver screen is grabbed out by the hand. When the gesture changes from a relaxed-finger state to a fist, the small animal stops moving. For another example, the screen saver screen may be a marine world picture in which animals such as whales and dolphins and plants such as water plants are in the sea, and the interaction effect may be that a living being in the marine world interacts with the gesture and the interaction screen is displayed in the screen saver screen. For example, when the hand moves to the whale and makes an upward motion, the whale swims upward following the motion trajectory of the hand; when the hand leaves the whale, the whale swims back to its original position and remains stationary.
In the embodiment of the application, firstly, gesture recognition is carried out on a hand image to obtain a gesture recognition result; then, determining the dynamic interaction effect of the target movable element corresponding to the hand motion represented by the gesture recognition result, so that on one hand, screen saver pictures of the vehicle-mounted display device are enriched; on the other hand, the interestingness of interaction between the personnel in the vehicle and the screen saver picture is increased.
In some embodiments, according to an operation of selecting an interactive scene by the in-vehicle person, a screen saver picture corresponding to the selected interactive scene is acquired, and the screen saver picture corresponding to the selected interactive scene is displayed when the in-vehicle display device operates in a screen saver mode.
Here, the interactive scene is a scene in which the screen saver is located, for example, the interactive scene may be a jungle scene, a sea scene, a farm scene, or the like, and the person in the vehicle may select the interactive scene according to a preference or a requirement.
In some embodiments, when the vehicle-mounted display device operates in the screen saver mode, after an interactive scene is selected by a person in the vehicle, the screen saver picture is a corresponding screen saver picture in the interactive scene.
In the embodiment of the application, under the condition that the vehicle-mounted display equipment works in the screen saver mode, the display screen displays the screen saver picture corresponding to the interactive scene selected by the vehicle-mounted person, so that the screen saver picture of the vehicle-mounted display equipment can be enriched, and the interestingness of interaction between the vehicle-mounted person and the screen saver picture can be increased.
In some embodiments, in the case that the on-board display device operates in the screen saver mode, the image of the hand of the person in the vehicle is obtained by activating the image capturing device in the vehicle, that is, as shown in fig. 4, the implementation of step S101 includes the following steps:
Step S401, under the condition that the vehicle-mounted display equipment works in a screen saver mode, starting an image acquisition device in the vehicle;
in some embodiments, if the display screen of the in-vehicle display device is not operated for a long time, it enters a screen saver mode, during which animation, video, or image-type screen saver pictures are displayed on the display screen. In some possible implementations, the image acquisition device may be installed at any location in the vehicle from which a person in the vehicle can be captured, including: a location from which the front-row passenger can be captured (e.g., facing the front passenger seat), a location from which a rear-row passenger can be captured (e.g., behind the front passenger or driver seat), or the top of the vehicle cabin, so that hand images of any person in the vehicle can be captured. The image acquisition device may belong to the same vehicle-mounted display device as the display screen, or may be another device with an image acquisition function that is independent of the display screen, such as a camera installed separately in the vehicle or a device equipped with a camera (for example, an on-board computer with a camera or an in-vehicle audio system with a camera).
In some possible implementations, the display screen entering the screen saver mode serves as the trigger condition for activating the image acquisition device in the vehicle. Alternatively, the vehicle-mounted system controls the image acquisition device in the vehicle to start when it detects that the display screen has entered the screen saver mode. Alternatively, the image acquisition device itself monitors the display screen and starts automatically when it detects that the display screen has entered the screen saver mode.
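The trigger logic described above can be sketched as follows; the Camera and DisplayScreen classes are simplified stand-ins assumed for illustration, not the actual in-vehicle software interfaces.

```python
# Minimal sketch: entering screen saver mode starts the in-cabin camera,
# leaving it stops the camera.
class Camera:
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True
        print("camera started")

    def stop(self):
        self.running = False
        print("camera stopped")

class DisplayScreen:
    def __init__(self, camera: Camera):
        self.camera = camera
        self.in_screen_saver = False

    def enter_screen_saver(self):
        self.in_screen_saver = True
        self.camera.start()   # screen saver entry is the trigger condition

    def exit_screen_saver(self):
        self.in_screen_saver = False
        self.camera.stop()    # camera is no longer needed once the screen works normally

screen = DisplayScreen(Camera())
screen.enter_screen_saver()
screen.exit_screen_saver()
```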
Step S402, acquiring a hand image of the person in the vehicle corresponding to the vehicle-mounted display equipment based on the in-vehicle image acquired by the image acquisition device.
In some embodiments, the image acquisition device acquires an in-vehicle image, and its acquisition range may be a region of the vehicle cabin corresponding to the position of a particular display screen; a hand image of the person in the vehicle is then extracted from the acquired in-vehicle image, the person in the vehicle being the person corresponding to the position of that display screen.
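A minimal sketch of extracting the hand image from the acquired in-vehicle image is shown below, assuming a hypothetical detect_hand function in place of a real hand-detection model.

```python
import numpy as np

def detect_hand(frame: np.ndarray):
    """Hypothetical hand detector: returns a bounding box (x, y, w, h) or None.
    A real system would use a trained detection model here."""
    return (80, 60, 40, 40)  # fixed box, for illustration only

def extract_hand_image(frame: np.ndarray):
    """Crop the hand region out of the full in-cabin frame, as described above."""
    box = detect_hand(frame)
    if box is None:
        return None
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for one captured cabin image
hand = extract_hand_image(frame)
print(hand.shape)  # (40, 40, 3)
```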
In the embodiment of the application, under the condition that the display screen of the vehicle-mounted display device enters the screen saver mode, the image acquisition device in the vehicle is automatically triggered to be started, the image in the vehicle is acquired through the image acquisition device, so that the hand image of a person in the vehicle can be acquired in real time, a target movable element in a screen saver picture can be conveniently determined through the position of the hand image, and interaction between the target movable element in the screen saver picture and the person in the vehicle is further realized.
In some embodiments, the in-vehicle display device is a touch screen, and after the dynamic interaction effect of the target movable element is displayed in the screen saver screen, the method further includes:
in response to a touch operation on the vehicle-mounted display device, controlling the vehicle-mounted display device to stop displaying the dynamic interaction effect, presenting an interface corresponding to the touch operation, and controlling the image acquisition device to be turned off.
Here, while the vehicle-mounted display device is displaying the interaction effect, if a person in the vehicle performs a trigger operation on the vehicle-mounted display device, the display screen of the vehicle-mounted display device exits the interaction effect and synchronously enters a page corresponding to the trigger operation. The trigger operation may be clicking the display screen, touching the display screen, inputting voice to the display screen, inputting text on the display screen, or the like; it is an operation that puts the vehicle-mounted display device into a working state. For example, if the vehicle-mounted display device is a navigation device, the trigger operation may be triggering the navigation device to display the current position of the vehicle; or, if the vehicle-mounted display device is an audio device, the trigger operation may be triggering the audio device to play music. In response to the trigger operation, the vehicle-mounted display device controls the display screen to stop displaying the interaction effect and to synchronously present the interface corresponding to the trigger operation.
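The exit path described above can be sketched roughly as follows; the interface names and the on_touch handler are assumptions for illustration rather than the actual head-unit implementation.

```python
# Rough sketch: a touch ends the screen saver interaction, opens the requested
# interface, and turns off the image acquisition device.
class ScreenSaverController:
    def __init__(self):
        self.showing_effect = True
        self.camera_on = True

    def on_touch(self, operation: str) -> str:
        """Touch exits the dynamic interaction effect and enters the corresponding interface."""
        self.showing_effect = False   # quit displaying the dynamic interaction effect
        self.camera_on = False        # the image acquisition device is turned off
        interfaces = {"switch_song": "media_player", "query_map": "navigation"}
        return interfaces.get(operation, "home_screen")

controller = ScreenSaverController()
print(controller.on_touch("query_map"))                  # navigation
print(controller.showing_effect, controller.camera_on)   # False False
```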
In the embodiment of the application, as soon as the vehicle-mounted display device detects a trigger operation directed at it, it controls the display screen to exit the interaction effect and synchronously enter the interface corresponding to the trigger operation, so that interaction with the person in the vehicle is realized through the interaction effect while the person's actual use of the vehicle-mounted display device is responded to in a timely manner.
In the following, an exemplary application of the embodiment of the present application in an actual application scenario will be described, taking an example of determining a target movable element by a position of a hand image to realize human-to-automobile digital screen saver interaction.
An embodiment of the present application provides a method for interacting with a vehicle-mounted display device, as shown in fig. 5, the method includes:
Step S501, the car machine screen enters a screen saver mode, and a screen saver picture is displayed on the screen;
in some embodiments, as shown in fig. 6, in a case where the car machine screen 51 of the vehicle 50 is not operated for a long time, the car machine screen 51 enters the screen saver mode; at this time, a person in the vehicle can interact with the screen saver picture 52 on the car machine screen 51 through a hand 53.
In some embodiments, the car machine screen 51 enters the screen saver mode if no one clicks or touches its screen within a preset time period, for example, 2 minutes (min).
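A minimal sketch of this idle-timeout check, assuming the 2-minute preset and a simulated clock, might look as follows.

```python
import time

IDLE_TIMEOUT_S = 2 * 60  # preset time period assumed above: 2 minutes

class IdleWatcher:
    """Track the last click/touch and decide when to enter screen saver mode."""
    def __init__(self):
        self.last_touch = time.monotonic()

    def on_touch(self):
        self.last_touch = time.monotonic()

    def should_enter_screen_saver(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self.last_touch) >= IDLE_TIMEOUT_S

watcher = IdleWatcher()
# Simulate a check 125 seconds after the last touch:
print(watcher.should_enter_screen_saver(now=watcher.last_touch + 125))  # True
```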
In some embodiments, in the case where the car machine screen 51 enters the screen saver mode, the screen saver picture 52 is displayed on the screen; for example, the screen saver picture 52 may be an animation, a video, or a game included in an interactive scene. The interactive scene may be a natural scene, a marine world, a jungle animal world, a farm animal world, and the like, and the embodiment of the present application is not limited thereto.
In some embodiments, after the car machine screen is initialized, an Android Debug Bridge (ADB) command is used to push a screen saver Application Package (APK) to the car machine screen over a Universal Serial Bus (USB) connection, and the screen saver application on the car machine screen can run normally once the package is installed.
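As a hedged example of this deployment step, the following Python sketch invokes the Android Debug Bridge over a USB debug connection; the APK file name is a placeholder, and the exact package and installation procedure depend on the actual head-unit image.

```python
import subprocess

def install_screensaver(apk_path: str = "screensaver.apk") -> None:
    """Install (or reinstall) the screen saver package on the connected head unit.
    "adb install -r" replaces an existing installation if one is already present."""
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install_screensaver()
```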
Step S502, acquiring real-time hand images of the person in the vehicle through a camera;
in some embodiments, the camera may be a camera of a driver monitoring system, a camera above a rear view mirror in a vehicle cabin, a camera mounted on an instrument panel, or the like, and the number of the cameras may be one or more.
In some embodiments, the shooting range of the camera in the cabin may be the maximum range that the camera can shoot, or may be a range set in real time according to the user's needs.
In some embodiments, after the camera collects hand images of people in the vehicle, gesture recognition can be performed to obtain a gesture recognition result, so that the interaction effect of the hand and the target movable element can be determined according to the gesture recognition result.
Step S503, determining the position of the hand in the display screen, and generating a virtual hand pattern on the screen of the car machine screen based on the position;
here, the position of the hand 53 in the display screen corresponds to the second position information in the foregoing embodiment.
In some embodiments, as also shown in FIG. 6, the virtual hand pattern 54 may be a two-dimensional or three-dimensional hand graphic or animation generated on-screen from a real hand using a particle effect.
In some embodiments, the virtual hand pattern 54 moves in synchrony with the real hand captured by the camera. Illustratively, when the camera captures the real hand moving to the left relative to the screen, the virtual hand pattern 54 simultaneously moves to the left relative to the screen; when the camera captures the real hand moving upward relative to the screen, the virtual hand pattern simultaneously moves upward relative to the screen.
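A minimal sketch of mapping the hand position in the camera frame to the on-screen position of the virtual hand pattern is given below; the frame and screen resolutions and the mirroring choice are assumptions that depend on how the camera is mounted relative to the screen.

```python
CAM_W, CAM_H = 640, 480         # camera frame size (assumed)
SCREEN_W, SCREEN_H = 1920, 720  # car machine screen size (assumed)

def to_screen_coords(cam_x: float, cam_y: float, mirror_x: bool = True):
    """Normalize the camera-frame position and rescale it to screen pixels,
    so the virtual hand pattern follows the real hand frame by frame."""
    nx, ny = cam_x / CAM_W, cam_y / CAM_H
    if mirror_x:
        nx = 1.0 - nx           # a camera facing the occupant sees the hand mirrored
    return nx * SCREEN_W, ny * SCREEN_H

print(to_screen_coords(160, 120))  # hand in the camera's left/top quarter
print(to_screen_coords(480, 360))  # hand in the camera's right/bottom quarter
```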
Step S504, in response to the virtual hand pattern contacting the target movable element, forming an interaction picture between the virtual hand pattern and the target movable element;
in some embodiments, the target movable elements are elements in an interactive scene, for example, in the case where the interactive scene is a jungle animal world, the target movable elements include various small animals in the jungle, as shown in fig. 7, including monkeys 55, tigers 56, boars 57, birds 58, lions 59, and so forth.
In some embodiments, when the virtual hand pattern 54 moves onto a body part of a small animal in the jungle, the corresponding small animal triggers its own animation. For example, the monkey 55 may swing, the tiger 56 may lift its forepaws, the wild boar 57 may run, the bird 58 may fly away, and the lion 59 may scratch its head. After the virtual hand pattern 54 leaves the screen or the current interactive animation finishes playing, the small animals return to their standby animation state.
In other embodiments, when the virtual hand pattern 54 moves into a preset interaction area in the jungle, a small animal hidden in the jungle appears and runs and jumps along the motion track of the virtual hand pattern 54; after the virtual hand pattern 54 leaves the preset interaction area, the small animal stops where it is or runs back to its original position and returns to the hidden state.
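The hit test that decides which animal reacts to the virtual hand pattern can be sketched as follows; the bounding boxes and animation names are illustrative assumptions, not assets of the actual screen saver.

```python
# Minimal sketch: when the virtual hand pattern overlaps an animal's bounding box,
# that animal plays its interaction animation; otherwise it stays in standby.
ANIMALS = {
    "monkey": {"box": (100, 200, 80, 80),  "animation": "swing"},
    "tiger":  {"box": (300, 220, 120, 90), "animation": "lift_forepaws"},
    "bird":   {"box": (520, 80, 60, 50),   "animation": "fly_away"},
}

def inside(point, box):
    px, py = point
    x, y, w, h = box
    return x <= px <= x + w and y <= py <= y + h

def animation_for_hand(hand_pos):
    """Return the animation each animal should play for the current hand position."""
    return {
        name: (info["animation"] if inside(hand_pos, info["box"]) else "standby")
        for name, info in ANIMALS.items()
    }

print(animation_for_hand((340, 260)))  # tiger lifts its forepaws, others stay in standby
```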
In some embodiments, as shown in fig. 7, the display screen of the in-vehicle display device further includes an operation interface such as log display, model display, exit program, camera, screen capture, camera recording, preview horizontal flip, preview vertical flip, Software Development Kit (SDK) horizontal flip, SDK vertical flip, and the like.
Step S505, in response to a body part of a person in the vehicle touching the car machine screen, the screen exits the screen saver mode and the camera stops collecting hand images.
Here, when a body part of a vehicle occupant touches the screen of the car machine screen 51, the system assumes by default that the occupant intends to perform some operation on the car machine screen 51, for example, switching songs or querying a map on the car machine screen 51; at this time, the car machine screen 51 enters the working mode, so its screen exits the screen saver mode and the camera stops collecting hand images.
In one specific example, when a passenger does not use the car machine screen for a long time during a ride, the car machine screen enters the screen saver mode. The embodiment of the application adds gesture recognition on top of the static or dynamic screen saver, so that the passenger can interact with the target movable elements in the screen saver picture through the virtual hand pattern and obtain interactive feedback, which increases the fun of the screen saver and its sense of technology.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein. It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the foregoing embodiments, an interaction apparatus is provided in an embodiment of the present application. The modules included in the apparatus, the sub-modules included in the modules, the units included in the sub-modules, and the sub-units included in the units may be implemented by a processor in an electronic device; of course, they may also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
An interaction apparatus is provided in an embodiment of the present application, fig. 8 is a schematic structural diagram of a composition of an apparatus for interacting with a vehicle-mounted display device provided in an embodiment of the present application, and as shown in fig. 8, an interaction apparatus 800 includes:
the first obtaining module 810 is configured to obtain a hand image of an in-vehicle person corresponding to an on-vehicle display device when the on-vehicle display device operates in a screen saver mode, where the on-vehicle display device displays a screen saver screen including a movable element in the screen saver mode;
a first determining module 820, configured to determine, based on first position information of a hand in the hand image, a target movable element that matches a position of the hand in the movable elements of the screen saver screen;
a second determining module 830, configured to determine a dynamic interaction effect of the target movable element;
a display module 840, configured to display the dynamic interaction effect of the target movable element in the screen saver screen.
In some embodiments, the first determining module 820 comprises: the first determining submodule is used for determining second position information matched with the first position information in the screen saver picture; a second determination submodule for determining a target movable element based on the second position information.
In some embodiments, the first determining module 820 further comprises: a generation submodule for generating a virtual hand pattern at a position indicated by the second position information in the screen saver screen; and the third determining submodule is used for determining the movable elements matched with the positions of the virtual hand patterns as target movable elements.
In some embodiments, the apparatus further comprises: and the moving module is used for moving the position of the virtual hand pattern in the screen saver picture based on the position change information of the hand in the hand images of a plurality of frames.
In some embodiments, the second determining module 830 includes: a third determining submodule for determining position change information of the hand in at least two frames of the hand images; a fourth determination sub-module configured to determine movement information of the target movable element in the screen saver screen based on the position change information of the hand; and the first generation submodule is used for generating a dynamic interaction effect for enabling the target movable element to move according to the movement information.
In some embodiments, the apparatus further comprises: and the stopping module is used for stopping displaying the dynamic interaction effect of the target movable element in the screen saver picture under the condition that the displacement distance represented by the position change information is greater than a first threshold value and/or the movement distance represented by the movement information is greater than a second threshold value.
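As an illustration of the movement and stop logic described by these modules, the following Python sketch moves the element with the hand's displacement and stops the effect when either threshold is exceeded; the threshold values and the one-to-one motion mapping are assumptions for illustration.

```python
import math

FIRST_THRESHOLD = 200.0   # max hand displacement, in screen pixels (assumed)
SECOND_THRESHOLD = 300.0  # max element movement, in screen pixels (assumed)

def displacement(p0, p1):
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

def update_element(hand_prev, hand_now, element_pos):
    """Move the target element by the hand's displacement between two frames;
    return None to indicate the dynamic interaction effect should stop."""
    d = displacement(hand_prev, hand_now)
    new_pos = (element_pos[0] + (hand_now[0] - hand_prev[0]),
               element_pos[1] + (hand_now[1] - hand_prev[1]))
    moved = displacement(element_pos, new_pos)
    if d > FIRST_THRESHOLD or moved > SECOND_THRESHOLD:
        return None            # stop displaying the dynamic interaction effect
    return new_pos

print(update_element((100, 100), (160, 120), (400, 300)))  # element follows the hand
print(update_element((100, 100), (500, 400), (400, 300)))  # None: displacement too large
```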
In some embodiments, the second determination submodule is further configured to: determining a movable element in a hidden state as a target movable element in the screen saver picture under the condition that the second position information is located in a preset interaction area of the screen saver picture;
the second determining module 830 includes: a fifth determination submodule for determining that the display state of the target movable element is changed from a hidden state to a visible state, and determining position change information of the hand in at least two frames of the hand image; a sixth determination sub-module for determining movement information of the target movable element based on the position change information of the hand; and the second generation submodule is used for generating a dynamic interaction effect for enabling the target movable element to move according to the movement information.
In some embodiments, the apparatus further comprises: a detection module, configured to, when it is detected that the second location information moves from within the preset interaction area to outside the preset interaction area, perform at least one of the following: controlling the target movable element to stop moving; controlling the target movable element to move to a position corresponding to the hidden state; controlling the target movable element to return from the display state to the hidden state.
In some embodiments, the apparatus further comprises: the third determining module is used for performing gesture recognition on the hand image to obtain a gesture recognition result; the second determining module 830 is further configured to determine a dynamic interaction effect of the target movable element corresponding to the hand motion represented by the gesture recognition result.
In some embodiments, the apparatus further comprises: and the second acquisition module is used for acquiring a screen saver picture corresponding to the selected interactive scene according to the operation of selecting the interactive scene by the personnel in the vehicle and displaying the screen saver picture corresponding to the selected interactive scene under the condition that the vehicle-mounted display equipment works in the screen saver mode.
In some embodiments, the first obtaining module 810 includes: the starting sub-module is used for starting an image acquisition device in the vehicle under the condition that the vehicle-mounted display equipment works in a screen saver mode; and the acquisition submodule is used for acquiring hand images of people in the vehicle corresponding to the vehicle-mounted display equipment based on the images in the vehicle acquired by the image acquisition device.
In some embodiments, the in-vehicle display device is a touch screen; the device further comprises: and the control module is used for responding to the touch operation aiming at the vehicle-mounted display equipment, controlling the vehicle-mounted display equipment to quit displaying the dynamic interaction effect, presenting an interface corresponding to the touch operation and controlling the image acquisition device to be closed.
The functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, no further description is given here.
It should be noted that, in the embodiment of the present application, if the above interaction method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing an electronic device (which may be a personal computer, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an in-vehicle display device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor executes the computer program to implement the steps in the interaction method provided in the foregoing embodiment.
Correspondingly, the embodiment of the present application provides a readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the above-mentioned interaction method.
Here, it should be noted that: the above description of the storage medium and platform embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the platform of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Fig. 9 is a schematic diagram of a hardware entity of a vehicle-mounted display device according to an embodiment of the present application, and as shown in fig. 9, the hardware entity of the vehicle-mounted display device 900 includes: a processor 901, a communication interface 902, and a memory 903, wherein the processor 901 generally controls the overall operation of the in-vehicle display apparatus 900. The communication interface 902 may enable the in-vehicle display device 900 to communicate with other platforms or electronic devices or servers over a network.
The memory 903 is configured to store instructions and applications executable by the processor 901, and may also buffer data to be processed or already processed by the processor 901 and by each module in the in-vehicle display apparatus 900 (e.g., image data, audio data, voice communication data, and video communication data); it may be implemented by a FLASH memory (FLASH) or a Random Access Memory (RAM).
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, etc. In addition, the shown or discussed coupling, direct coupling or communication connection between the components may be through some interfaces, indirect coupling or communication connection between devices or units, and the like.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing module, or each unit may exist separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of interacting with an in-vehicle display device, the method comprising:
under the condition that the vehicle-mounted display equipment works in a screen saver mode, acquiring a hand image of an in-vehicle person corresponding to the vehicle-mounted display equipment, wherein the vehicle-mounted display equipment displays a screen saver picture containing movable elements in the screen saver mode;
determining a target movable element matched with the position of the hand in the movable elements of the screen saver picture based on first position information of the hand in the hand image;
determining a dynamic interaction effect of the target movable element;
and displaying the dynamic interaction effect of the target movable element in the screen saver picture.
2. The method according to claim 1, wherein the determining a target movable element of the movable elements of the screen saver screen that matches the position of the hand based on the first position information of the hand in the hand image comprises:
determining second position information matched with the first position information in the screen saver picture;
determining a target movable element based on the second position information.
3. The method of claim 2, wherein after determining the second position information, determining a target movable element of the movable elements of the screen saver screen that matches the position of the hand based on the first position information of the hand in the hand image further comprises:
generating a virtual hand pattern at a location in the screen saver screen indicated by the second location information;
determining the movable element whose position matches the virtual hand pattern as a target movable element.
4. The method of claim 3, wherein after generating the virtual hand pattern, the method further comprises:
and moving the position of the virtual hand pattern in the screen saver picture based on the position change information of the hand in the hand images of the plurality of frames.
5. The method of claim 2, wherein the determining the dynamic interaction effect of the target movable element comprises:
determining position change information of the hand in at least two frames of the hand image;
determining movement information of the target movable element in the screen saver screen based on the position change information of the hand;
and generating a dynamic interaction effect for moving the target movable element according to the movement information.
6. The method of claim 5, further comprising:
and under the condition that the displacement distance represented by the position change information is greater than a first threshold value and/or the movement distance represented by the movement information is greater than a second threshold value, stopping displaying the dynamic interaction effect of the target movable element in the screen saver picture.
7. The method of claim 2, wherein determining a target movable element based on the second location information comprises:
determining a movable element in a hidden state as a target movable element in the screen saver picture under the condition that the second position information is located in a preset interaction area of the screen saver picture;
the determining the dynamic interaction effect of the target movable element comprises:
determining that the display state of the target movable element is changed from a hidden state to a visible state, and
determining position change information of the hand in at least two frames of the hand image;
determining movement information of the target movable element based on the position change information of the hand;
and generating a dynamic interaction effect for moving the target movable element according to the movement information.
8. The method of claim 7, further comprising:
under the condition that the second position information is detected to move from the preset interaction area to the outside of the preset interaction area, executing at least one of the following steps:
controlling the target movable element to stop moving;
controlling the target movable element to move to a position corresponding to the hidden state;
controlling the target movable element to return from the display state to the hidden state.
9. The method according to any one of claims 1 to 8, further comprising:
performing gesture recognition on the hand image to obtain a gesture recognition result;
the determining the dynamic interaction effect of the target movable element comprises:
and determining the dynamic interaction effect of the target movable element corresponding to the hand motion represented by the gesture recognition result.
10. The method according to any one of claims 1 to 9, further comprising:
and according to the operation of selecting the interactive scene by the personnel in the vehicle, acquiring a screen saver picture corresponding to the selected interactive scene and displaying the screen saver picture corresponding to the selected interactive scene under the condition that the vehicle-mounted display equipment works in a screen saver mode.
11. The method according to any one of claims 1 to 10, wherein the acquiring the hand image of the vehicle-mounted person corresponding to the vehicle-mounted display device when the vehicle-mounted display device operates in a screen saver mode comprises:
under the condition that the vehicle-mounted display equipment works in the screen saver mode, starting an image acquisition device in the vehicle;
and acquiring hand images of people in the vehicle corresponding to the vehicle-mounted display equipment based on the images in the vehicle acquired by the image acquisition device.
12. The method of claim 11, wherein the vehicle-mounted display device is a touch screen; after displaying the dynamic interaction effect of the target movable element in the screen saver screen, the method further comprises:
and responding to the touch operation aiming at the vehicle-mounted display equipment, controlling the vehicle-mounted display equipment to quit displaying the dynamic interaction effect, presenting an interface corresponding to the touch operation, and controlling the image acquisition device to be closed.
13. An apparatus for interacting with an in-vehicle display device, the apparatus comprising:
the vehicle-mounted display equipment comprises a first acquisition module, a second acquisition module and a display module, wherein the first acquisition module is used for acquiring a hand image of an in-vehicle person corresponding to the vehicle-mounted display equipment under the condition that the vehicle-mounted display equipment works in a screen saver mode, and the vehicle-mounted display equipment displays a screen saver picture containing movable elements under the screen saver mode;
a first determining module, configured to determine, based on first position information of a hand in the hand image, a target movable element that matches a position of the hand in movable elements of the screen saver screen;
a second determination module, configured to determine a dynamic interaction effect of the target movable element;
and the display module is used for displaying the dynamic interaction effect of the target movable element in the screen saver picture.
14. An in-vehicle display device comprising a memory and a processor, the memory storing a computer program operable on the processor, the processor implementing the steps of the method of any one of claims 1 to 12 when executing the program.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 12.
CN202210613139.8A 2022-05-31 2022-05-31 Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment Pending CN114840092A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210613139.8A CN114840092A (en) 2022-05-31 2022-05-31 Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment
PCT/CN2023/091195 WO2023231664A1 (en) 2022-05-31 2023-04-27 Method and apparatus for interacting with vehicle-mounted display device, and device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210613139.8A CN114840092A (en) 2022-05-31 2022-05-31 Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment

Publications (1)

Publication Number Publication Date
CN114840092A true CN114840092A (en) 2022-08-02

Family

ID=82571629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210613139.8A Pending CN114840092A (en) 2022-05-31 2022-05-31 Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment

Country Status (2)

Country Link
CN (1) CN114840092A (en)
WO (1) WO2023231664A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116383028A (en) * 2023-06-05 2023-07-04 北京博创联动科技有限公司 Vehicle man-machine interaction system based on vehicle-mounted intelligent terminal
WO2023231664A1 (en) * 2022-05-31 2023-12-07 上海商汤智能科技有限公司 Method and apparatus for interacting with vehicle-mounted display device, and device, storage medium, and computer program product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488971B (en) * 2019-07-16 2023-02-28 北京华捷艾米科技有限公司 Method and system for realizing somatosensory application interaction of android system
CN112445323B (en) * 2019-08-29 2023-12-29 斑马智行网络(香港)有限公司 Data processing method, device, equipment and machine-readable medium
CN112527110B (en) * 2020-12-04 2024-07-16 北京百度网讯科技有限公司 Non-contact interaction method, non-contact interaction device, electronic equipment and medium
CN113377198B (en) * 2021-06-16 2023-10-17 深圳Tcl新技术有限公司 Screen saver interaction method and device, electronic equipment and storage medium
CN114840092A (en) * 2022-05-31 2022-08-02 上海商汤临港智能科技有限公司 Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment

Also Published As

Publication number Publication date
WO2023231664A1 (en) 2023-12-07

Similar Documents

Publication Publication Date Title
CN108197589B (en) Semantic understanding method, apparatus, equipment and the storage medium of dynamic human body posture
KR101574099B1 (en) Augmented reality representations across multiple devices
CN106462242B (en) Use the user interface control of eye tracking
CN114840092A (en) Method, device, equipment and storage medium for interacting with vehicle-mounted display equipment
EP2795936B1 (en) User-to-user communication enhancement with augmented reality
CN103988220B (en) Local sensor augmentation of stored content and AR communication
JP2024500650A (en) System and method for generating augmented reality objects
CN111714886B (en) Virtual object control method, device, equipment and storage medium
CN102135799A (en) Interaction based on computer application
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
WO2022142626A1 (en) Adaptive display method and apparatus for virtual scene, and electronic device, storage medium and computer program product
CN111768438B (en) Image processing method, device, equipment and computer readable storage medium
CN113867531A (en) Interaction method, device, equipment and computer readable storage medium
CN112711458A (en) Method and device for displaying prop resources in virtual scene
US20170043256A1 (en) An augmented gaming platform
CN110176044B (en) Information processing method, information processing device, storage medium and computer equipment
CN117789306A (en) Image processing method, device and storage medium
CN114210051A (en) Carrier control method, device, equipment and storage medium in virtual scene
EP4012663A1 (en) Image processing device, image processing method, and program
CN113440850A (en) Virtual object control method and device, storage medium and electronic device
KR20230085934A (en) Picture display method and apparatus, device, storage medium, and program product in a virtual scene
CN112870694A (en) Virtual scene picture display method and device, electronic equipment and storage medium
KR20210004479A (en) Augmented reality-based shooting game method and system for child
WO2024051398A1 (en) Virtual scene interaction processing method and apparatus, electronic device and storage medium
González-Ortega et al. 3D Kinect-Based Gaze Region Estimation in a Driving Simulator

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination