CN115437527A - Interaction control method and device, electronic equipment, storage medium and vehicle


Info

Publication number
CN115437527A
Authority
CN
China
Prior art keywords
display screen
user
playing
moving path
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210751171.2A
Other languages
Chinese (zh)
Inventor
贺永强
巩雪君
郭宁
霍国栋
马跃
崔斌
胡含
安庆涵
王涛
苏皓然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Co Wheels Technology Co Ltd filed Critical Beijing Co Wheels Technology Co Ltd
Priority to CN202210751171.2A
Publication of CN115437527A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure provide an interaction control method and apparatus, an electronic device, a storage medium, and a vehicle. The interaction control method includes the following steps: in response to receiving a display task triggered when a user interacts with a first display screen, determining whether the display task is to be executed by a second display screen; and when the display task is to be executed by the second display screen, controlling the first display screen to play a dynamic guide avatar used to gradually switch the display screen the user attends to from the first display screen to the second display screen. With this method, the user's visual focus follows the guide avatar, so that the user's attention moves involuntarily to the second display screen, which solves the poor user experience caused by prompting the user to attend to another display screen with static prompt information.

Description

Interaction control method and device, electronic equipment, storage medium and vehicle
Technical Field
The disclosure relates to the technical field of human-machine intelligent interaction, and in particular to an interaction control method, an interaction control apparatus, an electronic device, a storage medium, and a vehicle.
Background
With the popularization and commercialization of the intelligent cockpit concept, multiple display screens have been installed in some mid- to high-end vehicles to enable more convenient, personalized interaction between occupants and the vehicle and to meet the entertainment needs of occupants in each seat.
Although multiple display screens are deployed in a vehicle, not every display screen is configured to perform all display functions. For example, in a vehicle, only the center control display screen may be able to display the navigation map: when an occupant interacts with the head unit through another display screen and navigation is started, the navigation map cannot be shown on that screen, only on the center control screen. In such a case, cross-screen prompt information must be displayed on the display screen the user is currently viewing to prompt the user to attend to the center control display screen.
However, existing cross-screen prompting methods merely display static prompt information on the display screen the occupant is currently viewing; they cannot actively guide the occupant's attention to the other display screen, so the user experience is poor.
Disclosure of Invention
To solve the above technical problem, embodiments of the present disclosure provide an interaction control method and apparatus, an electronic device, a storage medium, and a vehicle.
In a first aspect, an embodiment of the present disclosure provides an interaction control method, including:
in response to receiving a display task triggered when a user interacts with a first display screen, determining whether the display task is to be executed by a second display screen;
and when the display task is to be executed by the second display screen, controlling the first display screen to play a dynamic guide avatar, wherein the dynamic guide avatar is used to gradually switch the display screen the user attends to from the first display screen to the second display screen.
Optionally, before controlling the first display screen to play the dynamic guide avatar, the method further includes:
determining a guiding direction according to the relative positions of the first display screen and the second display screen;
and planning a first playing movement path according to the guiding direction, wherein the first playing movement path is the movement path of the dynamic guide avatar when it is played on the first display screen;
and controlling the first display screen to play the dynamic guide avatar includes:
playing the dynamic guide avatar on the first display screen along the first playing movement path.
Optionally, the method further includes:
determining a current focus position, wherein the current focus position is the display position on the first display screen that the user watches while interacting with the first display screen;
and planning the first playing movement path according to the guiding direction includes:
planning the first playing movement path according to the guiding direction and the current focus position.
Optionally, planning the first playing movement path according to the guiding direction and the current focus position includes:
determining the current focus position as the start position of the first playing movement path;
determining the projection direction of the guiding direction on the first display screen;
determining the end position of the first playing movement path at the edge of the first display screen according to the start position and the projection direction;
and planning the first playing movement path based on the start position and the end position.
Optionally, determining the current focus position includes:
acquiring a user image captured in real time, wherein the user image is an image including the user's eye features;
processing the user image to determine the user's iris region and eye-white region;
determining the user's relative attention direction according to the areas of the eye-white regions on the peripheral sides of the iris region;
determining the current attention direction according to the relative attention direction and the extrinsic parameters of the camera that captured the user image;
and determining the current focus position according to the current attention direction.
Optionally, the method further includes: determining the user's position;
determining a first distance between the user and the first display screen according to the user's position and the position of the first display screen; and determining a second distance between the user and the second display screen according to the user's position and the position of the second display screen;
and determining a zoom mode of the dynamic guide avatar based on the relationship between the first distance and the second distance;
and playing the dynamic guide avatar on the first display screen along the first playing movement path includes:
playing the dynamic guide avatar on the first display screen along the first playing movement path according to the zoom mode.
Optionally, determining the zoom mode of the dynamic guide avatar based on the relationship between the first distance and the second distance includes:
determining the zoom mode to be a reduction mode when the first distance is smaller than the second distance;
and determining the zoom mode to be an enlargement mode when the first distance is greater than the second distance.
Optionally, the method further includes: determining, according to the guiding direction, an initial playing position of the dynamic guide avatar when it is played on the second display screen;
acquiring a target playing position of the dynamic guide avatar when it is played on the second display screen;
and planning a second playing movement path according to the initial playing position and the target playing position, wherein the second playing movement path is the movement path of the dynamic guide avatar when it is played on the second display screen;
and after controlling the first display screen to finish playing the dynamic guide avatar, the method further includes:
controlling the second display screen to continue playing the dynamic guide avatar along the second playing movement path.
Optionally, while playing the dynamic guide avatar on the first display screen along the first playing movement path, the method further includes:
controlling a stereo playback system to output a first prompt voice along the first playing movement path, wherein the first prompt voice is a voice prompting the user to switch the display screen being attended to from the first display screen to the second display screen.
Optionally, controlling the stereo playback system to output the first prompt voice along the first playing movement path includes: determining a simulated playing position, on the first playing movement path, of each audio frame of the first prompt voice;
generating, according to the simulated playing position, the audio frame, and the positions of the loudspeakers in the stereo playback system, modulation signals for controlling each loudspeaker to produce sound;
and driving the corresponding loudspeakers with the modulation signals, so that the sound waves emitted by the loudspeakers reverberate in the sound field to form the first prompt voice.
Optionally, the method further includes: determining a continuing movement path of the simulated sound source according to the end position of the first playing movement path and the position of the second display screen, wherein the continuing movement path is the path along which prompt voice continues to be played after the dynamic guide avatar finishes playing on the first display screen;
and after controlling the stereo playback system to output the first prompt voice along the first playing movement path, the method further includes:
controlling the stereo playback system to output a second prompt voice along the continuing movement path, wherein the second prompt voice is a voice that continues to guide the user's attention to the second display screen.
In a second aspect, an embodiment of the present disclosure provides an interaction control apparatus, including:
a task judging unit configured to, in response to a display task triggered when a user interacts with a first display screen, determine whether the display task is to be executed by a second display screen;
and a display control unit configured to, when the display task is to be executed by the second display screen, control the first display screen to play a dynamic guide avatar that gradually switches the display screen the user attends to from the first display screen to the second display screen.
In a third aspect, an embodiment of the present disclosure provides an electronic device including a processor and a memory, the memory storing a computer program;
the computer program, when loaded by the processor, causes the processor to perform the interaction control method of any of the foregoing implementations.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the foregoing interaction control method.
In a fifth aspect, embodiments of the present disclosure provide a vehicle including a processor, a memory, a first display screen, and a second display screen; the memory stores a computer program; when the computer program is loaded by the processor, it causes the processor to perform the foregoing interaction control method and makes the first display screen play the dynamic guide avatar; and after the first display screen finishes playing the dynamic guide avatar, the second display screen is controlled to execute the display task.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
with the technical solutions provided by the embodiments of the present disclosure, when a display task triggered by the user is received and it is determined that the task is to be executed by the second display screen, the first display screen plays a dynamic guide avatar that prompts the user to gradually switch the display screen being attended to from the first display screen to the second display screen. The user's visual focus thus follows the guide avatar and moves involuntarily to the second display screen, which solves the poor user experience caused by prompting the user with static prompt information to attend to another display screen.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent to those skilled in the art that other drawings can be obtained from these drawings without inventive effort, wherein:
FIG. 1 is a flowchart of an interaction control method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of an interaction control method provided in some further embodiments of the present disclosure;
FIG. 3 is a flow chart of an interaction control method provided by some embodiments of the present disclosure;
FIG. 4 is a schematic structural diagram of an interactive control device provided in an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device provided in some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
Fig. 1 is a flowchart of an interaction control method provided in an embodiment of the present disclosure. As shown in fig. 1, the interaction control method provided by the embodiment of the present disclosure includes S110-S120.
It should be noted that the interaction control method provided by the embodiments of the present disclosure may be executed by various electronic devices; the electronic devices may be vehicle terminal devices, smart home devices, and other terminal devices that can be configured with multiple display screens. Of course, the method provided by the embodiments of the present disclosure may also be executed cooperatively by multiple electronic devices, for example the central control device, front-passenger entertainment device, and rear-seat entertainment device of a vehicle terminal. In the following, the interaction control method provided by the embodiments of the present disclosure is described with reference to a vehicle terminal device that includes multiple display screens.
S110: and judging whether the display task is executed by the second display screen or not in response to receiving the display task triggered when the user interacts with the first display screen.
In the embodiment of the present disclosure, the first display screen is a display screen currently interacting with a user, that is, a display screen that a user is paying attention to or watching.
For example, if the user is a secondary driver who is viewing a secondary entertainment display screen, the secondary entertainment display screen may be considered the first display screen.
As another example, if the user is a rear passenger who is watching a rear entertainment display, the rear entertainment display may be considered the first display.
For another example, if the user is a driver who is watching a head-up display screen on the front side of the main driver, an instrument display screen of the main driver, or a control display screen on the steering wheel, the head-up display screen, the instrument display screen, or the control display screen on the steering wheel may be used as the first display screen.
In the embodiments of the present disclosure, the user can interact with the first display screen in various ways to trigger the vehicle-mounted terminal device to control a display screen to execute a display task.
For example, when the first display screen is the front-passenger entertainment display screen, the rear-seat entertainment display screen, or the control display screen on the steering wheel, and the first display screen is a touch display screen, the user can operate the function buttons displayed on the first display screen so that the vehicle-mounted terminal device obtains the interaction instruction and thereby receives the display task triggered by the user.
As another example, when the vehicle-mounted terminal is configured with a voice interaction function, the user may, while watching the first display screen, issue an interaction control instruction by voice, so that the vehicle-mounted terminal device receives the display task triggered by the user. Specifically, at least one sound pickup for capturing in-cabin sound is provided in the vehicle-mounted terminal device. The sound pickup captures sound signals in the vehicle cabin in real time and transmits them to the vehicle-mounted terminal device. The vehicle-mounted terminal device then checks the sound signal for a voice signal. When the captured sound signal includes a voice signal, the vehicle-mounted terminal device processes the voice signal to obtain the corresponding text, processes the text to determine whether the user has issued a control instruction, and then determines from the control instruction whether a display task triggered by the user has been received.
After receiving a display task triggered when the user interacts with the first display screen, the vehicle-mounted terminal device determines whether the display task is to be executed by the second display screen.
The second display screen is an independent display screen located at a different position from the first display screen.
In the embodiments of the present disclosure, the vehicle-mounted terminal device is pre-configured with the display tasks that each display screen can execute. For example, the aforementioned front-passenger or rear-seat entertainment display screen may execute various entertainment display tasks, such as playing a movie, playing a song, or displaying a game picture. The aforementioned head-up display or instrument display screen may only execute tasks that display the vehicle state or the characteristics of the road ahead. The center control display screen may execute display tasks such as navigation and vehicle-setting pages.
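As a minimal sketch of how such a pre-configured capability table and the check in S110 might look, consider the following Python snippet; all task names and screen identifiers are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical capability table: which display screens may execute which tasks.
SCREEN_CAPABILITIES = {
    "center_control": {"navigation", "vehicle_status"},
    "copilot_entertainment": {"movie", "music", "game"},
    "rear_entertainment": {"movie", "music", "game"},
    "instrument_cluster": {"vehicle_status", "road_ahead"},
}

def screen_for_task(task: str, current_screen: str) -> str | None:
    """Return a screen that can execute `task`, preferring the current one.

    Mirrors the check in S110: if the task cannot run on the screen the user
    is interacting with, find the screen that is configured for it.
    """
    if task in SCREEN_CAPABILITIES.get(current_screen, set()):
        return current_screen
    for screen, tasks in SCREEN_CAPABILITIES.items():
        if task in tasks:
            return screen
    return None

# Example: a navigation task triggered from the rear entertainment screen must
# be handed to the center control screen (the "second display screen").
assert screen_for_task("navigation", "rear_entertainment") == "center_control"
```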
S120: and in the case that the display task is a display task executed by the second display screen, controlling the first display screen to play a dynamic guide image, wherein the dynamic guide image is used for gradually switching the display screen which is focused by the user from the first display screen to the second display screen.
In a case where it is determined that the display task is a display task performed by the second display screen, the in-vehicle terminal device then performs the aforementioned display task using the second display screen. In the embodiment of the disclosure, in the case that the vehicle-mounted terminal device determines that the display task is a display task executed by the second display screen, the vehicle-mounted terminal may control the first display screen to play the dynamic guidance image.
The dynamic guide avatar is a guide figure that gradually switches the display screen the user attends to from the first display screen to the second display screen. The dynamic guide avatar may be an anthropomorphic figure, a personified object figure, or the like.
For example, in one embodiment of the present disclosure, the dynamic guide avatar is an animated character. In another specific embodiment, the dynamic guide avatar is a circular, face-like figure with movable eyes. In the embodiments of the present disclosure, the dynamic guide avatar can have multiple display states, such as facing the user, turning, and facing away from the user.
In the embodiments of the present disclosure, controlling the first display screen to play the dynamic guide avatar means playing, frame by frame, the video frames containing the avatar, so that the avatar gradually moves from a display position near the middle of the first display screen, or from a position far from the second display screen, to the edge of the first display screen nearest the second display screen. The user's visual focus is thereby guided step by step from the middle of the first display screen to its edge, and then switches naturally to the second display screen.
As mentioned above, the dynamic guide avatar in the embodiments of the present disclosure has multiple display states. While the first display screen plays the avatar that gradually guides the user's visual focus from the first display screen to the second, the avatar's display state can be adjusted accordingly. For example, the avatar is first displayed facing away from the user in the middle area of the first display screen; after being shown in this rear-facing state for a while, it changes to a turning or side-facing state, so that the user perceives the guide avatar as gradually leaving the middle of the first display screen.
With the interaction control method provided by the embodiments of the present disclosure, when a display task triggered by the user is received and it is determined that the task is to be executed by the second display screen, the first display screen plays a dynamic guide avatar that prompts the user to gradually switch the display screen being attended to from the first display screen to the second display screen. The user's visual focus follows the guide avatar and thus moves involuntarily to the second display screen, which solves the poor user experience caused by prompting the user with static prompt information to attend to another display screen.
Fig. 2 is a flowchart of an interaction control method according to some other embodiments of the present disclosure. In some other embodiments of the present disclosure, as shown in FIG. 2, the interactive control method includes S210-S240.
S210: In response to receiving a display task triggered when the user interacts with the first display screen, determine whether the display task is to be executed by the second display screen; if yes, perform S220.
S220: Determine the guiding direction according to the positions of the first display screen and the second display screen.
S230: Plan a first playing movement path according to the guiding direction, wherein the first playing movement path is the movement path of the dynamic guide avatar when it is played on the first display screen.
S240: Play the dynamic guide avatar on the first display screen along the first playing movement path.
Unlike the foregoing embodiment, in this embodiment, when it is determined that the display task triggered by the user's interaction with the first display screen is to be executed by the second display screen, the vehicle-mounted terminal acquires the positions of the first and second display screens and determines the guiding direction from them, then plans the first playing movement path of the dynamic guide avatar on the first display screen according to the guiding direction.
In some embodiments of the present disclosure, when the first and second display screens are fixed display screens, the vehicle terminal device may look up pre-stored information according to the identification information of the two screens to determine their positions.
In some other embodiments of the present disclosure, when at least one of the first and second display screens is a movable display screen, the vehicle-mounted terminal device may determine the real-time position of that screen from the signals output by devices such as an inertial sensor inside it. After determining the real-time positions of the first and second display screens, the vehicle-mounted terminal device can determine the guiding direction from those positions.
The guiding direction is the direction along which the user is guided to switch attention from the first display screen to the second display screen. In the embodiments of the present disclosure, the guiding direction is a three-dimensional direction, pointing from the position of the first display screen to the position of the second display screen.
The first playing movement path is the movement path of the dynamic guide avatar when played on the first display screen, that is, the path along which the avatar moves from the middle area of the first display screen to its edge area.
In the embodiments of the present disclosure, determining the avatar's playing movement path from the positions of the two screens means determining, based on their relative positions, a path that conveniently moves the user's visual focus from the first display screen to the second.
For example, when the first display screen is a front-passenger display screen directly in front of the passenger and the second display screen is the center control screen in the vehicle's center control area, the playing movement path may run from the middle of the first display screen to its lower-left edge. As another example, when the first display screen is an entertainment display screen in the middle of the rear seats and the second display screen is the center control screen, the playing movement path may run from the middle of the first display screen to its lower-middle edge.
After the first playing movement path is determined, the vehicle terminal device can control the first display screen to play the dynamic guide avatar along it, so that the avatar guides the user along the path: the user's visual focus moves to the edge of the first display screen and then naturally on to the second display screen.
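A minimal sketch of the direction computation in S220, assuming both screen positions are known 3D points (e.g. the screen centers) in a common vehicle coordinate frame:

```python
import numpy as np

def guiding_direction(first_screen_pos, second_screen_pos):
    """Unit vector pointing from the first display screen toward the second.

    A sketch of S220: both positions are assumed to be 3D points in the
    vehicle coordinate frame; the choice of screen centers is an assumption.
    """
    direction = np.asarray(second_screen_pos, float) - np.asarray(first_screen_pos, float)
    return direction / np.linalg.norm(direction)

# Example: front-passenger screen at (0.5, 0.6, 1.0), center control screen
# at (0.0, 0.5, 1.0) -- the guide should move left and slightly down.
print(guiding_direction([0.5, 0.6, 1.0], [0.0, 0.5, 1.0]))
```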
Optionally, in some embodiments of the present disclosure, before performing S230 to plan the first playing movement path according to the guiding direction, the vehicle-mounted terminal device may further perform S250.
S250: Determine the user's current focus position.
In the embodiments of the present disclosure, the current focus position is the display position on the first display screen that the user attends to while interacting with it. That is, the current focus position is the display position on the first display screen that the user is currently watching.
In some embodiments of the present disclosure, the user's current focus position may be determined using S251-S252 below.
S251: the user location and the user's current direction of interest are determined.
In some embodiments of the present disclosure, the in-vehicle terminal device may determine the user position in various ways. For example, in some embodiments, the in-vehicle terminal device may determine that the user is seated in the seat based on a pressure sensor on the seat, and thus determine the user's location.
In some other embodiments of the present disclosure, at least two microphones are configured in the car, and the two microphones monitor the audio signal in the car in real time. The vehicle terminal device may determine the user position through S251A-S251C as follows.
S251A: Acquire the audio signals monitored by the at least two sound pickups.
S251B: Determine whether the audio signals include a voice signal; if yes, perform S251C.
S251C: Perform spatial localization according to the voice signal to determine the user's position.
After acquiring the audio signals monitored by the at least two sound pickups, the vehicle-mounted terminal device first determines whether they include a voice signal. If they do, the user is speaking. The vehicle-mounted terminal device can then spatially localize the user from the voice signals contained in the two or more audio channels, i.e., determine the position of the user's head region.
Specifically, the vehicle-mounted terminal device localizes the user's mouth by back-calculating from the times at which the same voice features arrive in each audio channel and from the positions of the sound pickups. For the details of how to spatially localize a user from voice signals, reference may be made to existing technical literature and products in the acoustics field; they are not repeated in the embodiments of the present disclosure.
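The following is a deliberately simplified sketch of such time-difference-of-arrival localization: it brute-forces a grid of candidate cabin positions instead of the closed-form or iterative solvers a production system would use. The cabin dimensions, pickup positions, and grid resolution are all assumed values.

```python
import numpy as np

def locate_speaker(mic_positions, arrival_times, speed_of_sound=343.0):
    """Coarse sound-source localization from time differences of arrival.

    A simplified sketch of S251C: try a grid of candidate cabin positions
    (2D, metres) and keep the one whose predicted inter-pickup delay
    pattern best matches the measured arrival times. With only two pickups
    the solution is ambiguous along a hyperbola; more pickups sharpen it.
    """
    mics = np.asarray(mic_positions, float)
    times = np.asarray(arrival_times, float)
    best_pos, best_err = None, np.inf
    for x in np.linspace(-1.0, 1.0, 41):       # cabin width (assumed)
        for y in np.linspace(0.0, 3.0, 61):    # cabin length (assumed)
            cand = np.array([x, y])
            pred = np.linalg.norm(mics - cand, axis=1) / speed_of_sound
            # Compare delay *differences* so the unknown emission time cancels.
            err = np.sum(((pred - pred[0]) - (times - times[0])) ** 2)
            if err < best_err:
                best_pos, best_err = cand, err
    return best_pos

# Example: three pickups; the times correspond to a source near (0.4, 1.6).
print(locate_speaker([[-0.7, 1.0], [0.7, 1.0], [0.0, 2.5]],
                     [0.00365, 0.00196, 0.00287]))
```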
Optionally, before performing S251C, the vehicle-mounted terminal device may further perform S251D: acquire the pressure signals output by the seat pressure sensors.
Correspondingly, in S251C, performing spatial localization according to the voice signal to determine the user's position may specifically be S251C1.
S251C1: Perform spatial fusion localization according to the voice signal and the pressure signal to determine the user's position.
In a specific embodiment, Kalman filtering or particle filtering may be used to fuse the position determined from the voice signal with the position determined from the pressure signal, so as to determine the user's position. Fusing the voice and pressure signals improves the accuracy of the determined user position.
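As a sketch of the simplest such fusion, the following applies the measurement-update step of a Kalman filter to the two position fixes; the variance values in the example are assumed calibration constants, not figures from the disclosure.

```python
import numpy as np

def fuse_positions(voice_pos, voice_var, seat_pos, seat_var):
    """Variance-weighted fusion of two position estimates.

    This is the measurement-update step of a (static) Kalman filter, shown
    as a sketch of S251C1: the lower-variance estimate dominates the result.
    """
    voice_pos = np.asarray(voice_pos, float)
    seat_pos = np.asarray(seat_pos, float)
    gain = seat_var / (seat_var + voice_var)   # weight given to the voice fix
    return seat_pos + gain * (voice_pos - seat_pos)

# Example: seat sensor says (0.5, 1.8) with high confidence, voice
# localization says (0.6, 1.6) with lower confidence.
print(fuse_positions([0.6, 1.6], 0.04, [0.5, 1.8], 0.01))
```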
In some other embodiments of the present disclosure, an interior camera is also arranged in the vehicle cabin and can capture images of the cabin, in particular images of the users in it. The vehicle-mounted terminal device may determine the user's current attention direction using S251E-S251I below.
S251E: Acquire the user image captured in real time.
S251F: Process the user image to determine the iris region and the eye-white region.
In the embodiments of the present disclosure, the interior camera captures in-cabin images in real time, and the user image is determined from the captured images. Note that the user image in the embodiments of the present disclosure is an image that includes the user's eye features.
After acquiring the user image, the vehicle-mounted terminal device can determine the user's iris region and eye-white region from it. In a specific embodiment, a pre-trained deep learning model may be used to locate the user's eye region, and the pixels may then be classified according to the pixel features of the iris and the eye white to extract the two regions.
S251G: Determine the user's relative attention direction from the areas of the eye-white regions around the iris region.
As is well known, the user's attention direction is changed by rotating the eyes, and the ultimate effect of eye rotation is to change the orientation of the iris region. As the eyeball rotates, the areas of eye white exposed outside the eyelids on each side of the iris change. For example, when the user looks straight ahead, the white areas to the left and right of the iris are substantially equal, as are those above and below it. When the user rotates the eyes to look to the right, the white area on the left of the iris becomes larger than the white area on the right, while the areas above and below the iris remain roughly the same. The user's relative attention direction can therefore be determined from the distribution of the eye-white areas. Note that this relative attention direction may be a relatively precise direction or a relatively coarse one.
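A minimal sketch of turning those four sclera areas into a gaze offset; the normalization used here is an illustrative choice, not a formula from the disclosure.

```python
def relative_gaze(white_left, white_right, white_top, white_bottom):
    """Coarse gaze offset from the sclera-area distribution around the iris.

    A sketch of S251G: the four inputs are the eye-white pixel areas on each
    side of the iris. Values near zero mean the user looks straight ahead;
    the signs follow image coordinates (+x right, +y down).
    """
    horizontal = (white_left - white_right) / (white_left + white_right)
    vertical = (white_top - white_bottom) / (white_top + white_bottom)
    return horizontal, vertical

# Example: much more sclera visible left of the iris -> the eye is rotated
# to the user's right.
print(relative_gaze(white_left=900, white_right=300, white_top=500, white_bottom=520))
```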
S251H: Determine the current attention direction according to the relative attention direction and the extrinsic parameters of the camera that captured the user image.
S251I: Determine the current focus position according to the current attention direction.
The extrinsic parameters of the camera describe the camera's calibrated position and orientation in the world coordinate system and may include a rotation matrix. Determining the current attention direction from the relative attention direction and the extrinsics means applying a coordinate transformation to the relative attention direction and taking the transformed direction as the current attention direction.
After the current attention direction is determined, the display area of the first display screen lying in that direction can be determined; this display area is the current focus position.
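A sketch of S251H-S251I as a rotation into the vehicle frame followed by a ray-plane intersection with the first display screen; all frame conventions and parameter names here are assumptions.

```python
import numpy as np

def focus_position(eye_pos_cam, gaze_dir_cam, R, t, screen_point, screen_normal):
    """Intersect the user's gaze ray with the plane of the first display screen.

    A sketch of S251H-S251I: `gaze_dir_cam` is the relative attention
    direction in the camera frame; `R` (rotation matrix) and `t` (translation)
    are the camera extrinsics mapping camera coordinates into the vehicle
    frame; the screen plane is given by one point on it and its normal.
    """
    R = np.asarray(R, float)
    origin = R @ np.asarray(eye_pos_cam, float) + np.asarray(t, float)
    gaze = R @ np.asarray(gaze_dir_cam, float)   # current attention direction
    n = np.asarray(screen_normal, float)
    denom = gaze @ n
    if abs(denom) < 1e-9:
        return None                              # gaze parallel to the screen
    s = ((np.asarray(screen_point, float) - origin) @ n) / denom
    return origin + s * gaze if s > 0 else None  # keep only forward hits
```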
Once the user's current attention direction is available, S252 can be performed.
In the foregoing embodiment, the current attention direction is determined from the user image; in other embodiments, the user's current attention direction may also be determined from the user's position.
S252: Determine the current focus position according to the user's position and/or the user's current attention direction.
In some embodiments of the present disclosure, the first display screen is a large display screen, and the user may attend to only part of it, for example only the area of the first display screen nearest the user's own position. To guide the user in switching screens with the dynamic guide avatar, the avatar's initial playing position on the first display screen is preferably the current focus position; that is, the current focus position needs to be taken as the start position of the playing movement path.
In this embodiment of the present disclosure, after the user's current focus position is determined, S230, planning the first playing movement path according to the guiding direction, may specifically be S231.
S231: Plan the first playing movement path according to the guiding direction and the current focus position.
In the embodiments of the present disclosure, planning the first playing movement path from the current focus position and the guiding direction means taking the current focus position as the start point, using the guiding direction to determine the general direction of the path, and determining the first playing movement path accordingly.
Optionally, in some embodiments of the present disclosure, planning the first playing movement path according to the guiding direction and the current focus position may include S231A-S231D.
S231A: Determine the current focus position as the start position of the first playing movement path.
S231B: Determine the projection direction of the guiding direction on the first display screen.
The projection direction is the direction obtained by projecting the guiding direction onto the first display screen, that is, the component of the guiding direction in the plane of the first display screen. In the embodiments of the present disclosure, once the guiding direction and the plane of the first display screen are known, the projection direction can be obtained by projective transformation.
S231C: Determine the end position of the first playing movement path at the edge of the first display screen according to the start position and the projection direction.
In the embodiments of the present disclosure, determining the end position from the start position and the projection direction can proceed by constructing a ray with the start position as its origin and the projection direction as its direction. Since the projection direction lies in the plane of the first display screen, the ray lies in that plane as well. The intersection of the ray with the edge of the first display screen can then be solved to obtain the end position of the first playing movement path.
S231D: Plan the first playing movement path based on the start position and the end position.
In the embodiments of the present disclosure, after the start and end positions of the first playing movement path are determined, they may be joined by a straight line, and that line used as the first playing movement path. In practice, the straight line may also be perturbed and smoothed into a curve, and the smooth curve used as the first playing movement path.
With this method, the start of the playing movement path is determined from the user's current focus position, so that the dynamic guide avatar is visible to the user as soon as playback starts, while the end is determined from the positions of the two display screens, so that the avatar moves along a reasonable path and draws the user's focus in a reasonable direction, naturally guiding the user to watch the second display screen.
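Putting S231A-S231D together in 2D screen coordinates might look like the sketch below: the ray-versus-rectangle exit test implements S231C, and a quadratic Bezier through a slightly offset midpoint stands in for the perturbed smooth curve of S231D. The curve shape and offset factor are assumptions.

```python
import numpy as np

def plan_first_path(start, guide_dir_2d, width, height, n_points=20):
    """Plan the first playing movement path on the first display screen.

    A sketch of S231A-S231D in pixel coordinates: `start` is the current
    focus position (assumed strictly inside the screen), `guide_dir_2d` the
    projection of the guiding direction onto the screen plane, and
    (width, height) the screen size.
    """
    start = np.asarray(start, float)
    d = np.asarray(guide_dir_2d, float)
    d = d / np.linalg.norm(d)
    # Scale the ray until it first leaves the screen rectangle (end position).
    ts = []
    if d[0]:
        ts += [(0 - start[0]) / d[0], (width - start[0]) / d[0]]
    if d[1]:
        ts += [(0 - start[1]) / d[1], (height - start[1]) / d[1]]
    t_exit = min(t for t in ts if t > 0)
    end = start + t_exit * d
    # Quadratic Bezier through a slightly offset midpoint -> smooth curve.
    mid = (start + end) / 2 + np.array([-d[1], d[0]]) * 0.05 * t_exit
    u = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1 - u) ** 2 * start + 2 * u * (1 - u) * mid + u ** 2 * end
```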
Further, in some embodiments of the present disclosure, before performing S240 to play the dynamic guide avatar on the first display screen along the playing movement path, the vehicle-mounted terminal device may further perform S260-S280.
S260: Determine the user's position.
In the embodiments of the present disclosure, the core of determining the user's position is determining the position of the user's head. The user's position can be determined with the pressure-based, sound-source-based, or image-based methods described above, which are not repeated here.
S270: Determine a first distance from the user to the first display screen according to the user's position and the position of the first display screen; and determine a second distance from the user to the second display screen according to the user's position and the position of the second display screen.
S280: Determine the zoom mode of the dynamic guide avatar according to the first distance and the second distance.
Determining the zoom mode of the guide avatar from the first distance and the second distance involves the following.
(1) Determine whether the dynamic guide avatar is to be played on the first display screen in a reduction mode, an enlargement mode, or a constant-scale mode.
To make the zooming of the dynamic guide avatar match visual intuition, the present disclosure determines the zoom mode from the two distances as follows: when the first distance is smaller than the second distance, the zoom mode is the reduction mode; when the first distance is greater than the second distance, the zoom mode is the enlargement mode; and when the two distances are the same, the zoom mode is the constant-scale mode.
In some embodiments, the second distance is greater than the first distance, i.e., the second display screen is farther from the user than the first display screen. For example, when the first display screen is a rear-seat entertainment display screen and the second display screen is the vehicle's center control display screen, the second distance from the second display screen to a rear-seat user is greater than the first distance from the first display screen to that user. In this case, for the user to look forward of their own accord while watching the dynamic guide avatar, the user must also perceive the avatar as moving away; playing the avatar at a gradually reduced size produces exactly that perception.
In other embodiments, the first distance is substantially the same as the second distance, i.e., the two screens are about equally far from the user. For example, when the first display screen is the front-passenger entertainment display screen and the second display screen is the vehicle's center control display screen, the second distance from the second display screen to the front passenger is roughly equal to the first distance from the first display screen to that passenger. In this case, the dynamic guide avatar merely translates across the first display screen, and its scale may be kept unchanged.
(2) When the zoom mode is the reduction mode, determine the reduction ratio and the zoom rate at each position; when the zoom mode is the enlargement mode, determine the enlargement ratio and the enlargement rate.
In the embodiments of the present disclosure, the scaling ratio may be determined from the ratio of the first distance to the second distance, and the zoom rate from the avatar's moving speed and the playing movement path.
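A sketch of S280 under these rules; the distance tolerance and the linear scale ramp along the path are assumed tuning choices.

```python
def zoom_plan(first_distance, second_distance):
    """Pick the avatar's zoom mode and target scale from the two distances.

    A sketch of S280: a final scale equal to first/second distance mimics
    perspective (an object twice as far away looks half as large).
    """
    if abs(first_distance - second_distance) < 0.05:   # metres, assumed
        return "unchanged", 1.0
    mode = "shrink" if first_distance < second_distance else "enlarge"
    return mode, first_distance / second_distance

def scale_at(progress, target_scale):
    """Scale factor after covering `progress` in [0, 1] of the path."""
    return 1.0 + (target_scale - 1.0) * progress

print(zoom_plan(1.2, 2.4))   # -> ('shrink', 0.5)
```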
When the aforementioned S260-S280 are performed, playing the dynamic guide avatar on the first display screen along the first playing movement path (S240) includes S241.
S241: Play the dynamic guide avatar on the first display screen along the first playing movement path according to the zoom mode.
Playing the dynamic guide avatar in the zoom mode along the first playing movement path lets the user sense the avatar receding or approaching, and thus perceive whether the second display screen is nearer to or farther from them than the first. The user thereby gets a better sense of where the second display screen is, which further improves the user experience.
In the foregoing embodiment, the end position of the first playing movement path is determined from the projection of the guiding direction on the first display screen and the current focus position. In some other embodiments of the present disclosure, when the dynamic guide avatar can be zoomed, the end position of the first playing movement path may also be determined directly from the current focus position and the guiding direction.
For example, in some embodiments, determining the end position of the first playing movement path from the current focus position and the guiding direction may include S232.
S232: Determine, based on the imaging principle, the end position of the first playing movement path from the current focus position and the guiding direction.
Determining the end position based on the imaging principle means taking the current focus position as the start point and the guiding direction as the actual pointing direction, working out how a real object would conveniently travel from the current focus position to the second display screen, then simulating that path on the first display screen using the principle of three-dimensional imaging and thereby determining the end position of the first playing movement path.
S233: Determine the first playing movement path according to the current focus position and the end position.
After the end position is determined, the start and end positions may be joined by a straight line, and the line used as the first playing movement path. In practice, the straight line may also be perturbed and smoothed into a curve that serves as the path. Note that since the dynamic guide avatar is zoomed in the set zoom mode while played along the first playing movement path, the user perceives the avatar as receding or approaching while watching it.
Optionally, in some embodiments of the present disclosure, in addition to performing the aforementioned S230 to plan the first playing movement path according to the guiding direction, the vehicle-mounted terminal device may also perform S290-S320.
S290: Determine, according to the guiding direction, the initial playing position of the dynamic guide avatar when it is played on the second display screen.
In the embodiments of the present disclosure, either of the following approaches may be used to determine, from the guiding direction, the initial playing position of the avatar on the second display screen.
(1) Determine the projection of the guiding direction on the second display screen. Then, from the projection's extension and the guiding direction, find the point of the projection nearest the first display screen, and take that nearest point as the avatar's initial position on the second display screen.
(2) Determine the intersection of the guiding direction with the second display screen and take the intersection directly as the avatar's initial playing position on the second display screen.
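Option (2) is a straightforward ray-plane intersection; a sketch, with all geometry inputs assumed to be 3D vectors in the vehicle frame and clamping to the screen rectangle omitted for brevity:

```python
import numpy as np

def initial_position_on_second_screen(first_center, guide_dir, plane_point, plane_normal):
    """Entry point of the guiding direction on the second display screen.

    A sketch of option (2) in S290: cast a ray from the first screen's
    center along the guiding direction and intersect it with the plane of
    the second display screen (the direction is assumed not to be parallel
    to that plane).
    """
    o = np.asarray(first_center, float)
    d = np.asarray(guide_dir, float)
    n = np.asarray(plane_normal, float)
    s = ((np.asarray(plane_point, float) - o) @ n) / (d @ n)
    return o + s * d
```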
S300: Acquire the target playing position of the dynamic guide avatar when it is played on the second display screen.
In the embodiments of the present disclosure, the playing position corresponding to the task type of the display task can be looked up and used as the target playing position of the dynamic guide avatar on the second display screen.
S310: Plan a second playing movement path according to the initial playing position and the target playing position.
In the embodiments of the present disclosure, the second playing movement path is the movement path of the dynamic guide avatar when it is played on the second display screen.
When the aforementioned S290-S310 are performed, S320 may also be performed after S240 is completed.
S320: Control the second display screen to continue playing the dynamic guide avatar along the second playing movement path.
In the embodiments of the present disclosure, the second display screen is controlled to play the dynamic guide avatar so that the avatar moves from the initial playing position to the target playing position. Naturally, when the second display screen plays the avatar along the second playing movement path, the avatar's zoom mode on that screen can likewise be determined from the distances between the user and the initial and target playing positions.
In other words, after performing S240 and controlling the first display screen to finish playing the dynamic guide avatar, the vehicle-mounted terminal device may go on to perform S320.
By having the second display screen continue playing the dynamic guide avatar across screens, the user perceives the guide avatar as moving onto the second display screen and thus switches attention to the second display screen naturally.
FIG. 3 is a flowchart of an interaction control method provided by some embodiments of the present disclosure. As shown in FIG. 3, in some embodiments of the present disclosure, the interaction control method includes S410-S440.
S410: In response to receiving a display task triggered when the user interacts with the first display screen, determine whether the display task is to be executed by the second display screen; if yes, perform S420.
S420: Plan a first playing movement path according to the positions of the first display screen and the second display screen, wherein the first playing movement path is the movement path of the dynamic guide avatar when it is played on the first display screen.
S430: Play the dynamic guide avatar on the first display screen along the first playing movement path.
The specific implementation of S410-S430 may be the same as in the foregoing embodiments and is not repeated here; see the foregoing embodiments for details.
S440: Control the stereo playback system to output the first prompt voice along the first playing movement path.
In the embodiments of the present disclosure, the first prompt voice is a voice that prompts the user to switch the display screen being attended to from the first display screen to the second display screen. In some embodiments, the output prompt voice may be something like "the subsequent task will be displayed on the second display screen".
By outputting the first prompt voice while the first display screen plays the guide avatar, the user acquires information more naturally under the joint guidance of the prompt voice and the dynamic guide avatar, and can thus shift attention from the first display screen to the second display screen more naturally.
In some embodiments of the present disclosure, S440 may include S441-S443.
S441: and determining the simulated playing position of the audio frame in the first prompt voice in the first playing moving path.
Determining the simulated playing position of an audio frame of the first prompt voice in the first playing moving path means determining the coordinate position of the simulated sound source at the moment that audio frame is played.
The vehicle-mounted terminal device may determine the simulated playing positions of all audio frames in the first playing moving path before playing the first prompt voice, or may determine them incrementally while the first prompt voice is being played; the embodiment of the present disclosure does not limit this.
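Either way, the mapping from audio frames to path coordinates is a simple interpolation. The sketch below assumes the sound source traverses the path uniformly over the duration of the voice; the function name and uniform-traversal assumption are introduced here for illustration.

```python
# A sketch of S441: assign each audio frame a coordinate on the first
# playing moving path by linear interpolation, assuming uniform traversal.
def simulated_positions(path, n_frames):
    """path: list of (x, y, z) points; returns one position per frame."""
    positions = []
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1) * (len(path) - 1)
        lo = int(t)                       # nearest path sample before t
        hi = min(lo + 1, len(path) - 1)   # nearest path sample after t
        frac = t - lo
        (x0, y0, z0), (x1, y1, z1) = path[lo], path[hi]
        positions.append((x0 + (x1 - x0) * frac,
                          y0 + (y1 - y0) * frac,
                          z0 + (z1 - z0) * frac))
    return positions
```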
S442: and generating a modulation signal for controlling each loudspeaker to sound according to the analog playing position, the audio frame and the position of each loudspeaker in the stereo playing system.
To make the sound emitted by the speakers reverberate in the sound field so that the user hears a stereo motion effect, the operating characteristics of each speaker in the stereo playing system need to be adjusted adaptively, for example whether a given speaker sounds at all, its output intensity when it does, and the playback timing of a specific audio frame signal.
In some embodiments, the in-vehicle terminal device determines from the audio frame the sound characteristics (e.g., timbre and quality) that the reverberation needs to reproduce, determines from the simulated playing position the reverberation position characteristics to be formed after the speakers sound, and then generates the modulation signals controlling each speaker according to the sound characteristics, the reverberation position characteristics, and the speaker positions. To achieve this, a technician first calibrates the cabin space, the speaker installation positions, and the main user's listening position, and performs sound field reverberation tests on the basis of the calibration results to construct a sound field reverberation model.
In operation, the vehicle-mounted terminal device can then use this sound field reverberation model to determine the modulation signals for all speakers from the simulated playing position and the audio frame.
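The calibrated reverberation model itself is not spelled out in the disclosure. As a deliberately crude stand-in, per-speaker gains and delays could be derived from source-to-speaker distance alone, as in the following sketch; the inverse-square law and the normalization step are assumptions of the sketch, not the disclosed model.

```python
# A hypothetical, distance-only stand-in for the S442 reverberation model:
# gain from inverse-square attenuation, delay from wavefront travel time.
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def modulation_params(source_pos, speaker_positions):
    """Return one (gain, delay_seconds) pair per speaker so the summed
    output is perceived roughly at source_pos."""
    params = []
    for sp in speaker_positions:
        d = math.dist(source_pos, sp)
        gain = 1.0 / max(d, 0.1) ** 2   # inverse-square attenuation
        delay = d / SPEED_OF_SOUND      # later arrival from farther speakers
        params.append((gain, delay))
    # Normalize gains so overall loudness stays constant along the path.
    total = sum(g for g, _ in params)
    return [(g / total, dly) for g, dly in params]
```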
S443: and driving the corresponding loudspeaker to work by adopting the modulation signal, so that the sound wave emitted by each loudspeaker reverberates in the sound field to form a first prompting voice.
As analyzed above, once the modulation signals drive the speakers, the sound waves they emit reverberate in the sound field to form a single-point sound for each audio frame, and the single-point sounds played in succession form the first prompt voice. When the user's auditory system receives this stereo first prompt voice, the voice is perceived as gradually moving along the first playing moving path toward the edge of the first display screen. In this way, the user watching the dynamic guidance image is guided by the first prompt voice at the same time, and the experience is better.
Optionally, in some embodiments of the present disclosure, after the foregoing S440 is performed, the vehicle-mounted terminal device may further perform S450-S460.
S450: and determining a continuous moving path of the simulated sound source according to the end position of the first playing moving path and the position of the second display screen.
In the embodiment of the present disclosure, the continuous moving path is the path along which the simulated sound source keeps moving after the dynamic guidance image has finished playing on the first display screen.
S460: and after controlling the stereo playing system to output the first prompt voice, controlling the stereo playing system to continue outputting a second prompt voice along the continuous moving path.
In the embodiment of the present disclosure, the second prompt voice is a voice for continuously guiding the user to focus on the second display screen.
After the first prompt voice is output, the stereo playing system continues to output the second prompt voice along the continuous moving path; the second prompt voice keeps guiding the user to focus naturally on the second display screen, further improving the user experience.
In some embodiments of the present disclosure, before performing the aforementioned S450, the in-vehicle terminal device may further perform S470.
S470: judging whether barriers exist between the first display screens and between the second display screens; if so, S450 is specifically S451.
S451: and determining the continuous moving path according to the end position of the first playing moving path, the position of the obstacle and the position of the second display screen.
In practical applications, there may or may not be an obstacle between the first display screen and the second display screen.
For example, if the first display screen is the front-passenger entertainment display screen and the second display screen is the central control display screen, no obstacle exists between them, and the shortest straight line between the two screens can be used directly as the sound source moving path.
If, however, the first display screen is a rear-seat entertainment display screen that the user can move and the second display screen is the central control display screen, the space between the two screens may be blocked by a front seat. In that case, if the straight line between the screens were used directly as the sound source moving path, the path would overlap the front seat, and a stereo prompt voice output along it would seem to come from inside the seat; this confuses the user and degrades the experience.
To avoid this, in the embodiment of the present disclosure, when an obstacle is determined to exist between the first display screen and the second display screen, the vehicle-mounted terminal device plans the continuous moving path of the sound source according to the obstacle's position, so that the path avoids the obstacle and the simulated sound source moves through open space; this prevents perceptual confusion and preserves a good user experience.
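One simple way to realize such avoidance is to detour through a single waypoint pushed clear of the obstacle. The sketch below makes that assumption, together with point-like obstacle geometry and a fixed clearance margin; a production system would instead use the calibrated cabin geometry.

```python
# A sketch of S470/S451, assuming positions are (x, y, z) points in cabin
# coordinates and the detour is one waypoint lifted above the straight line.
import math

def continue_moving_path(end_pos, obstacle_pos, screen2_pos,
                         clearance=0.3, steps=20):
    """Plan the sound source's continuous moving path from the end of the
    first playing moving path to the second display screen, detouring if
    the straight line would pass too close to the obstacle."""
    mid = tuple((a + b) / 2 for a, b in zip(end_pos, screen2_pos))
    # If the straight-line midpoint comes too close to the obstacle,
    # raise the waypoint by the clearance margin (a simple vertical detour).
    if math.dist(mid, obstacle_pos) < clearance:
        mid = (mid[0], mid[1] + clearance, mid[2])
    leg1 = [tuple(a + (m - a) * t / steps for a, m in zip(end_pos, mid))
            for t in range(steps)]
    leg2 = [tuple(m + (b - m) * t / steps for m, b in zip(mid, screen2_pos))
            for t in range(steps + 1)]
    return leg1 + leg2
```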
It should be noted that the first prompt voice and the second prompt voice may or may not be continuous in content; the embodiment of the present disclosure does not limit this. It should also be noted that the second prompt voice is output before the second display screen continues playing the dynamic guidance image.
Beyond this, in some embodiments of the present disclosure, the second prompt voice may keep playing while the second display screen continues to play the dynamic guidance image.
In addition to providing the foregoing interaction control method, an embodiment of the present disclosure further provides an interaction control apparatus for implementing the foregoing interaction control method.
Fig. 4 is a schematic structural diagram of an interaction control device provided in an embodiment of the present disclosure. As shown in fig. 4, an interaction control apparatus 400 provided in an embodiment of the present disclosure includes a task determination unit 401 and a display control unit 402.
The task determination unit 401 is configured to determine, in response to receiving a display task triggered when a user interacts with the first display screen, whether the display task is a display task executed by the second display screen.
The display control unit 402 is configured to control the first display screen to play a dynamic guidance image for switching the user's attention display screen from the first display screen to the second display screen step by step, in a case where the display task is a display task performed by the second display screen.
In some embodiments of the present disclosure, the interaction control device 400 further includes a guidance direction determining unit and a movement path determining unit. The guiding direction determining unit is used for determining the guiding direction according to the position relation of the first display screen and the second display screen. The moving path determining unit is used for planning a first playing moving path according to the guiding direction, and the first playing moving path is a moving path of the dynamic guiding image when playing on the first display screen.
Correspondingly, the display control unit 402 plays the dynamic guidance image on the first display screen along the first playing moving path.
In some embodiments of the present disclosure, the interaction control means 400 further comprises a current focus position determination unit. The current focus position determining unit is used for determining a current focus position of the user, wherein the current focus position is a display position of a first display screen focused when the user interacts with the first display screen. Correspondingly, the moving path determining unit plans a first playing moving path according to the guiding direction and the current attention position.
In some embodiments of the present disclosure, the movement path determination unit includes an initial position determination subunit, a projection direction determination subunit, an end position determination subunit, and a first play movement path determination subunit. The initial position determining subunit is configured to determine the current focus position as an initial position of the first playing movement path. The projection direction determining subunit is configured to determine a projection direction of the guidance direction on the first display screen. The end position determining subunit is configured to determine an end position of the first playing movement path at the edge of the first display screen according to the initial position and the projection direction. The first playing movement path determining subunit is configured to plan a first playing movement path based on the initial position and the end position.
In some embodiments of the present disclosure, the initial position determining subunit determines the current attention position as follows: acquiring a user image captured in real time, the user image including the user's eye features; processing the user image to determine the user's iris region and eye-white region; determining the user's relative attention direction according to the area of the eye-white region on the periphery of the iris region; determining the current attention direction according to the relative attention direction and the extrinsic parameters of the camera that captured the user image; and determining the current attention position according to the current attention direction.
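As a much-simplified illustration of this, the sketch below maps an imbalance between the eye-white areas on the two sides of the iris to a horizontal gaze angle. Treating the imbalance as linear in angle, the left/right area inputs, and the sign convention are all assumptions of the sketch, not the disclosed method.

```python
# A hypothetical sketch of gaze estimation from eye-white areas: more white
# visible on one side of the iris means the eye is turned toward the other.
def relative_attention_direction(white_left_px, white_right_px,
                                 max_angle_deg=30.0):
    """Return a signed horizontal gaze angle in degrees (sign convention
    and linear mapping are illustrative assumptions)."""
    total = white_left_px + white_right_px
    if total == 0:
        return 0.0
    balance = (white_left_px - white_right_px) / total  # in [-1, 1]
    return balance * max_angle_deg

# Combined with the camera's extrinsic parameters, this relative direction
# would be transformed into a cabin-frame attention direction and then
# intersected with the screen plane to obtain the current attention position.
print(relative_attention_direction(1200, 800))  # prints 6.0
```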
In some embodiments of the present disclosure, the interaction control device 400 further comprises a position determination unit, a distance calculation unit, and a scaling manner determination unit. The position determination unit is used for determining the position of the user. The distance calculation unit is used for determining a first distance between the user and the first display screen according to the position of the user and the position of the first display screen, and a second distance between the user and the second display screen according to the position of the user and the position of the second display screen. The scaling manner determination unit is used for determining the scaling manner of the dynamic guidance image according to the first distance and the second distance. Correspondingly, the display control unit 402 plays the dynamic guidance image on the first display screen along the first playing moving path according to the determined scaling manner.
In some embodiments of the present disclosure, the scaling manner determination unit determines the scaling manner to be the reduction manner in a case where the first distance is smaller than the second distance, and determines the scaling manner to be the enlargement manner in a case where the first distance is larger than the second distance.
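The distance comparison itself is direct; a transcription of the rule, assuming only that the two distances are comparable scalars, is shown below.

```python
# The zoom rule of the scaling manner determination unit: shrink when the
# image heads to a farther screen, enlarge when it heads to a nearer one.
def scaling_manner(first_distance: float, second_distance: float) -> str:
    if first_distance < second_distance:
        return "reduce"   # image recedes toward the farther second screen
    if first_distance > second_distance:
        return "enlarge"  # image approaches the nearer second screen
    return "unchanged"    # equal distances need no scaling
```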
In some embodiments of the present disclosure, the moving path determining unit is further configured to determine an initial playing position of the dynamic guidance image when the dynamic guidance image is played on the second display screen according to the guidance direction, obtain a target playing position of the dynamic guidance image when the dynamic guidance image is played on the second display screen, and plan a second playing moving path according to the initial playing position and the target playing position, where the second playing moving path is a moving path of the dynamic guidance image when the dynamic guidance image is played on the second display screen.
The display control unit 402 may further control the second display screen to continue playing the dynamic guidance image along the second playing moving path after controlling the first display screen to finish playing the dynamic guidance image.
In some embodiments of the present disclosure, the interaction control device 400 further comprises a voice output unit. The voice output unit is configured to control the stereo playing system to output a first prompt voice along a first playing moving path while the display control unit 402 controls the first display screen to play the dynamic guidance image, where the first prompt voice is a voice for prompting a user to switch the attention display screen from the first display screen to the second display screen.
In some embodiments of the present disclosure, the voice output unit includes a playing position determining subunit, a modulation signal generating subunit, and a control subunit. The playing position determining subunit is configured to determine the simulated playing position of an audio frame of the first prompt voice in the first playing moving path. The modulation signal generating subunit is configured to generate the modulation signals that control each speaker's sounding according to the simulated playing position, the audio frame, and the positions of the speakers in the stereo playing system. The control subunit is configured to drive the corresponding speakers with the modulation signals, so that the sound waves emitted by the speakers reverberate in the sound field to form the first prompt voice.
In some embodiments of the present disclosure, the moving path determining unit is further configured to determine the continuous moving path of the simulated sound source, and the voice output unit is further configured to control the stereo playing system, after outputting the first prompt voice, to continue outputting a second prompt voice along the continuous moving path, where the second prompt voice is a voice that continues to guide the user to pay attention to the second display screen.
The embodiment of the present disclosure further provides an electronic device, which includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, the interaction control method of any of the above embodiments can be implemented.
Fig. 5 is a schematic structural diagram of an electronic device 500 suitable for implementing embodiments of the present disclosure, provided in some embodiments of the present disclosure. The electronic device shown in fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing means 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a camera, a microphone, an accelerometer, and a gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 508 including, for example, a magnetic tape and a hard disk; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to a received display task triggered when a user interacts with the first display screen, and judging whether the display task is executed by the second display screen; and in the case that the display task is a display task performed by the second display screen, controlling the first display screen to play a dynamic guide character that switches the user's focus display screen from the first display screen to the second display screen step by step.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method of any of the above method embodiments can be implemented, and the execution manner and the beneficial effect are similar, and are not described herein again.
In addition, the embodiment of the disclosure also provides a vehicle comprising a processor, a memory, a first display screen and a second display screen. The processor may be the vehicle's on-board processor or a dedicated processor. The memory stores a computer program; when the computer program is loaded by the processor, the processor executes the interaction control method of the foregoing embodiments so that the first display screen plays the dynamic guidance image, and, after the first display screen finishes playing the dynamic guidance image, controls the second display screen to execute the display task.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

1. An interaction control method, comprising:
responding to a received display task triggered when a user interacts with a first display screen, and judging whether the display task is a display task executed by a second display screen;
and under the condition that the display task is performed by the second display screen, controlling the first display screen to play a dynamic guide image, wherein the dynamic guide image is used for gradually switching the display screen concerned by the user from the first display screen to the second display screen.
2. The method of claim 1, wherein prior to said controlling said first display screen to play a dynamic guide image, said method further comprises:
determining a guiding direction according to the position relation of the first display screen and the second display screen;
planning a first playing moving path according to the guiding direction, wherein the first playing moving path is a moving path of a dynamic guiding image when the dynamic guiding image is played on a first display screen;
the controlling the first display screen to play the dynamic guidance image includes:
and playing the dynamic guide image on the first display screen along the first playing moving path.
3. The method of claim 2, further comprising:
determining a current focus position, wherein the current focus position is a display position of a first display screen watched by the user when the user interacts with the first display screen;
the planning of the first playing moving path according to the guiding direction includes:
and planning the first playing moving path according to the guiding direction and the current attention position.
4. The method of claim 3, wherein said planning the first play movement path according to the guiding direction and a current focus position comprises:
determining the current focus position as an initial position of the first playing moving path;
determining a projection direction of the guide direction on the first display screen;
determining the end position of the first playing moving path at the edge of the first display screen according to the initial position and the projection direction;
and planning the first playing moving path based on the initial position and the end position.
5. The method of claim 4, wherein the determining a current location of interest comprises:
acquiring a user image shot in real time, wherein the user image is an image comprising eye features of a user;
processing the user image to determine an iris area and an eye white area of the user;
determining a relative direction of attention of the user from an area of an eye white region located on a peripheral side of the iris region;
determining the current attention direction according to the relative attention direction and extrinsic parameters of a camera for shooting the user image;
and determining the current attention position according to the current attention direction.
6. The method of claim 2, further comprising:
determining a user position;
determining a first distance between the user and the first display screen according to the position of the user and the position of the first display screen; determining a second distance between the user and the second display screen according to the position of the user and the position of the second display screen;
determining a scaling mode of the dynamic guide image based on the magnitude relation between the first distance and the second distance;
the playing the dynamic guide image on the first display screen along the first playing moving path includes:
and playing the dynamic guide image on the first display screen along the first playing moving path according to the zooming mode.
7. The method of claim 6, wherein determining the scaling manner of the dynamic guide image based on the magnitude relation between the first distance and the second distance comprises:
determining the zooming mode to be a zooming-out mode under the condition that the first distance is smaller than the second distance; and,
determining the zooming mode to be a zooming-in mode under the condition that the first distance is larger than the second distance.
8. The method according to any one of claims 2-7, further comprising:
determining an initial playing position of the dynamic guide image when the dynamic guide image is played on a second display screen according to the guide direction;
acquiring a target playing position when the dynamic guide image is played on the second display screen;
planning a second playing moving path according to the initial playing position and the target playing position, wherein the second playing moving path is a moving path of the dynamic guide image when the dynamic guide image is played on the second display screen;
after controlling the first display screen to finish playing the dynamic guide image, the method further comprises:
and controlling the second display screen to continue to play the dynamic guide image along the second playing moving path.
9. The method of any of claims 2-7, wherein while the dynamic guide image is playing on the first display screen along the first playing movement path, the method further comprises:
and controlling a stereo playing system to output a first prompt voice along the first playing moving path, wherein the first prompt voice is a voice for prompting a user to switch a concerned display screen from the first display screen to a second display screen.
10. The method of claim 9, wherein controlling the stereo playback system to output a first cue voice along the first playback movement path comprises:
determining a simulated play position of an audio frame in the first prompt voice in the first play moving path;
generating a modulation signal for controlling each loudspeaker to sound according to the analog playing position, the audio frame and the position of each loudspeaker in the stereo playing system;
and driving the corresponding loudspeaker to work by adopting the modulation signal, so that the sound wave emitted by each loudspeaker reverberates in a sound field to form the first prompt voice.
11. The method of claim 10, further comprising:
determining a continuous moving path of the simulated sound source according to the end position of the first playing moving path and the position of the second display screen, wherein the continuous moving path is a path for continuously playing the prompt voice after the dynamic guide image is played in the first display screen;
after the controlling the stereo playback system to output a first cue voice along the first playback movement path, the method further comprises:
and controlling the stereo playing system to output a second prompt voice along the continuous moving path, wherein the second prompt voice is a voice for continuously guiding the user to pay attention to the second display screen.
12. An interactive control apparatus, comprising:
the task judging unit is used for responding to a display task triggered when a user interacts with the first display screen and judging whether the display task is executed by the second display screen;
and a display control unit which controls the first display screen to play a dynamic guidance image for switching the user's attention display screen from the first display screen to the second display screen step by step in case that the display task is a display task performed by the second display screen.
13. An electronic device comprising a processor and a memory, the memory for storing a computer program;
the computer program, when loaded by the processor, causes the processor to carry out the interaction control method of any of claims 1-11.
14. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the interaction control method according to any one of claims 1-11.
15. A vehicle comprising a processor, a memory, a first display screen, and a second display screen; the memory stores a computer program; the computer program, when loaded by the processor, causes the processor to execute the interaction control method of any one of claims 1-11, causing the first display screen to play the dynamic guide image; and,
after the first display screen finishes the playing of the dynamic guide image, controlling the second display screen to execute the display task.