CN116647614A - Content output apparatus and content output control method - Google Patents

Content output apparatus and content output control method

Info

Publication number
CN116647614A
CN116647614A
Authority
CN
China
Prior art keywords
content
user
output
reaction
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210142318.8A
Other languages
Chinese (zh)
Inventor
丛勇鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alps Alpine Co Ltd
Original Assignee
Alps Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alps Electric Co Ltd
Priority to CN202210142318.8A
Publication of CN116647614A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a content output device and a content output control method. The content output device includes: an operation unit for user operation; an output unit that outputs the content selected by operation of the operation unit; a reaction confirmation unit that, while a first content selected by a first user is being output, confirms the reaction of the first user when a second user different from the first user switches the output to a second content different from the first content and the second content is output from the output unit; and an output control unit that, when the reaction confirmation unit confirms that the first user shows a predetermined reaction, controls the output unit so as to return the output content to the first content. According to the present invention, when a user has selected content for playback and another user changes the playback content, the reaction of the previous user is confirmed; if that user does not want the playback content to be changed, the playback can be returned to the content before the change without any operation by the previous user.

Description

Content output apparatus and content output control method
Technical Field
The present invention relates to a content output apparatus and a content output control method, and more particularly to a content output apparatus and a content output control method that, when content selected and output by a user is changed by another user, control the output content based on the reaction of that user.
Background
Electronic devices such as mobile phones, televisions, and computers have become an inseparable part of daily life, and the devices used vary with the scenario. In a home setting, for example, a user can relax by selecting a favorite television program on a television, projector, or the like. In a vehicle, the user can select the functions he or she needs through the in-vehicle electronic device, making driving more convenient and enjoyable; for example, a driver who is unfamiliar with the road conditions can select the navigation function and be guided easily to the destination. Users also often play FM broadcasts, local music, and the like to relax body and mind so that driving does not become boring.
In any of these scenarios, a user who is alone can freely switch or change the content being played. However, when a user is watching or listening to audio or video content on an electronic device and other users are present, another user sometimes switches the content being played and selects something else, which annoys the user who was using the device and spoils his or her mood. If that user still wants to continue with the previous content, the only option is to reselect it and switch back manually.
In addition, when another user switches the playback to other content, the original user may want to switch back to the previously played content but be deterred by the identity or status of the other user; switching back might, for example, cause displeasure, leaving the user unable to move either forward or back, which is troublesome.
For example, in an in-vehicle environment, while the driver is listening to the FM traffic broadcast through the in-vehicle electronic device to obtain traffic information, a fellow passenger who is bored during the ride and unaware of how important the current content is to the driver may switch the playback to local music. When the fellow passenger is, for example, an elder, the driver wants to switch back to the previously played FM traffic broadcast but, out of deference to the passenger's identity and status, is caught in a dilemma. Moreover, if the driver wants to continue listening to the traffic broadcast, he or she has to manually reselect the audio source, which poses a risk to safe driving.
Disclosure of Invention
The present invention has been made to solve the above problems, and its object is to provide a content output apparatus and a content output control method that, when content selected and played by one user is changed by another user, confirm the reaction of the previous user and, if that user does not want the change, return the playback to the content before the change without any operation by the previous user.
The present invention provides a content output device comprising: an operation unit for user operation; an output unit that outputs the content selected by operation of the operation unit; a reaction confirmation unit that, while a first content selected by an operation of a first user is being output, confirms the reaction of the first user when a second user different from the first user switches the output by an operation to a second content different from the first content and the second content is output from the output unit; and an output control unit that, when the reaction confirmation unit confirms that the first user shows a predetermined reaction, controls the output of the output unit so as to return the output content to the first content.
The invention also provides a content output control method comprising: a first operation step in which a first user selects a first content; a first output step of outputting the first content selected in the first operation step; a second operation step in which, while the first content is being output, a second user different from the first user selects a second content different from the first content; a second output step of outputting the second content selected in the second operation step; a reaction confirmation step of confirming the reaction of the first user when the second content is output in the second output step; and a third output step of switching from the second content back to the first content and outputting the first content when the reaction confirmation step confirms that the first user shows a predetermined reaction.
Thus, when one user has selected content for playback and another user switches to other content by an operation, the reaction of the previous user is confirmed, and if it is confirmed that the user does not want the change, the output is switched back from the other content to the previous content. This avoids the awkwardness of the user having to force a switch back to the previous content, and the previous content can be restored without any operation by the user, which is convenient.
Drawings
Fig. 1 is a functional block diagram showing a content output apparatus 100 according to an embodiment of the present invention.
Fig. 2 is a flowchart showing a content output control method implemented by the content output apparatus 100 according to the embodiment of the present invention.
Fig. 3 is a functional block diagram showing a content output apparatus 200 according to a first embodiment of the present invention.
Fig. 4 is a diagram showing a first embodiment of the present invention.
Fig. 5 is a flowchart showing a content output control method implemented by the content output apparatus 200 according to the first embodiment of the present invention.
Fig. 6 is a functional block diagram showing a content output apparatus 300 according to a second embodiment of the present invention.
Fig. 7 is a flowchart showing a content output control method implemented by the content output apparatus 300 according to the second embodiment of the present invention.
Detailed Description
The present invention will be described in more detail below with reference to the accompanying drawings and embodiments. The following description is merely an example to facilitate understanding of the present invention and is not intended to limit its scope. In the embodiments and examples, only the components of the content output apparatus related to the present invention are shown and other components are not described; the components may be changed, combined, deleted, or added, and the steps of the method may be changed, combined, deleted, added, or reordered, according to the actual situation. In the drawings, sizes, directions, and the like are merely illustrative and may differ in practice. The content output device of the invention may be an in-vehicle device, a television, a tablet computer, a notebook computer, or a smartphone.
Hereinafter, a specific embodiment of the present invention will be described with reference to figs. 1 and 2. Fig. 1 is a functional block diagram of the content output apparatus 100 according to this embodiment; only the components related to the present invention are shown, and the content output apparatus 100 may have other functional components whose description is omitted.
As shown in fig. 1, the content output apparatus 100 includes an operation unit 1, an output unit 2, a control unit 3, a reaction confirmation unit 4, and an output control unit 5. Each part of the content output apparatus 100 may be implemented by a software module or by a dedicated hardware integrated circuit.
The operation unit 1 is provided with hard keys or soft keys through which a user performs input, selection, change, and similar operations; another terminal may also serve as the operation unit of the content output device. For example, when the content output apparatus is a television, the operation unit 1 may be the hard keys provided on the television or a remote controller used with it, and the user selects the content to be played through those hard keys or the remote control. When the content output apparatus has a touch screen, the touch screen serves as the operation unit, and the user selects the content to be played by touching or pressing it. In addition, a smartphone can serve as the operation unit of the content output device; for example, when the content output device is a television, the smartphone can be paired with that television model by downloading the corresponding software program, and once pairing succeeds, the smartphone can select and control the content played on the television.
The operation unit 1 is not limited to one; there may be a plurality of operation units at the same time, and the user may operate the content output apparatus through any of them. For example, when the content output apparatus is a television, the operation units may simultaneously be the hard keys on the television, a remote controller, and a mobile phone used as an operation terminal. When the content output device is an in-vehicle device, the operation units may be hard keys on the steering wheel and the touch screen of the in-vehicle device, and a mobile phone connected to the in-vehicle device, in which case the phone's touch screen also serves as an operation unit of the in-vehicle device.
The output unit 2 outputs the content selected through the operation unit 1. The output unit 2 may consist of a display screen, in which case the content selected through the operation unit 1 is output visually on the screen, or of at least one speaker, in which case the selected content is output as sound.
The output unit 2 may also consist of both a display screen and a speaker. When the content selected through the operation unit 1 includes both picture and sound, the selected content is output as both a display and sound.
The control unit 3 includes a CPU, a ROM, and a RAM, which are not shown, and performs various controls and processes on the content output apparatus 100 by using built-in application programs. Here, the control unit 3 is connected to the operation unit 1, the output unit 2, the reaction confirmation unit 4, and the output control unit 5.
The reaction confirmation unit 4 can confirm the user's reaction by recognition and analysis via a camera, by collecting the user's voice, or by acquiring other data.
In the present invention, while a first content selected by a first user through an operation of the operation unit 1 is being output by the output unit 2, if a second user selects a second content through an operation of the operation unit and the output unit 2 switches from the first content to the second content, it is checked whether the reaction of the first user includes a predetermined reaction. Here, the reaction is the reaction of the first user, who operated first, to the second user changing the output content and having the second content played.
The predetermined reaction means any of the following: the user's expression is a specified expression; the user says words expressing a certain meaning; or specified data is obtained. From these cases it is inferred whether the user shows the predetermined reaction.
For example, when the content output device is an in-vehicle device and the first user is the driver, and another occupant changes the content selected by the driver for output, the driver's reaction, that is, the reaction of the first user, may be confirmed by acquiring driving data. It may be defined that the user shows the predetermined reaction when, for example, the acquired driving data contains steering of a predetermined angle or more within a short time.
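Purely as an illustrative aid and not as part of the patent text, such a driving-data check could be sketched as follows in Python; the function name, the sample format, and the threshold values are all assumptions.

```python
# Illustrative sketch only: treat steering of at least a predetermined angle
# within a short window after the content switch as the driver's predetermined
# reaction. All names and thresholds below are assumptions, not patent values.

ANGLE_THRESHOLD_DEG = 30.0   # assumed "predetermined angle or more"
TIME_WINDOW_S = 5.0          # assumed "short time" after the switch

def driver_has_predetermined_reaction(steering_samples, switch_time_s):
    """steering_samples: iterable of (timestamp_s, steering_angle_deg) pairs."""
    for timestamp_s, angle_deg in steering_samples:
        within_window = 0.0 <= timestamp_s - switch_time_s <= TIME_WINDOW_S
        if within_window and abs(angle_deg) >= ANGLE_THRESHOLD_DEG:
            return True
    return False

# Example: a 45-degree steering movement 2 seconds after a switch at t = 100 s.
print(driver_has_predetermined_reaction([(102.0, 45.0)], 100.0))  # True
```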
Here, the first user and first content, and the second user and second content, mean the following: the first user is the user who initially or first selected the content to be output, and the content selected by the first user is the first content; the second user is another user, different from the first user, and the content that this other user selects for output is the second content. For example, when another user changes the content that a user had selected for output, that other user is the second user, the changed output content is the second content, the previous user whose output content was changed is the first user, and the content selected by the previous user is the first content.
An example of a first user, a second user, and the corresponding first and second content: when Xiao Wang selects song A for playback and Xiao Li then selects song B, switching the playback from song A to song B, Xiao Wang is the first user, song A is the first content, Xiao Li is the second user, and song B is the second content. If there is yet another user, say Xiao Liu, and Xiao Liu selects song C so that the playback switches from song B to song C, then Xiao Liu is the second user, song C is the second content, Xiao Li becomes the first user, and song B becomes the first content.
How to distinguish the first user from the second user may be handled by an operation-user identification and confirmation unit, not shown. This unit may acquire the user's face information via the camera, identify the operating user using face recognition, and confirm whether the user operating this time differs from the previous user. When a user operates the content output apparatus through the operation unit, this identification establishes that the user currently operating is the second user and the user who operated previously is the first user. When the reaction confirmation unit 4 confirms the user's reaction via a camera, the camera used to distinguish the operating user may be shared with the camera used by the reaction confirmation unit 4.
In addition, when there are a plurality of operation units, users operating through different operation units may be treated as different users. For example, when it is confirmed that the operation unit used for the current operation differs from the one used for the previous operation, the user performing the current operation may be defined as the second user and the user who performed the previous operation as the first user.
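For illustration only, the rule of treating an operation from a different operation unit as coming from a second user could be sketched as follows; the class name, unit identifiers, and interface are hypothetical and not taken from the patent.

```python
# Illustrative sketch: distinguish first and second user by which operation
# unit issued the selection. If the current selection comes from a different
# operation unit than the previous one, the current operator is treated as
# the second user and the previous operator as the first user.

class OperatorTracker:
    def __init__(self):
        self.previous_unit_id = None

    def classify(self, current_unit_id):
        """Return 'first_user' or 'second_user' for the current operation."""
        if self.previous_unit_id is None or current_unit_id == self.previous_unit_id:
            role = "first_user"
        else:
            role = "second_user"
        self.previous_unit_id = current_unit_id
        return role

# Example: a selection from the steering-wheel keys followed by one from a
# paired smartphone classifies the smartphone operator as the second user.
tracker = OperatorTracker()
print(tracker.classify("steering_wheel_keys"))   # first_user
print(tracker.classify("paired_smartphone"))     # second_user
```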
The output control unit 5 controls the output content. It may be incorporated in the control unit 3 as a single component for controlling the output content, or be the part of the control unit 3 that controls the output content. Here, when the reaction confirmation unit 4 confirms that the first user shows a predetermined reaction to the second content having been selected and played, the output is switched back from the currently played content to the first content previously selected by the first user, and output of the first content continues.
A content output control method implemented by the content output apparatus 100 in the embodiment will be specifically described below with reference to fig. 2. Fig. 2 is a flowchart showing a content output control method implemented by the content output apparatus 100 according to the embodiment of the present invention.
As shown in fig. 2, first, in step S101, the first user selects the first content through an operation of the operation unit 1 and the first content is played by the output unit 2, and the flow proceeds to step S102. In step S102, when a second user different from the first user selects the second content through the operation unit 1, the output of the output unit 2 is switched from the first content to the second content, and the flow proceeds to step S103. In step S103, with the output switched from the first content to the second content, the reaction confirmation unit 4 confirms whether the reaction of the first user, who selected the first content for playback, includes a predetermined reaction. When the reaction confirmation unit 4 confirms that the first user shows the predetermined reaction (yes in step S103), the flow proceeds to step S104. In step S104, the output control unit 5 switches the output from the second content back to the first content and continues to output the first content.
In step S103, when the reaction confirmation unit 4 confirms that the first user does not show the predetermined reaction (no in step S103), the process ends. That is, when the second user has switched the output from the first content to the second content by a selection operation and no predetermined reaction of the first user is confirmed, it can be concluded that the first user does not mind the change in the played content, so the processing ends without any control of the output content.
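For illustration only, the flow of steps S101 to S104 could be sketched as follows in Python; the class, function, and content names are assumptions, and the reaction check is stubbed out rather than implemented.

```python
# Illustrative sketch of the flow in fig. 2 (S101-S104); names are hypothetical.

class SimpleOutputUnit:
    def play(self, content):
        print(f"Now playing: {content}")

def content_output_control(output_unit, first_content, second_content,
                           first_user_has_predetermined_reaction):
    # S101: the first user's selection is being output.
    output_unit.play(first_content)

    # S102: a second user switches the output to the second content.
    output_unit.play(second_content)

    # S103: confirm the first user's reaction to the switch.
    if first_user_has_predetermined_reaction():
        # S104: switch the output back to the first content.
        output_unit.play(first_content)
    # Otherwise (no predetermined reaction) processing ends and the second
    # content keeps playing.

# Usage example with a stubbed reaction check that always reports a reaction.
content_output_control(SimpleOutputUnit(), "FM traffic broadcast",
                       "local music", lambda: True)
```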
Therefore, according to this embodiment, when the first user has selected the first content for output and another, second user selects the second content and the output content is changed, the reaction of the first user is confirmed; if the first user shows the predetermined reaction, the output can be switched back to the first content selected by the first user without any operation by the first user. This makes the content output device more considerate and intelligent and provides convenience for the user who was using the device first.
(first embodiment)
Next, a first embodiment of the present invention will be described with reference to figs. 3 and 4. In this embodiment, the reaction confirmation unit confirms the user's reaction by means of a camera that acquires the user's facial image and an expression determination unit that determines the user's reaction from the acquired facial expression image information. When the user's reaction is judged from the expression, the user is determined to show the predetermined reaction when the expression is a specified expression. The embodiment is described below with reference to fig. 3.
Fig. 3 is a functional block diagram of the content output apparatus 200 according to the first embodiment. In addition to the operation unit 1, output unit 2, control unit 3, reaction confirmation unit 4, and output control unit 5, which are the same as in the content output apparatus 100, the content output apparatus 200 includes an expression determination unit 21. Description of contents that are the same as or similar to the above embodiment is omitted.
In the present embodiment, the reaction confirmation unit 4 includes a camera that can capture at least the user's face and an expression determination unit 21 that determines the user's reaction from the facial expression captured by the camera, and it confirms whether the user shows the predetermined reaction based on the determination result.
The camera may be connected directly to the reaction confirmation unit 4 or to the control unit 3; in either case, the facial image information captured by the camera is transmitted to the reaction confirmation unit 4. The connection position and connection form of the camera are not limited.
As shown in fig. 3, the expression determination unit 21 is connected to the control unit 3, and the reaction confirmation unit 4 confirms the user's reaction based on the expression determined by the expression determination unit 21. The expression determination unit 21 may also be provided inside the reaction confirmation unit 4 as part of it, in which case the reaction confirmation unit 4 can use the determination result directly; the position and connection form of the expression determination unit are likewise not limited.
The expression determination unit 21 determines the user's reaction from the meaning conveyed by the facial expression obtained by the camera from the user's facial image information. When the determined expression is a specified expression, the user is considered to show the predetermined reaction. Here, the specified expression is one that conveys anger, rage, annoyance, dislike, depression, or sulking.
The expression determination unit 21 may confirm the user's reaction from the meaning expressed by an expression based on information about expressions stored in an expression database, not shown. Specifically, the database stores various expressions together with the meaning corresponding to each expression: for example, the expressions presented when someone is angry or enraged, when someone complains, when someone is annoyed or feels dislike, and when someone is depressed or sulking. For ease of understanding, the expressions corresponding to different emotions are organized in table form, as shown in table 1 below.
TABLE 1 (expressions A to H and the emotions they correspond to; the table itself is not reproduced in this text)
Therefore, in the determination by the expression determination unit 21, when the determined expression is a specified expression, the user is considered, based on his or her facial information, to show the predetermined reaction. That is, the reaction confirmation unit 4 confirms the user's reaction from the meaning expressed by the expression the user presents.
In a specific control process, after a certain user, i.e., the first user, has selected a content, i.e., the first content, through an operation of the operation unit 1 and the selected content is being output by the output unit 2, another user, i.e., the second user, selects another content, i.e., the second content, through an operation of the operation unit 1. At this point, the output of the output unit 2 changes from the first content to the second content. The reaction confirmation unit 4, specifically the expression determination unit 21, then acquires the first user's facial image information through the camera and confirms the user's reaction by checking the first user's expression. When the expression determination unit determines that the first user's expression is a specified expression, the user is considered to show the predetermined reaction. The output control unit 5 then controls the output so that the output unit 2 switches back to the previously output first content, and the first content continues to be output.
Here, when the expression information acquired by the camera corresponds to any of expression A, expression B, expression C, expression D, expression E, expression F, expression G, or expression H in table 1, the predetermined reaction is considered to exist. That is, when the expression determination shows that the user presents any expression of anger, rage, annoyance, dislike, depression, or sulking, the user is considered to show the predetermined reaction and not to want the content he or she selected to be changed, and the output control described above is performed.
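For illustration only, the table 1 lookup described above could be sketched as follows; since table 1 itself is not reproduced here, the expression labels and their mapping to emotions are placeholders.

```python
# Illustrative placeholder for the table 1 lookup: a recognized expression is
# mapped to an emotion, and any of the listed emotions counts as the
# predetermined reaction. The labels below are assumptions, not table 1 itself.

PREDETERMINED_EMOTIONS = {
    "anger", "rage", "annoyance", "dislike", "depression", "sulking",
}

# Stand-in mapping for expressions A-H of table 1.
EXPRESSION_TO_EMOTION = {
    "expression_A": "anger",
    "expression_B": "rage",
    "expression_C": "annoyance",
    "expression_D": "dislike",
    "expression_E": "depression",
    "expression_F": "sulking",
    "expression_G": "anger",
    "expression_H": "rage",
}

def expression_indicates_predetermined_reaction(recognized_expression: str) -> bool:
    """True when the recognized expression maps to one of the listed emotions."""
    return EXPRESSION_TO_EMOTION.get(recognized_expression) in PREDETERMINED_EMOTIONS

print(expression_indicates_predetermined_reaction("expression_B"))  # True
```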
This first embodiment is illustrated below with reference to fig. 4. For example, when the content selected for output by the first user has been changed by the second user and the second content is being output, and the facial image information of the first user acquired by the camera is expression 1 or expression 2, the meaning expressed by the expression is anger or rage. It can therefore be inferred that the first user does not want the content he or she selected to be changed, and the output is switched back to the first content.
Hereinafter, a content output control method implemented by the content output apparatus 200 in the first embodiment will be specifically described with reference to fig. 5. Fig. 5 is a flowchart showing a content output control method implemented by the content output apparatus 200 according to the present embodiment. The same reference numerals are given to the same steps in the content output control method flowchart in fig. 5 as those in the content output control method flowchart shown in fig. 2.
As shown in fig. 5, first, in step S101, the first user selects the first content through an operation of the operation unit 1 and the first content is played by the output unit 2, and the flow proceeds to step S102. In step S102, when a second user different from the first user selects the second content through the operation unit 1, the output of the output unit 2 is switched from the first content to the second content, and the flow proceeds to step S201. In step S201, with the output switched from the first content to the second content, the expression determination unit 21 determines from the facial image of the first user acquired by the camera whether the first user's facial expression indicates the predetermined reaction. When the expression determination unit 21 determines that the first user's expression is a specified expression, the first user is considered to show the predetermined reaction (yes in step S201), and the flow proceeds to step S104. In step S104, the output control unit 5 switches the output from the second content back to the first content and continues to output the first content.
In step S201, when the expression determination unit 21 determines that the first user does not present the specified expression, that is, there is no predetermined reaction (no in step S201), the process ends. That is, when the second user has switched the output from the first content to the second content by a selection operation and no predetermined reaction is confirmed, it can be concluded that the first user does not mind the change in the played content, so the processing ends without any control of the output content.
Thus, in this embodiment, when the first user has selected the first content for playback and another, second user selects the second content and changes the output content, the first user's reaction is confirmed by determining the meaning expressed by the first user's facial expression.
(second embodiment)
Next, a second embodiment of the present invention will be described with reference to figs. 6 and 7. In this embodiment, the reaction confirmation unit uses a microphone to collect voice information and determines the user's reaction by analyzing the voice information collected by the microphone. The embodiment is described below with reference to fig. 6.
Fig. 6 is a functional block diagram of the content output apparatus 300 according to the second embodiment. In addition to the operation unit 1, output unit 2, control unit 3, reaction confirmation unit 4, and output control unit 5, which are the same as in the content output apparatus 100, the content output apparatus 300 includes a sound determination unit 31. Description of contents that are the same as or similar to the above embodiment is omitted.
In the present embodiment, the reaction confirmation unit 4 includes a microphone that can collect at least the sound of the first user and a sound determination unit 31 that determines the first user's reaction by analyzing the sound collected by the microphone; the user's reaction is confirmed from the analysis result. The microphone is placed where the user's voice can be collected, for example near the user's head. It may be connected directly to the reaction confirmation unit 4 or to the control unit 3, as long as the user's voice can be collected; its connection position and connection form are not limited.
As shown in fig. 6, the sound determination unit 31 is connected to the control unit 3, and the reaction confirmation unit 4 confirms the user's reaction based on the analysis of the user's voice by the sound determination unit 31. The sound determination unit 31 may also be provided inside the reaction confirmation unit 4 as part of it, in which case the reaction confirmation unit 4 can use the determination result directly; the position and connection form of the sound determination unit 31 are not limited.
The sound determination unit 31 determines the user's reaction by analyzing the user's voice information collected by the microphone. In a specific control process, after the first user has selected the first content through an operation of the operation unit 1 and the first content is being output from the output unit 2, another user, i.e., the second user, selects another content, i.e., the second content, through an operation of the operation unit 1, and the output of the output unit 2 changes from the first content to the second content. At this point, the sound determination unit 31 collects the first user's voice through a microphone placed near the first user and determines the user's reaction by analyzing the collected voice. When the analysis finds that the first user's voice collected by the microphone contains information indicating a wish to continue outputting the first content, it is determined that the first user shows the predetermined reaction; the output control unit 5 then controls the output so that the output unit 2 switches back to the previously output first content, and the first content continues to be output.
Here, the information indicating a wish to continue outputting the first content is obtained by analyzing the collected voice of the first user, and may include utterances such as "I want to listen to the first content", "why did you switch to something else", "that's so annoying", or "I hadn't finished listening to that"; in such cases the first user is considered to still want the first content to be output. The utterances indicating this wish are not limited to the above sentences; whenever the collected voice of the first user contains speech expressing a wish to continue the output, the first user is considered to show the predetermined reaction, and the output control unit 5 controls the output of the content.
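For illustration only, the utterance check described above could be sketched as follows, assuming the collected speech has already been converted to text by a separate speech-recognition step; the phrase list merely paraphrases the examples in the text and is not exhaustive.

```python
# Illustrative sketch: scan the first user's transcribed speech for utterances
# indicating a wish to keep hearing the first content. The phrases below
# paraphrase the examples given in the description.

CONTINUE_PHRASES = (
    "i want to listen to",        # "I want to listen to the first content"
    "why did you switch",         # complaint about the change
    "so annoying",                # expression of annoyance at the switch
    "hadn't finished listening",  # "I hadn't finished listening to that"
)

def wants_first_content_back(transcribed_speech: str) -> bool:
    """True when the transcribed speech contains any phrase from the list."""
    text = transcribed_speech.lower()
    return any(phrase in text for phrase in CONTINUE_PHRASES)

print(wants_first_content_back("Hey, I hadn't finished listening to that!"))  # True
```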
Hereinafter, a content output control method implemented by the content output apparatus 300 in the second embodiment will be specifically described with reference to fig. 7. Fig. 7 is a flowchart showing a content output control method implemented by the content output apparatus 300 according to the present embodiment. The same reference numerals are given to the same steps in the content output control method flowchart in fig. 7 as those in the content output control method flowchart shown in fig. 2.
As shown in fig. 7, first, in step S101, the first user selects the first content through an operation of the operation unit 1 and the first content is played by the output unit 2, and the flow proceeds to step S102. In step S102, when a second user different from the first user selects the second content through the operation unit 1, the output of the output unit 2 is switched from the first content to the second content, and the flow proceeds to step S301. In step S301, with the output switched from the first content to the second content, the first user's voice is collected by the microphone and analyzed by the sound determination unit 31; when speech indicating a wish to continue outputting the first content is recognized in the collected voice (yes in step S301), the flow proceeds to step S104. In step S104, the output control unit 5 switches the output from the second content back to the first content and continues to output the first content.
In step S301, when the sound determination unit 31 analyzes the first user's voice and does not recognize any utterance indicating a wish to continue outputting the first content (no in step S301), the process ends. That is, when the second user has switched the output from the first content to the second content by a selection operation and no predetermined reaction expressing a wish to continue the output is recognized from the collected and analyzed voice, it can be concluded that the first user does not mind the change in the played content, so the processing ends without any control of the output content.
Thus, in this embodiment, when the first user has selected the first content for playback and another, second user selects the second content and changes the output content, the first user's voice is collected, analyzed, and recognized, and the first user's reaction is confirmed from the recognized content.
The content output device may be an in-vehicle device, or may be a functional part of an in-vehicle system. In the latter case, the operation unit of the content output device may be hard keys provided on the steering wheel, or a touch panel provided on a window, on the inner panel of a door, on an armrest, or on the back of the driver's or front passenger's seat. Therefore, when the content output apparatus is used as part of an in-vehicle system, operation units may be provided at a plurality of different positions. In addition, a mobile terminal connected to the in-vehicle device or in-vehicle system, such as a mobile phone, may serve as the operation unit, so that the in-vehicle device or system can be operated through the phone.
The content output device may also be implemented as a content output system, with the content output system controlling the content output.
The embodiments and examples of the present invention have been described above with reference to the drawings. The foregoing embodiments and specific examples are provided to aid understanding of the present invention and are not intended to limit its scope. Those skilled in the art can make various modifications, combinations, and reasonable omissions of the elements and embodiments based on the technical ideas of the present invention, and the resulting approaches are also included in the scope of the present invention.

Claims (10)

1. A content output device comprising: an operation unit for user operation; and an output unit that outputs the content selected by operation of the operation unit; characterized by further comprising:
a reaction confirmation unit that, while a first content selected by an operation of a first user is being output, confirms a reaction of the first user when a second user different from the first user switches the output by an operation to a second content different from the first content and the second content is output from the output unit;
and an output control unit that, when the reaction confirmation unit confirms that the first user shows a predetermined reaction, controls the output of the output unit so as to return the output content to the first content.
2. The content output apparatus according to claim 1, wherein,
the reaction confirmation unit comprises a camera and an expression determination unit,
the camera being capable of capturing at least the face of the first user,
and the expression determination unit determining the reaction of the first user by recognizing the facial expression of the first user captured by the camera.
3. The content output apparatus according to claim 2, wherein,
the expression determination unit determines that the first user shows the predetermined reaction when any one of anger, rage, complaint, annoyance, dislike, depression, and sulking is recognized from the expression of the first user.
4. The content output apparatus according to claim 1, wherein,
the reaction confirmation unit comprises a microphone and a sound determination unit,
the microphone being capable of collecting at least the sound of the first user,
and the sound determination unit determining the reaction of the first user by analyzing the sound of the first user collected by the microphone.
5. The content output apparatus according to claim 4, wherein,
the sound determination unit determines that the first user shows the predetermined reaction when the sound of the first user is recognized to contain information indicating that the first user wishes to continue outputting the first content.
6. The content output apparatus according to claim 1, wherein,
the output unit outputs the content in at least one of a display form and a sound form.
7. The content output apparatus according to claim 1, wherein,
the content output device is an in-vehicle device or an in-vehicle system.
8. The content output apparatus according to any one of claims 1 to 7, wherein,
a plurality of the operation units are provided, and the content can be selected through any one of the operation units.
9. The content output apparatus according to claim 8, wherein,
at least one of the plurality of operation units is a portable terminal.
10. A content output control method, characterized by comprising:
a first operation step in which a first user selects a first content;
a first output step of outputting the first content selected in the first operation step;
a second operation step in which, while the first content is being output, a second user different from the first user selects a second content different from the first content;
a second output step of outputting the second content selected in the second operation step;
a reaction confirmation step of confirming a reaction of the first user when the second content is output in the second output step;
and a third output step of switching from the second content to the first content and outputting the first content when the reaction confirmation step confirms that the first user shows a predetermined reaction.
CN202210142318.8A 2022-02-16 2022-02-16 Content output apparatus and content output control method Pending CN116647614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210142318.8A CN116647614A (en) 2022-02-16 2022-02-16 Content output apparatus and content output control method

Publications (1)

Publication Number Publication Date
CN116647614A (en) 2023-08-25

Family

ID=87614005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210142318.8A Pending CN116647614A (en) 2022-02-16 2022-02-16 Content output apparatus and content output control method

Country Status (1)

Country Link
CN (1) CN116647614A (en)

Legal Events

Date Code Title Description
PB01 Publication