CN115473987A - Interaction method, interaction device, electronic equipment and storage medium - Google Patents

Interaction method, interaction device, electronic equipment and storage medium

Info

Publication number
CN115473987A
Authority
CN
China
Prior art keywords
infant
image
display device
target
safety seat
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210700404.6A
Other languages
Chinese (zh)
Inventor
丁彬
傅强
帅一帆
范皓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Co Wheels Technology Co Ltd filed Critical Beijing Co Wheels Technology Co Ltd
Priority to CN202210700404.6A
Publication of CN115473987A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q3/00 Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
    • B60Q3/80 Circuits; Control arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Mechanical Engineering (AREA)
  • Child & Adolescent Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an interaction method, an interaction device, an electronic device, and a storage medium. The method includes: when it is determined that an infant is seated in a target safety seat, acquiring a first image of the driver captured by a first camera, the first camera being located on the front side of the driver seat; and outputting the first image on a first display device located on the front side of the target safety seat. In this way, the infant seated in the target safety seat can see the driver's image in real time, which enables interaction between the driver and the infant, gives the infant a sense of security, and relieves the infant's negative emotions.

Description

Interaction method, interaction device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data display technologies, and in particular, to an interaction method, an interaction apparatus, an electronic device, and a storage medium.
Background
With the improvement of living standards, the automobile has become an indispensable means of transportation in daily life, bringing great convenience to travel. However, while driving, a driver must concentrate on the road and cannot attend to an infant. An infant sitting alone in the rear seat may cry and scream, which distracts the driver and affects driving safety. How to soothe the infant's emotions is therefore a technical problem to be solved.
Disclosure of Invention
The application provides an interaction method, an interaction device, an electronic device, and a storage medium, so that an infant seated in a target safety seat can see the driver's image in real time, which gives the infant a sense of security and relieves the infant's negative emotions.
An embodiment of an aspect of the present application provides an interaction method, including:
when it is determined that an infant is seated in a target safety seat, acquiring a first image of the driver captured by a first camera, the first camera being located on the front side of the driver seat;
and outputting the first image on a first display device, the first display device being located on the front side of the target safety seat.
In another aspect, an embodiment of the present application provides an interaction apparatus, including:
an acquisition module, configured to acquire, when it is determined that an infant is seated in a target safety seat, a first image of the driver captured by a first camera, the first camera being located on the front side of the driver seat;
and a display module, configured to output the first image on a first display device, the first display device being located on the front side of the target safety seat.
Another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the method according to the foregoing aspect is implemented.
Another embodiment of the present application proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to the foregoing one aspect.
Another embodiment of the present application provides a computer program product including a computer program which, when executed by a processor, implements the method according to the foregoing aspect.
According to the interaction method, the interaction device, the electronic device, and the storage medium, when it is determined that an infant is seated in a target safety seat, a first image of the driver captured by a first camera located on the front side of the driver seat is acquired and output on a first display device located on the front side of the target safety seat. The infant seated in the target safety seat can thus see the driver's image in real time, which gives the infant a sense of security and relieves the infant's negative emotions.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an interaction method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another interaction method provided in the embodiment of the present application;
fig. 3 is a schematic flowchart of another interaction method provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An interaction method, an apparatus, an electronic device, and a storage medium of embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an interaction method according to an embodiment of the present application.
The execution subject of the interaction method in the embodiments of the application is an interaction apparatus, which may be arranged at the vehicle end.
As shown in fig. 1, the method may include the steps of:
Step 101, when it is determined that an infant is seated in the target safety seat, acquire a first image of the driver captured by a first camera, the first camera being located on the front side of the driver seat.
The front side of the driver seat refers to the region of the vehicle interior between the front of the vehicle and the driver seat, and includes the area directly in front of, as well as diagonally in front of, the driver seat.
In the embodiment of the application, it is determined that an infant is seated in the target safety seat when the safety buckle of the target safety seat is locked, or when a pressure signal generated by a pressure sensor in the target safety seat is acquired, or when an infant image is identified at the target seat in a captured image, as described in detail below.
In a first implementation, pressure signals generated by pressure sensors arranged on at least one safety seat are monitored, where a pressure signal is the pressure data collected by the corresponding sensor. If a target pressure sensor whose pressure data exceeds a set pressure threshold is detected, the safety seat corresponding to that sensor is taken as the target safety seat. An image of the object on the target safety seat is then acquired and recognized to determine whether it contains an infant; if the image of the target safety seat contains an infant, it is determined that an infant is seated in the target safety seat.
In a second implementation, the locking state between the safety buckle of at least one safety seat and the seat-belt base plate is monitored, and a safety seat whose buckle is detected to be locked into the base plate is taken as the target safety seat. An image of the object on the target safety seat is then acquired and recognized to determine whether it contains an infant; if the image of the target safety seat contains an infant, it is determined that an infant is seated in the target safety seat.
In a third implementation, captured images of the object on each safety seat are acquired and the object in each image is recognized. The safety seat corresponding to a captured image in which the object is recognized as an infant is taken as the target safety seat; that is, when an infant image is identified at the target seat in the captured images, it is determined that an infant is seated in the target safety seat.
It should be noted that the above implementations may also be combined to determine that an infant is seated in the target safety seat, thereby improving the accuracy of the determination.
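By way of a non-limiting illustration only, the combined determination described above could be organized roughly as in the following Python sketch. The seat fields, the injected infant recognizer, and the pressure threshold value are assumptions introduced here for clarity and are not part of the disclosure.

```python
# Illustrative sketch only: combined infant-detection logic (pressure, buckle, image).
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

PRESSURE_THRESHOLD = 2.0  # assumed "occupied" threshold for the seat pressure sensor

@dataclass
class SeatReading:
    seat_id: int
    pressure: float          # pressure data from the seat's pressure sensor
    buckle_locked: bool      # locking state of the safety buckle and belt base plate
    image: bytes             # latest captured image of the object on the seat

def find_target_safety_seat(
    seats: Iterable[SeatReading],
    detect_infant: Callable[[bytes], bool],  # hypothetical image recognizer
) -> Optional[int]:
    """Return the id of the seat determined to hold an infant, or None."""
    for seat in seats:
        candidate = seat.pressure > PRESSURE_THRESHOLD or seat.buckle_locked
        if candidate and detect_infant(seat.image):
            # Pressure or buckle signal plus image confirmation, as in the
            # combined implementation described above.
            return seat.seat_id
    return None
```

Injecting the recognizer as a callable keeps the decision logic independent of whichever image-recognition model the vehicle end actually uses.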
Step 102, output the first image on a first display device, the first display device being located on the front side of the target safety seat.
The first display device is a display screen for presenting the first image, and the first image may be a video or a still picture of the driver.
The front side of the target safety seat refers to the region of the vehicle interior between the front-row seat and the target safety seat. In a specific implementation, the first display device may be located directly in front of or diagonally in front of the target seat, preferably diagonally in front.
In the embodiment of the application, the first display device is the display device on the front side of the target safety seat; that is, once the target safety seat is determined, the first display device can be determined from the correspondence between the target safety seat and the first display device.
In the embodiment of the application, when an infant is seated in the target safety seat in the row behind the driver seat, the vehicle end outputs the first image on the first display device, so that the infant in that seat can see a real-time image of the driver. Since the driver who takes the infant out is usually the person who cares for the infant in daily life, outputting the driver's first image through the first display device enables interaction between the driver and the infant, gives the infant a sense of security, prevents crying and screaming, soothes the infant's negative emotions, and thereby improves driving safety.
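As a further illustrative sketch, the routing from the first camera to the first display device determined by the correspondence above might look as follows; the seat-to-display mapping and the camera/display interfaces are hypothetical placeholders, not the application's actual in-vehicle API.

```python
# Minimal sketch of routing the driver's image to the display in front of
# the target safety seat.
SEAT_TO_DISPLAY = {2: "rear_left_screen", 3: "rear_right_screen"}  # assumed layout

def show_driver_to_infant(target_seat_id, driver_camera, displays):
    """Capture the first image from the driver-facing camera and output it
    on the first display device associated with the target safety seat."""
    display_name = SEAT_TO_DISPLAY.get(target_seat_id)
    if display_name is None:
        return  # no display device in front of this seat
    first_image = driver_camera.capture()      # first camera, front side of driver seat
    displays[display_name].show(first_image)   # first display device, front of target seat
```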
According to the interaction method, when it is determined that an infant is seated in the target safety seat, a first image of the driver is captured by a first camera located on the front side of the driver seat and is output on a first display device located on the front side of the target safety seat. The infant seated in the target safety seat can thus see the driver's image in real time, which enables interaction between the driver and the infant, gives the infant a sense of security, and relieves the infant's emotions.
Based on the foregoing embodiment, fig. 2 is a schematic flowchart of another interaction method provided in the embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
Step 201, when it is determined that an infant is seated in the target safety seat, acquire a first image of the driver captured by a first camera, the first camera being located on the front side of the driver seat.
The first camera may be the same camera as that of the Driver Monitoring System (DMS), and it should at least be able to capture a complete image of the driver's face. Optionally, in a dark scene, when the vehicle end controls the first camera to photograph the driver, a fill-light device may be controlled synchronously to supplement the lighting, so that a clear image of the driver can be acquired even in the dark.
Optionally, when the vehicle end controls the first camera to photograph the driver, the acquired first image may be processed to remove red-eye, so as not to frighten the infant.
Step 202, output the first image on a first display device, the first display device being located on the front side of the target safety seat.
For step 201 and step 202, reference may be made to the explanations in the foregoing embodiments; the principle is the same and is not repeated here.
Step 203, when it is determined that an infant is seated in the target safety seat, acquire a second image of the infant captured by a second camera.
The second camera may be the same camera as the first camera or a separately arranged camera. In one scenario, the first camera can also capture images of the infant in the target safety seat behind the driver seat, so the second camera is multiplexed with the first camera; that is, the first camera photographs the infant to form the second image. In another scenario, to improve image clarity and avoid interference from other objects, a separate second camera is arranged to capture the infant in the target safety seat; it may be placed on the front side of the target safety seat, for example beside the first display device, and the infant is photographed by this second camera to form the second image.
Step 204, output the second image on a second display device, the second display device being a display device that the driver can view under normal driving conditions.
In the embodiment of the application, to allow the driver to know the condition of the infant in the target safety seat in real time, the second image may be output on a second display device. The second display device is a display device that the driver can view under normal driving conditions, such as the instrument panel in front of the driver or the screen of the driver's mobile phone; the mobile phone can connect to the vehicle end via Bluetooth to acquire the second image from the vehicle end and output it.
It should be noted that step 203 and step 204 may be executed after step 202, before step 201, or in synchronization with step 201 and step 202, and the execution timing is not limited in the embodiment of the present application.
Step 205, send the second image to a target client, so that the target client can view the second image.
A client, also called a user terminal, refers to a program that corresponds to a server and provides local services to the user. The target client is a client with the required permission; it can log in to the cloud with its registered account to view the second image.
In the embodiment of the application, after the vehicle end acquires the second image of the infant captured by the second camera, it may also upload the second image to the cloud over the vehicle's network connection, and the cloud sends it to the target client so that the target client can view it. The state of the infant can thus be viewed remotely at the target client. For example, a mother who is not beside the infant can remotely check the infant's state on her mobile phone without disturbing the driver, which avoids distracting the driver, improves driving safety, and meets the viewing needs of different people.
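For illustration only, uploading the second image from the vehicle end to the cloud could resemble the following sketch. The endpoint URL, field names, and token handling are assumptions and not the application's actual protocol.

```python
# Sketch of pushing the second image to the cloud so a permitted target client
# can view it. All endpoint details below are hypothetical.
import requests

CLOUD_UPLOAD_URL = "https://cloud.example.com/api/v1/infant-images"  # hypothetical

def upload_second_image(image_bytes: bytes, vehicle_id: str, token: str) -> bool:
    """Upload the infant's second image; the cloud then forwards it to the
    target client logged in with its registered account."""
    response = requests.post(
        CLOUD_UPLOAD_URL,
        files={"image": ("second_image.jpg", image_bytes, "image/jpeg")},
        data={"vehicle_id": vehicle_id},
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    return response.status_code == 200
```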
Step 205 may be executed after step 204, or may be executed before step 204, which is not limited in this embodiment.
In the interaction method provided by this embodiment, when it is determined that an infant is seated in the target safety seat, the infant can see the driver's image in real time, which enables interaction between the driver and the infant, gives the infant a sense of security, and relieves the infant's emotions. Meanwhile, a second image of the infant captured by the second camera is acquired and displayed on a display device that the driver can view, so that the driver can conveniently check the infant's condition while driving without being distracted from driving, which improves driving safety.
Based on the foregoing embodiments, fig. 3 is a schematic flowchart of another interaction method provided in the embodiments of the present application. As shown in fig. 3, the method includes the following steps:
step 301, under the condition that it is determined that the target safety seat is occupied by an infant, acquiring a first image formed by a driver, wherein the first image is shot by a first camera, and the first camera is positioned at the front side of the driver seat.
Step 302, outputting a first image by using a first display device, wherein the first display device is located at the front side of the target safety seat.
Step 301 and step 302 may refer to the explanations in the foregoing embodiments, and are not described herein again.
Step 303, acquire ambient light intensity information of the cabin, and adjust the display brightness of the first display device according to the ambient light intensity information.
In the embodiment of the application, the ambient light intensity in the cabin monitored by a light sensor is acquired, and the display brightness of the first display device is adjusted according to a set luminance range and the light intensity collected in real time, so that the brightness stays within the set range. This prevents the light from being too strong or too weak and protects the infant's eyes.
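A minimal sketch of such a brightness adjustment, assuming a linear mapping and illustrative range constants, is given below.

```python
# Sketch of clamping the first display's brightness to a set range based on
# the cabin light sensor. The constants are assumed values for illustration.
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 0.15, 0.85   # assumed allowed brightness range (0..1)
MAX_AMBIENT_LUX = 1000.0                      # assumed ambient level mapped to full range

def adjust_display_brightness(ambient_lux: float) -> float:
    """Map the measured ambient light to a brightness within the set range."""
    ratio = max(0.0, min(ambient_lux / MAX_AMBIENT_LUX, 1.0))
    # Brighter cabin -> brighter screen, but never outside the protective range.
    return MIN_BRIGHTNESS + ratio * (MAX_BRIGHTNESS - MIN_BRIGHTNESS)
```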
Step 304, determine the state of the infant.
In one implementation of the embodiment of the application, voice data collected by a microphone is acquired and recognized; the voice data belonging to the infant is identified, and the state of the infant is recognized from that voice data. The state of the infant includes an emotional state and a rest state, where the emotional state includes a happy state, a crying state, and the like, and the rest state includes a sleeping state and a playing state.
In another implementation of the embodiment of the application, the second image of the infant is recognized. A recognition model obtained through training can recognize the acquired second image, for example by identifying facial features, to obtain the state of the infant. The state of the infant includes an emotional state and a rest state, where the emotional state includes a happy state, a crying state, a restless state, and the like, and the rest state includes a sleeping state and a playing state.
The recognition of the second image of the infant may be performed by the vehicle end, or by the second camera that acquires the second image under the control of the vehicle end; this is not limited in this embodiment.
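One possible way to combine the voice-based and image-based recognition into a single state estimate is sketched below; the recognizers are passed in as callables, and the label set and fallback order are assumptions introduced for illustration.

```python
# Sketch of fusing the two state-recognition implementations described above.
from typing import Callable, Optional

INFANT_STATES = {"happy", "crying", "restless", "sleeping", "playing"}

def determine_infant_state(
    audio_clip: Optional[bytes],
    second_image: Optional[bytes],
    recognize_voice_state: Callable[[bytes], Optional[str]],   # hypothetical
    recognize_image_state: Callable[[bytes], Optional[str]],   # hypothetical
) -> Optional[str]:
    """Prefer the image-based estimate (facial features); fall back to voice."""
    state = recognize_image_state(second_image) if second_image else None
    if state is None and audio_clip:
        state = recognize_voice_state(audio_clip)
    return state if state in INFANT_STATES else None
```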
Step 305, when the state of the infant is a crying state, play a target image on the first display device, the target image being an image for soothing the infant.
In the embodiment of the application, when the state of the infant is a crying state, the infant needs to be soothed; that is, the driver's first image currently shown on the first display device cannot soothe the infant. Therefore, a target image is acquired and played on the first display device to soothe the infant. In one scenario, the target image is a target animation, for example a cartoon the infant likes; in another scenario, the target image is an image of a target person the infant depends on, for example the mother. The mother's image may be a pre-recorded soothing video stored in a designated storage unit, or an image of the mother acquired in real time from the target client, since the vehicle end is a networked device and the target client can send images collected in real time to the vehicle end.
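The selection among these soothing sources could, for illustration, be ordered as in the following sketch; the preference order and parameter names are assumptions, not a requirement of the disclosure.

```python
# Sketch of choosing the soothing target image: prefer a real-time image from
# the target client, then a pre-recorded clip, then a favourite animation.
from typing import Optional

def choose_target_image(
    realtime_from_client: Optional[bytes],
    prerecorded_clip: Optional[bytes],
    favourite_animation: bytes,
) -> bytes:
    """Return the soothing content to play on the first display device."""
    if realtime_from_client is not None:
        return realtime_from_client     # image collected in real time at the target client
    if prerecorded_clip is not None:
        return prerecorded_clip         # pre-recorded image stored in the set storage unit
    return favourite_animation          # target animation the infant likes
```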
Step 306, continue to determine the state of the infant.
In the embodiment of the application, because the state of the infant may change quickly, the state is determined continuously, and the strategy can be adjusted according to the latest determined state.
For continuously determining the state of the infant, reference may be made to the explanations in the foregoing steps; the principle is the same and is not repeated here.
Step 307, turn off the first display device when the state of the infant is a sleeping state.
In the embodiment of the application, when the state of the infant is a sleeping state, the first display device is turned off so that the image playing on it stops and the infant is not disturbed.
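Putting steps 305 to 307 together, a simple state-driven display policy might look like the following sketch; the display interface is a hypothetical placeholder.

```python
# Sketch of the display strategy in steps 305-307: soothe while crying,
# switch off when sleeping, otherwise show the driver's real-time image.
def apply_display_policy(infant_state: str, first_display, driver_image, target_image):
    """Choose what the first display device shows based on the infant's state."""
    if infant_state == "sleeping":
        first_display.turn_off()            # step 307: avoid disturbing the infant
    elif infant_state == "crying":
        first_display.show(target_image)    # step 305: soothing animation or parent image
    else:
        first_display.show(driver_image)    # default: real-time image of the driver
```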
According to the interaction method, the state of the infant is obtained by recognizing the acquired second image of the infant, and different strategies are executed for different states: if the infant is in a crying state, the driver's first image on the first display device is replaced with a target image to soothe the infant; if the infant is in a sleeping state, the first display device can be turned off to reduce the disturbance to the infant. Meanwhile, while the first display device is showing images, its display brightness is adjusted by monitoring the ambient light intensity in the cabin, so as to protect the infant's eyes.
In order to implement the above embodiments, an interactive device is further provided in the embodiments of the present application.
Fig. 4 is a schematic structural diagram of an interaction device according to an embodiment of the present application.
As shown in fig. 4, the apparatus may include:
the acquiring module 41 is configured to acquire a first image formed by a driver by using a first camera located on a front side of the driver seat when it is determined that the target safety seat is occupied by an infant.
And a display module 42, configured to output the first image by using a first display device, where the first display device is located on a front side of the target safety seat.
Further, in one implementation of an embodiment of the present application,
the acquiring module 41 is further configured to acquire, when it is determined that the infant is seated in the target safety seat, a second image of the infant captured by the second camera;
the display module 42 is further configured to output the second image by using a second display device, where the second display device is a display device that can be viewed by the driver under the normal driving condition.
In an implementation manner of the embodiment of the present application, the apparatus further includes:
and the sending module is used for sending the second image to the target client so that the target client can view the second image.
In an implementation manner of the embodiment of the present application, the apparatus further includes:
a first determination module, configured to determine the state of the infant, and, when the state of the infant is a crying state, to play a target image on the first display device, wherein the target image is an image for soothing the infant.
In an implementation manner of the embodiment of the present application, the first determining module is further configured to:
continuing to determine the status of the infant;
and when the state of the infant is a sleep state, turning off the first display device.
In one implementation manner of the embodiment of the present application, the apparatus further includes a second determining module.
The second determination module is configured to determine that an infant is seated in the target safety seat when the safety buckle of the target safety seat is locked, or when a pressure signal generated by a pressure sensor in the target safety seat is acquired, or when an infant image is identified at the target safety seat in a captured image.
In an implementation manner of the embodiment of the present application, the apparatus further includes:
the adjusting module is used for acquiring the ambient light intensity information of the carriage; and adjusting the display brightness of the first display device according to the ambient light intensity information.
In the interaction device provided by the embodiment of the application, when it is determined that an infant is seated in a target safety seat, a first image of the driver captured by a first camera located on the front side of the driver seat is acquired and output on a first display device located on the front side of the target safety seat, so that the infant seated in the target safety seat can see the driver's image in real time, which enables interaction between the driver and the infant, gives the infant a sense of security, and relieves the infant's emotions.
In order to implement the foregoing embodiments, the present application further proposes an electronic device, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, and when the processor executes the program, the electronic device implements the method according to the foregoing method embodiments.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method as described in the aforementioned method embodiments.
In order to implement the above-mentioned embodiments, the present application also proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements the method as described in the aforementioned method embodiments.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application. The electronic device shown in fig. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 5, the electronic device 10 includes a processor 11, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 12 or a program loaded from a Memory 16 into a Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 are also stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An Input/Output (I/O) interface 15 is also connected to the bus 14.
The following components are connected to the I/O interface 15: a memory 16 including a hard disk and the like; and a communication section 17 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like, the communication section 17 performing communication processing via a Network such as the internet; a drive 18 is also connected to the I/O interface 15 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program, carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 17. Which when executed by the processor 11 performs the above-mentioned functions as defined in the method of the present application.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as the memory 16 comprising instructions, executable by the processor 11 of the electronic device 10 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An interaction method, comprising:
under the condition that it is determined that an infant is seated on a target safety seat, acquiring a first image of the driver captured by a first camera, wherein the first camera is positioned on the front side of the driver seat;
and outputting the first image by adopting a first display device, wherein the first display device is positioned at the front side of the target safety seat.
2. The method of claim 1, further comprising:
acquiring, under the condition that the infant is determined to be seated on the target safety seat, a second image of the infant captured by a second camera;
and outputting the second image by adopting a second display device, wherein the second display device is a display device which can be viewed by the driver under the normal driving condition.
3. The method of claim 2, wherein after said acquiring of the second image of the infant captured by the second camera, the method further comprises:
and sending the second image to a target client so that the target client views the second image.
4. The method according to any one of claims 1-3, further comprising, after said outputting said first image with a first display device:
determining a status of the infant;
and under the condition that the state of the infant is a crying state, playing a target image by adopting the first display device, wherein the target image is an image for soothing the infant.
5. The method of claim 4, wherein after said playing of the target image with the first display device, the method further comprises:
continuing to determine the status of the infant;
and when the state of the infant is a sleep state, turning off the first display device.
6. The method of claim 1, wherein the determining that the target safety seat is seated with an infant comprises:
and determining that an infant is seated on the target safety seat when the safety buckle of the target safety seat is locked, or when a pressure signal generated by a pressure sensor in the target safety seat is acquired, or when an infant image is determined to be present at the target safety seat according to a shot image.
7. The method of any one of claims 1-3, further comprising:
acquiring ambient light intensity information of a carriage;
and adjusting the display brightness of the first display device according to the ambient light intensity information.
8. An interactive device, comprising:
the acquisition module is used for acquiring a first image formed by a driver through a first camera under the condition that an infant is determined to be seated in a target safety seat, and the first camera is positioned on the front side of the driver seat;
and the display module is used for outputting the first image by adopting a first display device, and the first display device is positioned at the front side of the target safety seat.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of claims 1-7 when executing the program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-7.
CN202210700404.6A 2022-06-20 2022-06-20 Interaction method, interaction device, electronic equipment and storage medium Pending CN115473987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210700404.6A CN115473987A (en) 2022-06-20 2022-06-20 Interaction method, interaction device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210700404.6A CN115473987A (en) 2022-06-20 2022-06-20 Interaction method, interaction device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115473987A true CN115473987A (en) 2022-12-13

Family

ID=84365470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210700404.6A Pending CN115473987A (en) 2022-06-20 2022-06-20 Interaction method, interaction device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115473987A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003025911A (en) * 2001-07-11 2003-01-29 Denso Corp On-vehicle image display device
CN105882529A (en) * 2016-05-25 2016-08-24 沈阳航空航天大学 Vehicular child safety monitor and program control method implemented by same
CN106274682A (en) * 2015-05-15 2017-01-04 戴姆勒大中华区投资有限公司 Video display system in car
CN107170401A (en) * 2017-07-05 2017-09-15 深圳传音控股有限公司 The display methods and display device of a kind of in-vehicle information
CN108382296A (en) * 2018-03-06 2018-08-10 戴姆勒股份公司 Room light system
CN110065439A (en) * 2019-04-30 2019-07-30 戴姆勒股份公司 A kind of car interactive system
CN209852175U (en) * 2018-10-24 2019-12-27 北京汽车集团有限公司 Vehicle-mounted display screen control device and vehicle
CN211308472U (en) * 2019-12-31 2020-08-21 张云舒 Vehicle-mounted infant intelligent safety seat with safety alarm device
CN211791805U (en) * 2020-05-10 2020-10-27 深圳市尖峰时刻电子有限公司 Two camera monitoring device of net car of making an appointment
CN112572293A (en) * 2019-09-27 2021-03-30 宝能汽车集团有限公司 Vehicle-mounted control system, control method thereof and vehicle
CN112829584A (en) * 2021-01-12 2021-05-25 浙江吉利控股集团有限公司 Brightness adjusting method and system for vehicle display device
CN113657134A (en) * 2020-05-12 2021-11-16 北京地平线机器人技术研发有限公司 Voice playing method and device, storage medium and electronic equipment
CN113920492A (en) * 2021-10-29 2022-01-11 上海商汤临港智能科技有限公司 Method and device for detecting people in vehicle, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10529071B2 (en) Facial skin mask generation for heart rate detection
JP6969493B2 (en) Video output device, video output method, and computer program
US8465289B2 (en) Training assisting apparatus, training assisting method, and training assisting program stored on a computer readable medium
US20110013038A1 (en) Apparatus and method for generating image including multiple people
CN111813491B (en) Vehicle-mounted assistant anthropomorphic interaction method and device and automobile
JPH09503891A (en) Video display device
JP3464754B2 (en) Method and apparatus for synthesizing a face image of a person wearing a head-mounted display
US20170281067A1 (en) Modification of behavior or physical characteristics through visual stimulation
JP2006217935A (en) Morbid fear treatment apparatus
CN115473987A (en) Interaction method, interaction device, electronic equipment and storage medium
CN113079337A (en) Method for injecting additional data
CN101454805A (en) Training assisting apparatus, training assisting method, and training assisting program
JP2014143595A (en) Image recorder
US20070239071A1 (en) Training assisting apparatus, training assisting method, and training assisting program
US20220254115A1 (en) Deteriorated video feed
CN112204409A (en) Method, computer program product and apparatus for classifying sound and training a patient
KR102185519B1 (en) Method of garbling real-world image for direct encoding type see-through head mount display and direct encoding type see-through head mount display with real-world image garbling function
KR20200099047A (en) Method of garbling real-world image for see-through head mount display and see-through head mount display with real-world image garbling function
WO2015044128A1 (en) Patient information system
CN112995747A (en) Content processing method and device, computer-readable storage medium and electronic device
US10992984B2 (en) Multiple data sources of captured data into single newly rendered video feed
US20240029261A1 (en) Portrait image processing method and portrait image processing device
CN215912175U (en) Cabin video monitoring system
WO2023120044A1 (en) Display device, display method, and display program
CN116360195A (en) Projector backlight adjusting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221213