WO2019058496A1 - Expression recording system - Google Patents

Expression recording system

Info

Publication number
WO2019058496A1
WO2019058496A1 (PCT/JP2017/034248)
Authority
WO
WIPO (PCT)
Prior art keywords
stroller
camera
camera image
terminal device
passenger
Prior art date
Application number
PCT/JP2017/034248
Other languages
French (fr)
Japanese (ja)
Inventor
卓也 武本
維摩 眞貝
俊輔 横尾
泰漢 星野
昌宏 暮橋
Original Assignee
株式会社電通
Priority date
Filing date
Publication date
Application filed by 株式会社電通
Priority to PCT/JP2017/034248
Priority to CN201780094857.6A
Publication of WO2019058496A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62BHAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B9/00Accessories or details specially adapted for children's carriages or perambulators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to an expression recording system for recording a specific expression of a passenger of a stroller.
  • Strollers have conventionally been used when going out with an infant.
  • In conventional strollers, various measures have been taken so that the infant seated on the seat can ride as comfortably as possible.
  • For example, riding comfort is improved by forming the seat on which the infant sits from a member with excellent cushioning properties (see Patent Document 1).
  • An object of the present invention is to provide a facial expression recording system capable of recording a specific facial expression of an infant riding on the stroller while going out in the stroller.
  • One aspect of the present invention is a facial expression recording system that includes a camera attached to a stroller and capable of capturing a camera image of a passenger of the stroller, and a terminal device carried by a user of the stroller and capable of communicating with the camera. The terminal device includes a data input unit to which a camera image captured by the camera is input, an expression detection unit that detects a specific expression of the passenger of the stroller from the camera image, and an ambient imaging request unit that, when the specific expression is detected, sends the camera an ambient imaging request to capture a camera image of the surroundings of the stroller.
  • Another aspect of the present invention is a stroller that includes a camera capable of capturing a camera image of a passenger, the camera being capable of communicating with a terminal device carried by a user of the stroller.
  • The terminal device sends an ambient imaging request to the camera when a specific expression of the passenger of the stroller is detected from the camera image captured by the camera, and the camera captures a camera image of the surroundings of the stroller based on the ambient imaging request.
  • Another aspect of the present invention is a program executed by a terminal device carried by a user of a stroller, the terminal device being capable of communicating with a camera attached to the stroller.
  • The camera is capable of capturing a camera image of a passenger of the stroller.
  • When a camera image captured by the camera is input to the terminal device, the program causes the terminal device to execute a process of detecting a specific expression of the passenger of the stroller from the camera image, and a process of sending the camera an ambient shooting request to capture a camera image of the surroundings of the stroller when the specific expression is detected.
  • Another aspect of the present invention is a facial expression recording system that includes a camera attached to a moving body and capable of capturing a camera image of a passenger of the moving body, and a terminal device carried by a user and capable of communicating with the camera. The terminal device includes a data input unit to which a camera image captured by the camera is input, a facial expression detection unit that detects a specific facial expression of the passenger of the moving body from the camera image, and an ambient imaging request unit that, when the specific facial expression is detected, sends the camera an ambient imaging request to capture a camera image of the surroundings of the moving body.
  • FIG. 1 is an explanatory view of a facial expression recording system (smile recording system) according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a terminal according to an embodiment of the present invention.
  • FIG. 3 is an explanatory view of the same screen display in the embodiment of the present invention.
  • FIG. 4 is an explanatory view of smile detection position display in the embodiment of the present invention.
  • FIG. 5 is a flowchart of the same screen display processing in the embodiment of the present invention.
  • FIG. 6 is a flowchart of smile image selection processing according to the embodiment of this invention.
  • FIG. 7 is a flowchart of video / music reproduction processing according to the embodiment of the present invention.
  • FIG. 8 is a flowchart of smile detection position recording processing / approach notification processing according to the embodiment of this invention.
  • FIG. 9 is a flowchart of gaze direction detection processing in the embodiment of the present invention.
  • The facial expression recording system includes a camera attached to the stroller and capable of capturing a camera image of a passenger of the stroller, and a terminal device carried by a user of the stroller and capable of communicating with the camera.
  • The terminal device includes a data input unit to which the camera image captured by the camera is input, an expression detection unit that detects a specific expression of the passenger from the camera image, and an ambient imaging request unit that sends the camera an ambient imaging request to capture a camera image of the surroundings of the stroller when the specific expression is detected.
  • With this configuration, a camera image of the passenger (for example, an infant) can be captured while out with the stroller.
  • For example, when the passenger takes on a specific expression at a certain point, a camera image of the surroundings of the stroller at that point is captured.
  • As a result, together with the camera image of the passenger's specific expression, a camera image of the surroundings of the stroller including the factor that caused the expression (such as a favorite object of the passenger) is obtained.
  • The terminal device may also include a display processing unit that displays, on the same screen, the camera image of the passenger when the specific facial expression is detected and the camera image of the surroundings when the specific facial expression is detected.
  • The terminal device may also include a continuous imaging request unit that sends the camera a continuous imaging request to keep capturing camera images of the passenger of the stroller while the specific facial expression is being detected, and an image selection unit that selects, from among the continuously captured camera images of the passenger, a camera image in which the degree of the specific facial expression is equal to or greater than a predetermined value.
  • With this configuration, while the passenger of the stroller (for example, an infant) keeps the specific expression, camera images of the passenger (camera images of the specific expression) continue to be captured, and a camera image with a high degree of the specific expression is automatically selected from among them. This makes it possible to obtain a camera image that captures the expression well.
  • The terminal device may also include an emotion analysis unit that analyzes the emotion of the passenger of the stroller from the camera image, and a reproduction processing unit that plays back video or music according to the emotion of the passenger of the stroller obtained as the analysis result.
  • With this configuration, the emotion of the passenger of the stroller (for example, an infant) is analyzed from the passenger's camera image, and video or music matching that emotion is automatically played back. This makes it possible to present video and music suited to the passenger's emotions while out with the stroller.
  • The terminal device may also include a position information acquisition unit that acquires position information of the stroller from the camera, a recording processing unit that records the position of the stroller when the specific facial expression is detected as a facial expression detection position, and a notification processing unit that notifies the user of the stroller when the stroller approaches the facial expression detection position.
  • With this configuration, the position at which the passenger of the stroller (for example, an infant) took on the specific expression (the expression detection position) is recorded, and the user is notified the next time the stroller approaches that position.
  • This allows the user of the stroller to learn the points at which the passenger of the stroller takes on the specific expression (the passenger's favorite spots) and to be informed when approaching such a point while out with the stroller.
  • The terminal device may also include an orientation information acquisition unit that acquires orientation information of the stroller from the camera, and a gaze direction detection processing unit that detects the orientation of the stroller when the specific facial expression is detected as the gaze direction of the passenger.
  • The stroller according to the present invention is a stroller provided with a camera capable of capturing a camera image of a passenger, wherein the camera can communicate with a terminal device carried by the user of the stroller.
  • When a specific expression of the passenger of the stroller is detected from the camera image captured by the camera, the terminal device sends an ambient imaging request to the camera, and the camera captures a camera image of the surroundings of the stroller based on the ambient imaging request.
  • With this stroller as well, as with the system above, a camera image of the passenger's specific expression can be obtained together with a camera image of the surroundings of the stroller that includes the factor that caused the expression (such as a favorite object of the passenger).
  • The program of the present invention is a program executed by a terminal device carried by the user of a stroller, the terminal device being capable of communicating with a camera attached to the stroller, and the camera being capable of capturing a camera image of a passenger of the stroller. When a camera image captured by the camera is input to the terminal device, the program causes the terminal device to execute a process of detecting the specific expression of the passenger of the stroller from the camera image, and a process of sending the camera an ambient shooting request to capture a camera image of the surroundings of the stroller when the specific expression is detected.
  • With this program as well, a camera image of the passenger's specific expression can be obtained together with a camera image of the surroundings of the stroller that includes the factor that caused the expression (such as a favorite object of the passenger).
  • a facial expression recording system according to an embodiment of the present invention will be described using the drawings.
  • This facial expression recording system has a function of recording a specific facial expression of an infant riding on the stroller while going out in the stroller.
  • In the following, a "smile" is used as an example of the specific facial expression, but the system can be implemented in the same way for other facial expressions such as a crying face, an angry face, or a funny face.
  • FIG. 1 is an explanatory view showing a schematic configuration of the facial expression recording system according to the present embodiment.
  • the facial expression recording system 1 includes a camera 3 attached to the stroller 2 and a terminal device 4 carried by a user of the stroller 2.
  • the passenger of the stroller 2 is, for example, an infant
  • the user of the stroller 2 is, for example, a guardian of the infant.
  • the camera 3 is attached to the stroller 2 so as to be able to capture a camera image of a passenger of the stroller 2.
  • the camera image may be a still image or a moving image.
  • the camera 3 is attached to the arm 5 or the like of the stroller 2 in a state where the photographing direction is directed to the rider of the stroller 2 (that is, the state of being directed to the inside of the arm 5).
  • the arm 5 is configured, for example, in an arc shape or an arch shape, and is disposed to cross the front of the seat 6 of the stroller 2 (the front of the passenger seated on the seat 6).
  • the camera 3 may be incorporated in the arm 5 or may be detachably attached to the arm 5.
  • the camera 3 is configured to be able to photograph around the stroller 2.
  • the camera 3 is configured to be able to capture a wide area around the stroller 2 using a wide angle lens.
  • the camera 3 is configured to be able to capture the entire circumference of the stroller 2 using a 360 degree lens.
  • the camera 3 may also have a pan function and a tilt function. In that case, the camera 3 is configured to be able to capture the periphery (or the entire periphery) of the stroller 2 by rotating the camera lens in the pan direction or the tilt direction.
  • the camera 3 has a GPS function of acquiring position information (for example, longitude and latitude information) indicating the current position of the camera 3 (position of the stroller 2) by communicating with GPS satellites.
  • The camera 3 also has a gyro function that acquires orientation information (for example, azimuth information) indicating the current orientation of the camera 3 (the orientation of the stroller 2).
  • the camera 3 also has a function of communicating with the terminal device 4 wirelessly or by wire. Therefore, the camera 3 can transmit the position information and the direction information to the terminal device 4 in addition to the data of the captured camera image. Further, the camera 3 can receive from the terminal device 4 request signals such as an ambient imaging request and a continuous imaging request, which will be described later.
  • the battery of the camera 3 may be provided in the camera 3 itself or in the stroller 2.
  • FIG. 2 is a block diagram for explaining the configuration of the terminal device 4.
  • the terminal device 4 is, for example, a portable terminal device 4 such as a smartphone.
  • the terminal device 4 includes a touch panel 10, a speaker 11, a storage unit 12, a communication unit 13, a first control unit 14, and a second control unit 15.
  • the first control unit 14 and the second control unit 15 may be configured by one control unit.
  • the touch panel 10 has the functions of an input unit and a display unit. Therefore, the user of the terminal device 4 (the user of the stroller 2) can input various information from the touch panel 10. Further, various information is displayed on the touch panel 10 so that the user of the terminal device 4 (the user of the stroller 2) can confirm.
  • the speaker 11 has a function of outputting a voice to a user of the terminal device 4 (a user of the stroller 2).
  • the storage unit 12 is configured by a memory or the like, and can store various data. For example, data of a camera image captured by the camera 3 is stored in the storage unit 12.
  • the storage unit 12 may store data such as video and music.
  • the storage unit 12 also stores programs for realizing various functions (including an expression recording function) of the terminal device 4. It can be said that various functions of the terminal device 4 are realized by executing this program.
  • the communication unit 13 has a function of communicating with an external device wirelessly or by wire.
  • the external device also includes the camera 3 described above. Therefore, the communication unit 13 has a function of wirelessly or wiredly communicating with the camera 3, and the terminal device 4 receives from the camera 3 the position information and the direction information described above in addition to the data of the photographed camera image. can do.
  • the terminal device 4 can transmit, to the camera 3, request signals such as an ambient imaging request and a continuous imaging request, which will be described later.
  • Any known communication method can be used.
  • The first control unit 14 is a control unit that performs the main control related to the facial expression recording function, and includes a data input unit 140, a facial expression detection unit 141, an ambient shooting request unit 142, a display processing unit 143, a continuous shooting request unit 144, and an image selection unit 145.
  • the data input unit 140 has a function as an input interface to which various data are input in order to realize the facial expression recording function.
  • For example, a camera image captured by the camera 3 (a camera image of a passenger of the stroller 2) is input to the data input unit 140.
  • the facial expression detection unit 141 has a function of detecting a specific facial expression of a rider of the stroller 2 from the camera image input to the data input unit 140.
  • the facial expression detection unit 141 can detect the smile of the rider of the stroller 2 by performing image processing for smile detection on the camera image.
  • Any known method can be used for smile detection.
  • Likewise, known methods can be used for detecting other specific expressions (for example, crying-face detection, angry-face detection, or funny-face detection).
  • The ambient shooting request unit 142 has a function of sending the camera 3 an ambient shooting request to capture a camera image of the surroundings of the stroller 2 when the facial expression detection unit 141 detects the smile of the passenger of the stroller 2 from the camera image.
  • the ambient imaging request is transmitted from the terminal device 4 to the camera 3 via the communication unit 13.
  • the camera 3 captures a camera image around the stroller 2 and returns the camera image (a camera image around the stroller 2) to the terminal device 4.
  • the display processing unit 143 has a function of displaying the camera image (camera image around the stroller 2) on the same screen along with the camera image of the passenger when the smile is detected.
  • FIG. 3 is an explanatory view showing an example of the same screen display.
  • the display processing unit 143 displays the camera image of the passenger when the smile is detected and the camera image around the stroller 2 at that time on the same screen.
  • In the example of FIG. 3, the camera image of the passenger when the smile is detected and the camera image of the surroundings of the stroller 2 at that time are displayed next to each other, but any other arrangement may be used as long as both are displayed on the same screen.
  • the display processing unit 143 may also have a function of connecting consecutive camera images and reproducing (displaying) them as a moving image. In that case, the moving image generated from the camera image of the passenger when the smile is detected and the moving image generated from the camera image around the stroller 2 at that time can be displayed side by side on the same screen.
  • The continuous shooting request unit 144 has a function of sending the camera 3 a continuous shooting request to keep capturing camera images of the passenger of the stroller 2 while the facial expression detection unit 141 is detecting the smile of the passenger of the stroller 2 from the camera image.
  • the continuous imaging request is transmitted from the terminal device 4 to the camera 3 via the communication unit 13.
  • the camera 3 keeps shooting the camera image of the passenger of the stroller 2.
  • a plurality of camera images (camera images of the smiles of the passenger taken continuously) photographed in this manner are returned from the camera 3 to the terminal device 4.
  • The image selection unit 145 has a function of selecting a camera image whose smile degree is equal to or greater than a predetermined value from among the plurality of camera images (the continuously captured camera images of the passenger's smile). For example, the image selection unit 145 detects a smile in each of the plurality of camera images and calculates a smile degree for each detected smile. The image selection unit 145 then selects the single camera image with the highest smile degree from among the camera images whose smile degree is equal to or greater than the predetermined value. Alternatively, all camera images whose smile degree is equal to or greater than the predetermined value may be selected. Any known method can be used to calculate the smile degree, and likewise for the degree of other specific expressions (for example, the degree of a crying face, an angry face, or a funny face).
  • The second control unit 15 is a control unit that performs the sub-control related to the facial expression recording function, and includes an emotion analysis unit 150, a reproduction processing unit 151, a position acquisition unit 152, a recording processing unit 153, a notification processing unit 154, an orientation information acquisition unit 155, and a gaze direction detection processing unit 156.
  • The emotion analysis unit 150 has a function of analyzing the emotion of the passenger of the stroller 2 from the camera image captured by the camera 3 (the camera image of the passenger of the stroller 2). For example, the emotion analysis unit 150 can analyze from the camera image whether the emotion of the passenger of the stroller 2 is "joy", "anger", "sorrow", or "pleasure".
  • Any known method can be used for emotion analysis.
  • The reproduction processing unit 151 has a function of playing back video or music according to the analysis result of the emotion analysis unit 150 (the emotion of the passenger of the stroller 2 obtained as the analysis result). For example, if the emotion of the passenger of the stroller 2 obtained as the analysis result is "joy" or "pleasure", the reproduction processing unit 151 plays back video or music with a bright atmosphere, and if the emotion is "anger" or "sorrow", it plays back video or music with a dark atmosphere.
  • the position acquisition unit 152 has a function of acquiring position information of the stroller 2.
  • the position acquisition unit 152 has a GPS function of acquiring current position information (for example, longitude and latitude information) of the terminal device 4.
  • the position acquisition unit 152 acquires the position information of the terminal device 4 as the position information of the stroller 2.
  • Alternatively, the position acquisition unit 152 may acquire the position information of the camera 3 from the camera 3 as the position information of the stroller 2 (the position information of the stroller 2 to which the camera 3 is attached).
  • the recording processing unit 153 has a function of recording the position (for example, latitude and longitude) of the stroller 2 when the smile of the rider of the stroller 2 is detected from the camera image by the facial expression detection unit 141 as a smile detection position.
  • the information on the smile detection position is recorded in the storage unit 12.
  • the storage unit 12 may store a smile detection position of another user (a position at which a smile of a passenger of another stroller 2 is detected).
  • the notification processing unit 154 has a function of notifying the user of the stroller 2 when the stroller 2 approaches the smile detection position.
  • The notification processing unit 154 notifies the user of the stroller 2 when the current position of the stroller 2 (acquired from the camera 3 by the position acquisition unit 152) approaches a smile detection position (stored in the storage unit 12). For example, the user of the stroller 2 is notified when the stroller enters a circular area of a predetermined radius centered on the smile detection position.
  • The notification to the user of the stroller 2 can be given by a known method such as sound, light, or vibration.
  • The notification processing unit 154 may also have a function of notifying the user of the stroller 2 according to the analysis result of the emotion analysis unit 150 (the emotion of the passenger of the stroller 2 obtained as the analysis result).
  • For example, the notification processing unit 154 may notify the user of the nearest smile detection position when the emotion of the passenger of the stroller 2 obtained as the analysis result is "anger" or "sorrow".
  • The notification to the user of the stroller 2 can be given using, for example, the touch panel 10 (notification by screen display) or the speaker 11 (notification by sound).
  • The orientation information acquisition unit 155 has a function of detecting the orientation of the stroller 2. As described above, the camera 3 has a gyro function that acquires information indicating its current orientation (for example, azimuth information), and the orientation information acquisition unit 155 acquires this orientation information from the camera 3 as the orientation information of the stroller 2 (the orientation of the stroller 2 to which the camera 3 is attached).
  • The gaze direction detection processing unit 156 detects the orientation of the stroller 2 at the time when the facial expression detection unit 141 detects the smile of the passenger of the stroller 2 from the camera image as the gaze direction of the passenger of the stroller 2 (for example, an azimuth).
  • Any known method can be used for gaze direction detection. For example, the eye region of the passenger's camera image may be analyzed to estimate the direction of the passenger's eyes, and the orientation (traveling direction) of the stroller 2 at that time may be taken into account to detect the gaze direction of the passenger of the stroller 2 (for example, as a compass bearing such as north, south, east, or west). Information on the detected gaze direction is recorded in the storage unit 12.
  • the display processing unit 143 may display the smile detection position on the map.
  • FIG. 4 is an explanatory view showing an example of smile detection position display.
  • the smile detection position is displayed on the map by the “smile mark”.
  • the present position of the stroller 2 is displayed by a circle, and the direction (traveling direction) of the stroller 2 is displayed by a triangle.
  • the direction of the stroller 2 is rightward on the map.
  • FIG. 5 is a flowchart of the same screen display processing in the facial expression recording system 1 of the present embodiment.
  • the camera image (moving image) of the rider of the stroller 2 photographed by the camera 3 is always displayed in live view.
  • When the same-screen display processing is performed in the terminal device 4, first, a camera image is input from the camera 3 to the terminal device 4 (S10), and a process of detecting the smile of the passenger of the stroller 2 from the input camera image is executed (S11).
  • When a smile is detected, the terminal device 4 sends an ambient imaging request to the camera 3 (S13), and a camera image of the surroundings of the stroller 2 is captured by the camera 3 based on the ambient imaging request.
  • When the camera image of the surroundings of the stroller 2 captured in this way is input to the terminal device 4 (S14), the camera image of the passenger of the stroller 2 when the smile was detected (the camera image of the smile) and the camera image of the surroundings of the stroller 2 at that time are displayed on the same screen (S15).
  • FIG. 6 is a flowchart of smile image selection processing in the facial expression recording system 1 of the present embodiment.
  • When the smile image selection processing is performed by the terminal device 4, first, a camera image is input from the camera 3 to the terminal device 4 (S20), and a process of detecting the smile of the passenger of the stroller 2 from the input camera image is executed (S21).
  • When a smile is detected, the terminal device 4 sends a continuous shooting request to the camera 3 (S23), and camera images of the passenger of the stroller 2 (camera images of the smile) are captured repeatedly.
  • From among the continuously captured camera images (camera images of the smile), the single camera image whose smile degree is equal to or greater than a predetermined value and is the highest is selected (S24). Alternatively, all camera images whose smile degree is equal to or greater than the predetermined value may be selected.
  • FIG. 7 is a flowchart of the video and music reproduction process in the facial expression recording system 1 of the present embodiment.
  • When the video and music reproduction processing is performed by the terminal device 4, a camera image is input from the camera 3 to the terminal device 4 (S30), and the emotion of the passenger of the stroller 2 (for example, "joy", "anger", "sorrow", or "pleasure") is analyzed from the camera image of the passenger of the stroller 2 (S31).
  • Then, video or music corresponding to the emotion of the passenger of the stroller 2 is played back (S32).
  • For example, if the emotion of the passenger of the stroller 2 is analyzed to be "joy" or "pleasure", video or music with a bright atmosphere is played back, and if the emotion is analyzed to be "anger" or "sorrow", video or music with a dark atmosphere is played back.
  • FIG. 8 is a flowchart of smile detection position recording processing / approach notification processing in the facial expression recording system 1 according to the present embodiment.
  • When the smile detection position recording processing and the approach notification processing are performed in the terminal device 4, first, when a camera image is input from the camera 3, the position information of the camera 3 (the position information of the stroller 2) is acquired from the camera 3 (S40). Then, when the smile of the passenger of the stroller 2 is detected from the input camera image (S41), the position of the stroller 2 at that time is recorded in the storage unit 12 as a smile detection position (S42).
  • FIG. 9 is a flowchart of gaze direction detection processing in the facial expression recording system 1 of the present embodiment.
  • When the gaze direction detection processing is performed by the terminal device 4, first, when a camera image is input from the camera 3, the orientation information of the stroller 2 is acquired from the camera 3 (S50). Then, when the smile of the passenger of the stroller 2 is detected from the input camera image (S51), the orientation of the stroller 2 at that time is detected as the gaze direction of the passenger of the stroller 2 (S52), and the detected gaze direction is recorded in the storage unit 12 (S53).
  • As described above, according to the present embodiment, a camera image of the passenger (for example, an infant) can be captured while out with the stroller 2. Then, for example, when the passenger of the stroller 2 smiles while passing a certain point, a camera image of the surroundings of the stroller 2 at that point is captured. As a result, together with the camera image of the smile of the passenger of the stroller 2, a camera image of the surroundings of the stroller 2 including the factor that made the passenger smile (such as a favorite object of the passenger) can be obtained.
  • In addition, the correspondence between the smile of the passenger of the stroller 2 (for example, an infant) and the factor that made the passenger smile (such as a favorite object of the passenger) can be grasped easily on the same screen.
  • Furthermore, while the passenger is smiling, camera images of the passenger (camera images of the smile) continue to be captured, and a camera image with a high smile degree (a good smile) is automatically selected from among them. This makes it possible to obtain a camera image of a good smile.
  • the emotion of the rider is analyzed from the camera image of the rider (for example, an infant or the like) of the stroller 2, and video and music corresponding to the emotion are automatically reproduced. Thereby, it is possible to produce video and music according to the emotion of the rider while going out in the stroller 2.
  • In addition, the position at which the passenger of the stroller 2 (for example, an infant) smiled (the smile detection position) is recorded, and the user is notified the next time the stroller approaches that point.
  • This allows the user of the stroller 2 to learn the points at which the passenger of the stroller 2 smiles (the passenger's favorite spots) and to be informed when approaching such a point while out with the stroller 2.
  • Furthermore, when the passenger of the stroller 2 (for example, an infant) smiles, the gaze direction of the passenger (the direction the passenger was looking) can be known, which makes it possible to identify the passenger's favorite object.
  • The camera 3 may also be attached to a child seat installed on a seat of a moving body such as a passenger car.
  • Alternatively, the camera 3 can be installed at an appropriate position outside the child seat (for example, on the ceiling, the back of the front seat, the inner surface of a pillar, or the console panel).
  • the number of cameras 3 is not limited to one.
  • For example, a camera for inside the vehicle and a camera for outside the vehicle may be provided separately, and a plurality of cameras 3 may be used for each of the in-vehicle camera and the out-of-vehicle camera.
  • In this case as well, a camera image of the passenger (for example, an infant) seated in the child seat can be captured while going out in the passenger car. Then, for example, when the passenger smiles while passing a certain point, a camera image of the surroundings of the vehicle at that point is captured.
  • As a result, together with the camera image of the passenger's smile, a camera image of the surroundings of the vehicle including the factor that made the passenger smile (such as a favorite object of the passenger) can be obtained.
  • As described above, the facial expression recording system according to the present invention has the effect of being able to record the smile of an infant riding in a stroller while out with the stroller, and is useful for application to a stroller or the like in which an infant rides.
  • Reference Signs List: 1 facial expression recording system (smile recording system); 2 stroller; 3 camera; 4 terminal device; 5 arm; 6 seat; 10 touch panel; 11 speaker; 12 storage unit; 13 communication unit; 14 first control unit; 140 data input unit; 141 facial expression detection unit; 142 ambient shooting request unit; 143 display processing unit; 144 continuous shooting request unit; 145 image selection unit; 15 second control unit; 150 emotion analysis unit; 151 reproduction processing unit; 152 position acquisition unit; 153 recording processing unit; 154 notification processing unit; 155 orientation information acquisition unit; 156 gaze direction detection processing unit.

Abstract

An expression recording system (1) is provided with: a camera (3) mounted on a perambulator (2); and a terminal device (4) carried by a user of the perambulator (2). The camera (3) is able to photograph a camera image of an occupant of the perambulator (2), and the terminal device (4) is communicable with the camera (3). When the camera image photographed by the camera (3) is input, the terminal device (4) detects, from the camera image, a specific expression of the occupant of the perambulator (2), and when the specific expression has been detected, the terminal device (4) transmits, to the camera (3), a surrounding photographing request for photographing a camera image of the surroundings of the perambulator (2). Accordingly, it is possible to record the specific expression of an infant riding in the perambulator (2) during an outing with the perambulator (2).

Description

Facial expression recording system
The present invention relates to a facial expression recording system for recording a specific facial expression of a passenger of a stroller.
Strollers have conventionally been used when going out with an infant. In conventional strollers, various measures have been taken so that the infant seated on the seat can ride as comfortably as possible. For example, riding comfort is improved by forming the seat on which the infant sits from a member with excellent cushioning properties (see Patent Document 1).
In addition, from the approach of making the stroller a vehicle that an infant actively wants to ride, a stroller that allows the infant to enjoy content such as video and music while riding has also been proposed (see Patent Document 2).
With the conventional strollers described above, a comfortable ride is obtained and content such as video and music can be enjoyed, so an infant riding in the stroller can enjoy the outing and has more opportunities to smile. However, no system has so far been proposed for recording the smile of the infant riding in the stroller during such an outing, and there has been room for development.
Patent Document 1: Japanese Patent Laid-Open No. 2004-216998. Patent Document 2: Japanese Patent Laid-Open No. 2008-308053.
The present invention has been made against the above background. An object of the present invention is to provide a facial expression recording system capable of recording a specific facial expression of an infant riding in a stroller while out with the stroller.
One aspect of the present invention is a facial expression recording system. The facial expression recording system includes a camera that is attached to a stroller and can capture a camera image of a passenger of the stroller, and a terminal device that is carried by a user of the stroller and can communicate with the camera. The terminal device includes a data input unit to which a camera image captured by the camera is input, a facial expression detection unit that detects a specific facial expression of the passenger of the stroller from the camera image, and an ambient imaging request unit that, when the specific facial expression is detected, sends the camera an ambient imaging request to capture a camera image of the surroundings of the stroller.
Another aspect of the present invention is a stroller that includes a camera capable of capturing a camera image of a passenger, the camera being capable of communicating with a terminal device carried by a user of the stroller. When a specific facial expression of the passenger of the stroller is detected from the camera image captured by the camera, the terminal device sends an ambient imaging request to the camera, and the camera captures a camera image of the surroundings of the stroller based on the ambient imaging request.
Another aspect of the present invention is a program executed by a terminal device carried by a user of a stroller, the terminal device being capable of communicating with a camera attached to the stroller, and the camera being capable of capturing a camera image of a passenger of the stroller. When a camera image captured by the camera is input to the terminal device, the program causes the terminal device to execute a process of detecting a specific facial expression of the passenger of the stroller from the camera image, and a process of sending the camera an ambient imaging request to capture a camera image of the surroundings of the stroller when the specific facial expression is detected.
Another aspect of the present invention is a facial expression recording system that includes a camera attached to a moving body and capable of capturing a camera image of a passenger of the moving body, and a terminal device carried by a user and capable of communicating with the camera. The terminal device includes a data input unit to which a camera image captured by the camera is input, a facial expression detection unit that detects a specific facial expression of the passenger of the moving body from the camera image, and an ambient imaging request unit that, when the specific facial expression is detected, sends the camera an ambient imaging request to capture a camera image of the surroundings of the moving body.
As described below, there are other aspects of the present invention. Accordingly, this disclosure is intended to provide some aspects of the present invention and is not intended to limit the scope of the invention described and claimed herein.
FIG. 1 is an explanatory view of a facial expression recording system (smile recording system) according to an embodiment of the present invention. FIG. 2 is a block diagram of a terminal device according to the embodiment of the present invention. FIG. 3 is an explanatory view of the same-screen display in the embodiment of the present invention. FIG. 4 is an explanatory view of the smile detection position display in the embodiment of the present invention. FIG. 5 is a flowchart of the same-screen display processing in the embodiment of the present invention. FIG. 6 is a flowchart of the smile image selection processing in the embodiment of the present invention. FIG. 7 is a flowchart of the video/music reproduction processing in the embodiment of the present invention. FIG. 8 is a flowchart of the smile detection position recording processing and approach notification processing in the embodiment of the present invention. FIG. 9 is a flowchart of the gaze direction detection processing in the embodiment of the present invention.
A detailed description of the present invention is given below. However, the following detailed description and the attached drawings do not limit the invention.
The facial expression recording system of the present invention includes a camera attached to a stroller and capable of capturing a camera image of a passenger of the stroller, and a terminal device carried by a user of the stroller and capable of communicating with the camera. The terminal device includes a data input unit to which a camera image captured by the camera is input, a facial expression detection unit that detects a specific facial expression of the passenger of the stroller from the camera image, and an ambient imaging request unit that, when the specific facial expression is detected, sends the camera an ambient imaging request to capture a camera image of the surroundings of the stroller.
With this configuration, a camera image of the passenger (for example, an infant) can be captured while out with the stroller. For example, when the passenger of the stroller takes on a specific facial expression while passing a certain point, a camera image of the surroundings of the stroller at that point is captured. As a result, together with the camera image of the passenger's specific facial expression, a camera image of the surroundings of the stroller including the factor that caused the expression (such as a favorite object of the passenger) can be obtained.
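The terminal-side flow just described (camera image in, expression detected, ambient imaging request out) can be illustrated with a short sketch. The following Python example uses OpenCV's bundled Haar cascades as one possible smile detector and a hypothetical camera object with frames_of_passenger() and request_ambient_shot() methods; the patent does not prescribe any particular detector, library, or interface.

```python
# Minimal sketch of the terminal-side loop, under the assumptions stated above.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def passenger_is_smiling(frame_bgr) -> bool:
    """Return True if a smile is found inside a detected face region."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        # Smile detection needs aggressive filtering to avoid false positives.
        if len(smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)) > 0:
            return True
    return False

def run_expression_recorder(camera):
    """Watch passenger frames; on a smile, ask the camera for a surroundings shot."""
    for frame in camera.frames_of_passenger():      # role of the data input unit
        if passenger_is_smiling(frame):              # role of the expression detection unit
            camera.request_ambient_shot()            # role of the ambient imaging request unit
```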
In the facial expression recording system of the present invention, the terminal device may also include a display processing unit that displays, on the same screen, the camera image of the passenger when the specific facial expression is detected and the camera image of the surroundings when the specific facial expression is detected.
With this configuration, the correspondence between the specific facial expression of the passenger of the stroller (for example, an infant) and the factor that caused the expression (such as a favorite object of the passenger) can easily be grasped on the same screen.
In the facial expression recording system of the present invention, the terminal device may also include a continuous imaging request unit that sends the camera a continuous imaging request to keep capturing camera images of the passenger of the stroller while the specific facial expression is being detected, and an image selection unit that selects, from among the continuously captured camera images of the passenger, a camera image in which the degree of the specific facial expression is equal to or greater than a predetermined value.
With this configuration, while the passenger of the stroller (for example, an infant) keeps the specific facial expression, camera images of the passenger (camera images of the specific expression) continue to be captured, and a camera image with a high degree of the specific expression is automatically selected from among them. This makes it possible to obtain a camera image that captures the expression well.
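As a rough illustration of the image selection unit's behavior, the sketch below scores each continuously captured frame with a smile-degree function and keeps the highest-scoring frame at or above a threshold. The scorer smile_degree and the 0.8 threshold are placeholders; the patent only requires that the degree of the specific expression be computed by a known method and compared with a predetermined value.

```python
from typing import Callable, List, Optional, TypeVar

Frame = TypeVar("Frame")

def select_best_smile(frames: List[Frame],
                      smile_degree: Callable[[Frame], float],
                      threshold: float = 0.8) -> Optional[Frame]:
    """Return the frame with the highest smile degree, provided it meets the threshold."""
    scored = [(smile_degree(f), f) for f in frames]
    candidates = [sf for sf in scored if sf[0] >= threshold]
    if not candidates:
        return None                      # no frame reached the required smile degree
    return max(candidates, key=lambda sf: sf[0])[1]
```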
In the facial expression recording system of the present invention, the terminal device may also include an emotion analysis unit that analyzes the emotion of the passenger of the stroller from the camera image, and a reproduction processing unit that plays back video or music according to the emotion of the passenger of the stroller obtained as the analysis result.
With this configuration, the emotion of the passenger of the stroller (for example, an infant) is analyzed from the passenger's camera image, and video or music matching that emotion is automatically played back. This makes it possible to present video and music suited to the passenger's emotions while out with the stroller.
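A minimal sketch of the reproduction step might map the analyzed emotion to a bright or dark playlist, as below. The emotion labels follow the four categories used elsewhere in this description ("joy", "anger", "sorrow", "pleasure"); the playlist contents and the player interface are purely illustrative.

```python
# Hypothetical media items; any content stored in the terminal's storage unit could be used.
BRIGHT_PLAYLIST = ["sunny_song.mp3", "picnic_clip.mp4"]
DARK_PLAYLIST = ["lullaby.mp3", "quiet_scenery.mp4"]

def play_for_emotion(emotion: str, player) -> None:
    """Play bright content for joy/pleasure, darker content for anger/sorrow."""
    playlist = BRIGHT_PLAYLIST if emotion in ("joy", "pleasure") else DARK_PLAYLIST
    for item in playlist:
        player.play(item)
```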
In the facial expression recording system of the present invention, the terminal device may also include a position information acquisition unit that acquires position information of the stroller from the camera, a recording processing unit that records the position of the stroller when the specific facial expression is detected as a facial expression detection position, and a notification processing unit that notifies the user of the stroller when the stroller approaches the facial expression detection position.
With this configuration, the position at which the passenger of the stroller (for example, an infant) took on the specific facial expression (the facial expression detection position) is recorded, and the user is notified the next time the stroller approaches that point. This allows the user of the stroller to learn the points at which the passenger takes on the specific expression (the passenger's favorite spots) and to be informed when approaching such a point while out with the stroller.
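The position recording and approach notification could be sketched as follows: store the stroller's GPS fix whenever the expression is detected, then compare each new fix against the stored spots and notify the user when the stroller enters a circle of a predetermined radius around one of them. The 50-metre radius and the haversine distance are illustrative choices; the patent only speaks of a circular area of a predetermined radius.

```python
import math
from typing import Callable, List, Tuple

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

smile_spots: List[Tuple[float, float]] = []      # recorded expression detection positions

def record_smile_spot(lat: float, lon: float) -> None:
    smile_spots.append((lat, lon))

def check_approach(lat: float, lon: float,
                   notify: Callable[[Tuple[float, float]], None],
                   radius_m: float = 50.0) -> None:
    """Notify (screen, sound, vibration, ...) when the current fix is near a stored spot."""
    for spot in smile_spots:
        if haversine_m(lat, lon, spot[0], spot[1]) <= radius_m:
            notify(spot)
            break
```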
In the facial expression recording system of the present invention, the terminal device may also include an orientation information acquisition unit that acquires orientation information of the stroller from the camera, and a gaze direction detection processing unit that detects the orientation of the stroller when the specific facial expression is detected as the gaze direction of the passenger.
With this configuration, when the passenger of the stroller (for example, an infant) takes on the specific facial expression, the gaze direction of the passenger (the direction the passenger was looking) can be known. This makes it possible to identify the passenger's favorite object.
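One way to realize the gaze direction detection is sketched below: in the simplest case the stroller's heading at the moment of the expression is taken as the gaze direction, and optionally an eye-direction offset estimated from the passenger image is added. The offset estimator and the storage object are assumptions for illustration, not part of the patent.

```python
def gaze_azimuth(stroller_heading_deg: float, eye_offset_deg: float = 0.0) -> float:
    """Absolute gaze bearing in degrees (0 = north, clockwise)."""
    return (stroller_heading_deg + eye_offset_deg) % 360.0

def on_expression_detected(heading_deg, frame, estimate_eye_offset, store) -> None:
    # estimate_eye_offset: e.g. image analysis of the eye region of the passenger image
    offset = estimate_eye_offset(frame)
    store.save_gaze_direction(gaze_azimuth(heading_deg, offset))
```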
The stroller of the present invention is a stroller provided with a camera capable of capturing a camera image of a passenger, wherein the camera can communicate with a terminal device carried by the user of the stroller. When a specific facial expression of the passenger of the stroller is detected from the camera image captured by the camera, the terminal device sends an ambient imaging request to the camera, and the camera captures a camera image of the surroundings of the stroller based on the ambient imaging request.
With this stroller as well, as with the system described above, a camera image of the passenger's specific facial expression can be obtained together with a camera image of the surroundings of the stroller that includes the factor that caused the expression (such as a favorite object of the passenger).
The program of the present invention is a program executed by a terminal device carried by the user of a stroller, the terminal device being capable of communicating with a camera attached to the stroller, and the camera being capable of capturing a camera image of a passenger of the stroller. When a camera image captured by the camera is input to the terminal device, the program causes the terminal device to execute a process of detecting a specific facial expression of the passenger of the stroller from the camera image, and a process of sending the camera an ambient imaging request to capture a camera image of the surroundings of the stroller when the specific facial expression is detected.
With this program as well, as with the system described above, a camera image of the passenger's specific facial expression can be obtained together with a camera image of the surroundings of the stroller that includes the factor that caused the expression (such as a favorite object of the passenger).
According to the present invention, a specific facial expression of an infant riding in a stroller can be recorded while out with the stroller.
(Embodiment)
Hereinafter, a facial expression recording system according to an embodiment of the present invention will be described with reference to the drawings. The present embodiment illustrates the case of a facial expression recording system used with a stroller or the like in which an infant rides. This facial expression recording system has a function of recording a specific facial expression of the infant riding in the stroller while out with the stroller. In the following, a "smile" is used as the example of a specific facial expression, but the system can be implemented in the same way for other facial expressions such as a crying face, an angry face, or a funny face.
The configuration of the facial expression recording system (smile recording system) according to the embodiment of the present invention will be described with reference to the drawings. FIG. 1 is an explanatory view showing the schematic configuration of the facial expression recording system of the present embodiment. As shown in FIG. 1, the facial expression recording system 1 includes a camera 3 attached to a stroller 2 and a terminal device 4 carried by a user of the stroller 2. The passenger of the stroller 2 is, for example, an infant, and the user of the stroller 2 is, for example, the infant's guardian.
First, the configuration of the camera 3 will be described with reference to FIG. 1. The camera 3 is attached to the stroller 2 so that it can capture a camera image of the passenger of the stroller 2. The camera image may be a still image or a moving image. For example, the camera 3 is attached to the arm 5 or the like of the stroller 2 with its shooting direction facing the passenger of the stroller 2 (that is, facing the inside of the arm 5). The arm 5 is formed, for example, in an arc or arch shape and is arranged so as to cross in front of the seat 6 of the stroller 2 (in front of the passenger seated on the seat 6). The camera 3 may be built into the arm 5 or may be detachably attached to the arm 5.
The camera 3 is also configured to be able to photograph the surroundings of the stroller 2. For example, the camera 3 is configured to capture a wide area around the stroller 2 using a wide-angle lens, or to capture the entire circumference of the stroller 2 using a 360-degree lens. The camera 3 may also have pan and tilt functions; in that case, the camera 3 is configured to capture the surroundings (or the entire circumference) of the stroller 2 by rotating the camera lens in the pan or tilt direction.
Furthermore, the camera 3 has a GPS function that acquires position information (for example, longitude and latitude) indicating the current position of the camera 3 (the position of the stroller 2) by communicating with GPS satellites, and a gyro function that acquires orientation information (for example, azimuth information) indicating the current orientation of the camera 3 (the orientation of the stroller 2). The camera 3 also has a function of communicating with the terminal device 4 wirelessly or by wire. The camera 3 can therefore transmit this position information and orientation information to the terminal device 4 in addition to the data of the captured camera images, and can receive from the terminal device 4 request signals such as the ambient imaging request and the continuous imaging request described later. The battery of the camera 3 may be provided in the camera 3 itself or in the stroller 2.
Next, the configuration of the terminal device 4 will be described with reference to FIG. 2. FIG. 2 is a block diagram for explaining the configuration of the terminal device 4. The terminal device 4 is a portable terminal device such as a smartphone. As shown in FIG. 2, the terminal device 4 includes a touch panel 10, a speaker 11, a storage unit 12, a communication unit 13, a first control unit 14, and a second control unit 15. The first control unit 14 and the second control unit 15 may be implemented as a single control unit.
The touch panel 10 serves as both an input unit and a display unit. The user of the terminal device 4 (the user of the stroller 2) can therefore input various information through the touch panel 10, and various information is displayed on the touch panel 10 so that the user can check it. The speaker 11 has a function of outputting audio to the user of the terminal device 4 (the user of the stroller 2).
The storage unit 12 is configured by a memory or the like and can store various data. For example, the storage unit 12 stores the data of camera images captured by the camera 3, and may also store data such as video and music. The storage unit 12 also stores a program for realizing the various functions of the terminal device 4 (including the facial expression recording function); executing this program realizes those functions.
The communication unit 13 has a function of communicating with external devices wirelessly or by wire. The external devices include the camera 3 described above. The communication unit 13 therefore has a function of communicating with the camera 3 wirelessly or by wire, so the terminal device 4 can receive from the camera 3 the above-described position and orientation information in addition to the data of the captured camera images, and can transmit to the camera 3 request signals such as the surrounding-shooting request and the continuous-shooting request described later. A known communication scheme can be used.
The first control unit 14 performs the main control related to the facial expression recording function and includes a data input unit 140, a facial expression detection unit 141, a surrounding-shooting request unit 142, a display processing unit 143, a continuous-shooting request unit 144, and an image selection unit 145.
The data input unit 140 functions as an input interface that receives the various data needed to realize the facial expression recording function. For example, camera images captured by the camera 3 (for example, camera images of the passenger of the stroller 2) are input to the data input unit 140. The facial expression detection unit 141 has a function of detecting a specific facial expression of the passenger of the stroller 2 from the camera image input to the data input unit 140. For example, the facial expression detection unit 141 can detect the smile of the passenger of the stroller 2 by applying smile-detection image processing to the camera image. A known smile-detection technique can be used, and known techniques can likewise be used to detect other specific expressions (for example, a crying face, an angry face, or a funny face).
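By way of a non-limiting illustration only, smile detection of the kind performed by the facial expression detection unit 141 could be sketched as follows using OpenCV's bundled Haar-cascade classifiers; the function name, thresholds, and choice of detector are assumptions made for illustration and are not part of the disclosed embodiment.

```python
import cv2

# Pre-trained Haar cascades bundled with OpenCV (illustrative choice of detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smile(frame_bgr):
    """Return True if a smiling face is found in the passenger camera image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        # A high minNeighbors value suppresses spurious smile detections.
        smiles = smile_cascade.detectMultiScale(face_roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            return True
    return False
```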
The surrounding-shooting request unit 142 has a function of sending the camera 3 a surrounding-shooting request to capture a camera image of the surroundings of the stroller 2 when the facial expression detection unit 141 detects the smile of the passenger of the stroller 2 in the camera image. The surrounding-shooting request is transmitted from the terminal device 4 to the camera 3 via the communication unit 13. Upon receiving the surrounding-shooting request, the camera 3 captures a camera image of the surroundings of the stroller 2 and returns that camera image to the terminal device 4.
The display processing unit 143 has a function of displaying this camera image (the camera image of the surroundings of the stroller 2) on the same screen as the camera image of the passenger taken when the smile was detected. FIG. 3 is an explanatory view showing an example of this same-screen display. As shown in FIG. 3, the display processing unit 143 displays, on one screen, the camera image of the passenger when the smile was detected and the camera image of the surroundings of the stroller 2 at that time. In the example of FIG. 3 the two images are arranged one above the other, but any arrangement on the same screen, such as side by side horizontally, may be used. The display processing unit 143 may also have a function of joining successive camera images and reproducing (displaying) them as a moving image; in that case, a moving image generated from the camera images of the passenger taken when the smile was detected and a moving image generated from the camera images of the surroundings of the stroller 2 at that time can be displayed together on the same screen.
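By way of a non-limiting illustration only, the same-screen composition could be sketched as follows; the vertical stacking merely mirrors the arrangement of FIG. 3, and the function name and resizing policy are assumptions for illustration.

```python
import cv2
import numpy as np

def compose_same_screen(passenger_img, surroundings_img):
    """Stack the passenger image above the surroundings image on one canvas."""
    # Resize the surroundings image to the passenger image's width so the
    # two can be concatenated vertically.
    h, w = passenger_img.shape[:2]
    scale = w / surroundings_img.shape[1]
    surroundings_resized = cv2.resize(
        surroundings_img, (w, int(surroundings_img.shape[0] * scale)))
    return np.vstack([passenger_img, surroundings_resized])
```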
The continuous-shooting request unit 144 has a function of sending the camera 3 a continuous-shooting request to keep capturing camera images of the passenger of the stroller 2 while the facial expression detection unit 141 is detecting the smile of the passenger in the camera image. The continuous-shooting request is transmitted from the terminal device 4 to the camera 3 via the communication unit 13. Upon receiving the continuous-shooting request, the camera 3 keeps capturing camera images of the passenger of the stroller 2. The plurality of camera images captured in this way (the continuously captured camera images of the smiling passenger) are returned from the camera 3 to the terminal device 4.
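The disclosure does not specify a wire format for these request signals; as a purely hypothetical sketch, the surrounding-shooting request and the continuous-shooting request could be encoded as simple tagged messages handed to whatever link the communication unit 13 provides. The message field names below are invented for illustration only.

```python
import json

def make_request(request_type):
    """Build a hypothetical request message for the camera.

    request_type: "surrounding_shot" or "continuous_shot" (names assumed
    purely for illustration; the disclosure leaves the format open).
    """
    return json.dumps({"type": request_type}).encode("utf-8")

# Example: the terminal device would hand these bytes to its camera link.
ambient_request = make_request("surrounding_shot")
continuous_request = make_request("continuous_shot")
```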
The image selection unit 145 has a function of selecting, from among these plural camera images (the continuously captured camera images of the smiling passenger), camera images whose smile degree is equal to or greater than a predetermined value. For example, the image selection unit 145 detects a smile in each of the plural camera images and calculates a smile degree for each detected smile. The image selection unit 145 then selects the single camera image with the highest smile degree from among the camera images whose smile degree is equal to or greater than the predetermined value. Alternatively, plural camera images may be selected as long as their smile degree is equal to or greater than the predetermined value. A known technique can be used to calculate the smile degree, and known techniques can likewise be used to calculate the degree of other specific expressions (for example, the degree of crying, anger, or funniness).
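A minimal sketch of this selection step is shown below; the `smile_degree` callable stands in for whichever known smile-scoring technique is used, and the threshold value is an assumption for illustration.

```python
def select_best_smile(images, smile_degree, threshold=0.8):
    """Pick the image with the highest smile degree at or above the threshold.

    images: list of camera images; smile_degree: callable returning a score
    in [0, 1] for one image (an assumed interface). Returns None if no image
    reaches the threshold.
    """
    scored = [(smile_degree(img), img) for img in images]
    candidates = [(score, img) for score, img in scored if score >= threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[0])[1]
```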
The second control unit 15 performs sub-control related to the facial expression recording function and includes an emotion analysis unit 150, a reproduction processing unit 151, a position acquisition unit 152, a recording processing unit 153, a notification processing unit 154, an orientation information acquisition unit 155, and a gaze direction detection processing unit 156.
The emotion analysis unit 150 has a function of analyzing the emotion of the passenger of the stroller 2 from the camera images captured by the camera 3 (the camera images of the passenger of the stroller 2). For example, the emotion analysis unit 150 can analyze from the camera image whether the passenger's emotion is joy, anger, sorrow, or pleasure. A known emotion analysis technique can be used.
The reproduction processing unit 151 has a function of reproducing video or music according to the analysis result of the emotion analysis unit 150 (the passenger's emotion obtained as the analysis result). For example, if the analyzed emotion of the passenger of the stroller 2 is joy or pleasure, the reproduction processing unit 151 reproduces video or music with a bright atmosphere, and if the analyzed emotion is anger or sorrow, it reproduces video or music with a dark (subdued) atmosphere.
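By way of a non-limiting illustration only, the emotion-to-playback selection could be a simple lookup such as the following; the emotion labels and mood names are assumptions made for illustration.

```python
# Hypothetical mapping from an analyzed emotion label to a playlist mood.
PLAYLIST_BY_EMOTION = {
    "joy": "bright",
    "pleasure": "bright",
    "anger": "subdued",
    "sorrow": "subdued",
}

def choose_playlist(emotion):
    """Return the playlist mood for an analyzed emotion (default: neutral)."""
    return PLAYLIST_BY_EMOTION.get(emotion, "neutral")
```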
The position acquisition unit 152 has a function of acquiring position information of the stroller 2. For example, the position acquisition unit 152 has a GPS function that acquires the current position information (for example, longitude and latitude) of the terminal device 4, and acquires this position information of the terminal device 4 as the position information of the stroller 2. Alternatively, since the camera 3 has a GPS function that acquires position information (for example, longitude and latitude) indicating its current position as described above, the position acquisition unit 152 may acquire this position information of the camera 3 from the camera 3 as the position information of the stroller 2 (the position information of the stroller 2 to which that camera 3 is attached).
The recording processing unit 153 has a function of recording, as a smile detection position, the position (for example, latitude and longitude) of the stroller 2 at the moment the facial expression detection unit 141 detects the smile of the passenger of the stroller 2 in the camera image. The smile detection position information is recorded in the storage unit 12. The storage unit 12 may also store smile detection positions of other users (positions at which smiles of passengers of other strollers 2 were detected).
The notification processing unit 154 has a function of notifying the user of the stroller 2 when the stroller 2 approaches a smile detection position. The notification processing unit 154 notifies the user of the stroller 2 when the current position of the stroller 2 (acquired from the camera 3 by the position acquisition unit 152) comes close to a smile detection position (stored in the storage unit 12), for example when the stroller enters a circular area of a predetermined radius centered on the smile detection position. The notification to the user of the stroller 2 can be given by a known means such as sound, light, or vibration. The notification processing unit 154 may also have a function of notifying the user of the stroller 2 according to the analysis result of the emotion analysis unit 150 (the passenger's emotion obtained as the analysis result); for example, it may notify the user of the nearest smile detection position when the analyzed emotion of the passenger of the stroller 2 is anger or sorrow. This notification can be given, for example, through the touch panel 10 (on-screen notification) or the speaker 11 (audio notification).
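A minimal sketch of the approach check is shown below, assuming latitude/longitude pairs and a haversine great-circle distance; the 100 m radius is an arbitrary illustrative value, not one specified by the disclosure. The same check can equally be run against smile detection positions recorded by other users, as noted above.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_notify(current_pos, smile_positions, radius_m=100.0):
    """True if the stroller is within radius_m of any recorded smile position."""
    lat, lon = current_pos
    return any(haversine_m(lat, lon, slat, slon) <= radius_m
               for slat, slon in smile_positions)
```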
The orientation information acquisition unit 155 has a function of detecting the orientation of the stroller 2. As described above, the camera 3 has a gyro function that acquires information (for example, a compass bearing) indicating its current orientation, and the orientation information acquisition unit 155 acquires this orientation information from the camera 3 as the orientation of the stroller 2 (the orientation of the stroller 2 to which that camera 3 is attached).
The gaze direction detection processing unit 156 has a function of detecting the orientation of the stroller 2 at the moment the facial expression detection unit 141 detects the smile of the passenger of the stroller 2 in the camera image as the gaze direction of the passenger of the stroller 2 (for example, as a compass bearing such as north, south, east, or west). A known gaze direction detection technique can be used. For example, the passenger's gaze direction may be calculated by focusing on the eye region of the passenger's camera image and applying image analysis, and the orientation (traveling direction) of the stroller 2 at that moment may then be taken into account to detect the passenger's gaze direction as a compass bearing. The detected gaze direction information is stored in the storage unit 12.
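By way of a non-limiting illustration only, the conversion of the stroller orientation into a compass-point gaze direction could be sketched as follows; the assumption that image analysis supplies a gaze angle relative to the travel direction is a simplification made for illustration.

```python
COMPASS_POINTS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def gaze_compass_bearing(stroller_heading_deg, gaze_offset_deg):
    """Combine the stroller heading with a gaze offset into a compass point.

    stroller_heading_deg: heading of the stroller in degrees (0 = north).
    gaze_offset_deg: gaze angle relative to the travel direction (assumed to
    come from eye-region image analysis of the passenger camera image).
    """
    bearing = (stroller_heading_deg + gaze_offset_deg) % 360.0
    index = int((bearing + 22.5) // 45) % 8
    return COMPASS_POINTS[index]
```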
The display processing unit 143 may also display smile detection positions on a map. FIG. 4 is an explanatory view showing an example of such a display. In the example of FIG. 4, each smile detection position is shown on the map with a smile mark, the current position of the stroller 2 is shown with a circle, and the orientation (traveling direction) of the stroller 2 is shown with a triangle; in this example the stroller 2 is facing to the right on the map.
The operation of the facial expression recording system 1 configured as described above will be explained with reference to the flowcharts of FIGS. 5 to 9.
FIG. 5 is a flowchart of the same-screen display process in the facial expression recording system 1 of the present embodiment. On the terminal device 4, the camera image (moving image) of the passenger of the stroller 2 captured by the camera 3 is always displayed as a live view. When the same-screen display process is performed in the terminal device 4, as shown in FIG. 5, a camera image is first input from the camera 3 to the terminal device 4 (S10), and a process of detecting the smile of the passenger of the stroller 2 from the input camera image is executed (S11).
When the smile of the passenger of the stroller 2 is detected in the camera image (S12), a surrounding-shooting request is sent from the terminal device 4 to the camera 3 (S13), and the camera 3 captures a camera image of the surroundings of the stroller 2 based on this request. When the camera image of the surroundings of the stroller 2 captured in this way is input to the terminal device 4 (S14), the camera image of the passenger of the stroller 2 at the moment the smile was detected (the smile camera image) and the camera image of the surroundings of the stroller 2 at that moment are displayed on the same screen (S15).
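By way of a non-limiting illustration only, steps S10 to S15 could be tied together as in the following sketch, with the detection, request, composition, and display steps passed in as callables; this decomposition is an assumption for illustration, not something mandated by the flowchart.

```python
def same_screen_display_loop(next_frame, detect_smile, request_surroundings, compose, show):
    """One pass of the S10-S15 flow per passenger camera frame.

    next_frame: yields passenger camera images (S10);
    detect_smile: returns True when a smile is found (S11/S12);
    request_surroundings: asks the camera for a surroundings image (S13/S14);
    compose: builds the same-screen image (S15); show: displays it.
    """
    for frame in next_frame():
        if detect_smile(frame):
            surroundings = request_surroundings()
            show(compose(frame, surroundings))
```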
FIG. 6 is a flowchart of the smile image selection process in the facial expression recording system 1 of the present embodiment. As shown in FIG. 6, when the smile image selection process is performed in the terminal device 4, a camera image is first input from the camera 3 to the terminal device 4 (S20), and a process of detecting the smile of the passenger of the stroller 2 from the input camera image is executed (S21).
When the smile of the passenger of the stroller 2 is detected in the camera image (S22), a continuous-shooting request is sent from the terminal device 4 to the camera 3 (S23), and the capture of camera images of the passenger of the stroller 2 (smile camera images) is repeated. When the smile of the passenger of the stroller 2 is no longer detected in the camera image (S22), the single camera image whose smile degree is equal to or greater than the predetermined value and is the highest is selected from the continuously captured camera images (smile camera images) (S24). Alternatively, plural camera images may be selected as long as their smile degree is equal to or greater than the predetermined value.
FIG. 7 is a flowchart of the video/music reproduction process in the facial expression recording system 1 of the present embodiment. As shown in FIG. 7, when the video/music reproduction process is performed in the terminal device 4, a camera image is input from the camera 3 to the terminal device 4 (S30), and the emotion of the passenger of the stroller 2 (for example, joy, anger, sorrow, or pleasure) is analyzed from that camera image (S31). Video or music corresponding to the passenger's emotion is then reproduced (S32). For example, if the passenger's emotion is analyzed as joy or pleasure, video or music with a bright atmosphere is reproduced; if the passenger's emotion is analyzed as anger or sorrow, video or music with a dark (subdued) atmosphere is reproduced.
FIG. 8 is a flowchart of the smile detection position recording process and the approach notification process in the facial expression recording system 1 of the present embodiment. As shown in FIG. 8, when these processes are performed in the terminal device 4, the position information of the camera 3 (the position information of the stroller 2) is first acquired from the camera 3 when a camera image is input from the camera 3 (S40). When the smile of the passenger of the stroller 2 is detected in the input camera image (S41), the position of the stroller 2 at that moment is recorded in the storage unit 12 as a smile detection position (S42).
Thereafter (for example, on the next outing with the stroller 2), the position information of the camera 3 (the position information of the stroller 2) is acquired when a camera image is input from the camera 3 (S43). The position of the stroller 2 is compared with the recorded smile detection position, and when it is determined that the stroller 2 is approaching the smile detection position (for example, has come within a predetermined radius of it) (S44), the user of the stroller 2 is notified accordingly (S45).
FIG. 9 is a flowchart of the gaze direction detection process in the facial expression recording system 1 of the present embodiment. As shown in FIG. 9, when the gaze direction detection process is performed in the terminal device 4, the orientation information of the stroller 2 is first acquired from the camera 3 when a camera image is input from the camera 3 (S50). When the smile of the passenger of the stroller 2 is detected in the input camera image (S51), the orientation of the stroller 2 at that moment is detected as the gaze direction of the passenger of the stroller 2 (S52), and the detected gaze direction is recorded in the storage unit 12 (S53).
According to the facial expression recording system 1 of the present embodiment described above, it is possible to record the smiles of an infant riding in the stroller 2 while out and about with the stroller 2.
That is, in the present embodiment, camera images of the passenger (for example, an infant) can be captured while out with the stroller 2. For example, when the passenger of the stroller 2 smiles on passing a certain spot, a camera image of the surroundings of the stroller 2 at that spot is captured. Together with the camera image of the passenger's smile, it is thus possible to obtain a camera image of the surroundings of the stroller 2 that includes whatever caused the passenger to smile (such as the passenger's favorite object).
Also, in the present embodiment, the correspondence between the smile of the passenger of the stroller 2 (for example, an infant) and what caused the passenger to smile (such as the passenger's favorite object) can easily be grasped on a single screen.
Also, in the present embodiment, camera images of the passenger (smile camera images) continue to be captured while the passenger of the stroller 2 (for example, an infant) is smiling, and a camera image with a high smile degree (a good smile) is automatically selected from among them. A camera image of a good smile can thus be obtained.
Also, in the present embodiment, the passenger's emotion is analyzed from the camera image of the passenger of the stroller 2 (for example, an infant), and video or music matching that emotion is automatically reproduced. Video or music that suits the passenger's mood can thus be provided while out with the stroller 2.
Also, in the present embodiment, the position at which the passenger of the stroller 2 (for example, an infant) smiled (the smile detection position) is recorded, and the user is notified the next time the stroller approaches that spot. The user of the stroller 2 can thus learn where the passenger tends to smile (the passenger's favorite spots) and is informed on approaching such a spot while out with the stroller 2.
Also, in the present embodiment, when the passenger of the stroller 2 (for example, an infant) smiles, the passenger's gaze direction (the direction the passenger was looking) can be known, which makes it possible to identify the passenger's favorite object.
Embodiments of the present invention have been described above by way of example, but the scope of the present invention is not limited to them and may be changed or modified according to purpose within the scope described in the claims.
For example, in the above embodiment the camera 3 is attached to the stroller 2, but the scope of the present invention is not limited to this. The camera 3 may instead be attached to a child seat installed on the seat of a moving body such as a passenger car. If the child seat has no member corresponding to the arm 5 of the stroller 2, the camera 3 can be installed at a suitable position outside the child seat (for example, on the ceiling, the back of the front seat, the inner surface of a pillar, or the top of the console panel).
The number of cameras 3 is also not limited to one. In particular, when the camera 3 is installed in a passenger car, a camera that captures camera images of the surroundings of the vehicle (an exterior camera) may be used in addition to the camera that captures camera images of the passenger's expression (smile) (an interior camera). Plural cameras 3 may also be used for each of the interior and exterior cameras.
This makes it possible to capture camera images of a passenger (for example, an infant) seated in the child seat while out in the passenger car. For example, when the passenger smiles on passing a certain spot, a camera image of the surroundings of the vehicle at that spot is captured. Together with the camera image of the passenger's smile, it is thus possible to obtain a camera image of the surroundings of the vehicle that includes whatever caused the passenger to smile (such as the passenger's favorite object).
While the presently preferred embodiments of the present invention have been described above, it will be understood that various modifications may be made to them, and it is intended that the appended claims cover all such modifications as fall within the true spirit and scope of the present invention.
As described above, the facial expression recording system according to the present invention has the effect of being able to record the smiles of an infant riding in a stroller while out with the stroller, and is usefully applied to strollers and the like in which infants ride.
Reference Signs List
1 facial expression recording system (smile recording system)
2 stroller
3 camera
4 terminal device
5 arm
6 seat
10 touch panel
11 speaker
12 storage unit
13 communication unit
14 first control unit
140 data input unit
141 facial expression detection unit
142 surrounding-shooting request unit
143 display processing unit
144 continuous-shooting request unit
145 image selection unit
15 second control unit
150 emotion analysis unit
151 reproduction processing unit
152 position acquisition unit
153 recording processing unit
154 notification processing unit
155 orientation information acquisition unit
156 gaze direction detection processing unit

Claims (9)

  1.  A facial expression recording system comprising:
     a camera attached to a stroller and capable of capturing a camera image of a passenger of the stroller; and
     a terminal device carried by a user of the stroller and capable of communicating with the camera,
     wherein the terminal device comprises:
     a data input unit to which the camera image captured by the camera is input;
     a facial expression detection unit that detects a specific facial expression of the passenger of the stroller from the camera image; and
     a surrounding-shooting request unit that, when the specific facial expression is detected, sends the camera a surrounding-shooting request to capture a camera image of the surroundings of the stroller.
  2.  The facial expression recording system according to claim 1, wherein the terminal device comprises a display processing unit that displays, on the same screen, the camera image of the passenger at the time the specific facial expression was detected and the camera image of the surroundings at the time the specific facial expression was detected.
  3.  The facial expression recording system according to claim 1 or 2, wherein the terminal device comprises:
     a continuous-shooting request unit that, while the specific facial expression is being detected, sends the camera a continuous-shooting request to keep capturing camera images of the passenger of the stroller; and
     an image selection unit that selects, from among the continuously captured camera images of the passenger, a camera image in which the degree of the specific facial expression is equal to or greater than a predetermined value.
  4.  The facial expression recording system according to any one of claims 1 to 3, wherein the terminal device comprises:
     an emotion analysis unit that analyzes an emotion of the passenger of the stroller from the camera image; and
     a reproduction processing unit that reproduces video or music according to the emotion of the passenger of the stroller obtained as the analysis result.
  5.  The facial expression recording system according to any one of claims 1 to 4, wherein the terminal device comprises:
     a position information acquisition unit that acquires position information of the stroller;
     a recording processing unit that records, as a facial expression detection position, the position of the stroller at the time the specific facial expression was detected; and
     a notification processing unit that notifies the user of the stroller when the stroller approaches the facial expression detection position.
  6.  The facial expression recording system according to any one of claims 1 to 5, wherein the terminal device comprises:
     an orientation information acquisition unit that acquires orientation information of the stroller from the camera; and
     a gaze direction detection processing unit that detects the orientation of the stroller at the time the specific facial expression was detected as the gaze direction of the passenger.
  7.  A stroller comprising a camera capable of capturing a camera image of a passenger, wherein
     the camera is capable of communicating with a terminal device carried by a user of the stroller,
     the terminal device sends a surrounding-shooting request to the camera when a specific facial expression of the passenger of the stroller is detected from the camera image captured by the camera, and
     the camera captures a camera image of the surroundings of the stroller based on the surrounding-shooting request.
  8.  A program executed by a terminal device carried by a user of a stroller, wherein
     the terminal device is capable of communicating with a camera attached to the stroller,
     the camera is capable of capturing a camera image of a passenger of the stroller, and
     the program causes the terminal device to execute:
     a process of detecting, when the camera image captured by the camera is input, a specific facial expression of the passenger of the stroller from the camera image; and
     a process of sending the camera, when the specific facial expression is detected, a surrounding-shooting request to capture a camera image of the surroundings of the stroller.
  9.  A facial expression recording system comprising:
     a camera attached to a moving body and capable of capturing a camera image of a passenger of the moving body; and
     a terminal device carried by a user and capable of communicating with the camera,
     wherein the terminal device comprises:
     a data input unit to which the camera image captured by the camera is input;
     a facial expression detection unit that detects a specific facial expression of the passenger of the moving body from the camera image; and
     a surrounding-shooting request unit that, when the specific facial expression is detected, sends the camera a surrounding-shooting request to capture a camera image of the surroundings of the moving body.
PCT/JP2017/034248 2017-09-22 2017-09-22 Expression recording system WO2019058496A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/034248 WO2019058496A1 (en) 2017-09-22 2017-09-22 Expression recording system
CN201780094857.6A CN111133752B (en) 2017-09-22 2017-09-22 Expression recording system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/034248 WO2019058496A1 (en) 2017-09-22 2017-09-22 Expression recording system

Publications (1)

Publication Number Publication Date
WO2019058496A1 true WO2019058496A1 (en) 2019-03-28

Family

ID=65809576

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/034248 WO2019058496A1 (en) 2017-09-22 2017-09-22 Expression recording system

Country Status (2)

Country Link
CN (1) CN111133752B (en)
WO (1) WO2019058496A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114594698B (en) * 2020-11-02 2024-01-02 宁波星巡智能科技有限公司 Intelligent child dining chair based on pressure detection and control method thereof
CN113104093A (en) * 2021-04-14 2021-07-13 潍坊科技学院 Intelligent control system of baby carriage
US20220408063A1 (en) * 2021-06-18 2022-12-22 Ernesto Williams Stroller Camera Assembly

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003092747A (en) * 2001-09-18 2003-03-28 Fuji Photo Film Co Ltd Supervisory device
JP2010028341A (en) * 2008-07-17 2010-02-04 Nikon Corp Electronic still camera
JP2011217202A (en) * 2010-03-31 2011-10-27 Saxa Inc Image capturing apparatus
JP2012124767A (en) * 2010-12-09 2012-06-28 Canon Inc Imaging apparatus
WO2014199786A1 (en) * 2013-06-11 2014-12-18 シャープ株式会社 Imaging system
JP2015067254A (en) * 2013-10-01 2015-04-13 パナソニックIpマネジメント株式会社 On-vehicle equipment and vehicle mounted therewith

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101270998A (en) * 2007-03-20 2008-09-24 联发科技(合肥)有限公司 Electronic device and method for reminding interest point according to road section
US8154615B2 (en) * 2009-06-30 2012-04-10 Eastman Kodak Company Method and apparatus for image display control according to viewer factors and responses
CN102123194B (en) * 2010-10-15 2013-12-18 张哲颖 Method for optimizing mobile navigation and man-machine interaction functions by using augmented reality technology
US9572820B2 (en) * 2011-05-10 2017-02-21 Stc.Unm Methods of treating autophagy-associated disorders and related pharmaceutical compositions, diagnostics, screening techniques and kits
WO2013173640A1 (en) * 2012-05-18 2013-11-21 Martin Rawls-Meehan System and method of a bed with a safety stop
CN103900591B (en) * 2012-12-25 2017-11-07 上海博泰悦臻电子设备制造有限公司 Along the air navigation aid and device of navigation way periphery precise search point of interest
US8755824B1 (en) * 2013-06-28 2014-06-17 Google Inc. Clustering geofence-based alerts for mobile devices
CN103358996B (en) * 2013-08-13 2015-04-29 吉林大学 Automobile A pillar perspective vehicle-mounted display device
CN105203117B (en) * 2014-06-12 2018-05-04 昆达电脑科技(昆山)有限公司 Reckoning system
WO2016029939A1 (en) * 2014-08-27 2016-03-03 Metaio Gmbh Method and system for determining at least one image feature in at least one image
KR20180057366A (en) * 2016-11-22 2018-05-30 엘지전자 주식회사 Mobile terminal and method for controlling the same


Also Published As

Publication number Publication date
CN111133752B (en) 2021-12-21
CN111133752A (en) 2020-05-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17925652

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17925652

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP