WO2013044445A1 - Intelligent video capture method and device - Google Patents

Intelligent video capture method and device Download PDF

Info

Publication number
WO2013044445A1
WO2013044445A1 (PCT/CN2011/080219)
Authority
WO
WIPO (PCT)
Prior art keywords
primary
imager
housing
video capture
capturing
Prior art date
Application number
PCT/CN2011/080219
Other languages
French (fr)
Inventor
Bo Li
Original Assignee
Motorola Mobility, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Mobility, Inc. filed Critical Motorola Mobility, Inc.
Priority to PCT/CN2011/080219 priority Critical patent/WO2013044445A1/en
Publication of WO2013044445A1 publication Critical patent/WO2013044445A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

An intelligent video capture method (300) and device (400) are disclosed. The video capture device (400) can include: a housing (402) including a primary housing (404) including a primary imager (406) and a secondary housing (408) including a secondary imager (410); a positioning mechanism (412) configured to allow the primary housing (404) and the secondary housing (408) to be positioned with respect to one another; and the housing (402) including a video capture mode (414) wherein the primary imager (406) and secondary imager (410) are spaced a threshold distance (418) away from each other. Advantageously, in a preferred video capture mode (414), a portable video capture device (400) can capture high quality 3D images of simulated human binocular vision, in the event the threshold distance (418) is maintained.

Description

INTELLIGENT VIDEO CAPTURE METHOD AND DEVICE
BACKGROUND
1. Field
[0001] The present disclosure relates to an intelligent video capture method and device, and particularly to a portable wireless communication device with intelligent video capture.
2. Introduction
[0002] Use of smart electronic devices is increasing rapidly. Touch-enabled cameras and portable wireless communication devices come in varied form factors. Small form factors are often preferred by consumers, as they can be placed in a pocket when not in use.
[0003] Disadvantageously, a small form factor can present a problem if high quality video capture with two imagers is desired for capturing 3D video. For example, the small form factor may not provide enough real estate for the electronics and components, or enough distance or spacing between two imagers, to provide enhanced video capture.
[0004] There is a need for portable or small form factor video capture devices and portable wireless communication devices that can be expanded or adjusted so as to provide a threshold distance between two imagers, allowing enhanced video capture, such as enhanced 3D video capture, in a small form factor device.
[0005] There is a need for intelligent video capture devices and portable wireless communication devices that allow portable or small form factor devices to provide a video capture mode that can warn a user when the device is not able to perform enhanced video capture, for a number of reasons, such as the threshold distance between the imagers not being attained, one of the imagers being blocked, and the like.
[0006] There is also a need to provide intelligent video capture devices with intuitively positioned user interfaces (UI) and edge-mounted actuators, for enhanced control and manipulation of desired commands or functions of such devices.
[0007] There is a further need to provide intelligent video capture devices with user interfaces that are intuitive, ergonomic and provide a pleasant user experience.
[0008] It would further be considered an improvement in the art, to provide an intelligent video capture device with a structure and function, configured to provide a simple user action for each desired operation, which is intuitive to a user.
[0009] It would further be considered an improvement, to provide an intelligent video capture device with accessible user interfaces and/or actuators, that provide simple actuation and unobstructed viewing, if desired.
[0010] Accordingly, there is a need to provide improved video capture devices and to incorporate such improvements in portable wireless communication devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the disclosure briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the disclosure will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0012] Fig. 1 is an exemplary block diagram of a communication system including an intelligent video capture device, in a form of a wireless communication device, according to one embodiment.
[0013] Fig. 2 is an exemplary block diagram of a wireless communication device according to one embodiment.
[0014] Fig. 3 is an exemplary block diagram of an intelligent video capture method according to one embodiment.
[0015] Fig. 4 is an exemplary planar view of an intelligent video capture device in a video capture mode and in a slider structure, according to one embodiment.
[0016] Fig. 5 is an exemplary planar view of an intelligent video capture device in a non- video capture mode and in a slider structure, according to one embodiment.
[0017] Fig. 6 is an exemplary perspective view of an intelligent video capture device in a video capture mode and in a swivel structure, according to one embodiment.
[0018] Fig. 7 is an exemplary side view of an intelligent video capture device in a form of a wireless communication device such as a flip structure shown in Fig. 1, shown in a partially closed position, according to one embodiment.
[0019] Fig. 8 is an exemplary side view of an intelligent video capture device in a form of a wireless communication device such as a flip structure shown in Fig. 1, shown in a video capture mode or open position, according to one embodiment.
[0020] Fig. 9 is an exemplary side view of an intelligent video capture device in a form of a wireless communication device such as a tablet structure, shown with a USB port to accept a secondary imager, according to one embodiment.
[0021] Fig. 10 is an exemplary side view of an intelligent video capture device in a form of a wireless communication device such as a tablet structure, shown connected with a secondary housing with a secondary imager via a USB connection, according to one embodiment.
DETAILED DESCRIPTION
[0022] Fig. 1 is an exemplary block diagram of a system 100 according to one embodiment. The system 100 can include a network 110, a terminal 120, a base station 130 and GPS satellites 140. The terminal 120 may be a wireless
communication device, such as a wireless telephone, a cellular telephone, a personal digital assistant, a pager, a personal computer, a selective call receiver, or any other device that is capable of sending and receiving communication signals on a network including a wireless network. The network 110 may include any type of network that is capable of sending and receiving signals, such as wireless signals. For example, the network 110 may include a wireless telecommunications network, a cellular telephone network, a Time Division Multiple Access (TDMA) network, a Code Division Multiple Access (CDMA) network, Global System for Mobile Communications (GSM), a Third Generation (3G) network, a Fourth Generation (4G) network, a satellite communications network, and other like communications systems. More generally, network 110 may include a Wide Area Network (WAN), a Local Area Network (LAN) and/or a Personal Area Network (PAN). Furthermore, the network 110 may include more than one network and may include a plurality of different types of networks. Thus, the network 110 may include a plurality of data networks, a plurality of telecommunications networks, a combination of data and
telecommunications networks and other like communication systems capable of sending and receiving communication signals. In operation, the terminal 120 can communicate with the network 110 and with other devices on the network 110 by sending and receiving wireless signals via the base station 130, which may also comprise local area, and/or personal area access points. The terminal 120 is shown being in communication with a global positioning system (GPS) 140 satellite, global navigation satellite system (GNSS) or the like, for position sensing and determination.
[0023] Fig. 2 is an exemplary block diagram of a wireless communication device 200 configured with an energy storage device or module 205, such as in the terminal 120. The wireless communication device 200 can include a housing 210, a controller 220 coupled to the housing 210, audio input and output circuitry 230 coupled to the housing 210, a display 240 coupled to the housing 210, a transceiver 250 coupled to the housing 210, a user interface 260 coupled to the housing 210, a memory 270 coupled to the housing 210, an antenna 280 coupled to the housing 210 and the transceiver 250, and a removable subscriber module 285 coupled to the controller 220. The device 200 also includes imagers 280 and 282, for capturing video, preferably 3D video. Video as used herein includes stills and video.
As shown in Fig. 2, the wireless communication device 200 further includes a user interface (UI) module 290 configured to receive a user input signal at, for example, a button, actuator or touch screen display, to perform a desired function and control the wireless communication device based on the received user inputs.
[0024] The UI module 290 can include a sensor module 292 and a processor 294, as detailed herein. In one embodiment, the UI module 290 and components are coupled to the controller 220. The UI module 290 can reside within the controller 220, can reside within the memory 270, can be an autonomous module, can be software, can be hardware, or can be in any other format useful for a module on a wireless
communication device 200.
[0025] The display 240 can be a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, or any other means for displaying information. The transceiver 250 may include a transmitter and/or a receiver. The audio input and output circuitry 230 can include a microphone, a speaker, a transducer, or any other audio input and output circuitry. The user interface 260 can include a keypad, buttons, a touch pad, a joystick, an additional display, or any other device useful for providing an interface between a user and an electronic device. The memory 270 may include a random access memory, a read only memory, an optical memory or any other memory that can be coupled to a wireless communication device.
[0026] In more detail, the wireless communication device 200 shown in Fig. 2 can include: a housing 210; a controller 220 coupled to the housing 210, the controller 220 configured to control the operations of the wireless communication device, and to provide ancillary computing operations which may be unrelated to wireless communications, such as media (audio or video) processing, gaming, network browsing, application processing, etc.
[0027] Advantageously, the UI module 290 can receive a user input signal at the user interface 260 or actuating mechanism, for example, to perform a desired function and control the wireless communication device or terminal 120 based on the received user inputs.
[0028] A block diagram of an intelligent video capturing method 300 is shown in Fig. 3. In its simplest form, the method 300 can include: providing 310 a housing including a primary housing including a primary imager and a secondary housing including a secondary imager; positioning 320 the primary imager and the secondary imager at least a threshold distance away from each other; and capturing 330 an image with the primary and secondary imagers. Advantageously, the method 300 allows a user to operate an image capturing device intuitively, providing an enhanced user experience. For example, a user can simply provide a command on a touch screen display, keyboard or easily accessible actuating mechanism to take a still photo, or a video for a predetermined period of time, while a button is touched, or until the button is depressed a second time to turn capture off. The method 300 is particularly adapted to a use case including a portable and lightweight device.
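A minimal sketch of steps 310-330 follows, assuming a hypothetical Imager interface and imager positions expressed in millimeters; the 60 mm threshold echoes paragraph [0036], and none of the names below are part of the disclosure.

```python
# Minimal sketch of method 300 (provide, position, capture).
# The Imager class and its capture_frame method are illustrative assumptions.
from dataclasses import dataclass

THRESHOLD_MM = 60.0  # threshold distance from paragraph [0036] ("at least about 60 mm")

@dataclass
class Imager:
    position_mm: float  # imager position along the housing, in millimeters

    def capture_frame(self) -> str:
        # Stand-in for real sensor output; a real device would return image data.
        return f"frame@{self.position_mm}mm"

def capture_3d(primary: Imager, secondary: Imager):
    """Steps 320 and 330: check the threshold distance, then capture a stereo pair."""
    baseline_mm = abs(primary.position_mm - secondary.position_mm)
    if baseline_mm < THRESHOLD_MM:
        raise RuntimeError("threshold distance not attained; warn the user")
    return primary.capture_frame(), secondary.capture_frame()

# Example: slider extended so the imagers sit 65 mm apart (step 310 provides both housings)
left, right = capture_3d(Imager(position_mm=0.0), Imager(position_mm=65.0))
print(left, right)
```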
[0029] The method 300 can further include targeting a desired target or subject to be captured. In a preferred embodiment, the target is captured in a high quality 3D still picture or video, for example.
[0030] Beneficially, in a preferred embodiment, the capturing step 330 can include capturing a three dimensional image with the primary and secondary imagers, with an enhanced high quality picture, provided the primary imager and the secondary imager are positioned at least a threshold distance away from each other, per positioning step 320.
[0031] In one embodiment, the threshold distance as used herein can provide a stereo camera, which is a type of camera with two or more lenses and a separate image sensor (imager) or film frame for each lens, allowing the camera to simulate human binocular vision and therefore capture three-dimensional images. For example, a stereoscopic system usually employs two video cameras, slightly apart, looking at the same scene. By analyzing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is similar to principles driving human stereoscopic vision.
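The "slight differences" analyzed by such a system are usually expressed as a per-pixel disparity, and for a calibrated pinhole stereo pair depth follows Z = f·B/d. The sketch below illustrates that standard relationship only; the focal length, baseline and disparity values are assumptions, not figures from the disclosure.

```python
# Generic depth-from-disparity relationship for a calibrated stereo pair:
# Z = f * B / d, where f is the focal length in pixels, B the baseline between
# imagers, and d the disparity of the same scene point in the two images.
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Return depth in millimeters for one matched point (illustrative only)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_mm / disparity_px

# Example: assumed 1000 px focal length, 60 mm baseline, 20 px disparity
print(depth_from_disparity(1000.0, 60.0, 20.0))  # -> 3000.0 mm (3 m)
```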
[0032] In one embodiment, a video capture mode can include at least one of the primary imager being located on a target side of the primary housing and the secondary imager being located on a target side of the secondary housing, for easy point and shoot or record, with one hand, as detailed herein.
[0033] Referring to Fig. 4, an intelligent video capture device 400 is shown. The device 400 can include: a housing 402 including a primary housing 404 including a primary imager 406 and a secondary housing 408 including a secondary imager 410; a positioning mechanism 412 configured to allow the primary housing 404 and the secondary housing 408 to be positioned with respect to one another; and the housing 402 including a video capture mode 414 wherein the primary imager 406 and secondary imager 410 are spaced a threshold distance 418 away from each other.
[0034] As is known in electronics, in portable electronic devices compromises are made. Advantageously, in a preferred video capture mode 414, a portable video capture device 400 can capture high quality 3D images, when the primary imager 406 and secondary imager 410 are sufficiently spaced a threshold distance 418, away from each other.
[0035] Stated differently, in the event the threshold distance 418 is maintained, in a preferred video capture mode 414, a portable video capture device 400 can capture high quality 3D images comprising simulated human binocular vision. As used herein, high quality 3D images means enhanced resolution and enhanced real time 3D image capture.
[0036] In a preferred embodiment, the video capture mode 414 is configured to capture a three dimensional image, in a portable device. Advantageously, enhanced 3D can be provided in a portable device by providing and maintaining the threshold distance 418. In one embodiment, the threshold distance is at least about 60 mm, for enhanced 3D video capture.
[0037] In more detail, the video capture mode 414 includes the primary imager 406 and the secondary imager 410 being directed at a same target 420, to capture an enhanced 3D image.
[0038] In Fig. 4, the video capture mode 414 includes at least one of the primary imager 406 being located on a target side 422 of the primary housing 404 and the secondary imager 410 being located on a target side 424 of the secondary housing 408. Beneficially, this construction allows a user to use the device 400 with one hand, for example.
[0039] The primary housing 404 and secondary housing 408 can include user interfaces 426 and 428 on non-target sides 430 and 432, respectively. See also Fig. 1 for this structure.
[0040] For example, the primary housing 404 and secondary housing 408 can include user interfaces 426 and 428, on non-target sides 430 and 432; the user interfaces can include at least one of a keypad and touch screen display, or an actuating button 434 on an edge portion 436, as shown in Fig. 1.
[0041] In Fig. 1, an actuator 434 near an edge 436 can be included and configured to actuate the primary and secondary imagers 406 and 410. Beneficially, the actuator 434 edge 436 location can provide ease of access and operation to a user. Likewise, a keyboard, such as a traditional keypad or QWERTY keyboard, and a touch screen display can do the same. The actuator 434 can include at least one of a button, toggle switch, slide switch and touch screen or strip, for easy actuation and de-actuation. The imagers can be actuated for a predetermined time, when actuated and subsequently de-actuated, or only while a button is depressed, for example, or in any other way video is captured. The device 400 can capture a still image or real time video, for example.
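The two actuation behaviours mentioned above, capture for a predetermined time after one press, or capture only while the button is held, can be sketched as follows. The function names, frame rate and 5-second duration are illustrative assumptions rather than the device's implementation.

```python
# Two actuation behaviours from [0041], sketched with assumed names and timings.
import time

def timed_capture(capture_frame, duration_s: float = 5.0, fps: float = 30.0) -> list:
    """One press records for a predetermined time (duration and frame rate assumed)."""
    frames = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        frames.append(capture_frame())
        time.sleep(1.0 / fps)  # pace at the assumed frame rate
    return frames

def hold_to_capture(capture_frame, button_is_pressed, fps: float = 30.0) -> list:
    """Record only while the actuator (button 434) remains depressed."""
    frames = []
    while button_is_pressed():
        frames.append(capture_frame())
        time.sleep(1.0 / fps)
    return frames
```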
[0042] As shown in Fig. 2, a sensor module 292 can be configured to detect or sense whether the primary and secondary imagers 406 and 410 are capturing substantially similar images. In the event that they are not capturing substantially similar images, a warning can be provided to a user. The warning can include a prompt on a display, or be audible, visual or haptic (such as a vibration), or a similar other warning, to prompt a user that the imagers are not capturing images as designed, such as not capturing an acceptable 3D image. Thus, the sensor module 292 can indicate that the threshold distance 418 is not achieved or that a user's finger is blocking one of the imagers, for example.
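One plausible way to realize the "substantially similar images" test of the sensor module 292 is to compare the two imagers' frames and warn when they diverge, for instance when a finger blocks one lens. The sketch below uses a mean absolute difference of grayscale frames with an assumed threshold; neither the metric nor the value comes from the disclosure.

```python
# Hedged sketch of the sensor-module check: warn if the two imagers' frames
# differ too much (e.g., one lens blocked). Metric and threshold are assumed.
import numpy as np

def frames_substantially_similar(frame_a: np.ndarray,
                                 frame_b: np.ndarray,
                                 max_mean_abs_diff: float = 40.0) -> bool:
    """Compare two 8-bit grayscale frames of equal size."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float(diff.mean()) <= max_mean_abs_diff

def sensor_module_check(frame_a, frame_b, warn) -> None:
    if not frames_substantially_similar(frame_a, frame_b):
        warn("Imagers are not capturing substantially similar images "
             "(threshold distance not met or an imager is blocked).")

# Example with synthetic frames: one normal scene, one mostly dark (blocked lens)
scene = np.full((120, 160), 128, dtype=np.uint8)
blocked = np.full((120, 160), 5, dtype=np.uint8)
sensor_module_check(scene, blocked, warn=print)
```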
[0043] The positioning mechanism 412 can be configured to include at least one of a pivot structure, flip structure and slider structure. For example, in Fig. 1 the positioning mechanism 412 is a flip structure and in Fig. 4 it is shown as a slider structure.
[0044] In one embodiment, the intelligent video capture device 400 includes a wireless communication device that includes the housing 402 including at least one of a wireless telephone, a cellular telephone, a wireless computing device, a tablet, a smart phone, a personal digital assistant, a pager, a personal computer, and a selective call receiver. In a preferred embodiment, the user interface module 290 in Fig. 2 includes a processor 294 configured to program and store a desired function based on defined user inputs.
[0045] Fig. 5 is an exemplary planar view of an intelligent video capture device 500 in a non-video capture mode and in a slider structure. In more detail, the intelligent video capture device 500 includes primary housing 404 with primary imager 406 and the secondary imager 410, in phantom, of secondary housing 408. The secondary imager 410 would not be visible in a closed or non-capture position, as shown in Fig. 5, but its location is shown in phantom for illustrative purposes. The positioning mechanism 412 is shown in the form of a slider in Fig. 5, as in Fig. 4. A distance 504 in a non-capture position or mode is insufficient for capturing high quality 3D video, as the threshold distance is not attained (and is not shown) in Fig. 5. A target would be above Fig. 5 in the z-direction, as shown by compass 506.
[0046] Fig. 6 is an exemplary perspective view of an intelligent video capture device 600 in a video capture mode and in the form of a swivel structure, in one embodiment. In Fig. 6, the positioning mechanism is shown as a swivel structure 602, with the primary housing 404 moved 90 degrees clockwise from a closed position relative to the secondary housing 408. The secondary housing 408 shows a UI in the form of a QWERTY keyboard 604, but a touch screen display can be utilized as well at 604. Although not shown in Fig. 6, the target sides 422 and 424 can include UIs, such as touch screen displays. A touch screen display 606 can be provided on non-target side 430, for viewing a target or making user commands. As previously stated, in Fig. 6, the primary and secondary housings 404 and 408 are at a 90 degree angle. The primary and secondary housings 404 and 408 can be further rotated or moved clockwise to provide 180 degrees. In either case, the threshold distance 418 can be attained, for capturing high quality video, such as 3D video. The primary and secondary imagers 406 and 410 are shown in Fig. 6, in dashed lines, for illustrative purposes, and are positioned on target sides 422 and 424, pointing down at target 420; the direction of the target is also shown in dashed lines 608.
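How far the swivel must open before the threshold distance 418 is reached depends on where each imager sits relative to the swivel axis, which the disclosure does not specify. Purely as an assumed geometry, if both imagers sit a distance r from the axis, the law of cosines gives a separation of 2·r·sin(θ/2) at swivel angle θ, as sketched below.

```python
# Illustrative swivel geometry (assumed layout, not from the patent): both
# imagers sit r millimeters from the swivel axis, so their separation at
# swivel angle theta is 2 * r * sin(theta / 2).
import math

def imager_separation_mm(r_mm: float, swivel_deg: float) -> float:
    return 2.0 * r_mm * math.sin(math.radians(swivel_deg) / 2.0)

THRESHOLD_MM = 60.0  # from paragraph [0036]
for angle in (0, 90, 180):
    sep = imager_separation_mm(45.0, angle)  # assumed r = 45 mm
    print(f"{angle:>3} deg -> {sep:5.1f} mm, "
          f"{'meets' if sep >= THRESHOLD_MM else 'below'} threshold")
```

With the assumed 45 mm radius, the 90 degree position just clears the 60 mm threshold and the 180 degree position exceeds it comfortably, consistent with the description above.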
[0047] Fig. 7 is an exemplary side view of an intelligent video capture device in the form of a wireless communication device, such as a flip structure shown in Fig. 1, shown in a partially closed position, according to one embodiment. Likewise, Fig. 8 shows it in a video capture mode or open position, in another embodiment. As illustrated, the primary imager 406 is located on target side 422, and the secondary imager 410 is located on its target side 424. These two imagers 406 and 410 point in different directions in a closed position, as shown in Fig. 7. Thus, when the flip is closed, the primary imager 406 works as a normal or rear facing camera sensor, and the secondary imager 410 works as a front facing camera sensor. After opening the flip structure in Fig. 8, both the primary and secondary imagers 406 and 410 are aimed in the same direction and the threshold distance 418 is attained and maintained between the two imagers 406 and 410, making it possible to capture high quality 3D images and video.
[0048] Referring to Figs. 9 and 10, exemplary side views of an intelligent video capture device in a form of a wireless communication device such as a tablet structure, are shown. In Fig. 9, a USB port is shown adapted to accept a secondary imager 410. In Fig. 10, a secondary housing 408 with a secondary imager 410 is shown connected with a primary housing 404 via a port 610 and connector 612, preferably a USB port and connector. As should be understood, other positioning mechanisms 412 utilizing port and connector constructions can be utilized.
[0049] In this embodiment, the primary imager 406 is located on a target side 422 of the device 400. The device can take non-3D images with one imager and 3D images when the secondary imager 410 is connected via the port 610 and connector 612 connection, as shown in Fig. 10. When the threshold distance 418 is attained and maintained between the primary and secondary imagers 406 and 410, the device is in a video capture mode 414, to capture high quality images, such as high quality 3D images and video, for example.
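The 2D/3D behaviour just described reduces to a simple mode decision: a single imager gives non-3D capture, while a connected secondary imager at or beyond the threshold distance enables the video capture mode 414. A brief sketch follows; the enum, function name and the way the USB-attached imager is detected are assumptions, not part of the disclosure.

```python
# Hedged sketch of the 2D/3D mode decision from [0049]; names are illustrative.
from enum import Enum
from typing import Optional

THRESHOLD_MM = 60.0  # from paragraph [0036]

class CaptureMode(Enum):
    MODE_2D = "2D (primary imager only)"
    MODE_3D = "3D video capture mode 414"

def select_mode(secondary_baseline_mm: Optional[float]) -> CaptureMode:
    """secondary_baseline_mm is None when no secondary imager is connected."""
    if secondary_baseline_mm is not None and secondary_baseline_mm >= THRESHOLD_MM:
        return CaptureMode.MODE_3D
    return CaptureMode.MODE_2D

print(select_mode(None))   # tablet alone -> 2D capture
print(select_mode(70.0))   # USB secondary imager 70 mm away -> 3D capture mode
```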
[0050] The devices 200, 400, 500 and 600 and method 300 are preferably implemented on a programmed processor. However, the controllers, flowcharts, and modules may also be implemented on a general purpose or special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an integrated circuit, a hardware electronic or logic circuit such as a discrete element circuit, a programmable logic device, or the like. In general, any device on which resides a finite state machine capable of implementing the flowcharts shown in the figures may be used to implement the processor functions of this disclosure.
[0051] While this disclosure has been described with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. For example, various components of the embodiments may be interchanged, added, or substituted in the other embodiments. Also, all of the elements of each figure are not necessary for operation of the disclosed embodiments. For example, one of ordinary skill in the art of the disclosed embodiments would be enabled to make and use the teachings of the disclosure by simply employing the elements of the independent claims. Accordingly, the preferred embodiments of the disclosure as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the disclosure.
[0052] In this document, relational terms such as "first," "second," and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "a," "an," or the like does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Also, the term "another" is defined as at least a second or more. The terms
"including," "having," and the like, as used herein, are defined as "comprising."

Claims

CLAIMS
We claim:
1. A video capture device, comprising: a housing including a primary housing including a primary imager and a secondary housing including a secondary imager; a positioning mechanism configured to allow the primary housing and the secondary housing to be positioned with respect to one another, and the housing including a video capture mode wherein the primary imager and secondary imager are spaced a threshold distance away from each other.
2. The device of claim 1 wherein the video capture mode is configured to capture a three dimensional image.
3. The device of claim 1 wherein the threshold distance is adapted to capture three dimensional images.
4. The device of claim 1 wherein the video capture mode includes the primary imager and the secondary imager being directed at a same target.
5. The device of claim 1 wherein the video capture mode includes the primary imager and the secondary imager being directed at a same target, to capture a three dimensional image.
6. The device of claim 1 wherein the video capture mode includes at least one of the primary imager being located on a target side of the primary housing and the secondary imager being located on a target side of the secondary housing.
7. The device of claim 1 wherein at least one of the primary housing and secondary housing includes a user interface on a non-target side.
8. The device of claim 1 wherein at least one of the primary housing and secondary housing includes a user interface on a non-target side, the user interface including at least one of a keypad and touch screen display.
9. The device of claim 1 wherein the threshold distance is at least about 60 millimeters.
10. The device of claim 1 further comprising an actuator configured to actuate the primary and secondary imagers.
11. The device of claim 1 further comprising an actuator configured to actuate the primary and secondary imagers, the actuator including at least one of a toggle switch, slide switch, button and touch screen display.
12. The device of claim 1 further comprising a sensor module configured to sense whether the primary and secondary imagers are capturing substantially similar images.
13. The device of claim 1 further comprising a sensor module configured to sense whether the primary and secondary imagers are capturing substantially similar images, and if not capturing substantially similar images, providing a warning.
14. The device of claim 1 wherein the positioning mechanism is configured to include at least one of a pivot structure, flip structure and slider structure.
15. The device of claim 1 wherein the housing includes a wireless communication device.
16. The device of claim 1 wherein the housing includes at least one of a wireless telephone, a cellular telephone, a wireless computing device, a tablet, a smart phone, a personal digital assistant, a pager, a personal computer, and a selective call receiver.
17. A video capturing method comprising:
providing a housing including a primary housing including a primary imager and a secondary housing including a secondary imager;
positioning the primary imager and the secondary imager at least a threshold distance away from each other; and
capturing an image with the primary and secondary imagers.
18. The method of claim 17 further comprising targeting a desired target to be captured.
19. The method of claim 17 wherein the capturing an image with the primary and secondary imagers includes capturing a three dimensional image.
20. The method of claim 17 further comprising providing a video capture mode including at least one of the primary imager being located on a target side of the primary housing and the secondary imager being located on a target side of the secondary housing.
PCT/CN2011/080219 2011-09-27 2011-09-27 Intelligent video capture method and device WO2013044445A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/080219 WO2013044445A1 (en) 2011-09-27 2011-09-27 Intelligent video capture method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/080219 WO2013044445A1 (en) 2011-09-27 2011-09-27 Intelligent video capture method and device

Publications (1)

Publication Number Publication Date
WO2013044445A1

Family

ID=47994115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/080219 WO2013044445A1 (en) 2011-09-27 2011-09-27 Intelligent video capture method and device

Country Status (1)

Country Link
WO (1) WO2013044445A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692139A (en) * 2009-09-11 2010-04-07 丁守谦 Full color high definition eyeglass stereoscopic viewer
US20110117958A1 (en) * 2009-11-19 2011-05-19 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN101990035A (en) * 2010-10-09 2011-03-23 辜进荣 Broadband network mobile phone for acquisition of three-dimensional images
CN201878210U (en) * 2010-12-01 2011-06-22 温泉 Multifunctional glasses cell phone based on three-dimensional (3D) technique

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107678243A (en) * 2016-08-01 2018-02-09 刘捷 Smart mobile phone digital stereo(3D)Photography and vedio recording and its viewing system
WO2020159451A1 (en) * 2019-01-31 2020-08-06 Şahin Hakan The 3d wieving and recording method for smartphones

Similar Documents

Publication Publication Date Title
USRE48677E1 (en) Mobile terminal and control method thereof
US9635227B2 (en) 3D camera assembly having a bracket for cameras and mobile terminal having the same
KR101785458B1 (en) Camera module and mobile terminal having the same
CN104767874B (en) Mobile terminal and control method thereof
US8957919B2 (en) Mobile terminal and method for displaying image of mobile terminal
EP2498168B1 (en) Mobile terminal and method of controlling the same
KR101806891B1 (en) Mobile terminal and control method for mobile terminal
CN102333228B (en) Electronic device and method of controlling electronic device
KR101661969B1 (en) Mobile terminal and operation control method thereof
KR101912408B1 (en) Mobile terminal and control method for mobile terminal
KR101858604B1 (en) Mobile terminal and control method thereof
CN106067833B (en) Mobile terminal and control method thereof
EP3247170B1 (en) Mobile terminal
KR20180013151A (en) Mobile terminal
KR20170057058A (en) Mobile terminal and method for controlling the same
KR101560389B1 (en) Mobile terminal and controling method for mobile terminal
KR101917071B1 (en) Mobile terminal and control method therof
CN112799582A (en) Terminal, holding posture recognition method and device of terminal, and storage medium
KR101809946B1 (en) Mobile terminal
WO2013044445A1 (en) Intelligent video capture method and device
KR101622220B1 (en) Mobile terminal
KR102660783B1 (en) Mobile terminal, electronic device equipped with mobile terminal, and method of controlling the electronic device
KR20120109151A (en) Mobile terminal and control method therof
KR101619934B1 (en) Mobile terminal, control method therof and information displaying system conpriisng it
KR101673409B1 (en) Electronic device and control method for electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11873515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11873515

Country of ref document: EP

Kind code of ref document: A1