WO2017016511A1 - Image processing method and apparatus, and terminal


Info

Publication number
WO2017016511A1
WO2017016511A1 (PCT/CN2016/092203)
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
foreground
module
unit
Prior art date
Application number
PCT/CN2016/092203
Other languages
English (en)
French (fr)
Inventor
李嵩
姚智
Original Assignee
努比亚技术有限公司
Priority date
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Publication of WO2017016511A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • Embodiments of the present invention relate to, but are not limited to, electronic technologies, and in particular, to an image processing method and apparatus, and a terminal.
  • Light painting is a creative photography technique that refers to a shooting mode in which the shooting device makes a long exposure in a dark environment and a special image is created through changes in the light source.
  • However, the light painting technique in the related art is limited to generating a two-dimensional (2D) light-painting image, and a technique for generating a three-dimensional (3D) light-painting image has not been provided in the related art.
  • Embodiments of the present invention provide an image processing method and apparatus, and a terminal, which can provide a three-dimensional light-painting image, thereby increasing the interest and enjoyment of the image and improving the user experience.
  • an embodiment of the present invention provides an image processing method, including:
  • a three-dimensional image of the light-painting foreground portion is synthesized onto the background image to obtain a three-dimensional light-painting image.
  • the acquiring the background image includes:
  • the initial image set includes two or more initial images
  • Background modeling is performed on each initial image in the initial image set to obtain a background image.
  • the obtaining the viewing angle includes:
  • a viewing angle of view is determined according to the first operation.
  • the method further includes:
  • a stereo correction is performed on each of the original images to obtain a corrected initial image.
  • the generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle includes:
  • a three-dimensional light-painting object is generated by using the foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • the photographic transformation matrix P is a 3×4 matrix;
  • the column vector C represents the camera center;
  • the 3×3 matrix R represents the rotation angle;
  • K represents the camera intrinsic parameter matrix;
  • I represents the 3×3 identity matrix.
  • the generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle further includes:
  • the generating includes:
  • the three-dimensional light-painting object is generated by using the adjusted foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • An embodiment of the present invention further provides an image processing method, including:
  • acquiring an initial image set, where the initial image set includes two or more initial images; performing background modeling on each initial image in the initial image set to obtain a background image;
  • the generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle includes:
  • An embodiment of the present invention further provides an image processing apparatus, including a first acquiring unit, a second acquiring unit, a generating unit, a third obtaining unit, and a synthesizing unit, wherein:
  • a first acquiring unit configured to acquire a viewing angle
  • a second acquiring unit configured to acquire a foreground area of the initial image in the initial image set
  • a generating unit configured to generate a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle;
  • a third obtaining unit configured to acquire a background image generated by using the initial image
  • a synthesizing unit configured to synthesize the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the third obtaining unit includes a first acquiring module and a modeling module, where:
  • a first acquiring module configured to acquire an initial image set, where the initial image set includes two or more initial images
  • a modeling module configured to perform background modeling on each initial image in the initial image set to obtain a background image
  • the second obtaining unit is configured to generate a foreground area of each initial image by using each initial image and a corresponding background image.
  • the first obtaining unit includes a second acquiring module and a first determining module, where:
  • a second obtaining module configured to acquire a first operation, where the first operation is used to input a viewing angle
  • the first determining module is configured to determine a viewing angle according to the first operation.
  • the apparatus further includes a fourth obtaining unit and a correcting unit, wherein:
  • a fourth obtaining unit configured to acquire two or more original images;
  • a correction unit configured to perform stereo correction on each of the original images to obtain a corresponding initial image.
  • An embodiment of the present invention further provides an image processing apparatus, including a first acquiring unit, a second acquiring unit, a generating unit, a third acquiring unit, and a synthesizing unit;
  • the generating unit includes a third acquiring module, a fourth obtaining module, a second determining module, a fifth obtaining module, and a generating module, where:
  • a first acquiring unit configured to acquire a viewing angle
  • a second acquiring unit configured to acquire a foreground area of the initial image in the initial image set
  • a third acquiring module configured to acquire a photographic transformation matrix P by using the viewing angle, wherein the photographic transformation matrix P is used for projecting a three-dimensional space point onto a two-dimensional space of an imaging plane;
  • a fourth acquiring module configured to acquire a three-dimensional center coordinate of each of the foreground regions
  • a second determining module configured to determine, by using the three-dimensional center coordinates of each of the foreground regions and the photographic transformation matrix P, two-dimensional image coordinates for projecting each of the foreground regions from a three-dimensional space to a two-dimensional space ;
  • a fifth acquiring module configured to acquire an association relationship between adjacent frames
  • a generating module configured to generate a three-dimensional light-painting object by using the foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • a third obtaining unit configured to acquire a background image generated by using the initial image
  • a synthesizing unit configured to synthesize the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the photographic transformation matrix P is a 3×4 matrix;
  • the column vector C represents the camera center;
  • the 3×3 matrix R represents the rotation angle;
  • K represents the camera intrinsic parameter matrix;
  • I represents the 3×3 identity matrix.
  • the generating unit further includes a sixth obtaining module, a calculating module, and an adjusting module, where:
  • a sixth obtaining module configured to acquire depth of field coordinates of each of the foreground regions
  • a calculation module configured to calculate a zoom ratio of each of the foreground regions according to a focal length and the depth of field coordinates
  • an adjusting module configured to adjust each of the foreground regions according to the zoom ratio to obtain an adjusted foreground region;
  • the generating module is configured to generate the three-dimensional light-painting object by using the adjusted foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • the embodiment of the invention further provides a terminal, the terminal comprising a processor and an imaging device, wherein:
  • an imaging device comprising a connecting device and two or more image capturing devices fixed together by the connecting device, where images can be acquired from different viewing angles at the same time during image capturing, and the collected images are provided to the processor for processing;
  • the processor is configured to acquire a viewing angle; acquire a foreground area of each initial image in the initial image set; generate a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle; acquire a background image generated by using the initial images; and synthesize the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the imaging device includes a first digital camera, a second digital camera, and a connecting member, wherein
  • the first digital camera and the second digital camera are integrally connected by the connecting member and fixed on the connecting member, and the imaging planes of the first digital camera and the second digital camera are parallel.
  • An embodiment of the present invention further provides a three-dimensional light-painting imaging system, including a stereoscopic imaging device, a foreground-background segmentation module, a depth measurement module, and a three-dimensional rendering module, wherein:
  • the stereoscopic imaging device comprises two or more digital cameras fixed together by a connecting device, where the cameras are fixed in position relative to each other, can collect images from different viewing angles at the same time, and pass the collected images to the foreground-background segmentation module and subsequent modules for processing;
  • the foreground-background segmentation module is configured to extract the background during the light-painting process and segment the light-painting foreground object;
  • the depth measurement module is configured to acquire the images of different viewing angles captured by the stereoscopic imaging device and generate a depth map for the foreground regions of the two images by using a stereo measurement method;
  • the three-dimensional rendering module is configured to obtain the output of the above modules, render a three-dimensional image of the light-painting foreground portion from an arbitrarily selected viewing angle in three-dimensional space, and synthesize it onto the background image.
  • An embodiment of the invention further provides a computer readable storage medium storing computer executable instructions for performing the image processing method of any of the above.
  • In the image processing method and apparatus, and the terminal provided by the embodiments of the present invention, a viewing angle is acquired; a foreground area of each initial image in an initial image set is acquired; a three-dimensional light-painting object is generated by using the foreground area of each initial image according to the viewing angle; a background image generated by using the initial images is acquired; and the three-dimensional image of the light-painting foreground portion is synthesized onto the background image to obtain a three-dimensional light-painting image. A three-dimensional light-painting image is thereby provided, which increases the interest and enjoyment of the image and improves the user experience.
  • FIG. 1-1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;
  • FIG. 1-2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1-1;
  • FIG. 1-3 is a schematic diagram of the implementation process of an image processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of the implementation process of an image processing method according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic structural diagram of the composition of an image processing apparatus according to Embodiment 3 of the present invention;
  • FIG. 4 is a schematic structural diagram of the composition of an image processing apparatus according to Embodiment 4 of the present invention;
  • FIG. 5-1 is a schematic structural diagram of a three-dimensional light-painting imaging system according to an embodiment of the present invention;
  • FIG. 5-2 is a schematic diagram of an embodiment of the composition of a stereoscopic imaging device;
  • FIG. 6 is a schematic diagram of the implementation flow of three-dimensional measurement according to Embodiment 6 of the present invention;
  • FIG. 7-1 is a schematic flowchart of the implementation process of three-dimensional rendering according to Embodiment 7 of the present invention;
  • FIG. 7-2 is a schematic diagram of inserting an auxiliary foreground map between adjacent frame positions according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a terminal according to Embodiment 8 of the present invention.
  • the mobile terminal can be implemented in various forms.
  • The terminals described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), a navigation device, and the like, and fixed terminals such as a digital TV, a desktop computer, and the like.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will appreciate that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, apart from components specifically intended for mobile purposes.
  • FIG. 1-1 is a schematic diagram of a hardware structure of a mobile terminal that implements various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1-1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcast system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed to be suitable for various broadcast systems that provide broadcast signals, as well as for the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits the radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies supported by the module may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include BluetoothTM, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB), ZigbeeTM, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • currently, the method for calculating position and time information uses three satellites and corrects errors in the calculated position and time information by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still images or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frames can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) in an operation mode such as a telephone call mode, a recording mode, or a voice recognition mode, and can process such sound into audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like due to contact), a scroll wheel, a rocker, and the like.
  • In particular, when the touch pad is overlaid on the display unit 151 in a layered manner, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • an external device can include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various information for verifying the use of the mobile terminal 100 by a user, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from a base (cradle) to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal 100.
  • various command signals or power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays. A typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • Mobile terminal 100 may include two or more display units (or other display devices, according to a particularly desired implementation); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user thereof. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communications, video calls, and the like. Additionally, the controller 180 can include a multimedia module 1810 for reproducing or playing back multimedia data; the multimedia module 1810 can be constructed within the controller 180 or can be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • Hereinafter, among various types of mobile terminals such as folding-type, bar-type, swing-type, and slide-type mobile terminals, a slide-type mobile terminal will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1-1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via a frame or a packet.
  • Such communication systems may use different air interfaces and/or physical layers.
  • the air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well-known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 1-2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as a plurality of cell stations.
  • a Broadcast Transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1-1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • A GPS (Global Positioning System) satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1-1 is typically configured to cooperate with the satellite 300 to obtain the desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • the embodiment of the invention provides an image processing method, which is applied to a terminal.
  • the function implemented by the method can be implemented by a processor calling a program code in a terminal.
  • the program code can be saved in a computer storage medium.
  • the terminal includes at least a processor and a storage medium.
  • the image processing method includes:
  • Step S101 acquiring a viewing angle
  • the viewing angle may be a viewing angle selected by a user, where the viewing angle includes the position and the angle of the viewing point, and the angle includes the rotation angles about the three dimensions X, Y, and Z of the three-dimensional coordinate system in which the foreground object is located.
  • the viewing point is the camera center, in the same coordinate system as the three-dimensional center coordinates of the foreground.
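  • As an illustration of the viewing-angle representation described above, the following Python sketch composes a 3×3 rotation matrix from rotation angles about X, Y, and Z; the function name and the Z·Y·X composition order are assumptions for illustration, as the embodiment does not prescribe an order:

```python
import numpy as np

def rotation_from_view_angles(rx, ry, rz):
    """Compose a 3x3 rotation matrix from rotation angles (radians) about
    the X, Y and Z dimensions of the coordinate system in which the
    foreground object is located. The Rz @ Ry @ Rx order is an assumption."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```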
  • Step S102 acquiring a foreground area of the initial image in the initial image set
  • the image includes a picture, a photo, and the like;
  • the terminal may obtain the foreground area from the storage medium.
  • the foreground area may be obtained as follows: the acquired multiple images are first modeled, a background image is generated according to the modeling result, and the foreground region is segmented by taking the difference between the brightness of the original image (or the corrected image) and the brightness of the background image.
  • In an embodiment, the acquired multiple images may be corrected by a stereo correction algorithm, the corrected images are modeled by a mixed Gaussian model or a multi-frame median method, and a background image is generated according to the modeling result.
  • Here, the multi-frame median method is taken as an example of foreground-background segmentation.
  • the extracted background image is denoted B and the original picture is denoted I;
  • the pixel set F belonging to the foreground is: F = { I_{x,y} : | I_{x,y} − B_{x,y} | > ε }  (1)
  • where ε is a threshold, and x and y represent the position coordinates of a pixel in the image: x represents the abscissa in the two-dimensional image and y represents the ordinate in the two-dimensional image.
  • formula (1) indicates that when the absolute value of the difference between the luminance of the pixel I_{x,y} and that of the background pixel B_{x,y} at position (x, y) on the image is larger than the threshold ε, the pixel I_{x,y} is determined to belong to the foreground region.
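  • A minimal Python sketch of the multi-frame median background model and the segmentation of formula (1); the function names and the threshold value are illustrative assumptions:

```python
import numpy as np

def median_background(frames):
    """Multi-frame median background model: the per-pixel median over a
    stack of grayscale frames (list of H x W uint8 arrays)."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def foreground_mask(image, background, eps=30):
    """Formula (1): pixel (x, y) belongs to the foreground F when
    |I(x, y) - B(x, y)| > eps. The value eps=30 is an assumed threshold."""
    diff = np.abs(image.astype(np.int16) - background.astype(np.int16))
    return diff > eps
```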
  • Step S103 generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle;
  • the light-painting object can be any dynamic object in the initial image.
  • Generally, the initial image includes a plurality of dynamic objects, where an object refers to an object, a person, a scene, or the like displayed in the image, and the target object is a part of the plurality of dynamic objects in the initial image; for example, if the initial image includes a rotating electric fan and a car running on the street behind it, the target object can be the electric fan or the running car on the street.
  • Generally, an object refers to the image area representing an "object" in an image; for example, a person's face is an object, and the object in the image actually refers to the face area of the person; likewise, a flower is an object, and the flower in the image actually refers to the area of the flower.
  • Step S104 acquiring a background image generated by using the initial image
  • the terminal can acquire the generated background image from the storage medium.
  • Step S105 synthesizing the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the image processing method provided by the embodiment of the present invention may, in a specific implementation process, be implemented in the form of an application (APP): a programmer implements the method provided by the embodiment of the present invention as an APP, and the user then installs the APP on the terminal.
  • the user can turn on the 3D light-painting function.
  • the image capturing device on the terminal collects images, and the terminal performs a series of processing on the collected images to obtain the foreground areas and the background image, which are then stored on the storage medium.
  • when the user wants to view the result, the user selects a viewing angle and sees the three-dimensional light-painting image from the desired angle.
  • compared with the related art, the terminal provided by the embodiment of the present invention captures a 3D light-painting image, thereby increasing the interest and enjoyment of the image.
  • the “acquiring viewing angle” includes:
  • Step 1011 Obtain a first operation, where the first operation is used to input a viewing angle
  • the first operation may be an operation input by the user through an input device of the terminal; taking a touch screen as the input device of the terminal as an example, the user operates on the touch screen of the terminal (i.e., the first operation), and the terminal then acquires the user's operation.
  • Step 1012 Determine a viewing angle according to the first operation.
  • the terminal determines, according to the first operation of the user, the position of the camera center and the rotation angles about the three dimensions X, Y, and Z of the three-dimensional coordinate system in which the foreground region is located.
  • the embodiment of the invention provides an image processing method, which is applied to a terminal.
  • the function implemented by the method can be implemented by a processor calling a program code in a terminal.
  • the program code can be saved in a computer storage medium.
  • the terminal includes at least a processor and a storage medium.
  • the image processing method includes:
  • Step 201 obtaining two or more original images
  • the original image refers to an image acquired by an image acquisition device on the terminal, and the image includes a plurality of dynamic objects.
  • Step 202 Perform stereo correction on each of the original images to obtain a corresponding initial image.
  • the stereoscopic imaging device is stereo-calibrated to determine the internal parameters and the external parameters of the two cameras, where the internal parameters include the focal length, the optical center, and the distortion parameters, and the external parameters include the translation and rotation of the image acquisition devices (e.g., cameras).
  • In this embodiment, the terminal includes two cameras installed on the left and right; therefore, the terminal can simultaneously acquire two original images. Taking two original images I_l and I_r as an example, I represents an original image, the subscript l represents the image captured by the camera on the left, and the subscript r represents the image captured by the camera on the right.
  • the two images I_l and I_r taken by the cameras of the terminal at the same time are corrected by a stereo correction algorithm to obtain corrected initial images I′_l and I′_r, in which the image content is shifted only in the x direction.
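  • The correction step can be sketched with OpenCV's rectification routines, assuming the calibration parameters (intrinsic matrices, distortion coefficients, and the rotation and translation between the cameras) are available from the stereo calibration; the function name is an illustrative assumption:

```python
import cv2

def rectify_pair(img_l, img_r, K_l, D_l, K_r, D_r, R, T):
    """Stereo-correct a left/right pair so that corresponding points differ
    only in the x direction (I'_l, I'_r). K_*: intrinsics, D_*: distortion,
    R/T: rotation and translation between the two cameras."""
    size = (img_l.shape[1], img_l.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, size, R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, size, cv2.CV_32FC1)
    return (cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR),
            cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR))
```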
  • Step 203 Acquire an initial image set, where the initial image set includes two or more initial images.
  • Step 204 Perform background modeling on each initial image in the initial image set to obtain a background image.
  • step 204 can be understood by referring to step S102 in the first embodiment, and the above steps 203 and 204 actually provide an implementation for determining the background image.
  • Step 205 Generate, by using each initial image and a corresponding background image, a foreground area of each initial image;
  • the background is extracted during the light-painting process, and the foreground area is segmented to determine the light-painting object, that is, the foreground object.
  • a hybrid Gaussian model, a multi-frame median method, or the like may be used to separately extract a background image for each camera.
  • taking the multi-frame median method as an example, with extracted background image B and original image I, the pixel set belonging to the foreground is as shown in formula (1), where ε is a threshold, and x and y represent the position coordinates of a pixel in the image (x the abscissa and y the ordinate in the two-dimensional image);
  • formula (1) indicates that when the absolute value of the difference between the luminance of the pixel I_{x,y} and that of the background pixel B_{x,y} at position (x, y) on the image is larger than the threshold ε, the pixel I_{x,y} is determined to belong to the foreground region.
  • Step 206 Obtain a viewing angle
  • the viewing angle may be a viewing angle selected by a user, where the viewing angle includes the position and the angle of the viewing point, and the angle includes the rotation angles about the three dimensions X, Y, and Z of the three-dimensional coordinate system in which the foreground object is located.
  • the viewing point is the camera center, in the same coordinate system as the three-dimensional center coordinates of the foreground.
  • Step 207 Obtain a foreground area of the initial image in the initial image set.
  • as in step S102, the terminal may obtain the foreground area from the storage medium;
  • Step 208 generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle;
  • the light-painting object can be any dynamic object in the initial image.
  • Step 209 Acquire a background image generated by using the initial image.
  • the terminal can acquire the generated background image from the storage medium.
  • Step 210 Synthesize three-dimensional imaging of the foreground portion of the illuminating onto the background image to obtain a three-dimensional illuminating image.
  • steps 206 to 210 correspond to steps S101 to S105 in Embodiment 1, respectively; therefore, those skilled in the art can refer to Embodiment 1 to understand steps 206 to 210 in this embodiment; to save space, details are not repeated here.
  • step 208 may be implemented in the following manner.
  • the generating a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle includes:
  • Step 2081 acquiring a photographic transformation matrix P by using the viewing angle, and the photographic transformation matrix P is used for projecting a three-dimensional space point onto a two-dimensional space of an imaging plane;
  • the photographic transformation matrix P can be obtained by using the viewing angle query relationship list, wherein the relationship list is used to indicate the mapping relationship between the viewing angle and the matrix P.
  • In this embodiment, the viewing angle includes the position of the viewing point and the rotation angles about the three dimensions X, Y, and Z of the three-dimensional coordinate system in which the foreground regions are located, and the photographic transformation matrix P is obtained by:
  • P = KR[I | −C], where the photographic transformation matrix P is a 3×4 matrix, the column vector C represents the camera center, the 3×3 matrix R represents the rotation angle, K represents the camera intrinsic parameter matrix, and I represents the 3×3 identity matrix.
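  • A direct NumPy transcription of P = KR[I | −C] might look as follows; this is a sketch, with the function name chosen for illustration:

```python
import numpy as np

def photographic_matrix(K, R, C):
    """P = K R [I | -C]: the 3x4 photographic transformation matrix that
    projects homogeneous three-dimensional points onto the imaging plane.
    K: 3x3 intrinsic matrix; R: 3x3 rotation; C: camera centre (3-vector)."""
    I_C = np.hstack([np.eye(3), -np.asarray(C, float).reshape(3, 1)])
    return K @ R @ I_C
```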
  • Step 2082 acquiring three-dimensional center coordinates of each of the foreground regions
  • the three-dimensional center coordinates (X_i, Y_i, Z_i) of a foreground region are obtained from the foreground region and its depth map.
  • the three-dimensional center coordinates are determined in the following way: first, the centroid coordinates (X_i, Y_i) of the foreground region on the two-dimensional image are calculated as X_i = (1/n)·Σ_{j=1..n} x_{ij} and Y_i = (1/n)·Σ_{j=1..n} y_{ij}, where (x_{ij}, y_{ij}) are the coordinates of the j-th pixel in the region and n is the number of pixels in the region.
  • then each foreground region is stereo-matched and the average depth Z_i of the region is calculated; the average depth Z_i is used for the subsequent three-dimensional rendering of the region. The calculation method is Z_i = (1/n)·Σ_{j=1..n} z_{ij}, where z_{ij} is the depth corresponding to the j-th pixel in the region and n is the number of pixels in the region.
  • from the centroid coordinates and the average depth, the three-dimensional center coordinates (X_i, Y_i, Z_i) are obtained.
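  • The centroid and average-depth computation of step 2082 can be sketched as follows, assuming each foreground region is given as a binary mask together with a depth map of the same size:

```python
import numpy as np

def region_center_3d(mask, depth):
    """Three-dimensional centre (X_i, Y_i, Z_i) of one foreground region:
    the centroid of the mask pixels plus the region's average depth."""
    ys, xs = np.nonzero(mask)        # pixel coordinates of the region
    X_i = xs.mean()                  # X_i = (1/n) * sum_j x_ij
    Y_i = ys.mean()                  # Y_i = (1/n) * sum_j y_ij
    Z_i = depth[ys, xs].mean()       # Z_i = (1/n) * sum_j z_ij
    return X_i, Y_i, Z_i
```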
  • Step 2083 using the three-dimensional center coordinates of each of the foreground regions and the photographic transformation matrix P, determining two-dimensional image coordinates for projecting each of the foreground regions from a three-dimensional space into a two-dimensional space;
  • the three-dimensional center coordinates (X_i, Y_i, Z_i), expressed in homogeneous coordinates as A = (X_a, Y_a, Z_a, 1), can be transformed by the photographic transformation matrix P into the image point a of the user's viewing angle: a = PA;
  • a = (X′_a, Y′_a, c) is also in homogeneous form; normalizing so that the third component c equals 1 gives the two-dimensional image coordinates (X′_a / c, Y′_a / c).
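  • Step 2083 then reduces to a matrix-vector product followed by homogeneous normalisation, as in this sketch:

```python
import numpy as np

def project_center(P, center_3d):
    """Project A = (X_i, Y_i, Z_i, 1) through P and normalise the
    homogeneous result (X'_a, Y'_a, c) so that c = 1, giving the
    two-dimensional image coordinates of the region centre."""
    A = np.append(np.asarray(center_3d, float), 1.0)
    X_a, Y_a, c = P @ A
    return X_a / c, Y_a / c
```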
  • Step 2084 Acquire an association relationship between adjacent frames.
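  • The association relationship is not spelled out at this step; based on FIG. 7-2 (inserting an auxiliary foreground map between adjacent frame positions), one plausible reading is interpolating auxiliary positions between the projected centres of adjacent frames. The linear interpolation below is an assumption for illustration:

```python
import numpy as np

def auxiliary_positions(p_prev, p_next, k=3):
    """Insert k auxiliary foreground positions between the projected centres
    of adjacent frames (cf. FIG. 7-2). Linear interpolation is an assumption;
    the embodiment only states that an adjacent-frame association is used."""
    p_prev = np.asarray(p_prev, float)
    p_next = np.asarray(p_next, float)
    ts = np.linspace(0.0, 1.0, k + 2)[1:-1]   # interior points only
    return [tuple(p_prev + t * (p_next - p_prev)) for t in ts]
```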
  • Step 2085 acquiring depth of field coordinates of each of the foreground regions
  • the depth of field coordinate is the Z_i component of the three-dimensional center coordinates;
  • Step 2086 calculating a zoom ratio s of each of the foreground regions according to the focal length and the depth of field coordinate, as s_i = g / Z_i;
  • g is the distance between the imaging plane of the camera and the principal plane, which is equivalent to the focal length in the pinhole imaging model;
  • Step 2087 adjusting each of the foreground regions according to the zoom ratio to obtain an adjusted foreground region;
  • Step 2088 generating the three-dimensional light-painting object by using the adjusted foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
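  • Steps 2086 to 2088 can be sketched as scaling each foreground patch by its zoom ratio and pasting it at its projected two-dimensional centre on a copy of the background; the reading of the zoom ratio as s = g / Z_i and all names are assumptions:

```python
import cv2

def draw_scaled_foreground(canvas, fg, center_2d, s):
    """Scale a foreground patch by the zoom ratio s (assumed s = g / Z_i,
    step 2086) and paste it centred at its projected 2D coordinates
    (steps 2087-2088). canvas: background image copy; fg: foreground patch."""
    h, w = fg.shape[:2]
    scaled = cv2.resize(fg, (max(1, int(w * s)), max(1, int(h * s))))
    sh, sw = scaled.shape[:2]
    # clamp the paste window to the canvas borders
    x0 = max(0, int(center_2d[0] - sw / 2))
    y0 = max(0, int(center_2d[1] - sh / 2))
    x1 = min(canvas.shape[1], x0 + sw)
    y1 = min(canvas.shape[0], y0 + sh)
    canvas[y0:y1, x0:x1] = scaled[:y1 - y0, :x1 - x0]
    return canvas
```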
  • An embodiment of the present invention further provides an image processing apparatus; the first acquiring unit, the second acquiring unit, the generating unit, the third acquiring unit, the synthesizing unit, the fourth acquiring unit, the correcting unit, and the modules included in these units can all be implemented by a processor in a terminal, or, of course, by a specific logic circuit; in a specific embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • the apparatus 300 includes a first acquiring unit 301, a second acquiring unit 302, a generating unit 303, a third acquiring unit 304, and a synthesizing unit 305, where:
  • the first obtaining unit 301 is configured to acquire a viewing angle
  • the second obtaining unit 302 is configured to acquire a foreground area of the initial image in the initial image set
  • the generating unit 303 is configured to generate a three-dimensional light-painting object by using the foreground area of each initial image according to the viewing angle;
  • the third obtaining unit 304 is configured to acquire a background image generated by using the initial image
  • the synthesizing unit 305 is configured to synthesize the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the third obtaining unit 304 includes a first acquiring module and a modeling module, where:
  • a first acquiring module configured to acquire an initial image set, where the initial image set includes two or more initial images
  • a modeling module configured to perform background modeling on each initial image in the initial image set to obtain a background image
  • the second acquisition unit is arranged to generate a foreground region of each initial image using each of the initial images and the corresponding background image.
  • the first obtaining unit includes a second acquiring module and a first determining module, where:
  • a second obtaining module configured to acquire a first operation, where the first operation is used to input a viewing angle
  • the first determining module is configured to determine a viewing angle according to the first operation.
  • the device further includes a fourth acquiring unit and a correcting unit, where:
  • a fourth obtaining unit configured to acquire two or more original images;
  • a correction unit configured to perform stereo correction on each of the original images to obtain a corresponding initial image.
  • An embodiment of the present invention further provides an image processing apparatus; the first acquiring unit, the second acquiring unit, the generating unit, the third acquiring unit, the synthesizing unit, the fourth acquiring unit, the correcting unit, and the modules included in these units can all be implemented by a processor in a terminal, or, of course, by a specific logic circuit; in a specific embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • the apparatus 300 includes a first obtaining unit 301, a second obtaining unit 302, a generating unit 303, a third obtaining unit 304, and a synthesizing unit 305.
  • the generating unit 303 includes a third obtaining module 331, a fourth obtaining module 332, a second determining module 333, a fifth obtaining module 334, and a generating module 335, where:
  • the first obtaining unit 301 is configured to acquire a viewing angle
  • the second obtaining unit 302 is configured to acquire a foreground area of the initial image in the initial image set
  • the third obtaining module 331 is configured to acquire a photographic transformation matrix P by using the viewing angle, and the photographic transformation matrix P is used for projecting a three-dimensional space point onto a two-dimensional space of an imaging plane;
  • a fourth obtaining module 332, configured to acquire a three-dimensional center coordinate of each of the foreground regions
  • a second determining module 333 configured to determine, by using the three-dimensional center coordinates of each of the foreground regions and the photographic transformation matrix P, a two-dimensional image for projecting each of the foreground regions from a three-dimensional space into a two-dimensional space coordinate;
  • the fifth obtaining module 334 is configured to acquire an association relationship between adjacent frames.
  • the generating module 335 is configured to generate a three-dimensional light-painting object by using the foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • the third obtaining unit 304 is configured to acquire a background image generated by using the initial image
  • the synthesizing unit 305 is configured to synthesize the three-dimensional image of the light-painting foreground portion onto the background image to obtain a three-dimensional light-painting image.
  • the viewing angle includes the rotation angles about the three dimensions X, Y, and Z of the three-dimensional coordinate system in which the viewing point is located, and the third acquiring module is further configured to obtain the photographic transformation matrix P as P = KR[I | −C], where:
  • the photographic transformation matrix P is a 3×4 matrix;
  • the column vector C represents the camera center;
  • the 3×3 matrix R represents the rotation angle;
  • K represents the camera intrinsic parameter matrix;
  • I represents the 3×3 identity matrix.
  • the generating unit 303 further includes a sixth obtaining module 356, a calculating module 357, and an adjusting module 358, where:
  • a sixth obtaining module 356, configured to acquire depth of field coordinates of each of the foreground regions
  • the calculating module 357 is configured to calculate a zoom ratio of each of the foreground regions according to the focal length and the depth of field coordinates;
  • the adjusting module 358 is configured to adjust each of the foreground regions according to the zoom ratio to obtain an adjusted foreground region;
  • the generating module is configured to generate the three-dimensional light-painting object by using the adjusted foreground regions according to the two-dimensional image coordinates of each foreground region and the association relationship between adjacent frames.
  • Each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of hardware plus a software functional unit.
  • the image processing apparatus described above may be implemented by the front and rear scene segmentation module 52, the depth measurement module 53, and the three-dimensional rendering module 54.
  • FIG. 5-1 is a schematic structural diagram of the three-dimensional light painting imaging system according to Embodiment 5 of the present invention; as shown in FIG. 5-1, the three-dimensional light painting imaging system 50 includes a stereoscopic imaging device 51, a front and rear scene segmentation module 52, a depth measurement module 53, and a three-dimensional drawing module 54, wherein:
  • the stereoscopic imaging device 51 comprises two or more digital cameras fixed together by a connecting device; the cameras are in fixed relative positions, can acquire images from different viewing angles at the same moment, and hand the collected images over to the front and rear scene segmentation module and the subsequent modules for processing.
  • FIG. 5-2 is a schematic view of one embodiment of the configuration of the stereoscopic imaging device, in which view (a) of FIG. 5-2 is a top view of the device and view (b) is a front view. As shown in views (a) and (b) of FIG. 5-2, the digital camera 511 and the digital camera 512 are integrally connected by a connecting member 513 and fixed to it, with the imaging planes of the two cameras kept as parallel as possible.
  • the stereoscopic imaging device can obtain two images at the same time, and then the stereoscopic imaging device passes the two images to the front and rear scene segmentation module and subsequent modules for processing.
  • the front and rear scene segmentation module 52 is configured to extract the background during the light painting process and to segment out the light-emitting object of the light painting, that is, the foreground object.
  • a hybrid Gaussian model, a multi-frame median method, or the like may be used to separately extract a background for each camera in the stereoscopic imaging device.
  • Taking median-method front and rear scene segmentation as an example: for a given camera, let the extracted background image be B and the captured original image be I; the set of pixels belonging to the foreground is then given by formula (1):
  • {(x, y) : |I_{x,y} − B_{x,y}| > θ}    (1)
  • where θ is a threshold and x and y are the position coordinates of a pixel in the image, x being the abscissa and y the ordinate of the two-dimensional image.
  • Formula (1) states that when the absolute value of the difference between the luminance of the pixel I_{x,y} at position (x, y) and that of the background pixel B_{x,y} is larger than the threshold θ, the pixel I_{x,y} is judged to be foreground.
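  • A minimal sketch of this segmentation, assuming grayscale frames as NumPy arrays and an illustrative threshold value; the helper names are ours, not the patent's:

```python
import numpy as np

def median_background(frames):
    """Multi-frame median method: per-pixel temporal median."""
    return np.median(np.stack(frames, axis=0), axis=0)

def foreground_mask(I, B, theta=25):
    """Formula (1): a pixel (x, y) is foreground when |I - B| > theta."""
    diff = np.abs(I.astype(np.int16) - B.astype(np.int16))
    return diff > theta
```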
  • the depth measurement module 53 is configured to acquire the images of different viewing angles captured by the stereoscopic imaging device and to generate a depth map for the foreground regions of the two images using a stereo measurement method.
  • the depth measurement module 53 generates a depth map including at least the following steps.
  • the stereo imaging device is stereo-calibrated to determine the internal parameters and external parameters of the two cameras, wherein the internal parameters include at least the focal length, the optical center, and the distortion parameters, and the external parameters include at least the translation and rotation between the two cameras.
  • the two images I_l and I_r captured by the stereoscopic imaging device at the same moment are corrected with a stereo rectification algorithm, yielding corrected images I′_l and I′_r whose content differs only by a shift in the x direction; a stereo matching algorithm then yields the disparity map D of I′_l and I′_r. Since the background regions are ignored in this embodiment, D only needs to be computed for the foreground regions. From the disparity d of any pixel in D, the depth Z is computed as in formula (2):
  • Z = g·T / d    (2)
  • where g is the focal length of the two digital cameras in the stereoscopic imaging device (the two focal lengths are assumed equal here) and T is the distance between the two digital cameras.
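  • A sketch of the depth computation under these assumptions; the disparity map D could come from any stereo matcher (the text does not prescribe one):

```python
import numpy as np

def depth_from_disparity(D, g, T):
    """Formula (2): Z = g * T / d, with g the shared focal length in
    pixels and T the baseline between the two cameras. Pixels with
    non-positive disparity are left at infinity."""
    D = np.asarray(D, dtype=float)
    Z = np.full_like(D, np.inf)
    valid = D > 0
    Z[valid] = g * T / D[valid]
    return Z
```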
  • the three-dimensional drawing module 54 is configured to obtain the outputs of the above modules and to draw the three-dimensional imaging of the light painting foreground portion from any angle of three-dimensional space, synthesizing it onto the background image.
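  • The per-pixel rule used when accumulating light trails onto the canvas, stated later in the description as keeping the brighter pixel, can be sketched for grayscale images as:

```python
import numpy as np

def composite_lighten(canvas, patch, top_left):
    """Blend a foreground patch onto the canvas in place, keeping
    whichever pixel is brighter at each position (lighten blend)."""
    y, x = top_left
    h, w = patch.shape
    roi = canvas[y:y + h, x:x + w]
    np.maximum(roi, patch, out=roi)
    return canvas
```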
  • In the process of outputting a three-dimensional light painting image, three-dimensional measurement is performed first; the measurement flow outputs the front and rear scene regions to be synthesized, the foreground spatial coordinates, and the inter-frame association relationships, which are used by the subsequent three-dimensional drawing flow.
  • This embodiment of the present invention implements the three-dimensional measurement flow using the three-dimensional light painting imaging system provided in the fifth embodiment above, the system comprising a stereoscopic imaging device, a front and rear scene segmentation module, and a depth measurement module;
  • FIG. 6 is a schematic diagram of an implementation process of three-dimensional measurement according to Embodiment 6 of the present invention. As shown in FIG. 6, the three-dimensional measurement process includes:
  • Step S61: the stereoscopic imaging device captures two original images of different viewing angles in the f-th frame from the start of the light painting photography.
  • Step S62: the two original images are stereo-rectified to obtain the corrected images I′_l^f and I′_r^f.
  • Step S63: the front and rear scene segmentation module segments I′_l^f and I′_r^f into foreground and background. Since I′_l^f and I′_r^f are captured at the same moment, every foreground region in I′_l^f has a similar corresponding region in I′_r^f. Taking the left image I′_l^f as the reference (the right image could equally be used), the segmented foreground regions D_i^f and the corresponding foreground objects F_i^f are saved, where D_i^f is the set of pixel coordinates of the i-th foreground region and F_i^f is the image cut out by that set. The centroid (x_i, y_i) of D_i^f on the two-dimensional image is then computed as the mean of the coordinates of its pixels, where n is the number of pixels in D_i^f.
  • Step S64: for each foreground region D_i^f of the f-th frame, the depth Z_i is calculated, giving the three-dimensional coordinates (X_i, Y_i, Z_i).
  • The depth measurement module performs stereo matching on each foreground region and computes the region's average depth, Z_i = (1/n)·Σ_j z_ij, which is used for the subsequent three-dimensional drawing of the region, where z_ij is the depth corresponding to the j-th pixel in D_i^f and n is the number of pixels in D_i^f;
  • from the centroid coordinates and the average depth of the foreground region, its three-dimensional center coordinates (X_i, Y_i, Z_i) are obtained.
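  • A minimal sketch of this computation, assuming the region is given as an (n, 2) array of (x, y) pixel coordinates and a depth map indexed as depth[y, x] (names are illustrative):

```python
import numpy as np

def region_center_3d(pixels, depth):
    """Centroid of a foreground region plus its average depth,
    yielding the region's 3-D center coordinates (Xi, Yi, Zi)."""
    pts = np.asarray(pixels)
    Xi, Yi = pts.mean(axis=0)                 # 2-D centroid
    Zi = depth[pts[:, 1], pts[:, 0]].mean()   # average depth over region
    return float(Xi), float(Yi), float(Zi)
```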
  • Step S65: search the previous frame for the foreground region D_i^(f−1) associated with D_i^f, record the association relationship, and save the foreground region image F_i^f;
  • step S65 is performed for all foreground regions appearing in this frame, looking for associations in the previous frame.
  • The association is based on the two-dimensional centroid coordinates of the foreground regions and on the similarity of the foreground regions.
  • Existing video tracking algorithms and image similarity comparison algorithms may be used for this; see the matcher sketched below. A recorded association indicates that, during subsequent synthesis, the foreground region in the (f−1)-th frame is considered to correspond to the associated foreground region in the f-th frame; all association relationships of this frame are saved.
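  • The text leaves the tracking and similarity algorithms to existing methods; purely as a stand-in, a greedy nearest-centroid matcher with an illustrative distance gate could look like this:

```python
import numpy as np

def associate_regions(prev_centroids, cur_centroids, max_dist=50.0):
    """Link each region of frame f to its nearest region of frame f-1,
    provided the centroids are closer than max_dist pixels."""
    links = {}
    for i, (cx, cy) in enumerate(cur_centroids):
        dists = [np.hypot(cx - px, cy - py) for (px, py) in prev_centroids]
        if dists and min(dists) < max_dist:
            links[i] = int(np.argmin(dists))  # region i in f <- region in f-1
    return links
```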
  • Step S66: determine whether all foreground regions have been traversed; if so, proceed to step S67, otherwise return to step S64;
  • Step S67: determine whether the user has stopped shooting; if so, the flow ends, otherwise the process returns to step S61.
  • When the user wants to view the three-dimensional light painting image, the three-dimensional light painting imaging system starts the three-dimensional drawing flow.
  • the three-dimensional drawing process uses the imaging model of photographic geometry, and uses the same focal length, image plane, optical center, optical axis and other parameters as the camera in the stereo imaging device to ensure the realism of the composite image.
  • This embodiment of the present invention implements the three-dimensional drawing flow using the three-dimensional light painting imaging system provided in the fifth embodiment above and on the basis of the sixth embodiment above;
  • FIG. 7-1 is a schematic flowchart of an implementation process of three-dimensional drawing according to Embodiment 7 of the present invention, as shown in FIG. 7-1.
  • the three-dimensional drawing process includes:
  • Step S71 the viewer selects a three-dimensional viewing angle
  • the user selects a viewing angle, wherein the viewing angle includes the position and angle of the viewing point, and the angle includes the rotation angles of the three dimensions of X, Y, and Z in the three-dimensional coordinate system of the foreground.
  • the viewing point is the center of the camera, in the same coordinate system as the 3D center coordinates of the foreground.
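  • One way such a viewing angle could be turned into the rotation matrix R used in the next step; the Z·Y·X composition order is an assumption, since the text only names the three rotational components:

```python
import numpy as np

def rotation_from_view(rx, ry, rz):
    """3*3 rotation matrix from X, Y, Z viewing angles in radians."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```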
  • Step S72: calculate the photographic transformation matrix P.
  • Once the viewing point and angle are determined, the photographic transformation matrix P can be calculated; P is a 3*4 matrix that projects three-dimensional space points onto the two-dimensional space of the imaging plane.
  • With the camera center represented by a column vector C, the rotation angle by a 3*3 matrix R, and the camera internal parameter matrix by K, P is constructed as: P = KR[I|−C], where I is a 3*3 identity matrix.
  • step S73 the background image B of the front and rear scene segmentation is taken as the background of the composite image.
  • Step S74: take the foreground data saved for one frame f, i.e. the foreground information of the f-th frame of the shooting process, including all foreground objects F_i^f of the frame, the foreground three-dimensional center coordinates (X_i, Y_i, Z_i), and the adjacent-frame association relationships;
  • Step S75: for each foreground region D_i^f of the f-th frame, calculate the zoom magnification using the photographic transformation matrix P, and calculate the two-dimensional image coordinates of the center point.
  • The three-dimensional center coordinates, written in homogeneous form as A(X_a, Y_a, Z_a, 1), are transformed to the imaging point a of the user's viewing angle by a = PA; a(X′_a, Y′_a, c) is likewise in homogeneous form, and taking the third component c as 1 gives (X′_a, Y′_a) as the two-dimensional image coordinates after imaging.
  • The zoom magnification s of the imaged foreground region then follows from the projective transformation principle as s = g/Z, where g is the distance between the camera imaging plane and the principal plane (equivalent to the focal length of the pinhole imaging model), and Z is the Z coordinate of the foreground region in the camera coordinate system, obtainable by transforming the coordinates of the foreground in three-dimensional space through the R[I|−C] matrix.
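  • A sketch of step S75 under the relations just stated (a = PA for the center point, s = g/Z for the magnification); `projection_matrix` from the earlier sketch would supply P, and C and the center are plain 3-vectors:

```python
import numpy as np

def center_and_zoom(P, R, C, center_3d, g):
    """2-D center point a = P A and zoom ratio s = g / Z for one
    foreground region."""
    A = np.append(np.asarray(center_3d, dtype=float), 1.0)   # homogeneous A
    a = P @ A
    cx, cy = a[:2] / a[2]                                    # third component -> 1
    Z = (R @ (np.asarray(center_3d, dtype=float)
              - np.asarray(C, dtype=float)))[2]              # camera-frame depth
    return (cx, cy), g / Z
```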
  • Step S76: draw the foreground regions according to the zoom ratio, the center point position, and the adjacent-frame association relationships. All foreground regions are composited frame by frame in shooting order; a line formed by a continuously moving light source may look discontinuous when observed from a different three-dimensional viewpoint, i.e. the foreground regions of adjacent frames may be too far apart, in which case auxiliary foreground maps are inserted between the adjacent-frame positions to render a continuous trajectory.
  • FIG. 7-2 is a schematic diagram of inserting auxiliary foreground maps between adjacent frame positions according to an embodiment of the present invention; as shown in FIG. 7-2, frame 71 and frame 77 are adjacent frames, and the gray patterns 72 to 76 are the inserted auxiliary foreground regions.
  • Step S77 determining whether the foreground area is traversed, if yes, proceeding to step S78, otherwise, proceeding to step S75;
  • step S78 it is judged whether all the frames are traversed, and if so, the flow ends, otherwise, the process proceeds to step S74.
  • In step S76 of this embodiment of the present invention, for the foreground regions D_i^(f−1) and D_i^f of adjacent frames, if an association between them exists, the trajectory between D_i^(f−1) and D_i^f is drawn to obtain the inserted auxiliary foreground regions.
  • The drawing method is as follows (a sketch of it appears after this list):
  • 1) Compute the line segment whose two endpoints are the center points of D_i^(f−1) and D_i^f.
  • 2) Take a synthesis position every m pixels along the segment, where the value of m relates to how coherent the synthesized trail of the moving light source appears; m may be a fixed value, e.g. with m = 2 an auxiliary foreground map is inserted every two pixels. If the segment length is L, then L/m auxiliary foregrounds need to be inserted.
  • 3) At each synthesis position p_n, compute the scaling parameter of the auxiliary foreground map at that position: if the scaling parameter of D_i^(f−1) is s_{f−1}, that of D_i^f is s_f, and the segment length is L, the scaling parameter of the n-th auxiliary foreground along the segment is interpolated between s_{f−1} and s_f, e.g. linearly as s_n = s_{f−1} + (s_f − s_{f−1})·(n·m)/L.
  • 4) At each synthesis position p_n, synthesize the foreground object once with the computed scaling parameter: scale the foreground object F_i^f by s_n, composite the scaled object pixel by pixel into the generated image so that its centroid coincides with p_n, and for each pixel compare the luminance of the foreground pixel with that of the pixel already at the target coordinate, keeping the brighter value as the generated image's pixel value at that coordinate.
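  • A sketch of steps 1) to 3); the linear blend of the scaling parameter is the assumption noted above:

```python
import numpy as np

def auxiliary_foregrounds(p_prev, p_cur, s_prev, s_cur, m=2):
    """Synthesis positions and scaling parameters for the auxiliary
    foreground maps inserted every m pixels along the segment."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_cur = np.asarray(p_cur, dtype=float)
    L = float(np.hypot(*(p_cur - p_prev)))
    count = int(L // m)                   # number of auxiliary foregrounds
    out = []
    for n in range(1, count + 1):
        t = n * m / L                     # fraction of the way along the segment
        out.append((tuple(p_prev + t * (p_cur - p_prev)),
                    s_prev + t * (s_cur - s_prev)))
    return out
```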
  • FIG. 8 is a schematic structural diagram of a terminal according to Embodiment 8 of the present invention.
  • the terminal 80 includes a processor 81 and an imaging device 82, where:
  • the imaging device 82 comprises a connecting device and two or more image capturing devices fixed together by the connecting device; during image capture, images can be acquired from different viewing angles at the same moment, and the collected images are handed over to the processor for processing;
  • the imaging device 82 can be implemented by the stereoscopic imaging device 51 of the above-described fifth embodiment.
  • the processor 81 is configured to acquire a viewing angle; acquire the foreground regions of the initial images in the initial image set; generate a three-dimensional light painting object from the foreground region of each initial image according to the viewing angle; acquire a background image generated from the initial images; and synthesize the three-dimensional imaging of the light painting foreground portion onto the background image to obtain a three-dimensional light painting image.
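  • Tying the pieces together, a toy, two-dimensional version of the processor's loop (a sketch only; the full method adds rectification, depth, association, and reprojection as described above):

```python
import numpy as np
from scipy import ndimage

def accumulate_trails(frames, background, theta=25):
    """Segment each frame against the background and accumulate the
    bright foreground trails with the lighten rule (grayscale only)."""
    canvas = background.astype(np.float64).copy()
    for I in frames:
        mask = np.abs(I.astype(np.int16) - background.astype(np.int16)) > theta
        labels, n = ndimage.label(mask)          # split mask into regions
        for i in range(1, n + 1):
            region = labels == i
            np.maximum(canvas, np.where(region, I, 0), out=canvas)
    return canvas
```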
  • the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The couplings, direct couplings, or communication connections between the components shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units; they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit;
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer readable storage medium and, when executed, performs the steps of the foregoing method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present invention may be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product.
  • The technical solutions of the embodiments of the present invention, in essence or in the part contributing to the art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.
  • An image processing method and apparatus, and a terminal provided by the embodiments of the present invention include: acquiring a viewing angle; acquiring the foreground regions of the initial images in an initial image set; generating a three-dimensional light painting object from the foreground region of each initial image according to the viewing angle; acquiring a background image generated from the initial images; and synthesizing the three-dimensional imaging of the light painting foreground portion onto the background image to obtain a three-dimensional light painting image.
  • The embodiments of the present invention thereby provide the generation of three-dimensional light painting images, which increases the interest and enjoyment of the images and improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus, and a terminal. The method comprises: acquiring a viewing angle; acquiring the foreground regions of the initial images in an initial image set; generating a three-dimensional light painting object from the foreground region of each initial image according to the viewing angle; acquiring a background image generated from the initial images; and synthesizing the three-dimensional imaging of the light painting foreground portion onto the background image to obtain a three-dimensional light painting image.

Description

一种图像处理方法及装置、终端 技术领域
本发明实施例涉及但不限于电子技术,尤指一种图像处理方法及装置、终端。
背景技术
光绘摄影是一种创意摄影技巧,指在暗光环境中拍摄装置长时间曝光,通过光源变化创造出特殊影像的一种拍摄模式。相关技术中的光绘摄影技术都局限于生成一张二维(2D)的光绘图像,相关技术中还没有提供生成三维(3D)光绘图像的技术。
发明内容
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
本发明实施例为解决现有技术中存在的至少一个问题而提供一种图像处理方法及装置、终端,能够提供生成三维光绘图像,从而增加了图像的趣味性和观赏性,进而提升用户体验。
本发明实施例的技术方案是这样实现的:
第一方面,本发明实施例提供一种图像处理方法,包括:
获取观看视角;
获取初始图像集合中初始图像的前景区域;
按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
获取利用所述初始图像生成的背景图;
将光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
可选地,所述获取背景图包括:
获取所述初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
对所述初始图像集合中的每一初始图像进行背景建模,得到背景图。
可选地,所述获取观看视角包括:
获取用于输入所述观看视角第一操作;
根据所述第一操作确定观看视角。
可选地,所述方法还包括:
获取两张及两张以上的原始图像;
对每一所述原始图像进行立体校正,得到校正后的初始图像。
可选地,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,包括:
利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
获取每一所述前景区域的三维中心坐标;
利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
获取相邻帧之间的关联关系;
按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用所述前景区域生成三维的光绘对象。
可选地,所述视角包括观看点的位置和前景区域所在三维坐标系下X、Y、Z三个维度的旋转角度,所述摄影变换矩阵P通过以下方式得到:P=KR[I|-C];
其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
可选地,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,还包括:
获取每一所述前景区域的景深坐标;
根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率;
按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区 域;
对应地,所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用所述前景区域生成三维的光绘对象,包括:
所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成三维的光绘对象。
本发明实施例还提供了一种图像处理方法,包括:
获取两张及两张以上的原始图像;
对每一所述原始图像进行立体校正,得到对应的初始图像;
获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
利用每一初始图像和对应的背景图,生成每一初始图像的前景区域;
获取观看视角;
获取初始图像集合中初始图像的前景区域;
按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
可选地,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,包括:
利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
获取每一所述前景区域的三维中心坐标;
利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
获取相邻帧之间的关联关系;
获取每一所述前景区域的景深坐标;
根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率s;
按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成所述三维的光绘对象。
本发明实施例又提供了一种图像处理装置,包括第一获取单元、第二获取单元、生成单元、第三获取单元和合成单元,其中:
第一获取单元,设置为获取观看视角;
第二获取单元,设置为获取初始图像集合中初始图像的前景区域;
生成单元,设置为按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
第三获取单元,设置为获取利用所述初始图像生成的背景图;
合成单元,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
可选地,所述第三获取单元包括第一获取模块和建模模块,其中:
第一获取模块,设置为获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
建模模块,设置为对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
对应地,所述第二获取单元,设置为利用每一初始图像和对应的背景图,生成每一初始图像的前景区域。
可选地,所述第一获取单元包括第二获取模块和第一确定模块,其中:
第二获取模块,设置为获取第一操作,所述第一操作用于输入观看视角;
所述第一确定模块,设置为根据所述第一操作确定观看视角。
可选地,所述装置还包括第四获取单元和校正单元,其中:
第四获取单元，设置为获取两张及两张以上的原始图像；
校正单元,设置为对每一所述原始图像进行立体校正,得到对应的初始图像。
本发明实施例还提供了一种图像处理装置,包括第一获取单元、第二获取单元、生成单元、第三获取单元和合成单元;
其中,生成单元包括第三获取模块、第四获取模块、第二确定模块、第五获取模块和生成模块,其中:
第一获取单元,设置为获取观看视角;
第二获取单元,设置为获取初始图像集合中初始图像的前景区域;
第三获取模块,设置为利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
第四获取模块,设置为获取每一所述前景区域的三维中心坐标;
第二确定模块,设置为利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
第五获取模块,设置为获取相邻帧之间的关联关系;
生成模块,设置为按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用所述前景区域生成三维的光绘对象。
第三获取单元,设置为获取利用所述初始图像生成的背景图;
合成单元,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
可选地,所述第三获取模块,还设置为:P=KR[I|-C];
其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
可选地,所述生成单元还包括第六获取模块、计算模块和调整模块,其中:
第六获取模块,设置为获取每一所述前景区域的景深坐标;
计算模块,设置为根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率;
调整模块,设置为按照所述缩放倍率对每一所述前景区域进行调整,得 到调整后的前景区域;
对应地,所述生成模块,设置为所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成所述三维的光绘对象。
本发明实施例又提供了一种终端,所述终端包括处理器和成像装置,其中:
成像装置,包括连接装置、以及通过所述连接装置固定在一起的两个或多个图像采集器件组成,所述图像采集期间能够在同一时刻从不同视角采集图像,并将采集到的图像交给处理器进行处理;
所述处理器,用于获取观看视角;获取初始图像集合中初始图像的前景区域;按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
可选地,所述成像装置包括第一数码摄像头和第二数码摄像头、连接部件;其中,
第一数码摄像头和第二数码摄像头通过连接部件连接成一体,第一数码摄像头和第二数码摄像头固定在连接部件上,并使第一数码摄像头和第二数码摄像头的成像平面平行。
本发明实施例还提供了一种三维光绘成像系统,包括立体成像装置、前后景分割模块、深度测量模块和三维绘制模块,其中:
立体成像装置包括通过连接装置固定在一起的两个或多个数码摄像头组成,其中这些摄像头相对位置固定,能够在同一时刻从不同视角采集图像,并将采集到的图像交给前后景分割模块及后续模块进行处理;
前后景分割模块,设置为在光绘过程中提取背景,并分割出光绘的前景对象;
深度测量模块,设置为获取立体成像装置拍摄的不同视角的图像,对两幅图像中前景区域利用立体测量方法生成深度图;
三维绘制模块,设置为获取上述各个模块的输出,可以从三维空间任意 角度绘制光绘前景部分的三维成像,并合成到背景图上。
本发明实施例再提供了一种计算机可读存储介质，存储有计算机可执行指令，所述计算机可执行指令用于执行上述任一项的图像处理方法。
本发明实施例提供的一种图像处理方法及装置、终端,其中:获取观看视角;获取初始图像集合中初始图像的前景区域;按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像,如此,能够提供生成三维光绘图像,从而增加了图像的趣味性和观赏性,进而提升用户体验。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
此处所说明的附图用来提供对本发明的进一步理解,构成本申请的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1-1为实现本发明各个实施例的移动终端的硬件结构示意图;
图1-2为如图1-1所示的移动终端的无线通信系统示意图;
图1-3为本发明实施例一图像处理方法的实现流程示意图;
图2为本发明实施例二图像处理方法的实现流程示意图;
图3为本发明实施例三图像处理装置的组成结构示意图;
图4为本发明实施例四图像处理装置的组成结构示意图;
图5-1为本发明实施例三三维光绘成像系统的组成结构示意图;
图5-2为立体成像装置的组成结构的一种实施方式示意图;
图6为本发明实施例六三维测量的实现流程示意图;
图7-1为本发明实施例七三维绘制的实现流程示意图;
图7-2为本发明实施例在相邻帧位置之间插入辅助前景图的示意图;
图8为本发明实施例八终端的组成结构示意图。
本发明的较佳实施方式
为使本发明实施例的目的、技术方案和优点更加清楚明白,下文中将结合附图对本发明的实施例进行详细说明。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互任意组合。
以下结合附图对本发明进行详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不限定本发明。
现在将参考附图描述实现本发明各个实施例的移动终端。在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明的说明,其本身并没有特定的意义。因此,“模块”与“部件”可以混合地使用。
移动终端可以以各种形式来实施。例如,本发明中描述的终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。
图1-1为实现本发明各个实施例的移动终端的硬件结构示意,如图1-1所示,移动终端100可以包括无线通信单元110、A/V(音频/视频)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图1-1示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信。例如,无线通信单元可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前 生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H),前向链路媒体(MediaFLO@)的数据广播系统、地面数字广播综合服务(ISDB-T)等等的数字广播系统接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括WLAN(无线LAN)(Wi-Fi)、Wibro(无线宽带)、Wimax(全球微波互联接入)、HSDPA(高速下行链路分组接入)等等。
短程通信模块114是用于支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。
位置信息模块115是用于检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是GPS(全球定位系统)。根据当前的技术,GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置 信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风1220,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图像或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机1210。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
感测单元140检测移动终端100的当前状态,(例如,移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即,触摸输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等,并且生成用于控制移动终端100的操作的命令或信号。例如,当移动终端100实施为滑动型移动电话时,感测单元140可以感测该滑动型电话是打开还是关闭。另外,感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器1410将在下面结合触摸屏来对此进行描述。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电 池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或 其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incomingcommunication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语 音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现或回放多媒体数据的多媒体模块1810,多媒体模块1810可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图像绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,己经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
如图1-1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。
现在将参考图1-2描述其中根据本发明的移动终端能够操作的通信系统。
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信系统(UMTS)(特别地,长期演进(LTE))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。
参考图1-2,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC280被构造为与公共电话交换网络(PSTN)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干己知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM,IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图1-2中所示的系统可以包括多个BSC2750。
每个BS270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语"基站"可以用于笼统地表示单个BSC275和至少一个BS270。基站也可以被称为"蜂窝站"。或者,特定BS270的各分区可以被称为多个蜂窝站。
如图1-2中所示,广播发射器(BT)295将广播信号发送给在系统内操作的移动终端100。如图1-1中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图1-2中,示出了几个全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图1-2中,描绘了多个卫星300,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1-1中所示的GPS模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信系统的一个典型操作,BS270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到 MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS270以将正向链路信号发送到移动终端100。
下面结合附图和具体实施例对本发明的技术方案进一步详细阐述。
实施例一
基于上述移动终端硬件结构以及通信系统,提出本发明各个实施例。本发明实施例提供一种图像处理方法,该方法应用于终端,该方法所实现的功能可以通过终端中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该终端至少包括处理器和存储介质。
图1-3为本发明实施例一图像处理方法的实现流程示意图,如图1-3所示,该图像处理方法包括:
步骤S101,获取观看视角;
这里,观看视角可以是用户选择观看视角,其中视角包括观看点的位置和角度,角度包括前景对象所在三维坐标系下X、Y、Z三个维度的旋转角度。在几何集合中,观看点即摄像机中心,与前景的三维中心坐标处于同一坐标系下。
步骤S102,获取初始图像集合中初始图像的前景区域;
这里,图像包括图片、照片等;
这里,步骤S102在具体实现的过程,终端可以从存储介质上获取前景区域;在本发明实施例中,前景区域可以采用如下方式得到:首先对获取的多张图像进行背景建模,根据建模后的结果生成背景图,再通过将原始图像或校正后的图像的亮度与背景图的亮度做差分割出前景区域。
在具体实现的过程中,可以先对获取到的多张图像采用立体校正算法校正,对于校正后的图像采用混合高斯模型、多帧中值法等方法进行背景建模,根据建模后的结果生成背景图。
下面以多帧中值法为例进行前后景分割,对于某一摄像头,假设提取背景图为B,拍摄原始图片为I,则属于前景的像素集合为:
{(x, y)：|I_{x,y}−B_{x,y}|＞θ}　　（1）
公式(1)中,θ为一阈值,x和y表示图像中像素的位置坐标,其中x表示二维图像中的横坐标,y表示二维图像中的纵坐标。
公式(1)表示:图像上位置为x,y的像素Ix,y与背景像素Bx,y亮度之差的绝对值大于阈值θ时,该像素Ix,y判断为前景区域。
步骤S103,按照观看视角利用每一初始图像的前景区域生成三维的光绘对象;
其中,光绘对象可以是初始图像上的任何动态对象,例如动态的物体。
其中,初始图像中包括多个动态对象,其中对象是指第一图像中所展示的物体、人物、景物等,目标对象为初始图像中多个动态对象中的部分动态对象;例如初始图像包括:处于旋转状态的电风扇和电风扇后面车水马龙的街道;其中目标对象可以为电风扇,也可以为街道上奔跑的车子。这里,对象是指图像中代表“对象”的图像区域,例如,人物的脸即为对象,而图像中的对象实际上指的是人物的脸部区域;再如,花是对象,图像中的花实际上指的是花朵的区域。
步骤S104,获取利用所述初始图像生成的背景图;
这里,终端可以从存储介质上获取生成好的背景图。
步骤S105,将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
本发明实施例提供的图像处理方法在具体实现的过程中可以采用应用程序(APP)的形式来实现,即编程人员采用程序APP实现本发明实施例提供的方法,然后用户将APP安装在终端上。该APP在使用的过程中,用户可以通过打开3D光绘功能,当用户对准要拍摄的对象,终端上的图像采集器件就会采集图像,然后终端对采集到的图像进行一系列处理,得到前景区域、后景区域,基于利用后景区域生成背景图等,然后将这些前景区域、背景图存储在存储介质上,当用户想要观看时,通过用户的操作就能将用户想要看到的3D光绘图像显示终端的显示屏上,从上面可以看出,本发明实施例提供的终端,拍摄出了3D光绘图像,从而增加了图像的趣味性和观赏性。
基于前述的APP实现方式,可以采用下面的方式实现上述步骤S101, 所述“获取观看视角”包括:
步骤1011,获取第一操作,所述第一操作用于输入观看视角;
这里,所述第一操作可以是用户通过终端的输入设备输入的操作,这里以触摸屏作为终端的输入设备为例,用户在终端的触摸屏上进行操作(即第一操作),然后终端就获取到用户的操作。
步骤1012,根据所述第一操作确定观看视角。
这里,终端根据用户的第一操作,确定摄像机中心的位置,以及中前景区域所在三维坐标系下X、Y、Z三个维度的旋转角度。
实施例二
基于上述的各个实施例。本发明实施例提供一种图像处理方法,该方法应用于终端,该方法所实现的功能可以通过终端中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,该终端至少包括处理器和存储介质。
图2为本发明实施例二图像处理方法的实现流程示意图,如图2所示,该图像处理方法包括:
步骤201,获取两张及两张以上的原始图像;
这里,所述原始图像是指终端上的图像采集器件所采集的图像,该图像中包括多个动态对象。
步骤202,对每一所述原始图像进行立体校正,得到对应的初始图像。
这里,对原始图像进行校正之前,先对立体成像装置进行立体标定,以确定两个摄像头的内部参数和外部参数,其中内部参数包括焦距、光心、畸变参数,而外部参数包括图像采集器件(如摄像头)的平移和选择。下面以终端的包括两个左右安装的摄像头,因此,终端可以同时采集两张原始图像,以两张原始图像Il和Ir为例,其中,I表示原始图像,下标l表示左边的摄像头所采集的图像,r表示右边的摄像头所采集的图像。终端的摄像头在同一时刻拍摄的两幅图像Il和Ir,利用立体校正算法校正,得到图像内容只有x方向平移的校正后的初始图像I′l和初始图像I′r
步骤203,获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
步骤204,对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
这里,步骤204可以参阅上述实施例一中步骤S102而理解,上述的步骤203和步骤204实际上提供了一种确定上述背景图的实现方式。
步骤205,利用每一初始图像和对应的背景图,生成每一初始图像的前景区域;
这里,在光绘过程中提取后景区域,并分割出前景区域,以便确定出光绘的发光物体即前景对象。在具体实现的过程中,可以使用混合高斯模型、多帧中值法等方法分别对每个摄像头分别提取背景图。
下面以中值法前后景分割为例,对于某一摄像头,假设提取背景图为B,拍摄原始图像为I,则属于前景的像素集合如公式(1)所示。
公式(1)中,θ为一阈值,x和y表示图像中像素的位置坐标,其中x表示二维图像中的横坐标,y表示二维图像中的纵坐标。
公式(1)表示:图像上位置为x,y的像素Ix,y与背景像素Bx,y亮度之差的绝对值大于阈值θ时,该像素Ix,y判断为前景区域。
步骤206,获取观看视角;
这里,观看视角可以是用户选择观看视角,其中视角包括观看点的位置和角度,角度包括前景对象所在三维坐标系下X、Y、Z三个维度的旋转角度。在几何集合中,观看点即摄像机中心,与前景的三维中心坐标处于同一坐标系下。
步骤207,获取初始图像集合中初始图像的前景区域;
这里,对终端上的成像装置所拍摄的图像进行分割后,得到前景区域和后景区域,其中终端利用后景区域形成背景图保存起来。步骤S102在具体实现的过程,终端可以从存储介质上获取前景区域;
步骤208,按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
这里,光绘对象可以是初始图像上的任何动态对象,例如动态的物体。
步骤209,获取利用所述初始图像生成的背景图;
这里,终端可以从存储介质上获取生成好的背景图。
步骤210,将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
本发明实施例中,上述的步骤206至步骤210分别对应于实施例一中的步骤S101至步骤S105,因此,本领域的技术人员可以参阅实施例一而理解本实施例中的步骤206至步骤210,为节约篇幅,这里不再赘述。
本发明实施例中,上述的步骤208可以采用下面的方式来实现,步骤208,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,包括:
步骤2081,利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
这里,在具体实现的过程中,可以利用观看视角查询关系列表得到摄影变换矩阵P,其中关系列表用于表明观看视角与矩阵P之间的映射关系。
这里,所述视角包括观看点的位置和前景区域所在三维坐标系下X、Y、Z三个维度的旋转角度,所述摄影变换矩阵P通过以下方式得到:
P=KR[I|-C];其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
步骤2082,获取每一所述前景区域的三维中心坐标;
这里,前景区域的三维中心坐标(Xi,Yi,Zi)即为前景区域景深图的三维坐标。其中,三维中心坐标采用下面的方式来确定:
由于I′l和I′r是同一时刻拍摄，则对于I′l中某一前景区域，I′r中一定有相似区域与其对应。这里假设都以左图I′l为基准（以右图为基准同理），保存分割出的前景区域Di和对应的前景对象Fi，其中Di是第i个前景中像素坐标的集合，Fi是由这个集合分割出的图像。然后统计Di在二维图像上的质心坐标(Xi,Yi)。质心坐标的计算方法为区域内像素坐标的均值：Xi=(1/n)Σj xij，Yi=(1/n)Σj yij，其中(xij,yij)为Di中第j个像素的坐标，n为Di的像素个数。
然后对每一个前景区域进行立体匹配，计算区域的平均深度Zi，平均深度Zi用于该区域后续的三维绘制，计算方法为：Zi=(1/n)Σj zij，其中zij为Di中第j个像素对应的深度，n为Di的像素个数。
由前景区域Di的质心坐标和平均深度，可以得到Di的三维中心坐标(Xi,Yi,Zi)。
步骤2083,利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
这里，对于前景区域Di的三维中心坐标(Xi,Yi,Zi)，用齐次坐标表示为A(Xa,Ya,Za,1)，可以由摄影变换矩阵P变换到用户视角的成像点a上：
a=PA；
上式中a(X′a,Y′a,c)也是齐次坐标的形式，对第三个分量c取1，则(X′a,Y′a)是成像后的二维图像坐标。
步骤2084,获取相邻帧之间的关联关系;
这里，对本帧出现的所有前景区域Di^f，在先前帧寻找关联关系。关联的依据是前景区域的二维质心坐标，以及前景对象的相似性。具体方法可以使用现有的视频跟踪算法和图像相似性比较算法。关联关系指示后续合成时认为第f-1帧中的前景区域Di^(f-1)对应于第f帧中的前景区域Di^f，对本帧保存所有的关联关系。
步骤2085,获取每一所述前景区域的景深坐标;
这里,景深坐标即为三维中心坐标中的Zi
步骤2086,根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率s;
这里,根据射影变换原理可知:
s=g/Z；
其中g是摄像头成像平面和主平面的距离,等同于小孔成像模型的焦距;
步骤2087,按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
步骤2088,所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成三维的光绘对象。
实施例三
基于前述的方法实施例,本发明实施例再提供一种图像处理装置,该装置中的第一获取单元、第二获取单元、生成单元、第三获取单元、合成单元、第四获取单元和校正单元,以及各单元各自所包括的各模块,都可以通过终端中的处理器来实现;当然也可通过具体的逻辑电路实现;在具体实施例的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图3为本发明实施例三图像处理装置的组成结构示意图,如图3所示,该装置300包括第一获取单元301、第二获取单元302、生成单元303、第三获取单元304和合成单元305,其中:
第一获取单元301,设置为获取观看视角;
第二获取单元302,设置为获取初始图像集合中初始图像的前景区域;
生成单元303,设置为按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
第三获取单元304,设置为获取利用所述初始图像生成的背景图;
合成单元305,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
本发明实施例中,第三获取单元304,包括第一获取模块和建模模块,其中:
第一获取模块,设置为获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
建模模块,设置为对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
对应地,第二获取单元,设置为利用每一初始图像和对应的背景图,生成每一初始图像的前景区域。
本发明实施例中,第一获取单元包括第二获取模块和第一确定模块,其中:
第二获取模块,设置为获取第一操作,所述第一操作用于输入观看视角;
所述第一确定模块,设置为根据所述第一操作确定观看视角。
本发明实施例中,所述装置还包括第四获取单元和校正单元,其中:
第四获取单元，设置为获取两张及两张以上的原始图像；
校正单元,设置为对每一所述原始图像进行立体校正,得到对应的初始图像。
这里需要指出的是:以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果,因此不做赘述。对于本发明装置实施例中未披露的技术细节,请参照本发明方法实施例的描述而理解,为节约篇幅,因此不再赘述。
实施例四
基于前述的方法实施例,本发明实施例再提供一种图像处理装置,该装置中的第一获取单元、第二获取单元、生成单元、第三获取单元、合成单元、四获取单元和校正单元,以及各单元各自所包括的各模块,都可以通过终端中的处理器来实现;当然也可通过具体的逻辑电路实现;在具体实施例的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图4为本发明实施例四图像处理装置的组成结构示意图,如图4所示,该装置300包括第一获取单元301、第二获取单元302、生成单元303、第三获取单元304和合成单元305,其中所述生成单元303包括第三获取模块331、第四获取模块332、第二确定模块333、第五获取模块334和生成模块335,其中:
第一获取单元301,设置为获取观看视角;
第二获取单元302,设置为获取初始图像集合中初始图像的前景区域;
第三获取模块331,设置为利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
第四获取模块332,设置为获取每一所述前景区域的三维中心坐标;
第二确定模块333,设置为利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
第五获取模块334,设置为获取相邻帧之间的关联关系;
生成模块335,设置为按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用所述前景区域生成三维的光绘对象。
第三获取单元304,设置为获取利用所述初始图像生成的背景图;
合成单元305,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
本发明实施例中,所述视角包括观看点的位置和前景区域所在三维坐标系下X、Y、Z三个维度的旋转角度,所述第三获取模块,还设置为:
P=KR[I|-C];
其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
本发明实施例中,生成单元305还包括第六获取模块356、计算模块357和调整模块358,其中:
第六获取模块356,设置为获取每一所述前景区域的景深坐标;
计算模块357,设置为根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率;
调整模块358,设置为按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
对应地,生成模块,设置为所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成三维的光绘对象。
这里需要指出的是:以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果,因此不做赘述。对于本发明装置实施例中未披露的技术细节,请参照本发明方法实施例的描述而理解,为节约篇幅,因此不再赘述。
实施例五
在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。例如在下面的图5-1中,上述图像处理装置可以采用前后景分割模块52、深度测量模块53和三维绘制模块54来实现。
图5-1为本发明实施例三三维光绘成像系统的组成结构示意图,如图5-1所示,该三维光绘成像系统50包括立体成像装置51、前后景分割模块52、深度测量模块53和三维绘制模块54,其中:
立体成像装置51包括通过连接装置固定在一起的两个或多个数码摄像头组成,其中这些摄像头相对位置固定,能够在同一时刻从不同视角采集图像,并将采集到的图像交给前后景分割模块及后续模块进行处理。
图5-2为立体成像装置的组成结构的一种实施方式示意图,其中,图5-2的a图为立体成像装置的俯视图,图5-2的b图为立体成像装置的正视图,如图5-2的a图和b图所示,数码摄像头511和数码摄像头512通过连接部件513连接成一体,数码摄像头511和数码摄像头512固定在连接部件513上,并尽量使数码摄像头511和数码摄像头512的成像平面平行。此立体成像装置可以在同一时刻得到两张图像,然后立体成像装置将两张图像交由前后景分割模块及后续模块进行处理。
前后景分割模块52,用于在光绘过程中提取背景,并分割出光绘的发光物体,即前景对象。在具体实现的过程中,可以使用混合高斯模型、多帧中值法等方法分别对立体成像装置中的每个摄像头分别提取背景。
下面以中值法前后景分割为例,对于某一摄像头,假设提取背景图为B,拍摄原始图像为I,则属于前景的像素集合如公式(1)所示,公式(1)中,θ为一阈值,x和y表示图像中像素的位置坐标,其中x表示二维图像中的横坐标,y表示二维图像中的纵坐标。
公式(1)表示:图像上位置为x,y的像素Ix,y与背景像素Bx,y亮度之差的绝对值大于阈值θ时,该像素Ix,y判断为前景。
深度测量模块53,用于获取立体成像装置拍摄的不同视角的图像,对两 幅图像中前景区域利用立体测量方法生成深度图。
下面给出一种具体实施方式,深度测量模块53生成深度图至少包括以下步骤。
首先,对立体成像装置进行立体标定,以确定两个摄像头的内部参数和外部参数,其中内部参数至少包括焦距、光心、畸变参数,外部参数至少包括两个摄像头的平移和选择。
然后,对于立体成像系统在同一时刻拍摄的两幅图像Il和Ir,利用立体校正算法校正,得到图像内容只有x方向平移的校正后图像I′l和I′r
再,使用立体匹配算法获得I′l和I′r的视差图D。
在本发明实施例中忽略I′l和I′r中的背景区域,因此视差图D只计算前景区域即可。由D中任意像素点的视差d,使用以下公式计算出景深Z,从而得出关于I′l(或I′r)的景深图Z,如公式(2)所示,
Z=g·T/d　　（2）
公式(2)中,g是立体成像装置中两个数码摄像头的焦距(这里假设两个摄像头焦距一样),T是两个数码摄像头之间的间距。
三维绘制模块54,设置为获取上述各个模块的输出,可以从三维空间任意角度绘制光绘前景部分的三维成像,并合成到背景图上。
实施例六
在输出三维光绘图像的过程中首先要实现三维测量,三维测量流程输出待合成的前后景区域、前景空间坐标、帧间相关关系,用于后续三维绘制流程使用。本发明实施例将采用上述实施例五提供的三维光绘成像系统来实现三维测量流程,其中三维光绘成像系统包括立体拍摄装置、前后景分割模块、深度测量模块;
图6为本发明实施例六三维测量的实现流程示意图,如图6所示,该三维测量流程包括:
步骤S61,立体拍摄装置自光绘摄影开始起第f帧拍摄不同视角的两张原始图像。
步骤S62，两张原始图像经过立体校正后得到校正后图像I′l^f和I′r^f。
步骤S63，前后景分割模块对I′l^f和I′r^f分割前景和后景。
由于I′l^f和I′r^f是同一时刻拍摄，则对于I′l^f中某一前景区域，I′r^f中一定有相似区域与其对应。这里，假设都以左图I′l^f为基准（以右图为基准同理），保存分割出的前景区域Di^f和对应的前景对象Fi^f，其中Di^f是第i个前景中像素坐标的集合，Fi^f是由这个集合分割出的图像。然后统计Di^f在二维图像上的质心坐标(xi,yi)。计算方法为区域内像素坐标的均值：xi=(1/n)Σj xij，yi=(1/n)Σj yij，其中(xij,yij)为Di^f中第j个像素的坐标，n为Di^f的像素个数；
步骤S64,对于第f帧每一个前景区域Dif,计算其深度Zi,从而得到三维坐标(Xi,Yi,Zi)。
深度测量模块对每一个前景区域进行立体匹配，计算区域的平均深度Zi用于该区域后续的三维绘制，计算方法为：Zi=(1/n)Σj zij，其中zij为Di^f中第j个像素对应的深度，n为Di^f的像素个数；
由前景区域Di^f的质心坐标和平均深度，可以得到Di^f的三维中心坐标(Xi,Yi,Zi)。
步骤S65，在先前帧搜索与Di^f相关联的前景区域Di^(f-1)，记录关联关系，并保存前景区域图像Fi^f；
这里，步骤S65是对本帧出现的所有前景区域Di^f在先前帧寻找关联关系。关联的依据是前景区域的二维质心坐标，以及前景区域的相似性。具体方法可以使用现有的视频跟踪算法和图像相似性比较算法。关联关系指示后续合成时认为第f-1帧中的前景区域Di^(f-1)对应于第f帧中的前景区域Di^f，对本帧保存所有的关联关系。
步骤S66,判断前景区域是否遍历完成,是时,进入步骤S67,反之进入步骤S64;
步骤S67,判断用户是否停止拍摄,是时,结束流程,反之进入步骤S61。
用户停止拍摄后,三维测量流程结束。
实施例七
当用户需要查看三维光绘图像时,三维光绘成像系统会启动三维绘制流程。三维绘制流程使用摄影几何的成像模型,采用和立体成像装置中摄像头相同的焦距、像平面、光心、光轴等参数,保证合成图像的真实感。本发明实施例将采用上述实施例五提供的三维光绘成像系统以及在上述实施例六的基础上,来实现三维绘制流程;
图7-1为本发明实施例七三维绘制的实现流程示意图,如图7-1所示, 该三维绘制流程包括:
步骤S71,观看者选择三维观看视角;
用户选择观看视角,其中视角包括观看点的位置和角度,角度包括前景所在三维坐标系下X、Y、Z三个维度的旋转角度。在几何集合中,观看点即摄像机中心,与前景的三维中心坐标处于同一坐标系下。
步骤S72,计算摄影变换矩阵P;
具体地,由于确定观看点和角度之后,可以计算出摄影变换矩阵P,P是一个3*4的矩阵,用于把三维空间点投射变换到成像平面的二维空间上。摄像机中心用一个列向量C表示,旋转角度用3*3矩阵R表示,摄像机内部参数矩阵用K表示,则P由下式构成:P=KR[I|-C];
上式中,I是一个3*3单位矩阵。
步骤S73,取前后景分割的背景图B作为合成图的背景。
步骤S74,取一帧f保存的前景图;
取拍摄过程中第f帧的前景信息，包括本帧中所有前景对象Fi^f、前景三维中心坐标(Xi,Yi,Zi)、相邻帧关联关系，其中i为大于0小于f的整数。
步骤S75，对于第f帧每一个前景区域Di^f，利用摄影变换矩阵P计算缩放倍率，并计算中心点二维图像坐标；
具体地，对于前景区域Di^f的中心坐标，用齐次坐标表示为A(Xa,Ya,Za,1)，可以由摄影变换矩阵P变换到用户视角的成像点a上，则a可以采用下式来计算：a=PA；
上式中，a(X′a,Y′a,c)也是齐次坐标的形式。对第三个分量c可以取1，则(X′a,Y′a)是成像后的二维图像坐标。
然后计算出成像后前景区域应当缩放的倍率s。根据射影变换原理可知：s=g/Z；
其中，g是摄像头成像平面和主平面的距离，等同于小孔成像模型的焦距。Z是前景区域在摄像头坐标系下的Z坐标，可以由前景三维空间下的坐标经过R[I|-C]矩阵变换得到。
步骤S76,按照缩放倍率、中心点位置、相邻帧关联关系绘制前景区域;
具体地，所有前景区域按照拍摄时序逐帧合成。对于光源连续运动形成的线条，经过三维不同观察点观察可能存在不连贯的现象，即相邻帧的前景区域距离太远。这时候需要在前景区域的相邻帧Di^(f-1)与Di^f位置之间插入辅助前景图表现出连续的轨迹。图7-2为本发明实施例在相邻帧位置之间插入辅助前景图的示意图，如图7-2所示，帧71和帧77为相邻帧，灰色图案72至76是插入的辅助前景区域。
步骤S77,判断前景区域是否遍历完成,是时,进入步骤S78,反之,进入步骤S75;
步骤S78,判断所有帧是否遍历完成,是时,流程结束,反之,进入步骤S74。
本发明实施例的步骤S76中，对于相邻帧的前景区域Di^(f-1)与Di^f，如果存在关联关系，则对Di^(f-1)与Di^f之间的轨迹进行绘制，得到插入的辅助前景区域。其中，绘制方法如下：
1)、计算以Di^(f-1)与Di^f中心点为两端点的线段。
2)、在线段上每距离m个像素作为一个合成位置，其中m设置的大小和运动光源合成的连贯效果有关，m可以为固定值，例如取2，则每隔两个像素插入一次辅助前景图。如果线段长度为L，则需要插入L/m个辅助前景。
3)、在每个合成位置pn计算该位置辅助前景图的缩放参数。假设Di^(f-1)的缩放参数为s(f-1)，Di^f的缩放参数为sf，Di^(f-1)到Di^f的线段长度为L，则该线段第n个辅助前景的缩放参数按线性插值计算为sn=s(f-1)+(sf−s(f-1))·(n·m)/L。
4)、在合成位置pn按照计算的缩放参数合成一次前景对象。具体方法为：对前景对象Fi^f按照sn缩放，得到缩放后前景对象F′，将F′逐像素合成到生成图像中，使F′质心位置与pn对应。每个像素的合成原则为，对F′每个像素与它要合成坐标的像素亮度进行比较，保留亮度较大的像素值作为生成图像在该坐标下的像素值。
实施例八
基于前述的实施例,本发明实施例提供一种终端,图8为本发明实施例八终端的组成结构示意图,如图8所示,该终端80包括处理器81和成像装置82,其中:
成像装置82,包括连接装置、以及通过所述连接装置固定在一起的两个或多个图像采集器件组成,所述图像采集期间能够在同一时刻从不同视角采集图像,并将采集到的图像交给处理器进行处理;
这里,成像装置82可以采用上述实施例五中的立体成像装置51来实现。
所述处理器81,设置为获取观看视角;获取初始图像集合中初始图像的前景区域;按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
这里需要指出的是:以上终端实施例项的描述,与上述方法描述是类似的,具有同方法实施例相同的有益效果,因此不做赘述。对于本发明终端实施例中未披露的技术细节,本领域的技术人员请参照本发明方法实施例的描述而理解,为节约篇幅,这里不再赘述。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通 信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ReadOnly Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
本发明实施例提出的图像处理方法及装置、终端,所述方法包括:获取 观看视角;获取初始图像集合中初始图像的前景区域;按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。通过本发明实施例提供了生成三维光绘图像,从而增加了图像的趣味性和观赏性,进而提升了用户体验。

Claims (20)

  1. 一种图像处理方法,包括:
    获取观看视角;
    获取初始图像集合中初始图像的前景区域;
    按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
    获取利用所述初始图像生成的背景图;
    将光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
  2. 根据权利要求1所述的图像处理方法,所述获取背景图包括:
    获取所述初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
    对所述初始图像集合中的每一初始图像进行背景建模,得到背景图。
  3. 根据权利要求1所述的图像处理方法,其中,所述获取观看视角包括:
    获取用于输入所述观看视角第一操作;
    根据所述第一操作确定观看视角。
  4. 根据权利要求2所述的图像处理方法,所述方法还包括:
    获取两张及两张以上的原始图像;
    对每一所述原始图像进行立体校正,得到校正后的初始图像。
  5. 根据权利要求1所述的图像处理方法,其中,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,包括:
    利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
    获取每一所述前景区域的三维中心坐标;
    利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
    获取相邻帧之间的关联关系;
    按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用 所述前景区域生成三维的光绘对象。
  6. 根据权利要求5所述的图像处理方法,其中,所述视角包括观看点的位置和前景区域所在三维坐标系下X、Y、Z三个维度的旋转角度,所述摄影变换矩阵P通过以下方式得到:P=KR[I|-C];
    其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
  7. 根据权利要求5所述的图像处理方法,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,还包括:
    获取每一所述前景区域的景深坐标;
    根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率;
    按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
    对应地,所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用所述前景区域生成三维的光绘对象,包括:
    所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成三维的光绘对象。
  8. 一种图像处理方法,包括:
    获取两张及两张以上的原始图像;
    对每一所述原始图像进行立体校正,得到对应的初始图像;
    获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
    利用每一初始图像和对应的背景图,生成每一初始图像的前景区域;
    获取观看视角;
    获取初始图像集合中初始图像的前景区域;
    按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
    获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
  9. 根据权利要求8所述的图像处理方法,其中,所述按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象,包括:
    利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
    获取每一所述前景区域的三维中心坐标;
    利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
    获取相邻帧之间的关联关系;
    获取每一所述前景区域的景深坐标;
    根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率s;
    按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
    所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成所述三维的光绘对象。
  10. 一种图像处理装置,包括第一获取单元、第二获取单元、生成单元、第三获取单元和合成单元,其中:
    第一获取单元,设置为获取观看视角;
    第二获取单元,设置为获取初始图像集合中初始图像的前景区域;
    生成单元,设置为按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;
    第三获取单元,设置为获取利用所述初始图像生成的背景图;
    合成单元,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
  11. 根据权利要求10所述的装置,其中,所述第三获取单元包括第一获取模块和建模模块,其中:
    第一获取模块,设置为获取初始图像集合,所述初始图像集合中包括两张及两张以上的初始图像;
    建模模块,设置为对所述初始图像集合中的每一初始图像进行背景建模,得到背景图;
    对应地,所述第二获取单元,设置为利用每一初始图像和对应的背景图,生成每一初始图像的前景区域。
  12. 根据权利要求10所述的图像处理装置,其中,所述第一获取单元包括第二获取模块和第一确定模块,其中:
    第二获取模块,设置为获取第一操作,所述第一操作用于输入观看视角;
    所述第一确定模块,设置为根据所述第一操作确定观看视角。
  13. 根据权利要求10~12任一项所述的图像处理装置,所述装置还包括第四获取单元和校正单元,其中:
    第四获取单元，设置为获取两张及两张以上的原始图像；
    校正单元,设置为对每一所述原始图像进行立体校正,得到对应的初始图像。
  14. 一种图像处理装置,包括第一获取单元、第二获取单元、生成单元、第三获取单元和合成单元;
    其中,生成单元包括第三获取模块、第四获取模块、第二确定模块、第五获取模块和生成模块,其中:
    第一获取单元,设置为获取观看视角;
    第二获取单元,设置为获取初始图像集合中初始图像的前景区域;
    第三获取模块,设置为利用所述观看视角获取摄影变换矩阵P,所述摄影变换矩阵P用于把三维空间点投射变换到成像平面的二维空间上;
    第四获取模块,设置为获取每一所述前景区域的三维中心坐标;
    第二确定模块,设置为利用每一所述前景区域的三维中心坐标和所述摄影变换矩阵P,确定用于将每一所述前景区域从三维空间投射变换到二维空间的二维图像坐标;
    第五获取模块,设置为获取相邻帧之间的关联关系;
    生成模块,设置为按照每一所述前景区域的二维图像坐标和相邻帧之间 的关联关系,利用所述前景区域生成三维的光绘对象;
    第三获取单元,设置为获取利用所述初始图像生成的背景图;
    合成单元,设置为将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
  15. 根据权利要求14所述的图像处理装置,所述第三获取模块,还设置为:P=KR[I|-C];
    其中,摄影变换矩阵P为一个3*4的矩阵,列向量C表示摄像机中心,3*3矩阵R表示旋转角度,K表示摄像机内部参数矩阵,I表示3*3单位矩阵。
  16. 根据权利要求15所述的图像处理装置,所述生成单元还包括第六获取模块、计算模块和调整模块,其中:
    第六获取模块,设置为获取每一所述前景区域的景深坐标;
    计算模块,设置为根据焦距和所述景深坐标计算每一所述前景区域的缩放倍率;
    调整模块,设置为按照所述缩放倍率对每一所述前景区域进行调整,得到调整后的前景区域;
    对应地,所述生成模块,设置为所述按照每一所述前景区域的二维图像坐标和相邻帧之间的关联关系,利用调整后的前景区域生成所述三维的光绘对象。
  17. 一种终端,所述终端包括处理器和成像装置,其中:
    成像装置,包括连接装置、以及通过所述连接装置固定在一起的两个或多个图像采集器件组成,所述图像采集期间能够在同一时刻从不同视角采集图像,并将采集到的图像交给处理器进行处理;
    所述处理器,用于获取观看视角;获取初始图像集合中初始图像的前景区域;按照观看视角利用所述每一初始图像的前景区域生成三维的光绘对象;获取利用所述初始图像生成的背景图;将所述光绘前景部分的三维成像合成到所述背景图上,得到三维光绘图像。
  18. 根据权利要求17所述的终端,其中,所述成像装置包括第一数码摄像头和第二数码摄像头、连接部件;其中,
    第一数码摄像头和第二数码摄像头通过连接部件连接成一体,第一数码摄像头和第二数码摄像头固定在连接部件上,并使第一数码摄像头和第二数码摄像头的成像平面平行。
  19. 一种三维光绘成像系统,包括立体成像装置、前后景分割模块、深度测量模块和三维绘制模块,其中:
    立体成像装置包括通过连接装置固定在一起的两个或多个数码摄像头组成,其中这些摄像头相对位置固定,能够在同一时刻从不同视角采集图像,并将采集到的图像交给前后景分割模块及后续模块进行处理;
    前后景分割模块,设置为在光绘过程中提取背景,并分割出光绘的前景对象;
    深度测量模块,设置为获取立体成像装置拍摄的不同视角的图像,对两幅图像中前景区域利用立体测量方法生成深度图;
    三维绘制模块,设置为获取上述各个模块的输出,可以从三维空间任意角度绘制光绘前景部分的三维成像,并合成到背景图上。
  20. 一种计算机可读存储介质，存储有计算机可执行指令，所述计算机可执行指令用于执行权利要求1~7中任一项所述的图像处理方法，和/或用于执行权利要求8~9中任一项所述的图像处理方法。
PCT/CN2016/092203 2015-07-29 2016-07-29 一种图像处理方法及装置、终端 WO2017016511A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510455832.7A CN105100775B (zh) 2015-07-29 2015-07-29 一种图像处理方法及装置、终端
CN201510455832.7 2015-07-29

Publications (1)

Publication Number Publication Date
WO2017016511A1 true WO2017016511A1 (zh) 2017-02-02

Family

ID=54580188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/092203 WO2017016511A1 (zh) 2015-07-29 2016-07-29 一种图像处理方法及装置、终端

Country Status (2)

Country Link
CN (1) CN105100775B (zh)
WO (1) WO2017016511A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765537A (zh) * 2018-06-04 2018-11-06 北京旷视科技有限公司 一种图像的处理方法、装置、电子设备和计算机可读介质
CN109993824A (zh) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 图像处理方法、智能终端及具有存储功能的装置
CN110176064A (zh) * 2019-05-24 2019-08-27 武汉大势智慧科技有限公司 一种摄影测量生成三维模型的主体对象自动识别方法
CN110310318A (zh) * 2019-07-03 2019-10-08 北京字节跳动网络技术有限公司 一种特效处理方法及装置、存储介质与终端
CN110555892A (zh) * 2019-08-09 2019-12-10 北京字节跳动网络技术有限公司 多角度图像生成方法、装置及电子设备
CN110838163A (zh) * 2018-08-15 2020-02-25 浙江宇视科技有限公司 贴图处理方法及装置
CN110853130A (zh) * 2019-09-25 2020-02-28 咪咕视讯科技有限公司 三维图像的生成方法、电子设备及存储介质
CN111583334A (zh) * 2020-05-26 2020-08-25 广东电网有限责任公司培训与评价中心 一种变电站人员三维空间定位方法、装置和设备
CN112950772A (zh) * 2021-04-06 2021-06-11 杭州今奥信息科技股份有限公司 一种正射影像提取方法及系统
CN112991147A (zh) * 2019-12-18 2021-06-18 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN113409457A (zh) * 2021-08-20 2021-09-17 宁波博海深衡科技有限公司武汉分公司 立体图像的三维重构与可视化方法及设备

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100775B (zh) * 2015-07-29 2017-12-05 努比亚技术有限公司 一种图像处理方法及装置、终端
CN105488756B (zh) * 2015-11-26 2019-03-29 努比亚技术有限公司 图片合成方法及装置
CN110412828A (zh) * 2018-09-07 2019-11-05 广东优世联合控股集团股份有限公司 一种三维光迹影像的打印方法及系统
CN111739145A (zh) * 2019-03-19 2020-10-02 上海汽车集团股份有限公司 一种汽车模型显示系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453662A (zh) * 2007-12-03 2009-06-10 华为技术有限公司 立体视频通信终端、系统及方法
US20100260486A1 (en) * 2007-12-28 2010-10-14 Huawei Device Co., Ltd Apparatus, System and Method for Recording a Multi-View Video and Processing Pictures, and Decoding Method
US20130002815A1 (en) * 2011-07-01 2013-01-03 Disney Enterprises, Inc. 3d drawing system for providing a real time, personalized, and immersive artistic experience
CN104159040A (zh) * 2014-08-28 2014-11-19 深圳市中兴移动通信有限公司 拍摄方法和拍摄装置
CN104202521A (zh) * 2014-08-28 2014-12-10 深圳市中兴移动通信有限公司 拍摄方法及拍摄装置
CN104539925A (zh) * 2014-12-15 2015-04-22 北京邮电大学 基于深度信息的三维场景增强现实的方法及系统
CN105100775A (zh) * 2015-07-29 2015-11-25 努比亚技术有限公司 一种图像处理方法及装置、终端

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557535A (zh) * 2009-05-13 2009-10-14 北京水晶石数字科技有限公司 一种动态效果图的制作方法
CN101931823A (zh) * 2009-06-24 2010-12-29 夏普株式会社 显示3d图像的方法和设备
WO2012085045A1 (de) * 2010-12-22 2012-06-28 Seereal Technologies S.A. Kombinierte lichtmodulationsvorrichtung zur benutzernachführung
CN104134225B (zh) * 2014-08-06 2016-03-02 深圳市中兴移动通信有限公司 图片的合成方法及装置
CN104159033B (zh) * 2014-08-21 2016-01-27 努比亚技术有限公司 一种拍摄效果的优化方法及装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453662A (zh) * 2007-12-03 2009-06-10 华为技术有限公司 立体视频通信终端、系统及方法
US20100260486A1 (en) * 2007-12-28 2010-10-14 Huawei Device Co., Ltd Apparatus, System and Method for Recording a Multi-View Video and Processing Pictures, and Decoding Method
US20130002815A1 (en) * 2011-07-01 2013-01-03 Disney Enterprises, Inc. 3d drawing system for providing a real time, personalized, and immersive artistic experience
CN104159040A (zh) * 2014-08-28 2014-11-19 深圳市中兴移动通信有限公司 拍摄方法和拍摄装置
CN104202521A (zh) * 2014-08-28 2014-12-10 深圳市中兴移动通信有限公司 拍摄方法及拍摄装置
CN104539925A (zh) * 2014-12-15 2015-04-22 北京邮电大学 基于深度信息的三维场景增强现实的方法及系统
CN105100775A (zh) * 2015-07-29 2015-11-25 努比亚技术有限公司 一种图像处理方法及装置、终端

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993824A (zh) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 图像处理方法、智能终端及具有存储功能的装置
CN108765537A (zh) * 2018-06-04 2018-11-06 北京旷视科技有限公司 一种图像的处理方法、装置、电子设备和计算机可读介质
CN110838163B (zh) * 2018-08-15 2024-02-02 浙江宇视科技有限公司 贴图处理方法及装置
CN110838163A (zh) * 2018-08-15 2020-02-25 浙江宇视科技有限公司 贴图处理方法及装置
CN110176064A (zh) * 2019-05-24 2019-08-27 武汉大势智慧科技有限公司 一种摄影测量生成三维模型的主体对象自动识别方法
CN110310318A (zh) * 2019-07-03 2019-10-08 北京字节跳动网络技术有限公司 一种特效处理方法及装置、存储介质与终端
CN110555892B (zh) * 2019-08-09 2023-04-25 北京字节跳动网络技术有限公司 多角度图像生成方法、装置及电子设备
CN110555892A (zh) * 2019-08-09 2019-12-10 北京字节跳动网络技术有限公司 多角度图像生成方法、装置及电子设备
CN110853130A (zh) * 2019-09-25 2020-02-28 咪咕视讯科技有限公司 三维图像的生成方法、电子设备及存储介质
CN110853130B (zh) * 2019-09-25 2024-03-22 咪咕视讯科技有限公司 三维图像的生成方法、电子设备及存储介质
CN112991147A (zh) * 2019-12-18 2021-06-18 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
US11651529B2 (en) 2019-12-18 2023-05-16 Beijing Bytedance Network Technology Co., Ltd. Image processing method, apparatus, electronic device and computer readable storage medium
CN112991147B (zh) * 2019-12-18 2023-10-27 抖音视界有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN111583334A (zh) * 2020-05-26 2020-08-25 广东电网有限责任公司培训与评价中心 一种变电站人员三维空间定位方法、装置和设备
CN111583334B (zh) * 2020-05-26 2023-03-14 广东电网有限责任公司培训与评价中心 一种变电站人员三维空间定位方法、装置和设备
CN112950772A (zh) * 2021-04-06 2021-06-11 杭州今奥信息科技股份有限公司 一种正射影像提取方法及系统
CN112950772B (zh) * 2021-04-06 2023-06-02 杭州今奥信息科技股份有限公司 一种正射影像提取方法及系统
CN113409457A (zh) * 2021-08-20 2021-09-17 宁波博海深衡科技有限公司武汉分公司 立体图像的三维重构与可视化方法及设备
CN113409457B (zh) * 2021-08-20 2023-06-16 宁波博海深衡科技有限公司武汉分公司 立体图像的三维重构与可视化方法及设备

Also Published As

Publication number Publication date
CN105100775B (zh) 2017-12-05
CN105100775A (zh) 2015-11-25

Similar Documents

Publication Publication Date Title
WO2017016511A1 (zh) 一种图像处理方法及装置、终端
CN106454121B (zh) 双摄像头拍照方法及装置
WO2018019124A1 (zh) 一种图像处理方法及电子设备、存储介质
WO2017045650A1 (zh) 一种图片处理方法及终端
WO2017050115A1 (zh) 一种图像合成方法和装置
WO2017067526A1 (zh) 图像增强方法及移动终端
CN106530241B (zh) 一种图像虚化处理方法和装置
WO2016180325A1 (zh) 图像处理方法及装置
US8780258B2 (en) Mobile terminal and method for generating an out-of-focus image
CN106909274B (zh) 一种图像显示方法和装置
WO2017071476A1 (zh) 一种图像合成方法和装置、存储介质
WO2017140182A1 (zh) 一种图像合成方法及装置、存储介质
WO2017020836A1 (zh) 一种虚化处理深度图像的装置和方法
CN106713716B (zh) 一种双摄像头的拍摄控制方法和装置
CN106878588A (zh) 一种视频背景虚化终端及方法
WO2017071475A1 (zh) 一种图像处理方法及终端、存储介质
WO2017071542A1 (zh) 图像处理方法及装置
WO2017206656A1 (zh) 一种图像处理方法及终端、计算机存储介质
WO2017067523A1 (zh) 图像处理方法、装置及移动终端
WO2018045945A1 (zh) 一种对焦方法及终端、存储介质
WO2017088618A1 (zh) 图片合成方法及装置
WO2018019128A1 (zh) 一种夜景图像的处理方法和移动终端
WO2017071469A1 (zh) 一种移动终端和图像拍摄方法、计算机存储介质
CN106911881B (zh) 一种基于双摄像头的动态照片拍摄装置、方法和终端
CN106954020B (zh) 一种图像处理方法及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16829876

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 18/06/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16829876

Country of ref document: EP

Kind code of ref document: A1