WO2018019124A1 - Image processing method, electronic device and storage medium - Google Patents

Image processing method, electronic device and storage medium

Info

Publication number
WO2018019124A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate information
video data
reference image
video frame
electronic device
Prior art date
Application number
PCT/CN2017/092497
Other languages
English (en)
French (fr)
Inventor
聂洪浩
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2018019124A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces

Definitions

  • the present invention relates to image processing technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
  • the embodiment of the present invention provides an image processing method, an electronic device, and a storage medium to solve at least one problem existing in the prior art.
  • a first aspect of the embodiments of the present invention provides an image processing method, including:
  • the electronic device uses the image acquisition device to collect video data
  • the adjusted video frames are stored, and the stored adjusted video frames are combined into target video data for the collected video data.
  • the selecting a reference area from the reference image includes:
  • the fixed target body is a target body in a fixed state collected by the electronic device in the collection area corresponding to the video data;
  • At least an area corresponding to the fixed target body in the reference image is used as a reference area.
  • the method further includes:
  • the determining the first set of coordinate information corresponding to the pixel points in the reference area includes:
  • the adjusting, by using the first set of coordinate information, the video frame in the collected video data, so that the second set of coordinate information of the pixel corresponding to the reference area in the adjusted video frame corresponds to the first set of coordinate information, includes:
  • the video frames in the collected video data are adjusted according to the positional relationship.
  • the first set of coordinate information includes at least two first coordinates; and the second set of coordinate information includes at least two second coordinates;
  • the determining the positional relationship between the reference image and each video frame based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame includes:
  • selecting, from the at least two first coordinates and the at least two second coordinates, at least two sets of coordinate pairs whose pixel features match, where a coordinate pair includes a first coordinate and a second coordinate whose pixel features match;
  • a positional relationship of the reference image with each video frame is determined based on at least two sets of coordinate pairs.
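  • For illustration only (not part of the patent text): a minimal sketch, assuming OpenCV and NumPy are available, of how such a positional relationship could be estimated from matched coordinate pairs; the coordinate values are hypothetical.

        import cv2
        import numpy as np

        # Hypothetical matched coordinate pairs: the first coordinates come from
        # the reference image, the second coordinates from a later video frame.
        first_coords = np.float32([[120, 80], [310, 95], [200, 240], [90, 300]])
        second_coords = np.float32([[123, 82], [313, 97], [203, 242], [93, 302]])

        # Estimate a 2x3 similarity transform (rotation + translation + uniform
        # scale) that maps the frame's coordinates back onto the reference ones;
        # RANSAC tolerates a few wrongly matched pairs.
        M, inliers = cv2.estimateAffinePartial2D(second_coords, first_coords,
                                                 method=cv2.RANSAC)
        print(M)  # the positional relationship as a 2x3 matrix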
  • the method further includes:
  • the selected reference image is presented in a partial area of the display area so that the user can observe the reference image.
  • a second aspect of the embodiments of the present invention provides an electronic device, including:
  • An image acquisition unit configured to acquire video data by using an image acquisition device
  • a determining unit configured to determine a first video frame in the collected video data as a reference image, the first video frame being a preview image for the video data presented in the electronic device; and configured to select a reference area in the reference image and determine a first set of coordinate information corresponding to the pixel points in the reference area;
  • an adjusting unit configured to adjust, by using the first set of coordinate information, the video frame in the collected video data, so that the second set of coordinate information of the pixel corresponding to the reference area in the adjusted video frame corresponds to the first set of coordinate information;
  • a storage unit configured to store the adjusted video frames and combine the stored adjusted video frames into target video data for the collected video data.
  • the determining unit is further configured to: select an area corresponding to a fixed target body from the reference image, and use at least the area corresponding to the fixed target body in the reference image as the reference area; the fixed target body is a target body in a fixed state collected by the electronic device in the collection area corresponding to the video data.
  • the determining unit is further configured to: select, according to the pixel features of the pixels corresponding to the reference area in the reference image, target pixel points from the reference area of the reference image, determine the coordinate information corresponding to the target pixel points, and use the coordinate information corresponding to the target pixel points as the first set of coordinate information.
  • the determining unit is further configured to determine a second set of coordinate information of the pixel points corresponding to the reference area in each video frame of the video data, and to determine, based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame, a positional relationship between the reference image and each video frame;
  • the adjusting unit is further configured to adjust a video frame in the collected video data according to the positional relationship.
  • the first set of coordinate information includes at least two first coordinates; and the second set of coordinate information includes at least two second coordinates;
  • the determining unit is further configured to: select, from the at least two first coordinates and the at least two second coordinates, at least two sets of coordinate pairs whose pixel features match, and determine the positional relationship between the reference image and each video frame based on the at least two sets of coordinate pairs; the coordinate pair includes a first coordinate and a second coordinate whose pixel features match.
  • the determining unit is further configured to present the selected reference image in a partial area of the display area, so that the user can observe the reference image.
  • a third aspect of the embodiments of the present invention provides an electronic device, including:
  • An image capture device configured to collect video data
  • a processor configured to determine a first video frame of the collected video data as a reference image, the first video frame being a preview image for the video data presented in the electronic device; select a reference area in the reference image and determine a first set of coordinate information corresponding to the pixel points in the reference area; adjust the video frame in the collected video data by using the first set of coordinate information, so that the second set of coordinate information of the pixel corresponding to the reference area in the adjusted video frame corresponds to the first set of coordinate information; and store the adjusted video frames and combine the stored adjusted video frames into target video data for the collected video data.
  • the processor is further configured to: select an area corresponding to a fixed target body from the reference image, and use at least the area corresponding to the fixed target body in the reference image as the reference area; the fixed target body is a target body in a fixed state collected by the electronic device in the collection area corresponding to the video data.
  • the processor is further configured to: select, according to the pixel features of the pixels corresponding to the reference area in the reference image, target pixel points from the reference area of the reference image, determine the coordinate information corresponding to the target pixel points, and use the coordinate information corresponding to the target pixel points as the first set of coordinate information.
  • the processor is further configured to determine a second set of coordinate information of the pixel points corresponding to the reference area in each video frame of the video data, determine a positional relationship between the reference image and each video frame based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame, and adjust the video frames in the collected video data according to the positional relationship.
  • the first set of coordinate information includes at least two first coordinates; and the second set of coordinate information includes at least two second coordinates;
  • the processor is further configured to: select, from the at least two first coordinates and the at least two second coordinates, at least two sets of coordinate pairs whose pixel features match, and determine the positional relationship between the reference image and each video frame based on the at least two sets of coordinate pairs; the coordinate pair includes a first coordinate and a second coordinate whose pixel features match.
  • the processor is further configured to present the selected reference image in a partial area of the display area, so that the user can observe the reference image.
  • a fourth aspect of the embodiments of the present invention provides an electronic device, including a processor and a memory for storing a computer program executable on the processor, wherein the processor is configured to execute the computer program to implement the steps of the method described above.
  • a fifth aspect of embodiments of the present invention provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the method described above.
  • With the image processing method, electronic device, and storage medium described above, a reference image is determined from the collected video data, a reference area is selected from the reference image, and the video frames in the collected video data are then adjusted by using the first set of coordinate information corresponding to the pixels in the reference area, so that the second set of coordinate information of the pixels corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information; in this way, adjustment of the collected video data is achieved. Further, the electronic device does not directly store the collected video data but stores the adjusted video frames to be combined into target video data for the collected video data, thus avoiding the problem of blurring of the stored video image due to jitter and improving the user experience.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal 100 for implementing various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal 100 shown in FIG. 1;
  • FIG. 3 is a schematic flowchart of an implementation process of an image processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an electronic device presenting video data in a framing interface according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of an electronic device selecting a reference image based on a user operation according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of an implementation process of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic flowchart of implementation of a specific application of an image processing method according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a principle for adjusting P(u, v) by using a neighbor interpolation formula according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a logic unit of an electronic device according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), navigation devices, and the like, as well as fixed terminals such as digital TVs, desktop computers, and the like. In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will understand that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, except for components specifically intended for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal 100 that implements various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, and a user input unit 130.
  • FIG. 1 illustrates a mobile terminal 100 having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented.
  • the elements of the mobile terminal 100 will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts by using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed to be suitable for various broadcast systems that provide broadcast signals, as well as the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
  • the mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal 100.
  • the wireless internet module 113 can be internally or externally coupled to the terminal.
  • the wireless internet access technologies involved in the wireless internet module 113 may include wireless local area network (WLAN), wireless fidelity (Wi-Fi), wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ultra-wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal 100.
  • a typical example of location information module 115 is Global Positioning System (GPS) module 115.
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal 100.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the audio data may be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal 100.
  • the user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (i.e., touch input) by the user with the mobile terminal 100, the orientation of the mobile terminal 100, and the like.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port (a typical example is a universal serial bus (USB) port), a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, and so on.
  • the identification module may store various information for verifying the use of the mobile terminal 100 by the user and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like.
  • the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal 100 and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal 100.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal 100 is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside; such displays may be referred to as transparent displays, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal 100.
  • the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing or playing back multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented through the use of at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal 100 has been described in terms of its function.
  • Among various types of mobile terminals 100, such as folding-type, bar-type, swing-type, and slide-type mobile terminals 100, the slide-type mobile terminal 100 will be described as an example; however, the present invention can be applied to any type of mobile terminal 100 and is not limited to the slide-type mobile terminal 100.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate with communication systems that transmit data via frames or packets, including, for example, wired and wireless communication systems as well as satellite-based communication systems.
  • a communication system in which the mobile terminal 100 according to the present invention can operate will now be described with reference to FIG.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well-known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 may include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), with each partition covered by a multi-directional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally mean a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as multiple cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • several satellites 300 are shown, for example, a Global Positioning System (GPS) satellite 300 can be employed.
  • the satellite 300 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking techniques or in addition to GPS tracking techniques, other techniques that can track the location of the mobile terminal 100 can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • the mobile communication module 112 of the wireless communication unit 110 accesses a mobile communication network (such as a 2G/3G/4G mobile communication network) based on necessary data (including user identification information and authentication information) built into the mobile terminal, and transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) over the mobile communication network for services such as web browsing and network multimedia playback for the mobile terminal user.
  • the wireless internet module 113 of the wireless communication unit 110 implements the function of a wireless hotspot by running the related protocol functions of a wireless hotspot; the wireless hotspot supports access by multiple mobile terminals (any mobile terminals other than this mobile terminal) and, by multiplexing the mobile communication connection between the mobile communication module 112 and the mobile communication network, transmits mobile communication data (including uplink mobile communication data and downlink mobile communication data) for services such as web browsing and network multimedia playback for those mobile terminal users. Since the mobile terminal essentially multiplexes the mobile communication connection between itself and the communication network to transmit mobile communication data, the traffic of mobile communication data consumed in this way is counted into the communication tariff of this mobile terminal by the charging entity on the communication network side, thereby consuming the data traffic of mobile communication data included in this mobile terminal's subscription.
  • This embodiment provides an image processing method. With the image processing method according to this embodiment, if a hand shake during video shooting causes the position of the picture to change, the changed pixels are pulled back to the positions calibrated before the change, so that the video picture does not shake and the picture remains stable. For example, when a user uses a mobile phone to shoot video at a concert, the static background of the concert scene can be used as the reference, and based on the changed pixel points in that background the video frames are pulled back to their original positions, while moving objects in the concert scene move normally; in this way, the problem of video image blurring caused by jitter is avoided while acquisition proceeds normally.
  • FIG. 3 is a schematic flowchart of an implementation of an image processing method according to an embodiment of the present invention. As shown in FIG. 3, the image processing method includes:
  • Step 301 The electronic device uses the image collection device to collect video data.
  • the image capturing device may be specifically a camera disposed in the electronic device, for example, a front or rear camera in the mobile phone; or an external camera connected to the electronic device.
  • Step 302 Determine a first video frame in the collected video data as a reference image; the first video frame is a preview image for the video data presented in the electronic device;
  • It should be noted that the first video frame does not specifically refer to the first frame image in the video data but may be any video frame in the video data; correspondingly, the reference image may specifically be the captured image corresponding to any video frame of the video data. For example, as shown in FIG. 4, the electronic device displays the collected video data in real time on the framing interface; at this time, the electronic device can automatically determine the reference image, such as automatically determining the first frame image corresponding to the video data as the reference image, or the reference image may be a video frame selected by the electronic device from the collected video data based on a preset rule.
  • In practical applications, the reference image may also be selected based on a user operation. For example, as shown in FIG. 5, the electronic device may select the reference image based on a touch operation of the user and present the selected reference image in a partial area of the display area, such as the lower right corner of the display area, so that the user can observe the intercepted image; this provides a convenient condition for replacing the reference image in time.
  • Furthermore, in practical applications, the electronic device may start storing the video data only after the reference image is determined; the acquisition process before that serves only to determine the reference image, and the corresponding video data is not stored. In this way, the problem of blurring of the video image in the stored video data can be effectively avoided, which lays a foundation for improving the user experience.
  • Step 303 Select a reference area from the reference image, and determine a first group of coordinate information corresponding to the pixel points in the reference area;
  • In an embodiment, a reference area needs to be selected from the reference image. Here, to facilitate adjustment of the video frames, an area corresponding to a fixed target body may be selected from the physical scene corresponding to the collected video data. Specifically, the electronic device selects an area corresponding to the fixed target body from the reference image, where the fixed target body is a target body in a fixed state (that is, a stationary state) collected by the electronic device in the collection area corresponding to the video data; further, the electronic device uses at least the area corresponding to the fixed target body in the reference image as the reference area, for example, the area corresponding to the background in a fixed state in the concert scene is set as the reference area.
  • Since the selected reference area is the area corresponding to a fixed object, for two adjacent video frames it can be assumed that the coordinates of the fixed object should remain unchanged during video collection. Therefore, when the coordinates of the fixed object in a video frame change relative to the reference image, the coordinates of the fixed object in the reference image may be used to adjust that video frame, thereby avoiding blurring of the video image. Moreover, since the selected reference area is an area corresponding to a fixed object, the adjustment algorithm is simpler and convenient to implement in the electronic device.
  • In other embodiments, an area corresponding to a fixed part of a moving target body may also be selected from the physical scene corresponding to the collected video data, and the area corresponding to that fixed part of the moving target body in the reference image is then used as the reference area. In practical applications, the selection process of the reference area can be determined arbitrarily according to actual conditions.
  • In an embodiment, the electronic device may select target pixel points from the reference area of the reference image based on the pixel features of the pixels corresponding to the reference area in the reference image. That is, in practical applications, not all pixel points in the reference area of the determined reference image are used as reference pixel points; instead, a part of the pixel points are selected as target pixel points, for example, pixels with obvious gray-scale changes are selected as target pixel points. The coordinate information corresponding to the target pixel points is then determined and used as the first set of coordinate information, thereby reducing the complexity of the adjustment algorithm.
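  • For illustration only (not part of the patent text): a minimal sketch, assuming OpenCV and NumPy, of selecting pixels with obvious gray-scale changes from a reference area as target pixel points; the file name and area bounds are hypothetical.

        import cv2
        import numpy as np

        reference_image = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
        y0, y1, x0, x1 = 100, 300, 150, 400   # hypothetical reference area
        area = reference_image[y0:y1, x0:x1]

        # The gradient magnitude highlights pixels whose gray level changes sharply.
        gx = cv2.Sobel(area, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(area, cv2.CV_32F, 0, 1)
        magnitude = cv2.magnitude(gx, gy)

        # Keep only the strongest 1% of pixels as target pixel points; their
        # coordinates, shifted back into full-image coordinates, form the
        # first set of coordinate information.
        threshold = np.percentile(magnitude, 99)
        ys, xs = np.nonzero(magnitude >= threshold)
        first_coords = np.column_stack([xs + x0, ys + y0]).astype(np.float32)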
  • Step 304 Adjust, by using the first set of coordinate information, the video frame in the collected video data, so that the second set of coordinate information of the pixel corresponding to the reference area in the adjusted video frame corresponds to the first set of coordinate information;
  • In an embodiment, the pixel points corresponding to the reference area in each video frame of the collected video data may be compared with the pixel points corresponding to the first set of coordinate information, so as to determine the associated pixel pairs; the manner of adjusting the video frame is then determined based on the associated pixel pairs.
  • Step 305 Store the adjusted video frames and combine the stored adjusted video frames into target video data for the collected video data.
  • In summary, the method according to the embodiment of the present invention determines a reference image in the collected video data and selects a reference area from the reference image, and then adjusts the video frames in the collected video data by using the first set of coordinate information corresponding to the pixel points in the reference area, so that the second set of coordinate information of the pixels corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information; in this way, adjustment of the collected video data is achieved. Further, the electronic device does not directly store the collected video data but stores the adjusted video frames to be combined into target video data for the collected video data. In this way, the problem of blurring of the stored video image due to jitter is avoided, and the user experience is improved.
  • This embodiment provides an image processing method. With the image processing method according to this embodiment, if a hand shake during video shooting causes the position of the picture to change, the changed pixels are pulled back to the positions calibrated before the change, so that the video picture does not shake and the picture remains stable. For example, when a user uses a mobile phone to shoot video at a concert, the static background of the concert scene can be used as the reference, and based on the changed pixel points in that background the video frames are pulled back to their original positions, while moving objects in the concert scene move normally; in this way, the problem of video image blurring caused by jitter is avoided while acquisition proceeds normally.
  • FIG. 6 is a schematic flowchart of an implementation of an image processing method according to Embodiment 2 of the present invention. As shown in FIG. 6, the image processing method includes:
  • Step 601 The electronic device uses the image collection device to collect video data.
  • Here, the image collection device may specifically be a camera disposed in the electronic device, for example, a front or rear camera in a mobile phone, or an external camera connected to the electronic device.
  • Step 602 Determine a first video frame in the collected video data as a reference image; the first video frame is a preview image for the video data presented in the electronic device;
  • It should be noted that the first video frame does not specifically refer to the first frame image in the video data but may be any video frame in the video data; correspondingly, the reference image may specifically be the captured image corresponding to any video frame of the video data. For example, as shown in FIG. 4, the electronic device displays the collected video data in real time on the framing interface; at this time, the electronic device can automatically determine the reference image, such as automatically determining the first frame image corresponding to the video data as the reference image, or the reference image may be a video frame selected by the electronic device from the collected video data based on a preset rule.
  • In practical applications, the reference image may also be selected based on a user operation. For example, as shown in FIG. 5, the electronic device may select the reference image based on a touch operation of the user and present the selected reference image in a partial area of the display area, such as the lower right corner of the display area, so that the user can observe the intercepted image; this provides a convenient condition for replacing the reference image in time.
  • Furthermore, in practical applications, the electronic device may start storing the video data only after the reference image is determined; the acquisition process before that serves only to determine the reference image, and the corresponding video data is not stored. In this way, the problem of blurring of the video image in the stored video data can be effectively avoided, which lays a foundation for improving the user experience.
  • Step 603 Select a reference area from the reference image, and determine a first group of coordinate information corresponding to the pixel points in the reference area;
  • In an embodiment, a reference area also needs to be selected from the reference image. Here, to facilitate adjustment of the video frames, an area corresponding to a fixed target body may be selected from the physical scene corresponding to the collected video data. Specifically, the electronic device selects an area corresponding to the fixed target body from the reference image, where the fixed target body is a target body in a fixed state (that is, a stationary state) collected by the electronic device in the collection area corresponding to the video data; further, the electronic device uses at least the area corresponding to the fixed target body in the reference image as the reference area, for example, the area corresponding to the background in a fixed state in the concert scene is used as the reference area.
  • Since the selected reference area is the area corresponding to a fixed object, for two adjacent video frames it can be assumed that the coordinates of the fixed object should remain unchanged during video collection. Therefore, when the coordinates of the fixed object in a video frame change relative to the reference image, the coordinates of the fixed object in the reference image may be used to adjust that video frame, thereby avoiding blurring of the video image. Moreover, since the selected reference area is an area corresponding to a fixed object, the adjustment algorithm is simpler and convenient to implement in the electronic device.
  • In other embodiments, an area corresponding to a fixed part of a moving target body may also be selected from the physical scene corresponding to the collected video data, and the area corresponding to that fixed part of the moving target body in the reference image is then used as the reference area. In practical applications, the selection process of the reference area can be determined arbitrarily according to actual conditions.
  • In an embodiment, the electronic device may select target pixel points from the reference area of the reference image based on the pixel features of the pixels corresponding to the reference area in the reference image. That is, in practical applications, not all pixel points in the reference area of the determined reference image are used as reference pixel points; instead, a part of the pixel points are selected as target pixel points, for example, pixels with obvious gray-scale changes are selected as target pixel points. The coordinate information corresponding to the target pixel points is then determined and used as the first set of coordinate information, thereby reducing the complexity of the adjustment algorithm.
  • Step 604 Determine a second set of coordinate information of the pixel points corresponding to the reference area in each video frame of the collected video data;
  • In an embodiment, the pixel points corresponding to the reference area in each video frame of the collected video data may be compared with the pixel points corresponding to the first set of coordinate information, so as to determine the associated pixel pairs; the manner of adjusting the video frame is then determined based on the associated pixel pairs.
  • In practical applications, the collected video data is based on the same physical scene, for example, the concert scene being collected. Therefore, after the reference area is determined, each subsequently collected video frame of the video data has a corresponding reference area; the positional relationship between two frames can thus be determined by using the change of the coordinates of the same reference area, in the same coordinate system, across different video frames (such as the reference image and each video frame), and the adjustment manner is then determined based on this positional relationship.
  • the second set of coordinate information is coordinate information of pixel points corresponding to the reference area in a video frame different from the reference image in the same video data.
  • Step 605 Determine a positional relationship between the reference image and each video frame based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame.
  • In an embodiment, the first set of coordinate information includes at least two first coordinates, and the second set of coordinate information includes at least two second coordinates; correspondingly, step 605 includes: selecting, from the at least two first coordinates and the at least two second coordinates, at least two sets of coordinate pairs whose pixel features match; the coordinate pair includes a first coordinate and a second coordinate whose pixel features match;
  • Here, the matching of pixel features refers to the matching of the pixel features of the same pixel in different video frames; for example, for a first pixel of the reference area in the reference image and the same first pixel of the reference area in a certain video frame, the pixel features of the two first pixels match, and they can be combined into one coordinate pair.
  • the electronic device determines a positional relationship between the reference image and each video frame based on at least two sets of coordinate pairs.
  • This embodiment uses this coordinate change characteristic to determine the positional relationship between the reference image and any video frame, and then adjusts each video frame based on the positional relationship.
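  • For illustration only (not part of the patent text): a minimal sketch, assuming OpenCV and NumPy, of steps 604 to 606 combined, locating the target pixel points in a frame to obtain the second set of coordinates, estimating the positional relationship, and adjusting the frame accordingly; function and variable names are hypothetical.

        import cv2
        import numpy as np

        def adjust_frame(reference_gray, frame, first_coords):
            # first_coords: float32 array of target pixel coordinates taken
            # from the reference area of the reference image.
            frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Locate the same pixel features in the current frame; this yields
            # the second set of coordinates, paired one-to-one with the first.
            second_coords, status, _err = cv2.calcOpticalFlowPyrLK(
                reference_gray, frame_gray,
                first_coords.reshape(-1, 1, 2), None)
            good = status.ravel() == 1
            pairs_ref = first_coords[good]
            pairs_cur = second_coords.reshape(-1, 2)[good]

            # Positional relationship between the reference image and this frame.
            M, _inliers = cv2.estimateAffinePartial2D(pairs_cur, pairs_ref,
                                                      method=cv2.RANSAC)

            # Pull the frame back so the reference area returns to its
            # calibrated position; the adjusted frame is what gets stored.
            h, w = frame.shape[:2]
            return cv2.warpAffine(frame, M, (w, h))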
  • Step 606 Adjust a video frame in the collected video data according to the location relationship.
  • Step 607 Store the adjusted video frames, and combine the stored adjusted video frames into target video data for the collected video data.
  • In summary, the method according to the embodiment of the present invention determines a reference image in the collected video data and selects a reference area from the reference image, and then adjusts the video frames in the collected video data by using the first set of coordinate information corresponding to the pixel points in the reference area, so that the second set of coordinate information of the pixels corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information; in this way, adjustment of the collected video data is achieved. Further, the electronic device does not directly store the collected video data but stores the adjusted video frames to be combined into target video data for the collected video data. In this way, the problem of blurring of the stored video image due to jitter is avoided, and the user experience is improved.
  • Step 701 The electronic device intercepts a reference image from the preview image and determines a reference area.
  • After the electronic device enters the video mode, the user selects a scene to be shot, and the preview interface contains the scene and angle that the user wants to shoot. When the user taps to shoot, the electronic device intercepts a preview photo, uses the preview image as the reference image, and selects the reference area of the reference image.
  • Step 702 After the reference area is determined, pixels with significant features may be selected in the reference area as target pixel points, and the coordinates of the target pixel points are used as the first set of coordinate information for subsequent image registration. For example, features with significant characteristics may be extracted from the reference area (see the illustrative sketch after the following list):
  • Here, depending on the features of the scene to be shot, pixels of regions (forests, lakes, farmland, etc.), lines (region boundaries, roads, etc.), or points (corner points of a region, intersections of lines, high-curvature points on a curve, etc.) may be selected as the target pixel points; the extracted target pixel points can be distributed anywhere in the image.
  • Of course, in practical applications, pixel-feature variation characteristics, such as gray-level variation, may also be consulted when determining the target pixel points; specifically, the pixel points corresponding to points, lines, and areas with marked gray-level changes in the reference region may be used as the target pixel points.
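As one concrete reading of step 702, the sketch below selects target pixel points inside the reference area by gray-level variation, using gradient magnitude as the measure of a "salient feature". The threshold and the cap on the number of points are illustrative assumptions.

```python
import numpy as np

def select_target_pixels(gray, region, grad_thresh=40.0, max_points=200):
    # gray: 2-D uint8 array (grayscale reference image);
    # region: (x0, y0, x1, y1) bounding box of the reference area.
    x0, y0, x1, y1 = region
    patch = gray[y0:y1, x0:x1].astype(np.float32)
    gy, gx = np.gradient(patch)                    # finite-difference gray-level gradient
    magnitude = np.hypot(gx, gy)
    ys, xs = np.nonzero(magnitude > grad_thresh)   # keep marked gray-level changes
    order = np.argsort(magnitude[ys, xs])[::-1][:max_points]
    # The first set of coordinate information, in full-image (x, y) coordinates.
    return np.stack([xs[order] + x0, ys[order] + y0], axis=1)
```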
  • Step 703: Determine the coordinate information of the target pixel points in the reference area corresponding to each frame of the collected video data to obtain the second set of coordinate information, and, based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame, select the coordinate pairs whose pixel features match.
  • Here, a feature matching algorithm is applied to the coordinate sets of the two images (the reference image and the image corresponding to any video frame) to determine the coordinate pairs whose pixel features match. The coordinates of pixel points whose pixel features do not match may be deleted, or the corresponding coordinates may be estimated by interpolation or similar methods, so as to achieve pixel-by-pixel registration between the two images.
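Step 703 leaves the feature matching algorithm open. As one hedged possibility, the sketch below matches each target pixel by comparing the small gray-level patch around it in the reference image against nearby patches in the current frame, deleting points whose best match is too weak; the patch size, search radius, and rejection threshold are assumptions.

```python
import numpy as np

def match_coordinates(ref_gray, frame_gray, first_coords, radius=8, half=3, max_ssd=500.0):
    # Returns coordinate pairs: (first coordinate, matching second coordinate).
    pairs = []
    for x, y in first_coords:
        template = ref_gray[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
        if template.shape != (2 * half + 1, 2 * half + 1):
            continue                                  # too close to the image border
        best, best_xy = np.inf, None
        for dy in range(-radius, radius + 1):         # local exhaustive search
            for dx in range(-radius, radius + 1):
                cand = frame_gray[y + dy - half:y + dy + half + 1,
                                  x + dx - half:x + dx + half + 1].astype(np.float32)
                if cand.shape != template.shape:
                    continue
                ssd = float(np.mean((cand - template) ** 2))
                if ssd < best:
                    best, best_xy = ssd, (x + dx, y + dy)
        if best_xy is not None and best <= max_ssd:
            pairs.append(((int(x), int(y)), best_xy))  # unmatched coordinates are deleted
    return pairs
```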
  • Step 704: Determine the positional relationship between the reference image and each video frame in the video data based on the coordinate pairs formed by the first set of coordinate information and the second set of coordinate information corresponding to each video frame.
  • Here, the positional relationship may specifically be a spatial transformation model, that is, a mapping model between the two images to be registered (such as the reference image and each video frame). For example, if the two images coincide after rotation and translation, the model is a rigid-body transformation model; if scaling is also required, even with the same scaling amplitude in the X and Y directions, the model is an affine transformation or a nonlinear transformation model.
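For the rigid-body case named above, the mapping between a point (x, y) in one image and the corresponding point (x', y') in the other can be written with a rotation angle θ and a translation (t_x, t_y); this standard form is implied by the text rather than stated in it, and registration amounts to recovering the three parameters t_x, t_y, and θ:

```latex
\begin{pmatrix} x' \\ y' \end{pmatrix}
=
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+
\begin{pmatrix} t_x \\ t_y \end{pmatrix}
```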
  • Specifically, suppose a certain video frame in the video data is image A. Shifting A 2 pixels to the left and 3 pixels down, then rotating it 60 degrees clockwise, yields image B, which is assumed to be the reference image. The process of registering A and B is then the process of determining the three parameters 2, 3, and 60, because once these three parameters are determined, the correspondence between image B and image A can be obtained. Of course, in practical applications it is difficult to determine these three parameters directly; therefore, the set of parameters under which the two images have the largest number of coincident pixels can be taken as the transformation parameters.
  • For example, the solution space of the three parameters is traversed: the number of translation pixels in the X direction is searched from -100 to +100 with a step of 1 pixel, the translation pixels in the Y direction are likewise searched from -100 to +100 with a step of 1 pixel, and the rotation angle is searched from 0 to 360 degrees with a step of 1 degree, completing a loop of 200*200*360 iterations. In each iteration, the number of coincident pixels between image B and image A transformed with the current loop parameters is determined; the parameters used in the iteration with the largest number of coincident pixels among the 200*200*360 iterations are found, and those parameters can be taken as the transformation parameters.
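The sketch below mirrors this exhaustive search, but over a deliberately smaller grid (±10 pixels and a ±5-degree angle range, both assumptions) so that it runs in reasonable time; the ±100-pixel, 0-360-degree loop of the text is the same idea with larger ranges. A pixel is counted as coincident when the transformed gray value and the reference gray value differ by at most a tolerance, which is also an assumption.

```python
import numpy as np

def count_coincident(a_gray, b_gray, dx, dy, theta_deg, tol=10):
    # Count pixels of A that land on near-equal pixels of B under (dx, dy, theta).
    h, w = a_gray.shape
    theta = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.cos(theta) * xs - np.sin(theta) * ys + dx   # rigid transform of A's grid
    yr = np.sin(theta) * xs + np.cos(theta) * ys + dy
    xi, yi = np.round(xr).astype(int), np.round(yr).astype(int)
    inside = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    diff = np.abs(b_gray[yi[inside], xi[inside]].astype(int)
                  - a_gray[ys[inside], xs[inside]].astype(int))
    return int(np.count_nonzero(diff <= tol))

def search_rigid_params(a_gray, b_gray):
    # Traverse a small solution space; keep the parameters with most coincident pixels.
    best, best_n = (0, 0, 0), -1
    for dx in range(-10, 11):
        for dy in range(-10, 11):
            for theta in range(-5, 6):                  # degrees, step 1
                n = count_coincident(a_gray, b_gray, dx, dy, theta)
                if n > best_n:
                    best_n, best = n, (dx, dy, theta)
    return best
```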
  • Step 705: Adjust each video frame in the video data based on the positional relationship.
  • Of course, in practical applications the transformation method can be selected according to actual needs, as long as the video frames can be calibrated. Here, nearest-neighbor interpolation is taken as an example to explain the adjustment process in detail.
  • Suppose the destination pixel, that is, the target pixel in the video frame to be adjusted, has coordinates F(x1, y1). The floating-point coordinates P(u, v) of the corresponding point of this target pixel on the reference image can then be obtained by the inverse transformation method: using the positional relationship obtained in step 704, for example the three parameters 2, 3, and 60, (x1, y1) is inversely adjusted to obtain the coordinate point of the target pixel on the reference image, namely the coordinates P(u, v). In practical applications, u and v may be fractional, while an actual image has no pixel points at fractional coordinates; therefore, the determined floating-point coordinates P(u, v) need to be adjusted further. As shown in FIG. 8, P(u, v) is adjusted by the nearest-neighbor interpolation formula: specifically, (u+0.5) may be rounded to obtain m, and (v+0.5) may be rounded to obtain n; the pixel points (m, n), (m+1, n), (m, n+1), and (m+1, n+1) are then determined, the differences between the pixel values of (m, n), (m+1, n), (m, n+1), (m+1, n+1) and that at P(u, v) are determined respectively, and the pixel with the smallest difference is taken as the point corresponding to (x1, y1) after adjustment; for example, if the point with the smallest difference is (m, n), the coordinate g(m, n) is the point corresponding to (x1, y1) after adjustment. Here, the coordinate g(m, n) can be regarded as the corresponding coordinate on the reference image. In this way, video smoothness can be improved, enhancing the competitiveness of the electronic product. It is worth noting that this embodiment applies only to cases where the shooting scene does not change, not to cases where the scene changes.
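As a sketch of this inverse-mapping step: for each destination pixel F(x1, y1) of the adjusted frame, the inverse of the (dx, dy, theta) transform gives the floating-point coordinates P(u, v) in the source frame, which are then snapped to an integer pixel. Plain (u+0.5, v+0.5) rounding is used here in place of the four-neighbor pixel-value comparison described above, which is a simplification.

```python
import numpy as np

def adjust_frame(frame, params):
    # Pull each pixel back through the inverse rigid transform, nearest neighbor.
    dx, dy, theta_deg = params
    h, w = frame.shape[:2]
    theta = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse of rotate-then-translate: undo the translation, then rotate by -theta.
    xt, yt = xs - dx, ys - dy
    u = np.cos(theta) * xt + np.sin(theta) * yt
    v = -np.sin(theta) * xt + np.cos(theta) * yt
    m = np.floor(u + 0.5).astype(int)   # (u + 0.5) rounded down, as in the text
    n = np.floor(v + 0.5).astype(int)
    out = np.zeros_like(frame)
    inside = (m >= 0) & (m < w) & (n >= 0) & (n < h)
    out[ys[inside], xs[inside]] = frame[n[inside], m[inside]]
    return out
```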
  • Embodiment 3: This embodiment provides an electronic device; as shown in FIG. 9, the electronic device includes:
  • an image acquisition unit 91, configured to collect video data by using an image acquisition device;
  • a determining unit 92, configured to determine the first video frame in the collected video data as a reference image, the first video frame being a preview image for the video data presented in the electronic device; and further configured to select a reference area from the reference image and determine the first set of coordinate information corresponding to the pixel points in the reference area;
  • an adjusting unit 93, configured to adjust the video frames in the collected video data by using the first set of coordinate information, so that the second set of coordinate information of the pixel points corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information;
  • a storage unit 94, configured to store the adjusted video frames and combine the stored adjusted video frames into target video data for the collected video data.
  • In an embodiment, the determining unit 92 is further configured to select the area corresponding to a fixed target body from the reference image and use at least the area corresponding to the fixed target body in the reference image as the reference area; the fixed target body is a target body in a fixed state collected by the electronic device in the collection area corresponding to the video data.
  • In an embodiment, the determining unit 92 is further configured to select target pixel points from the reference area of the reference image based on the pixel features of the pixel points corresponding to the reference area in the reference image, determine the coordinate information corresponding to the target pixel points, and use the coordinate information corresponding to the target pixel points as the first set of coordinate information.
  • In an embodiment, the determining unit 92 is further configured to determine the second set of coordinate information of the pixel points corresponding to the reference area in each video frame of the collected video data, and to determine the positional relationship between the reference image and each video frame based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame;
  • the adjusting unit 93 is further configured to adjust the video frames in the collected video data according to the positional relationship.
  • In an embodiment, the first set of coordinate information contains at least two first coordinates, and the second set of coordinate information contains at least two second coordinates;
  • the determining unit 92 is further configured to select, from the at least two first coordinates and the at least two second coordinates, at least two coordinate pairs whose pixel features match, and to determine the positional relationship between the reference image and each video frame based on the at least two coordinate pairs; each coordinate pair includes one first coordinate and one second coordinate whose pixel features match.
  • In an embodiment, the determining unit 92 is further configured to present the selected reference image in a partial area of the display area, so that the user can observe the reference image.
  • It is worth noting that the image acquisition unit 91 can be implemented by an image processing device such as a camera, and that the determining unit 92, the adjusting unit 93, and the storage unit 94 can each be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • This embodiment further provides an electronic device, including:
  • an image acquisition device, configured to collect video data;
  • a processor, configured to determine the first video frame in the collected video data as a reference image, the first video frame being a preview image for the video data presented in the electronic device; further configured to select a reference area from the reference image and determine the first set of coordinate information corresponding to the pixel points in the reference area; to adjust the video frames in the collected video data by using the first set of coordinate information, so that the second set of coordinate information of the pixel points corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information; and to store the adjusted video frames and combine the stored adjusted video frames into target video data for the collected video data.
  • In a specific embodiment, the processor is further configured to select the area corresponding to a fixed target body from the reference image and use at least the area corresponding to the fixed target body in the reference image as the reference area; the fixed target body is a target body in a fixed state collected by the electronic device in the collection area corresponding to the video data.
  • In another specific embodiment, the processor is further configured to select target pixel points from the reference area of the reference image based on the pixel features of the pixel points corresponding to the reference area in the reference image, determine the coordinate information corresponding to the target pixel points, and use the coordinate information corresponding to the target pixel points as the first set of coordinate information.
  • In another specific embodiment, the processor is further configured to determine the second set of coordinate information of the pixel points corresponding to the reference area in each video frame of the collected video data, and to determine the positional relationship between the reference image and each video frame based on the first set of coordinate information and the second set of coordinate information corresponding to each video frame; and is further configured to adjust the video frames in the collected video data according to the positional relationship.
  • In another specific embodiment, the first set of coordinate information contains at least two first coordinates, and the second set of coordinate information contains at least two second coordinates;
  • correspondingly, the processor is further configured to select, from the at least two first coordinates and the at least two second coordinates, at least two coordinate pairs whose pixel features match, and to determine the positional relationship between the reference image and each video frame based on the at least two coordinate pairs; each coordinate pair includes one first coordinate and one second coordinate whose pixel features match.
  • In another specific embodiment, the processor is further configured to present the selected reference image in a partial region of the display area, so that the user can observe the reference image.
  • Further, this embodiment provides an electronic device, including: a processor and a memory for storing a computer program capable of running on the processor, wherein the processor is configured to perform the steps of the method described above when running the computer program.
  • Here, the processor may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like; it may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. A software module may be located in a storage medium; the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware.
  • This embodiment also provides a computer-readable storage medium, for example a memory including a computer program; the computer program can be executed by a processor of an electronic device to complete the steps of the foregoing methods. The computer-readable storage medium may be a ferromagnetic random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any device including one of, or any combination of, the above memories.
  • It should be understood that reference throughout the specification to "an embodiment" means that a particular feature, structure, or characteristic associated with that embodiment is included in at least one embodiment of the present invention; appearances of "in an embodiment" or "in another embodiment" in various places throughout the specification therefore do not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • It should also be understood that, in the various embodiments of the present invention, the size of the sequence numbers of the above processes does not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are merely for description and do not represent the relative merits of the embodiments.
  • In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
  • Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
  • Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
  • In the embodiments of the present invention, a reference image is determined in the collected video data, a reference area is selected from the reference image, and the first set of coordinate information corresponding to the pixel points in the reference area is then used to adjust the video frames in the collected video data, so that the second set of coordinate information of the pixel points corresponding to the reference area in the adjusted video frames corresponds to the first set of coordinate information; the collected video data is thus adjusted as it is captured. Moreover, the electronic device does not store the collected video data directly, but stores the adjusted video frames and combines them into target video data for the collected video data; this avoids blurring of the stored video images caused by shaking and improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Telephone Function (AREA)

Abstract

本发明实施例公开了一种图像处理方法,包括:电子设备利用图像采集装置采集视频数据;将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。本发明实施例同时还公开了一种电子设备和存储介质。

Description

一种图像处理方法及电子设备、存储介质
相关申请的交叉引用
本申请基于申请号为201610619353.9、申请日为2016年07月29日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明涉及图像处理技术,尤其涉及一种图像处理方法及电子设备、存储介质。
背景技术
通常用户使用手机采集视频时会出现这样的问题：视频采集时预览界面是清晰的，但是播放采集到的视频数据时却是模糊的；这是由于用户在视频采集过程中会有轻微抖动的现象，而对于没有光学防抖功能的手机而言，这些抖动足以影响最终的成像质量，且上述情况在光线较暗快门较慢的情况下更容易发生。因此，如何防止视频采集时由于用户抖动而导致的图像模糊的问题，成为了现有图像处理中亟需解决的问题。
发明内容
有鉴于此,本发明实施例为解决现有技术中存在的至少一个问题而提供一种图像处理方法及电子设备、存储介质。
本发明实施例的技术方案是这样实现的:
本发明实施例第一方面提供了一种图像处理方法,包括:
电子设备利用图像采集装置采集视频数据;
将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;
从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
上述方案中,所述从参考图像中选取出参考区域,包括:
从所述参考图像中选取出固定目标体对应的区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体;
至少将所述参考图像中所述固定目标体对应的区域作为参考区域。
上述方案中,所述方法还包括:
基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点;
对应地,所述确定出所述参考区域中像素点对应的第一组坐标信息,包括:
确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
上述方案中,所述利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,包括:
确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点 的第二组坐标信息;
基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
根据所述位置关系对采集到的视频数据中的视频帧进行调整。
上述方案中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
对应地,所述基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系,包括:
从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标;
基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系。
上述方案中,所述方法还包括:
将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
本发明实施例第二方面提供了一种电子设备,包括:
图像采集单元,配置为利用图像采集装置采集视频数据;
确定单元,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
调整单元,配置为利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
存储单元,配置为存储调整后的视频帧,并基于存储的调整后的视频 帧组合成针对采集到的所述视频数据的目标视频数据。
上述方案中,所述确定单元,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
上述方案中,所述确定单元,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
上述方案中,所述确定单元,还配置为确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
所述调整单元,还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
上述方案中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
所述确定单元,还配置为从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
上述方案中,所述确定单元,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
本发明实施例第三方面提供了一种电子设备,包括:
图像采集装置,配置为采集视频数据;
处理器,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
上述方案中,所述处理器,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
上述方案中,所述处理器,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
上述方案中,所述处理器,还配置为确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
上述方案中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
对应地,所述处理器,还配置为从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少 两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
上述方案中,所述处理器,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
本发明实施例第四方面提供了一种电子设备,包括:处理器和用于存储能够在处理器上运行的计算机程序的存储器,所述处理器用于运行所述计算机程序时,执行以上所述方法的步骤。
本发明实施例第五方面提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现以上所述方法的步骤。
本发明实施例所述的图像处理方法及电子设备、存储介质,通过在采集视频数据中确定参考图像,并从所述参考图像中选取出参考区域,进而利用所述参考区域中像素点对应的第一组坐标信息对采集到的视频数据中的视频帧进行调整,使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,如此,实现对采集到的视频数据进行实施调整的目的;而且,所述电子设备并非直接对采集得到的视频数据进行存储,而是存储调整后的视频帧以组合成针对采集到的所述视频数据的目标视频数据,这样,避免了由于抖动而是存储的视频图像模糊的问题,提升了用户体验。
附图说明
图1为实现本发明各个实施例的一个可选的移动终端100的硬件结构示意图;
图2为如图1所示的移动终端100的无线通信系统示意图;
图3为本发明实施例一图像处理方法的实现流程示意图;
图4为本发明实施例电子设备在取景界面中呈现视频数据的示意图;
图5为本发明实施例电子设备基于用户操作选取参考图像的示意图;
图6为本发明实施例二图像处理方法的实现流程示意图;
图7为本发明实施例图像处理方法的具体应用的实现流程示意图;
图8为本发明实施例利用近邻插值公式对P(u,v)进行调整的原理示意图;
图9为本发明实施例电子设备的逻辑单元的结构示意图。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本发明的技术方案,并不用于限定本发明的保护范围。
现在将参考附图描述实现本发明各个实施例的电子设备,这里,所述电子设备可以具体为移动终端。进一步地,在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明的说明,其本身并没有特定的意义。因此,“模块”与“部件”可以混合地使用。
移动终端可以以各种形式来实施。例如,本发明中描述的终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、个人数字助理(PDA)、平板电脑(PAD)、便携式多媒体播放器(PMP)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。
图1为实现本发明各个实施例的移动终端100的硬件结构示意,如图1所示,移动终端100可以包括无线通信单元110、音频/视频(A/V)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图1示出了具有各种组件的移动终端100,但是应理解的是,并不要求实施所有示出的组件。可以替代 地实施更多或更少的组件。将在下面详细描述移动终端100的元件。
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信。例如,无线通信单元110可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H),前向链路媒体(MediaFLO@)的数据广播系统、地面数字广播综合服务(ISDB-T)等等的数字广播系统接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或 多媒体消息发送和/或接收的各种类型的数据。
无线互联网模块113支持移动终端100的无线互联网接入。无线互联网模块113可以内部或外部地耦接到终端。无线互联网模块113所涉及的无线互联网接入技术可以包括无线局域网(WLAN)、无线相容性认证(Wi-Fi)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA)等等。
短程通信模块114是用于支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。
位置信息模块115是用于检查或获取移动终端100的位置信息的模块。位置信息模块115的典型示例是全球定位系统(GPS)模块115。根据当前的技术,GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端100的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语 音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端100的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
感测单元140检测移动终端100的当前状态,(例如,移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即,触摸输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等,并且生成用于控制移动终端100的操作的命令或信号。例如,当移动终端100实施为滑动型移动电话时,感测单元140可以感测该滑动型电话是打开还是关闭。另外,感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口(典型示例是通用串行总线USB端口)、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为“识别装置”)可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。
接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端100和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端100的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端100是否准确地安装在底座上的信号。
输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端100可以包括外部显示单元(未 示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出模块152可以在移动终端100处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁 性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端100的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现或回放多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,已经按照其功能描述了移动终端100。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端100等等的各种类型的移动终端100中的滑动型移动终端100作为示例。因此,本发明能够应用于任何类型的移动终端100,并且不限于滑动型移动终端100。
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数 据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。
现在将参考图2描述其中根据本发明的移动终端100能够操作的通信系统。
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信系统(UMTS)(特别地,长期演进(LTE))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。
参考图2,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC 280被构造为与公共电话交换网络(PSTN)290形成接口。MSC 280还被构造为与可以经由回程线路耦接到基站270的BSC 275形成接口。回程线路可以根据若干己知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图2中所示的系统可以包括多个BSC 2750。
每个BS 270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS 270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS 270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS 270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语“基站”可以用于笼统地表示单个BSC 275和至少一个BS 270。基站也可以被称为“蜂窝站”。或者,特定BS 270的各分区可以被称为多个蜂窝站。
如图2中所示,广播发射器(BT)295将广播信号发送给在系统内操 作的移动终端100。如图1中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示出了几个卫星300,例如可以采用全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图2中,描绘了多个卫星300,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图1中所示的GPS模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端100的位置的其它技术。另外,至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信系统的一个典型操作,BS 270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS 270内进行处理。获得的数据被转发给相关的BSC 275。BSC提供通话资源分配和包括BS 270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC 280,其提供用于与PSTN 290形成接口的额外的路由服务。类似地,PSTN 290与MSC 280形成接口,MSC与BSC 275形成接口,并且BSC 275相应地控制BS 270以将正向链路信号发送到移动终端100。
移动终端中无线通信单元110的移动通信模块112基于移动终端内置的接入移动通信网络(如2G/3G/4G等移动通信网络)的必要数据(包括用户识别信息和鉴权信息)接入移动通信网络为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据)。
无线通信单元110的无线互联网模块113通过运行无线热点的相关协议功能而实现无线热点的功能,无线热点支持多个移动终端(移动终端之外的任意移动终端)接入,通过复用移动通信模块112与移动通信网络之 间的移动通信连接为移动终端用户的网页浏览、网络多媒体播放等业务传输移动通信数据(包括上行的移动通信数据和下行的移动通信数据),由于移动终端实质上是复用移动终端与通信网络之间的移动通信连接传输移动通信数据的,因此移动终端消耗的移动通信数据的流量由通信网络侧的计费实体计入移动终端的通信资费,从而消耗移动终端签约使用的通信资费中包括的移动通信数据的数据流量。
基于上述移动终端100硬件结构以及通信系统,提出本发明方法各个实施例。
实施例一
本实施例提供了一种图像处理方法；这里，采用本实施例所述的图像处理方法，在视频拍摄过程中，若出现手抖导致画面位置发生变化的情况时，会将变化的像素点拉回到变化之前标定的位置，从而使视频画面不会晃动，达到画面稳定的效果；例如，用户利用手机对演唱会现场进行视频拍摄时，可以利用演唱会现场中处于静止状态的背景作为参考点，并基于背景中变化的像素点，将视频帧拉回到原有位置，而演唱会现场中的移动物体正常移动，如此，在保证正常采集的前提下，避免了由于抖动而导致的视频图像模糊的问题。
具体地,图3为本发明实施例一图像处理方法的实现流程示意图,如图3所示,该图像处理方法包括:
步骤301:电子设备利用图像采集装置采集视频数据;
本实施例中,所述图像采集装置可以具体为设置于所述电子设备中的摄像头,例如,手机中的前置或后置摄像头;也可以具体为与电子设备连接的外置摄像头。
步骤302:将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;
本实施例中,所述第一视频帧并非具体指所述视频数据中的第一帧图像;而是指所述视频数据中的任一帧视频图像;对应地,所述参考图像可以具体为采集到的所述视频数据中的任一视频帧对应的图像,例如,如图4所示,所述电子设备在取景界面实时呈现采集到的视频数据,此时,所述电子设备可以自动确定出参考图像,如自动将采集视频数据时所对应的第一帧图像确定为所述参考图像;或者,所述参考图像为电子设备基于预设规则,在采集到的视频数据中所筛选出的某一视频帧。当然,在实际应用中,所述参考图像还可以是基于用户操作而选取出的,例如,如图5所示,当所述电子设备在取景界面呈现实时采集到的视频数据时,所述电子设备可以基于用户的触控操作选取出参考图像,并将选取出的参考图像呈现于显示区域的部分区域中,如将所述参考图像呈现在所述显示区域的右下角,以便于用户能够观察到该截取的图像,为及时更换所述参考图像提供了便利条件。
进一步地,在实际应用中,为使所述电子设备存储的视频数据中的视频帧即为调整后的视频帧,所述电子设备可以在确定出参考图像后才开始存储视频数据,而存储视频数据之前的采集过程仅为确定参考图像而服务,并不会被存储,这样,能够有效避免存储的视频数据中存在视频图像模糊的问题,为提供用户体验奠定了基础。
步骤303:从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
在一实施例中，当所述电子设备确定出参考图像后，还需要在所述参考图像中选取出参考区域；这里，为便于对视频帧进行调整，可以从采集到的视频数据对应的实体场景中选取固定目标体对应的区域；具体地，所述电子设备从所述参考图像中选取出固定目标体对应的区域；这里，所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态（也即静止状态）的目标体；进而所述电子设备将所述参考图像中所述固定目标体对应的区域作为参考区域，比如，将演唱会现场中处于固定状态的背景对应的区域设置为参考区域。这样，由于选取出的参考区域为固定物体对应的区域，所以，对于相邻两个视频帧而言，在视频采集过程中，可以认定相邻两个视频帧中该固定物体的坐标不变，进而当与参考图像对应的下一视频帧中的固定物体的坐标发生变化时，可以利用所述参考图像中所述固定物体的坐标来对该下一视频帧进行调整，如此，避免视频图像模糊。而且，由于选取的参考区域为固定物体对应的区域，所以，调整算法复杂度较低，便于在电子设备中实现。
当然,在实际应用中,还可以从采集到的视频数据对应的实体场景中选取移动目标体中的固定部分对应的区域,进而将所述参考图像中所述移动目标体中的固定部分对应的区域作为参考区域。这里,为匹配不同的算法复杂度,参考区域的选取过程可以根据实际情况而任意确定。
在另一实施例中,所述电子设备可以基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,也就是说,在实际应用中,并非将确定出的参考图像的参考区域中的所有像素点均作为参考像素点,而是从中选取出部分像素点作为目标像素点,例如选取出灰度变化明显的像素点作为目标像素点,进而确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息,进而降低调整算法复杂度。
步骤304:利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
在实际应用中,可以将采集到的视频数据中每一视频帧的所述参考区域对应的像素点与第一组坐标信息对应的像素点进行比对,进而确定出相 关联的像素对,进而基于相关联的像素对确定对视频帧进行调整的调整方式。
步骤305:存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
这样,本发明实施例所述的方法,通过在采集视频数据中确定参考图像,并从所述参考图像中选取出参考区域,进而利用所述参考区域中像素点对应的第一组坐标信息对采集到的视频数据中的视频帧进行调整,使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,如此,实现对采集到的视频数据进行实施调整的目的;而且,所述电子设备并非直接对采集得到的视频数据进行存储,而是存储调整后的视频帧以组合成针对采集到的所述视频数据的目标视频数据,这样,避免了由于抖动而是存储的视频图像模糊的问题,提升了用户体验。
实施例二
本实施例提供了一种图像处理方法；这里，采用本实施例所述的图像处理方法，在视频拍摄过程中，若出现手抖导致画面位置发生变化的情况时，会将变化的像素点拉回到变化之前标定的位置，从而使视频画面不会晃动，达到画面稳定的效果；例如，用户利用手机对演唱会现场进行视频拍摄时，可以利用演唱会现场中处于静止状态的背景作为参考点，并基于背景中变化的像素点，将视频帧拉回到原有位置，而演唱会现场中的移动物体正常移动，如此，在保证正常采集的前提下，避免了由于抖动而导致的视频图像模糊的问题。
具体地,图6为本发明实施例二图像处理方法的实现流程示意图,如图6所示,该图像处理方法包括:
步骤601:电子设备利用图像采集装置采集视频数据;
本实施例中,所述图像采集装置可以具体为设置于所述电子设备中的 摄像头,例如,手机中的前置或后置摄像头;也可以具体为与电子设备连接的外置摄像头。
步骤602:将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;
本实施例中,所述第一视频帧并非具体指所述视频数据中的第一帧图像;而是指所述视频数据中的任一帧视频图像;对应地,所述参考图像可以具体为采集到的所述视频数据中的任一视频帧对应的图像,例如,如图4所示,所述电子设备在取景界面实时呈现采集到的视频数据,此时,所述电子设备可以自动确定出参考图像,如自动将采集视频数据时所对应的第一帧图像确定为所述参考图像;或者,所述参考图像为电子设备基于预设规则,在采集到的视频数据中所筛选出的某一视频帧。当然,在实际应用中,所述参考图像还可以是基于用户操作而选取出的,例如,如图5所示,当所述电子设备在取景界面呈现实时采集到的视频数据时,所述电子设备可以基于用户的触控操作选取出参考图像,并将选取出的参考图像呈现于显示区域的部分区域中,如将所述参考图像呈现在所述显示区域的右下角,以便于用户能够观察到该截取的图像,为及时更换所述参考图像提供了便利条件。
进一步地,在实际应用中,为使所述电子设备存储的视频数据中的视频帧即为调整后的视频帧,所述电子设备可以在确定出参考图像后才开始存储视频数据,而存储视频数据之前的采集过程仅为确定参考图像而服务,并不会被存储,这样,能够有效避免存储的视频数据中存在视频图像模糊的问题,为提供用户体验奠定了基础。
步骤603:从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
在一实施例中，当所述电子设备确定出参考图像后，还需要在所述参考图像中选取出参考区域；这里，为便于对视频帧进行调整，可以从采集到的视频数据对应的实体场景中选取固定目标体对应的区域；具体地，所述电子设备从所述参考图像中选取出固定目标体对应的区域；这里，所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态（也即静止状态）的目标体；进而所述电子设备将所述参考图像中所述固定目标体对应的区域作为参考区域，比如，将演唱会现场中处于固定状态的背景对应的区域设置为参考区域。这样，由于选取出的参考区域为固定物体对应的区域，所以，对于相邻两个视频帧而言，在视频采集过程中，可以认定相邻两个视频帧中该固定物体的坐标不变，进而当与参考图像对应的下一视频帧中的固定物体的坐标发生变化时，可以利用所述参考图像中所述固定物体的坐标来对该下一视频帧进行调整，如此，避免视频图像模糊。而且，由于选取的参考区域为固定物体对应的区域，所以，调整算法复杂度较低，便于在电子设备中实现。
当然,在实际应用中,还可以从采集到的视频数据对应的实体场景中选取移动目标体中的固定部分对应的区域,进而将所述参考图像中所述移动目标体中的固定部分对应的区域作为参考区域。这里,为匹配不同的算法复杂度,参考区域的选取过程可以根据实际情况而任意确定。
在另一实施例中,所述电子设备可以基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,也就是说,在实际应用中,并非将确定出的参考图像的参考区域中的所有像素点均作为参考像素点,而是从中选取出部分像素点作为目标像素点,例如选取出灰度变化明显的像素点作为目标像素点,进而确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息,进而降低调整算法复杂度。
步骤604:确定采集到所述视频数据的每一视频帧中所述参考区域对应 的像素点的第二组坐标信息;
在实际应用中,可以将采集到的视频数据中每一视频帧的所述参考区域对应的像素点与第一组坐标信息对应的像素点进行比对,进而确定出相关联的像素对,进而基于相关联的像素对确定对视频帧进行调整的调整方式。
本实施例中,采集到的视频数据是基于同一实体场景的,例如,对演唱会现场进行采集,所以,在确定出参考区域后,随后采集的视频数据的每一视频帧中均对应有该参考区域,因此,可以利用不同视频帧(如参考图像与每一视频帧)中的同一参考区域在相同坐标下坐标的变化值来确定两帧图像的位置关系,进而基于该位置关系确定出调整方式。这里,所述第二组坐标信息即为同一视频数据中的与参考图像不同的视频帧中所述参考区域对应的像素点的坐标信息。
步骤605:基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
在一具体实施例中，所述第一组坐标信息中至少包含有两个第一坐标；所述第二组坐标信息中至少包括有两个第二坐标；对应地，步骤605具体包括：
从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标;这里,所述像素特征相匹配指的是不同视频帧中的同一像素点像素特征相匹配,例如,参考图像中的参考区域的第一像素点与某一视频帧中该参考区域的同一第一像素点,此两个第一像素点的像素特征相匹配,能够组合成一个坐标对。进一步地,所述电子设备基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系。这里,当用户抖动而使得电子设备抖动时,同一像素点在同一坐标系下的坐标必 然会发生变化,所以,本实施例利用了该变化特征来确定参考图像与任一视频帧之间的位置关系,进而基于位置关系,对每一视频帧进行调整。
步骤606:根据所述位置关系对采集到的视频数据中的视频帧进行调整;
步骤607:存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
这样,本发明实施例所述的方法,通过在采集视频数据中确定参考图像,并从所述参考图像中选取出参考区域,进而利用所述参考区域中像素点对应的第一组坐标信息对采集到的视频数据中的视频帧进行调整,使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,如此,实现对采集到的视频数据进行实施调整的目的;而且,所述电子设备并非直接对采集得到的视频数据进行存储,而是存储调整后的视频帧以组合成针对采集到的所述视频数据的目标视频数据,这样,避免了由于抖动而是存储的视频图像模糊的问题,提升了用户体验。
以下通过具体应用场景对本发明实施例做进一步详细说明;具体地,所述电子设备开启相机,并选择视频模式;随后,如图7所示,
步骤701:所述电子设备从预览画面中截取参考图像,并确定出参考区域。
在实际应用中,电子设备进入视频模式后,用户会选择一个要拍的景物,预览界面中包含用户预要拍摄的景物和角度。此时,点击拍摄,电子设备会截取一张预览照片,将该预览图像作为参考图像,同时,选取出该参考图像的参考区域。
步骤702:当确定出参考区域后,可以在参考区域中选取出特征显著的像素点作为目标像素点,进而将目标像素点的坐标作为第一组坐标信息,以用于后续的图像配准。
这里,可以根据要拍摄的景物的特征,选取区域(森林、湖泊、农田等)、线(区域的边界、道路等)或点(区域的角点、线的交点、曲线上的高曲率点等)等的像素作为目标像素点;这里,提取出的目标像素点可以分布在图像任何地方。
当然,在实际应用中,在确定目标像素点的时候还可以参考像素特征变化特征,如灰度变化情况,具体地,可以将参考区域中灰度变化明显的点、线、区域等对应的像素点作为目标像素点;
步骤703:确定采集到的视频数据中每一帧对应的所述参考区域中的所述目标像素点的坐标信息,得到第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,选取出像素特征相匹配的坐标对;
这里,在两幅图像(参考图像与任一视频帧对应的图像)对应的坐标组中利用特征匹配算法确定出像素特征匹配的坐标对,当然,对于像素特征不匹配的像素点对应的坐标可以删除,也可以利用插值等方法推算出对应的坐标,从而实现两幅图像之间逐像素的配准。
步骤704:基于由所述第一组坐标信息与视频数据中的每一视频帧对应的第二组坐标信息形成的坐标对,确定出所述参考图像与视频数据中的每一视频帧的位置关系;
这里,该位置关系可以具体为空间变化模型,即两幅要配准的图像(如参考图像与每一视频帧)之间的映射模型,比如两图像旋转、平移后即会重合,此时模型为刚体变换模型,又比如需要缩放,甚至X方向和Y方向缩放的幅度都一样,此时,模型即为仿射变换或者非线性变换模型。
具体地,比如视频数据中的某一视频帧为图像A,将A左移2个像素,下移3个像素,再顺时针旋转60度后,得到图像B,假设图像B为参考图像。此时,对A和B进行配准的过程就是确定2、3、60这三个参数的过程, 因为这三个参数一旦确定了,就能够得到图像B和图像A的对应关系。当然,在实际应用中,难以直接确定出上述三个参数,因此,可以将两幅图像重合的像素最多的那组参数确定为变化参数。例如,对这三个参数的解空间进行遍历,比如对X方向的平移像素个数从-100搜索到+100,步距为1像素,对Y方向的平移像素也从-100搜索到+100,步距为1像素,对旋转角度从0搜索到360度,步距为1度,以完成一个200*200*360次的循环,然后在每次循环中,确定当次循环参数进行变换后的图像A与图像B的重合像素的个数,找出200*200*360次循环中重合像素个数最多的那组循环中所使用的参数,该参数即可为变化参数。
步骤705:基于位置关系对视频数据中的每一视频帧进行调整。
当然,在实际应用中可以根据实际需求选取变化方法,只要能够对视频帧进行校准即可。这里,以最近邻插值为例,对调整过程做详细说明;
假设目的像素，也即待调整视频帧中目标像素的坐标为F(x1,y1)，此时可以通过反向变换方法得到该目标像素在参考图像上对应的浮点坐标P(u,v)，这里，可以利用步骤704得到的位置关系，如利用2、3、60三个参数，将(x1,y1)反向调整，得到所述参考图像上该目标像素的坐标点，即坐标P(u,v)。这里，在实际应用中，u,v可能会为小数，而实际图像中并未有坐标为小数的点对应的像素点，所以，需要对确定出的浮点坐标P(u,v)做进一步调整，如图8所示，利用近邻插值公式对P(u,v)进行调整，具体地，可以对(u+0.5)进行取整得到m，对(v+0.5)进行取整得到n，进而确定出像素点(m,n)、(m+1,n)、(m,n+1)、(m+1,n+1)，并确定出(m,n)、(m+1,n)、(m,n+1)、(m+1,n+1)分别与P(u,v)的像素值的差值，将差值最小的像素点作为(x1,y1)调整后所对应的点，如差值最小的点为(m,n)，则坐标g(m,n)即为(x1,y1)调整后所对应的点；这里，坐标g(m,n)即可认定为对应于参考图像上的坐标。
这样，即可提高视频流畅程度，提升电子产品的竞争力。这里，值得注意的是，本实施例仅适用于拍摄场景不变的情况，而不适用于场景变换的情况。
实施例三
本实施例提供了一种电子设备,如图9所示,所述电子设备包括:
图像采集单元91,配置为利用图像采集装置采集视频数据;
确定单元92,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
调整单元93,配置为利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
存储单元94,配置为存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
在一实施例中,所述确定单元92,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
在一实施例中,所述确定单元92,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
在一实施例中,所述确定单元92,还配置为确定采集到所述视频数据 的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
所述调整单元93,还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
在一实施例中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
所述确定单元92,还配置为从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
在一实施例中,所述确定单元92,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
这里,值得注意的是,所述图像采集单元91可由图像处理装置,如摄像头实现;所述确定单元92、调整单元93以及存储单元94均可由中央处理器(CPU)、或微处理器(MPU)、或数字信号处理器(DSP)、或可编程门阵列(FPGA)实现。
本实施例还提供了一种电子设备,包括:
图像采集装置,配置为采集视频数据;
处理器,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;存储调整后的视频帧, 并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
在一具体实施例中,所述处理器,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
在另一具体实施例中,所述处理器,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
在另一具体实施例中,所述处理器,还配置为确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
在另一具体实施例中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
对应地,所述处理器,还配置为从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
在另一具体实施例中,所述处理器,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
进一步地,本实施例又提供了一种电子设备,包括:处理器和用于存储能够在处理器上运行的计算机程序的存储器,其中,所述处理器用于运 行所述计算机程序时,执行以上所述方法的步骤。这里,所述处理器可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、DSP,或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本发明实施例所公开的方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成前述方法的步骤。
这里需要指出的是:以上电子设备实施例项的描述,与上述方法描述是类似的,具有同方法实施例相同的有益效果,因此不做赘述。对于本发明电子设备实施例中未披露的技术细节,本领域的技术人员请参照本发明方法实施例的描述而理解,为节约篇幅,这里不再赘述。
本实施例还提供了一种计算机可读存储介质,例如包括计算机程序的存储器,上述计算机程序可由电子设备的处理器执行,以完成前述方法所述步骤。计算机可读存储介质可以是磁性随机存取存储器(FRAM,ferromagnetic random access memory)、只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory)等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
应理解,说明书通篇中提到的“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一实施例中”或“在另一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
本发明实施例通过在采集视频数据中确定参考图像,并从所述参考图像中选取出参考区域,进而利用所述参考区域中像素点对应的第一组坐标信息对采集到的视频数据中的视频帧进行调整,使调整后的视频帧中所述 参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,如此,实现对采集到的视频数据进行实施调整的目的;而且,所述电子设备并非直接对采集得到的视频数据进行存储,而是存储调整后的视频帧以组合成针对采集到的所述视频数据的目标视频数据,这样,避免了由于抖动而是存储的视频图像模糊的问题,提升了用户体验。

Claims (20)

  1. 一种图像处理方法,所述方法包括:
    电子设备利用图像采集装置采集视频数据;
    将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;
    从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
    利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
    存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
  2. 根据权利要求1所述的方法,其中,所述从参考图像中选取出参考区域,包括:
    从所述参考图像中选取出固定目标体对应的区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体;
    至少将所述参考图像中所述固定目标体对应的区域作为参考区域。
  3. 根据权利要求1所述的方法,其中,所述方法还包括:
    基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点;
    对应地,所述确定出所述参考区域中像素点对应的第一组坐标信息,包括:
    确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的 坐标信息作为第一组坐标信息。
  4. 根据权利要求1至3任一项所述的方法,其中,所述利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应,包括:
    确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息;
    基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
    根据所述位置关系对采集到的视频数据中的视频帧进行调整。
  5. 根据权利要求4所述的方法,其中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
    对应地,所述基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系,包括:
    从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标;
    基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系。
  6. 根据权利要求1所述的方法,其中,所述方法还包括:
    将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
  7. 一种电子设备,包括:
    图像采集单元,配置为利用图像采集装置采集视频数据;
    确定单元,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图 像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;
    调整单元,配置为利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;
    存储单元,配置为存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
  8. 根据权利要求7所述的电子设备,其中,所述确定单元,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
  9. 根据权利要求7所述的电子设备,其中,所述确定单元,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
  10. 根据权利要求7至9任一项所述的电子设备,其中,所述确定单元,还配置为确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;
    所述调整单元,还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
  11. 根据权利要求10所述的电子设备,其中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
    所述确定单元,还配置为从所述至少两个第一坐标以及所述至少两个 第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
  12. 根据权利要求7所述的电子设备,其中,所述确定单元,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
  13. 一种电子设备,包括:
    图像采集装置,配置为采集视频数据;
    处理器,配置为将采集到的视频数据中的第一视频帧确定为参考图像;所述第一视频帧为所述电子设备中呈现的针对所述视频数据的预览图像;还配置为从所述参考图像中选取出参考区域,并确定出所述参考区域中像素点对应的第一组坐标信息;利用所述第一组坐标信息对采集到的视频数据中的视频帧进行调整,以使调整后的视频帧中所述参考区域对应的像素点的第二组坐标信息与所述第一组坐标信息相对应;存储调整后的视频帧,并基于存储的调整后的视频帧组合成针对采集到的所述视频数据的目标视频数据。
  14. 根据权利要求13所述的电子设备,其中,所述处理器,还配置为从所述参考图像中选取出固定目标体对应的区域,至少将所述参考图像中所述固定目标体对应的区域作为参考区域;所述固定目标体为所述视频数据对应的采集区域中所述电子设备采集到的处于固定状态的目标体。
  15. 根据权利要求13所述的电子设备,其中,所述处理器,还配置为基于所述参考图像中所述参考区域对应的像素点的像素特征,从所述参考图像的参考区域中选取出目标像素点,确定出所述目标像素点对应的坐标信息,并将所述目标像素点对应的坐标信息作为第一组坐标信息。
  16. 根据权利要求13至15任一项所述的电子设备,其中,所述处理 器,还配置为确定采集到所述视频数据的每一视频帧中所述参考区域对应的像素点的第二组坐标信息,基于所述第一组坐标信息以及每一视频帧对应的第二组坐标信息,确定所述参考图像与每一视频帧的位置关系;还配置为根据所述位置关系对采集到的视频数据中的视频帧进行调整。
  17. 根据权利要求16所述的电子设备,其中,所述第一组坐标信息中至少包含有两个第一坐标;所述第二组坐标信息中至少包括有两个第二坐标;
    对应地,所述处理器,还配置为从所述至少两个第一坐标以及所述至少两个第二坐标中,选取出像素特征相匹配的至少两组坐标对,基于至少两组坐标对,确定所述参考图像与每一视频帧的位置关系;所述坐标对中包括有像素特征相匹配的一个第一坐标和一个第二坐标。
  18. 根据权利要求13所述的电子设备,其中,所述处理器,还配置为将选取出的参考图像呈现于显示区域的部分区域中,以便于用户能够观测到所述参考图像。
  19. 一种电子设备,包括:处理器和用于存储能够在处理器上运行的计算机程序的存储器,其中,所述处理器用于运行所述计算机程序时,执行权利要求1至6所述方法的步骤。
  20. 一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现权利要求1至6所述方法的步骤。
PCT/CN2017/092497 2016-07-29 2017-07-11 一种图像处理方法及电子设备、存储介质 WO2018019124A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610619353.9 2016-07-29
CN201610619353.9A CN106303225A (zh) 2016-07-29 2016-07-29 一种图像处理方法及电子设备

Publications (1)

Publication Number Publication Date
WO2018019124A1 true WO2018019124A1 (zh) 2018-02-01

Family

ID=57663816

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092497 WO2018019124A1 (zh) 2016-07-29 2017-07-11 一种图像处理方法及电子设备、存储介质

Country Status (2)

Country Link
CN (1) CN106303225A (zh)
WO (1) WO2018019124A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872388A (zh) * 2019-02-27 2019-06-11 维沃移动通信有限公司 一种图像处理方法及装置
CN110059685A (zh) * 2019-04-26 2019-07-26 腾讯科技(深圳)有限公司 文字区域检测方法、装置及存储介质
CN110580730A (zh) * 2018-06-11 2019-12-17 北京搜狗科技发展有限公司 一种图片处理方法及装置
CN110598562A (zh) * 2019-08-15 2019-12-20 阿里巴巴集团控股有限公司 车辆图像采集引导方法以及装置
CN112116655A (zh) * 2019-06-20 2020-12-22 北京地平线机器人技术研发有限公司 目标对象的图像的位置信息确定方法和装置
CN112950717A (zh) * 2019-11-26 2021-06-11 华为技术有限公司 一种空间标定方法和系统
CN112995533A (zh) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 视频制作方法及装置
CN113763229A (zh) * 2020-06-01 2021-12-07 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN114339031A (zh) * 2021-12-06 2022-04-12 深圳市金九天视实业有限公司 画面调节方法、装置、设备以及存储介质
CN114820679A (zh) * 2022-07-01 2022-07-29 小米汽车科技有限公司 图像标注方法、装置、电子设备和存储介质
CN116126568A (zh) * 2021-11-12 2023-05-16 博泰车联网(大连)有限公司 故障复现方法、装置、设备和可读存储介质

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303225A (zh) * 2016-07-29 2017-01-04 努比亚技术有限公司 一种图像处理方法及电子设备
CN107564084B (zh) * 2017-08-24 2022-07-01 腾讯科技(深圳)有限公司 一种动图合成方法、装置及存储设备
CN108109186B (zh) * 2017-11-30 2021-06-11 维沃移动通信有限公司 一种视频文件处理方法、装置及移动终端
CN108664912B (zh) * 2018-05-04 2022-12-20 北京学之途网络科技有限公司 一种信息处理方法、装置、计算机存储介质及终端
CN110619257B (zh) * 2018-06-20 2023-11-07 北京搜狗科技发展有限公司 一种文字区域确定方法和装置
CN111225180B (zh) * 2018-11-26 2021-07-20 浙江宇视科技有限公司 画面处理方法及装置
CN110264546B (zh) * 2019-06-24 2023-03-21 北京向上一心科技有限公司 图像合成的方法、装置、计算机可读存储介质及终端
CN113301411B (zh) * 2020-02-21 2023-03-14 西安诺瓦星云科技股份有限公司 视频处理方法、装置及系统和视频处理设备
CN111915494B (zh) * 2020-07-21 2024-05-28 东软医疗系统股份有限公司 校准方法、装置及系统
CN112184854B (zh) * 2020-09-04 2023-11-21 上海硬通网络科技有限公司 动画合成方法、装置及电子设备
CN112508959B (zh) * 2020-12-15 2022-11-11 清华大学 视频目标分割方法、装置、电子设备及存储介质
CN113920497B (zh) * 2021-12-07 2022-04-08 广东电网有限责任公司东莞供电局 一种铭牌识别模型的训练、铭牌的识别方法及相关装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096912A (zh) * 2009-12-14 2011-06-15 北京中星微电子有限公司 一种图像处理方法及装置
CN102890771A (zh) * 2011-07-20 2013-01-23 上海银晨智能识别科技有限公司 判断图像摄取装置是否发生移动的方法和系统
US20140176737A1 (en) * 2012-12-20 2014-06-26 Olympus Corporation Imaging device and imaging method
CN104618627A (zh) * 2014-12-31 2015-05-13 小米科技有限责任公司 视频处理方法和装置
CN105096266A (zh) * 2015-06-16 2015-11-25 努比亚技术有限公司 一种信息处理方法及装置、终端
CN106303225A (zh) * 2016-07-29 2017-01-04 努比亚技术有限公司 一种图像处理方法及电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101041366B1 (ko) * 2007-11-02 2011-06-14 주식회사 코아로직 객체 추적을 이용한 디지털 영상의 손떨림 보정 장치 및방법
CN101729763A (zh) * 2009-12-15 2010-06-09 中国科学院长春光学精密机械与物理研究所 数字视频电子稳像方法
CN105573612A (zh) * 2015-05-27 2016-05-11 宇龙计算机通信科技(深圳)有限公司 一种在拍摄时图像的处理方法及移动终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096912A (zh) * 2009-12-14 2011-06-15 北京中星微电子有限公司 一种图像处理方法及装置
CN102890771A (zh) * 2011-07-20 2013-01-23 上海银晨智能识别科技有限公司 判断图像摄取装置是否发生移动的方法和系统
US20140176737A1 (en) * 2012-12-20 2014-06-26 Olympus Corporation Imaging device and imaging method
CN104618627A (zh) * 2014-12-31 2015-05-13 小米科技有限责任公司 视频处理方法和装置
CN105096266A (zh) * 2015-06-16 2015-11-25 努比亚技术有限公司 一种信息处理方法及装置、终端
CN106303225A (zh) * 2016-07-29 2017-01-04 努比亚技术有限公司 一种图像处理方法及电子设备

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580730A (zh) * 2018-06-11 2019-12-17 北京搜狗科技发展有限公司 一种图片处理方法及装置
CN110580730B (zh) * 2018-06-11 2024-03-26 北京搜狗科技发展有限公司 一种图片处理方法及装置
CN109872388B (zh) * 2019-02-27 2023-11-17 维沃移动通信有限公司 一种图像处理方法及装置
CN109872388A (zh) * 2019-02-27 2019-06-11 维沃移动通信有限公司 一种图像处理方法及装置
CN110059685A (zh) * 2019-04-26 2019-07-26 腾讯科技(深圳)有限公司 文字区域检测方法、装置及存储介质
CN110059685B (zh) * 2019-04-26 2022-10-21 腾讯科技(深圳)有限公司 文字区域检测方法、装置及存储介质
CN112116655B (zh) * 2019-06-20 2024-04-05 北京地平线机器人技术研发有限公司 目标对象的位置确定方法和装置
CN112116655A (zh) * 2019-06-20 2020-12-22 北京地平线机器人技术研发有限公司 目标对象的图像的位置信息确定方法和装置
CN110598562A (zh) * 2019-08-15 2019-12-20 阿里巴巴集团控股有限公司 车辆图像采集引导方法以及装置
CN110598562B (zh) * 2019-08-15 2023-03-07 创新先进技术有限公司 车辆图像采集引导方法以及装置
CN112950717A (zh) * 2019-11-26 2021-06-11 华为技术有限公司 一种空间标定方法和系统
CN113763229A (zh) * 2020-06-01 2021-12-07 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112995533A (zh) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 视频制作方法及装置
CN116126568A (zh) * 2021-11-12 2023-05-16 博泰车联网(大连)有限公司 故障复现方法、装置、设备和可读存储介质
CN116126568B (zh) * 2021-11-12 2024-02-09 博泰车联网(大连)有限公司 故障复现方法、装置、设备和可读存储介质
CN114339031A (zh) * 2021-12-06 2022-04-12 深圳市金九天视实业有限公司 画面调节方法、装置、设备以及存储介质
CN114820679A (zh) * 2022-07-01 2022-07-29 小米汽车科技有限公司 图像标注方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN106303225A (zh) 2017-01-04

Similar Documents

Publication Publication Date Title
WO2018019124A1 (zh) 一种图像处理方法及电子设备、存储介质
CN106454121B (zh) 双摄像头拍照方法及装置
WO2017050115A1 (zh) 一种图像合成方法和装置
US8780258B2 (en) Mobile terminal and method for generating an out-of-focus image
WO2017067526A1 (zh) 图像增强方法及移动终端
WO2016180325A1 (zh) 图像处理方法及装置
WO2017045650A1 (zh) 一种图片处理方法及终端
WO2017020836A1 (zh) 一种虚化处理深度图像的装置和方法
WO2017016511A1 (zh) 一种图像处理方法及装置、终端
WO2018076935A1 (zh) 图像虚化处理方法、装置、移动终端和计算机存储介质
CN106909274B (zh) 一种图像显示方法和装置
CN106713716B (zh) 一种双摄像头的拍摄控制方法和装置
WO2017071476A1 (zh) 一种图像合成方法和装置、存储介质
WO2017143855A1 (zh) 具有截屏功能的装置和截屏方法
WO2018019128A1 (zh) 一种夜景图像的处理方法和移动终端
WO2017071542A1 (zh) 图像处理方法及装置
WO2017071475A1 (zh) 一种图像处理方法及终端、存储介质
WO2018050014A1 (zh) 对焦方法及拍照设备、存储介质
CN106060407A (zh) 一种对焦方法及终端
WO2018076938A1 (zh) 图像处理装置及方法和计算机存储介质
WO2017041714A1 (zh) 一种获取rgb数据的方法和装置
WO2017067523A1 (zh) 图像处理方法、装置及移动终端
CN106911881B (zh) 一种基于双摄像头的动态照片拍摄装置、方法和终端
CN106657782B (zh) 一种图片处理方法和终端
WO2017071469A1 (zh) 一种移动终端和图像拍摄方法、计算机存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17833427

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/07/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17833427

Country of ref document: EP

Kind code of ref document: A1