CN106412324B - Device and method for prompting focusing object - Google Patents

Device and method for prompting a focusing object

Info

Publication number
CN106412324B
CN106412324B
Authority
CN
China
Prior art keywords
focusing
value
focusing object
line
degree
Prior art date
Legal status
Active
Application number
CN201610952598.3A
Other languages
Chinese (zh)
Other versions
CN106412324A (en)
Inventor
程嘉麟
Current Assignee
Shenzhen Apexel Technology Co., Ltd.
Original Assignee
Shenzhen Apexel Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Apexel Technology Co ltd
Publication of CN106412324A
Application granted
Publication of CN106412324B

Classifications

    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality, with interactive means for internal management of messages, for image or video messaging
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality
    • G06T7/40 Image analysis; Analysis of texture
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F9/451 Execution arrangements for user interfaces
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/80 Camera processing pipelines; Components thereof
    • G06F2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Abstract

The invention discloses a device for prompting a focusing object, comprising: an acquisition module, configured to acquire data of the current image during manual focusing; a calculation module, configured to obtain, from the data, the gray values of the edge points of the focusing object in the current image using an edge detection technique; a determining module, configured to determine prompt information for the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value, and a preset rule, the prompt information comprising the thickness of a prompt line and/or the depth of the prompt line's color; and a presentation module, configured to present the prompt information of the focusing object in the current image. The invention also discloses a method for prompting a focusing object. With the disclosed device and method, as focusing gradually becomes clearer the prompt line of the focusing object grows fuller and its color deepens, so that the user can readily gauge the degree of focus, achieving a striking focus-prompting effect.

Description

Device and method for prompting focusing object
Technical Field
The invention relates to the field of image processing technology, and in particular to a device and a method for prompting a focusing object.
Background
The autofocus technology a mobile terminal uses when taking a picture is essentially a data-processing method integrated in the terminal's image signal processor (ISP). When the viewfinder captures the raw target image, the image is passed to the ISP as raw data; the ISP analyzes the raw data and examines the intensity differences (contrast) between adjacent pixels in the image. If the raw target image is out of focus, the intensities of adjacent pixels are very close; the handset then drives the lens, which is held in a coil, to move by means of a motor, thereby adjusting the lens position and achieving autofocus.
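A minimal sketch of the contrast measure this kind of autofocus relies on (illustrative only; the patent does not specify the actual ISP computation): when the image is out of focus the differences between adjacent pixels are small, so a sum of squared adjacent-pixel differences peaks at the in-focus lens position.

```python
def contrast_metric(gray):
    """Sum of squared differences between horizontally and vertically
    adjacent pixels of a 2-D grayscale image (list of rows).
    Sharper image -> larger differences -> larger score."""
    h, w = len(gray), len(gray[0])
    score = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                score += (gray[y][x + 1] - gray[y][x]) ** 2
            if y + 1 < h:
                score += (gray[y + 1][x] - gray[y][x]) ** 2
    return score

# A sharp step edge scores higher than the same edge blurred:
sharp = [[0, 0, 255, 255]] * 4
blurred = [[0, 85, 170, 255]] * 4
assert contrast_metric(sharp) > contrast_metric(blurred)
```

An autofocus loop would evaluate such a metric at successive lens positions and stop at the maximum.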
However, in some scenes, such as low-light or low-contrast environments, this autofocus process often fails and a clear image cannot be captured; manual focusing is then required to obtain a clear image. Existing manual-focus modes on mobile terminals provide no effective prompt about the subject being shot, so the user cannot tell the current focusing state during manual focusing, which degrades the user experience.
Disclosure of Invention
In view of this, embodiments of the present invention provide a device and a method for prompting a focusing object, so that as focusing gradually becomes clearer, the prompt line of the focusing object grows fuller and its color deepens, thereby achieving a striking focus-prompting effect and improving the user experience.
To achieve the above objective, the technical solution of the invention is realized as follows:
The invention provides a device for prompting a focusing object, comprising:
an acquisition module, configured to acquire data of the current image during manual focusing;
a calculation module, configured to obtain, from the data, the gray values of the edge points of the focusing object in the current image using an edge detection technique;
a determining module, configured to determine prompt information for the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value, and a preset rule, the prompt information comprising the thickness of a prompt line and/or the depth of the prompt line's color;
and a presentation module, configured to present the prompt information of the focusing object in the current image.
In the foregoing scheme, the calculation module is specifically configured to obtain, from the data, the gray values of the edge points of the focusing object in the current image using a Sobel edge detection algorithm.
In the foregoing solution, the determining module is specifically configured to judge whether the absolute value of the difference between the gray value of the edge point and the preset peak value is less than or equal to the preset threshold; if it is less than or equal to the preset threshold, the prompt information of the focusing object is determined according to the corresponding rule between the gray value and the preset peak value;
and if the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, the prompt information of the focusing object is determined according to the default rule between the gray value and the preset peak value.
In the foregoing scheme, the determining module is further configured to determine the thickness of the prompt line of the focusing object and/or the depth of the prompt line's color according to the relationship that, as the gray value approaches the preset peak value, the prompt line of the focusing object becomes fuller and/or its color deepens.
In the foregoing scheme, the determining module is further configured to set the thickness of the prompt line of the focusing object to an initial thickness and/or the depth of the prompt line's color to an initial depth according to the default correspondence between the gray value and the preset peak value.
The invention also provides a method for prompting a focusing object, comprising the following steps:
acquiring data of the current image during manual focusing;
obtaining, from the data, the gray values of the edge points of the focusing object in the current image using an edge detection technique;
determining prompt information for the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value, and a preset rule, the prompt information comprising the thickness of a prompt line and/or the depth of the prompt line's color;
and presenting the prompt information of the focusing object in the current image.
In the foregoing solution, obtaining the gray value of the edge point of the focusing object in the current image from the data using an edge detection technique comprises:
obtaining the gray value of the edge point of the focusing object in the current image from the data using a Sobel edge detection algorithm.
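As a hedged illustration of the Sobel step named above (the patent gives no code), the gradient magnitude computed with the standard 3×3 Sobel kernels can serve as the "gray value of the edge point". The row-major image layout, the |Gx| + |Gy| magnitude approximation, and the clamp to an 8-bit range are assumptions of this sketch.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(gray, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| at an interior
    pixel (x, y) of a 2-D grayscale image, clamped to 0..255."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = gray[y + j - 1][x + i - 1]
            gx += GX[j][i] * p
            gy += GY[j][i] * p
    return min(abs(gx) + abs(gy), 255)

# A vertical step edge yields a maximal response at the boundary:
img = [[0, 0, 255, 255]] * 3
assert sobel_magnitude(img, 1, 1) == 255
```

Pixels whose magnitude is large are treated as edge points of the focusing object; their values feed the rule comparison described next.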
In the foregoing scheme, determining the prompt information of the focusing object according to the gray value of the edge point, the preset threshold, the preset peak value, and the preset rule comprises:
judging whether the absolute value of the difference between the gray value of the edge point and the preset peak value is less than or equal to the preset threshold; if it is less than or equal to the preset threshold, determining the prompt information of the focusing object according to the corresponding rule between the gray value and the preset peak value;
and if the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, determining the prompt information of the focusing object according to the default rule between the gray value and the preset peak value.
In the foregoing solution, determining the prompt information of the focusing object according to the corresponding rule between the gray value and the preset peak value comprises:
determining the thickness of the prompt line of the focusing object and/or the depth of the prompt line's color according to the relationship that, as the gray value approaches the preset peak value, the prompt line of the focusing object becomes fuller and/or its color deepens.
In the foregoing solution, determining the prompt information of the focusing object according to the default rule between the gray value and the preset peak value comprises:
setting the thickness of the prompt line of the focusing object to an initial thickness and/or the depth of the prompt line's color to an initial depth according to the default correspondence between the gray value and the preset peak value.
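The two rules above can be sketched as follows. All concrete numbers here (a peak of 255, a threshold of 80, a 1–5 px thickness range, and the alpha range) are illustrative assumptions; the patent fixes only the qualitative behavior: within the threshold, closer to the peak means a fuller, deeper line; beyond the threshold, an initial thickness and lightness apply.

```python
PEAK = 255       # preset peak gray value (assumed for illustration)
THRESHOLD = 80   # preset threshold (assumed for illustration)

def prompt_info(gray_value):
    """Map an edge-point gray value to (line thickness in px, color alpha)."""
    diff = abs(gray_value - PEAK)
    if diff <= THRESHOLD:
        # Corresponding rule: closer to the peak -> fuller line, deeper color.
        closeness = 1.0 - diff / THRESHOLD   # 0.0 .. 1.0
        thickness = 1 + round(4 * closeness) # 1 .. 5 px
        alpha = 0.2 + 0.8 * closeness        # 0.2 .. 1.0
    else:
        # Default rule: initial thickness and lightest color.
        thickness, alpha = 1, 0.2
    return thickness, alpha

assert prompt_info(255) == (5, 1.0)  # in focus: fullest, deepest line
assert prompt_info(100) == (1, 0.2)  # far from peak: default prompt
```

The presentation module would then draw each edge point with the returned thickness and alpha, so the overlay visibly "fills in" as the user turns the focus ring.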
According to the device and method for prompting a focusing object provided by embodiments of the invention, the mobile terminal acquires data of the current image during manual focusing; obtains, from the data, the gray values of the edge points of the focusing object using an edge detection technique; determines prompt information for the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value, and a preset rule, the prompt information comprising the thickness of a prompt line and/or the depth of the prompt line's color; and presents the prompt information in the current image. As focusing gradually becomes clearer, the prompt line grows fuller and its color deepens, so that the user can readily gauge the degree of focus, yielding a striking focus prompt. By previewing the focusing effect in real time while manually adjusting the focal length, the user is effectively guided to focus accurately on the subject and can thus capture a sharply focused, clearly defined picture, improving the user experience.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal implementing various embodiments of the present invention;
fig. 2 is a schematic structural diagram of a communication system in which a mobile terminal according to an embodiment of the present invention can operate;
FIG. 3 is a flowchart illustrating a first method for prompting a focused object according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a second method for prompting a focused object according to an embodiment of the present invention;
FIG. 5a is a first focusing diagram illustrating a second method for prompting a focused object according to an embodiment of the present invention;
FIG. 5b is a second focusing diagram illustrating a second method for prompting a focusing object according to a second embodiment of the present invention;
FIG. 5c is a third focusing diagram illustrating a second method for prompting a focused object according to an embodiment of the present invention;
FIG. 5d is a fourth focusing diagram illustrating a second method for prompting a focused object according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for presenting a focused object according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to fig. 1. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, etc., and stationary terminals such as a digital TV, a desktop computer, etc. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic hardware configuration of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 100 may include an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the construction of the mobile terminal. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current photographing state of the mobile terminal 100 (e.g., an open or closed photographing state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with respect to the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141 as will be described below in connection with a touch screen.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent so that the user can see through them from the outside; these may be referred to as transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in other ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (e.g., vibration) to inform the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs or the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, etc.) that has been output or is to be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to now, the mobile terminal has been described in terms of its functions. Hereinafter, among the various types of mobile terminals (folder-type, bar-type, swing-type, slide-type, and the like), a slide-type mobile terminal will be described as an example for the sake of brevity. The present invention is not, however, limited to a slide-type mobile terminal and can be applied to any type of mobile terminal.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or by an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS270 may be configured to support multiple frequency allocations, with each frequency allocation having a particular spectrum (e.g., 1.25MHz, 5MHz, etc.).
The intersection of a sector and a frequency allocation may be referred to as a CDMA channel. The BS270 may also be referred to as a Base Transceiver Subsystem (BTS) or by other equivalent terminology. In such a case, the term "base station" may be used to refer collectively to a single BSC275 and at least one BS 270. A base station may also be referred to as a "cell". Alternatively, the individual sectors of a particular BS270 may be referred to as cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. In fig. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS270 receives reverse link signals from various mobile terminals 100. The mobile terminal 100 is generally engaged in conversations, messaging, and other types of communications. Each reverse link signal received by a particular base station 270 is processed within the particular BS 270. The obtained data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN290 interfaces with the MSC280, the MSC280 interfaces with the BSCs 275, and the BSCs 275 accordingly control the BS270 to transmit forward link signals to the mobile terminal 100.
Based on the above mobile terminal hardware structure and communication system, the present invention provides various embodiments of the method.
Example one
The method for prompting the focusing object provided by the invention can be implemented on a device for prompting the focusing object, wherein the device can comprise a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device and the like.
If the mobile terminal has an operating system, the operating system may be UNIX, Linux, Windows, Mac OSX, Android (Android), Windows Phone, etc.
Application software (APP) is third-party application software for an intelligent terminal; through various applications a user can work, be entertained, obtain information, and so on. APP formats include ipa, pxl, deb, apk, and the like.
FIG. 3 is a flowchart illustrating a first method for prompting a focused object according to an embodiment of the present invention; as shown in fig. 3, a method for prompting a focused object according to an embodiment of the present invention may include the following steps:
step 301, acquiring data of a current image during manual focusing.
In the embodiment of the present invention, the device for prompting the focusing object is exemplified by a mobile terminal. When a user takes a picture with the mobile terminal and the picture-taking mode is manual focusing, the mobile terminal acquires the data of the current image in the camera, the data being, for example, pixel values.
And 302, obtaining the gray value of the edge point of the focusing object in the current image by utilizing an edge detection technology according to the data.
The mobile terminal calculates the gray value of the edge points of the focusing object in the current image from the data acquired in step 301, using an edge detection technique. Edge detection of an image can be understood simply as extracting the contours of regions in the image: regions are delimited on the basis of pixel gray levels, pixels within a region have approximately the same gray level, and edge points are in fact the points where the gray level jumps sharply. The boundaries between regions are called edges, and finding these edges is the purpose of image edge detection.
Edge detection techniques fall broadly into two categories: search-based and zero-crossing-based. Search-based methods first compute an edge strength, usually a first-derivative quantity such as the gradient magnitude, then estimate the local edge direction (usually taken as the gradient direction) and search along it for the local maximum of the gradient magnitude. Zero-crossing-based methods locate edges at the zero crossings of a second-derivative quantity derived from the image, typically the Laplacian operator or the zero crossings of a nonlinear differential expression.
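The contrast between the two families can be sketched with a minimal numpy example (not from the patent; the signal and variable names are illustrative). On a one-dimensional step edge, a search-based detector looks for the maximum of the first derivative, while a zero-crossing detector looks for the sign change of the second derivative:

```python
import numpy as np

# A 1-D step edge: a dark region (gray 10) meets a bright region (gray 200).
signal = np.array([10, 10, 10, 10, 200, 200, 200, 200], dtype=float)

# Search-based: the first derivative (gradient) peaks at the edge.
first_deriv = np.diff(signal)
edge_search = int(np.argmax(np.abs(first_deriv)))  # index of the max gradient

# Zero-crossing-based: the second derivative changes sign across the edge.
second_deriv = np.diff(signal, n=2)
signs = np.sign(second_deriv)
zero_cross = [i for i in range(len(signs) - 1)
              if signs[i] != 0 and signs[i + 1] != 0 and signs[i] != signs[i + 1]]

print(edge_search)   # gradient maximum sits at the gray-level jump
print(zero_cross)    # the second derivative flips sign at the same jump
```

Both detectors land on the same jump; they differ only in which derivative they examine, which is why the text groups all edge detectors into these two families.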
The following takes the Sobel edge detection algorithm as an example. The algorithm is simple, and in many practical applications Sobel edges are preferred, especially when efficiency requirements are high. Sobel edge detection is usually directional and can be configured to detect vertical edges, horizontal edges, or both.
The operator in the Sobel edge detection algorithm filters the image with 3x3 kernels to obtain a gradient image. The operator consists of two 3x3 matrices, one horizontal and one vertical; plane convolution of each matrix with the image yields approximations of the horizontal and vertical brightness differences. If A denotes the original image and Gx and Gy denote the images after horizontal and vertical edge detection, respectively, the formula is as follows:
$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A \qquad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * A$$
The lateral and longitudinal gradient approximations at each pixel of the image can be combined using the following formula to calculate the gradient magnitude:
$$G = \sqrt{G_x^2 + G_y^2}$$
The gradient direction can then be calculated using the following formula:

$$\Theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$
In the above example, if the angle Θ equals zero, the image has a longitudinal (vertical) edge at that point, darker on the left side than on the right.
Finally, since edge points are in fact the points in the image where the gray level jumps sharply, computing the gradient image and then extracting its brighter parts yields the edge portions directly.
The Sobel edge detection result describes the image well with lines: regions of higher contrast in the source image appear as high-gray pixels in the result image, i.e., the source image is simply "edge-traced".
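As a sketch of the operator described above (the patent itself gives no code; the function and variable names are illustrative), the two 3x3 kernels can be applied with plain numpy and combined into a gradient-magnitude image:

```python
import numpy as np

def sobel_edges(image):
    """Apply the 3x3 Sobel kernels and return the gradient-magnitude image.
    Implemented as cross-correlation: flipping a Sobel kernel only negates
    Gx or Gy, so the magnitude G = sqrt(Gx^2 + Gy^2) is unaffected."""
    gx_kernel = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]], dtype=float)
    gy_kernel = np.array([[ 1,  2,  1],
                          [ 0,  0,  0],
                          [-1, -2, -1]], dtype=float)
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * gx_kernel)  # horizontal difference
            gy[i, j] = np.sum(patch * gy_kernel)  # vertical difference
    return np.hypot(gx, gy)  # G = sqrt(Gx^2 + Gy^2)

# A small image with a vertical edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
mag = sobel_edges(img)
print(mag.shape)  # (3, 4) — the 3x3 window shrinks each dimension by 2
```

The high-magnitude column in `mag` lines up with the dark-to-bright jump in `img`, which is exactly the "edge-tracing" behavior described in the text; a real implementation would normally use a library convolution rather than the explicit loops shown here.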
Step 303, determining prompt information of the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value and a preset rule; the prompt information comprises the thickness degree of the prompt line and/or the depth degree of the color of the prompt line.
The preset rules include a corresponding rule of the gray value and the preset peak value and a default rule of the gray value and the preset peak value.
Specifically, the mobile terminal determines whether the absolute value of the difference between the gray value of the edge point calculated in step 302 and the preset peak value is less than or equal to the preset threshold. If it is, the prompt information of the focusing object is determined according to the correspondence rule between the gray value and the preset peak value: the thickness degree of the prompt line and/or the depth degree of the prompt line color of the focusing object are determined from the relationship that, as the gray value approaches the preset peak value, the prompt line grows fuller and/or its color grows deeper.
If the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, the prompt information of the focusing object is determined according to the default rule: according to the default correspondence between the gray value and the preset peak value, the thickness degree of the prompt line of the focusing object is the initial thickness degree and/or the depth degree of the prompt line color is the initial depth degree.
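The branch between the correspondence rule and the default rule reduces to a single comparison; a minimal sketch, with illustrative names and return values (the patent defines no API):

```python
def choose_rule(edge_gray, preset_peak, preset_threshold):
    """Select which preset rule governs the prompt information,
    per the |gray - peak| <= threshold test in the text."""
    if abs(edge_gray - preset_peak) <= preset_threshold:
        return "correspondence rule"  # line style tracks closeness to the peak
    return "default rule"             # line style falls back to initial degrees

print(choose_rule(200, 220, 30))  # within the threshold band
print(choose_rule(100, 220, 30))  # outside the threshold band
```

Everything downstream (interval lookup, line rendering) depends only on which branch this comparison takes.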
Here, the preset threshold and the preset peak of the gray value may be set in advance according to actual requirements, or obtained in the background of the mobile terminal according to an auto-focusing function when a focusing object is determined.
For example, the correspondence rule between the gray value and the preset peak value is shown in Table 1. Because the absolute value of the difference between the gray value of the edge point and the preset peak value is less than or equal to the preset threshold, the position of the current gray value is divided into five intervals around the preset peak value: the third interval is closest to the preset peak value, and the second and fourth intervals are closer to it than the first and fifth intervals. The thickness degree of the prompt line and/or the depth degree of the prompt line color corresponding to each interval are as follows:
TABLE 1
  Interval of current gray value    Thickness degree of prompt line     Depth degree of prompt line color
  First and fifth intervals         First thickness degree              First depth degree
  Second and fourth intervals       Second thickness degree             Second depth degree
  Third interval                    Third thickness degree (fullest)    Third depth degree (deepest)
When the current gray value is in the third interval, the thickness degree of the corresponding prompt line is the third thickness degree and/or the depth degree of the prompt line color is the third depth degree. When the current gray value is in the second or fourth interval, the thickness degree of the corresponding prompt line is the second thickness degree and/or the depth degree of the prompt line color is the second depth degree. When the current gray value is in the first or fifth interval, the thickness degree of the corresponding prompt line is the first thickness degree and/or the depth degree of the prompt line color is the first depth degree. The third thickness degree is the fullest and the third depth degree the deepest; the second thickness degree is fuller than the first thickness degree, and the second depth degree is darker than the first depth degree.
It should be noted that, the section corresponding to the position where the current gray value is located, the thickness degree of the cue line, and/or the depth degree of the color of the cue line may be set according to actual requirements, and are not limited herein.
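Under the assumption that the five intervals of Table 1 split the band [peak − threshold, peak + threshold] symmetrically into thirds by distance from the peak (the patent explicitly leaves the interval boundaries to actual requirements), the mapping from gray value to prompt-line style might be sketched as:

```python
def prompt_style(edge_gray, peak, threshold):
    """Map |edge_gray - peak| to a (thickness degree, color depth degree) pair.
    Interval boundaries and degree names are illustrative assumptions."""
    diff = abs(edge_gray - peak)
    if diff > threshold:
        return ("initial", "initial")          # Table 2: default rule applies
    # Symmetric intervals: nearest third of the band -> third (fullest) degree.
    if diff <= threshold / 3:
        return ("third (fullest)", "third (deepest)")
    if diff <= 2 * threshold / 3:
        return ("second", "second")
    return ("first", "first")

print(prompt_style(218, 220, 30))  # very close to the peak
print(prompt_style(205, 220, 30))  # middle band
print(prompt_style(195, 220, 30))  # outer band, still within threshold
print(prompt_style(150, 220, 30))  # outside the threshold entirely
```

As the gray value converges on the peak, the returned style steps through first, second, and third degrees, which matches the "fuller line, deeper color" progression the text describes.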
For example, as shown in Table 2, when the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, the gray value and the preset peak value are determined to follow the default correspondence, that is, the thickness degree of the prompt line is the initial thickness degree and/or the depth degree of the prompt line color is the initial depth degree.
TABLE 2
  Condition                                              Thickness degree of prompt line    Depth degree of prompt line color
  |gray value − preset peak value| > preset threshold    Initial thickness degree           Initial depth degree
The initial thickness is thinner than the first thickness, and the initial depth is shallower than the first depth.
And step 304, presenting the prompt information of the focusing object in the current image.
The mobile terminal presents, in the current image, the thickness degree of the prompt line and/or the depth degree of the prompt line color of the focusing object finally determined in step 303, for the user's reference. From this information the user can clearly judge the degree of focus, achieving a conspicuous focusing reminder.
With the method for prompting the focusing object provided by the embodiment of the present invention, the mobile terminal acquires the data of the current image during manual focusing; obtains the gray value of the edge points of the focusing object in the current image from the data using an edge detection technique; determines the prompt information of the focusing object according to the gray value of the edge points, a preset threshold, a preset peak value, and a preset rule, the prompt information including the thickness degree of the prompt line and/or the depth degree of the prompt line color; and presents the prompt information of the focusing object in the current image. As focusing gradually becomes clear, the prompt line of the focusing object grows fuller and its color grows deeper, so the user can plainly see the degree of focus, achieving a conspicuous focusing reminder. By previewing the focusing effect in real time while manually adjusting the focal length, the user is effectively prompted to focus accurately on the photographed subject, a sharply focused picture with a prominent subject is more easily taken, and the user experience is improved.
Example two
FIG. 4 is a flowchart illustrating a second method for prompting a focused object according to an embodiment of the present invention; as shown in fig. 4, a method for prompting a focused object according to an embodiment of the present invention may include the following steps:
step 401, acquiring data of a current image during manual focusing.
In the embodiment of the present invention, the device for prompting the focusing object is exemplified by a smartphone with a photographing function (hereinafter referred to as a mobile phone). When a user takes a picture with the mobile phone and the picture-taking mode is manual focusing, the mobile phone acquires the data of the current image in the camera, the data being, for example, pixel values.
And 402, obtaining the gray value of the edge point of the focusing object in the current image by utilizing an edge detection technology according to the data.
The mobile phone calculates the gray value of the edge points of the focusing object in the current image from the data acquired in step 401, using the Sobel edge detection algorithm. The Sobel algorithm can be understood simply as extracting the contours of regions in an image: regions are delimited on the basis of pixel gray levels, pixels within a region have approximately the same gray level, and edge points are in fact the points where the gray level jumps sharply; the boundaries between regions are called edges, and finding these edges is what the Sobel edge detection algorithm does. The Sobel edge detection result describes the image well with lines: regions of higher contrast in the source image appear as high-gray pixels in the result image, i.e., the source image is "edge-traced". For the details of the Sobel edge detection algorithm, refer to the description in step 302, which is not repeated here.
And 403, judging whether the absolute value of the difference value between the gray value of the edge point and the preset peak value is less than or equal to a preset threshold value.
The mobile phone determines whether the absolute value of the difference between the gray value of the edge point calculated in step 402 and the preset peak value is less than or equal to a preset threshold, and if the absolute value of the difference between the gray value of the edge point and the preset peak value is less than or equal to the preset threshold, step 404a is executed; if the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, step 404b is performed.
And step 404a, determining the prompt information of the focusing object according to the corresponding rule of the gray value and the preset peak value.
The mobile phone determines the thickness degree of the prompt line of the focusing object and/or the depth degree of the prompt line color according to the relationship that, as the gray value approaches the preset peak value, the prompt line of the focusing object grows fuller and/or its color grows deeper.
For example, as the gray value approaches the preset peak value, the prompt line of the focusing object grows fuller and/or its color grows deeper. Fig. 5a is a first focusing schematic diagram of the second method for prompting a focused object according to the present invention, fig. 5b is a second focusing schematic diagram, and fig. 5c is a third focusing schematic diagram; the specific focusing changes are shown in figs. 5a, 5b, and 5c. In fig. 5a, the prompt line of the focusing object is a thin line with a light color (light red). As the gray value approaches the preset peak value, the prompt line in fig. 5b becomes a medium line, still light in color (light red). As the gray value comes still closer to the preset peak value, the prompt line in fig. 5c becomes a thick (full) line with a dark color (deep red). In this way, the mobile phone determines the prompt information of the focusing object as the gray value approaches the preset peak value; thereafter, step 405 is performed.
And step 404b, determining the prompt information of the focusing object according to the gray value and the default rule of the preset peak value.
For example, after determining that the absolute value of the difference between the gray value of the edge point and the preset peak value is greater than the preset threshold, the mobile phone determines, according to the default correspondence between the gray value and the preset peak value, that the thickness degree of the prompt line of the focusing object is the initial thickness degree and/or the depth degree of the prompt line color is the initial depth degree. Fig. 5d is a fourth focusing schematic diagram of the second method for prompting a focused object according to the present invention; as shown in fig. 5d, the prompt line of the focusing object is an initial line (thinner than the thin line) and its color is the initial color (lighter than light red). After the mobile phone determines the prompt information of the focusing object, step 405 is executed.
And step 405, presenting the prompt information of the focusing object in the current image.
The mobile phone presents, in the current image, the thickness degree of the prompt line and/or the depth degree of the prompt line color of the focusing object finally determined in step 404a or 404b, for the user's reference. From this information the user can clearly judge the degree of focus, achieving a conspicuous focusing reminder.
With the method for prompting the focusing object provided by the embodiment of the present invention, the mobile phone acquires the data of the current image during manual focusing; obtains the gray value of the edge points of the focusing object in the current image from the data using an edge detection technique; and determines whether the absolute value of the difference between the gray value of the edge points and the preset peak value is less than or equal to the preset threshold. If it is, the prompt information of the focusing object is determined according to the correspondence rule between the gray value and the preset peak value; if it is greater than the preset threshold, the prompt information is determined according to the default rule. The prompt information of the focusing object is then presented in the current image. As focusing gradually becomes clear, the prompt line of the focusing object grows fuller and its color grows deeper, so the user can plainly see the degree of focus, achieving a conspicuous focusing reminder. By previewing the focusing effect in real time while manually adjusting the focal length, the user is effectively prompted to focus accurately on the photographed subject, a sharply focused picture with a prominent subject is more easily taken, and the user experience is improved.
EXAMPLE III
Fig. 6 is a schematic structural diagram of an apparatus for prompting a focused object according to an embodiment of the present invention. As shown in fig. 6, an apparatus 06 for prompting a focused object according to an embodiment of the present invention includes: an acquiring module 61, a calculating module 62, a determining module 63, and a presenting module 64; wherein:
the acquiring module 61 is configured to acquire data of a current image during manual focusing;
the calculating module 62 is configured to obtain a gray value of an edge point of a focusing object in the current image according to the data by using an edge detection technique;
the determining module 63 is configured to determine prompt information of the focusing object according to the gray value of the edge point, a preset threshold, a preset peak value, and a preset rule; the prompt information comprises the thickness degree of a prompt line and/or the depth degree of the color of the prompt line;
the presenting module 64 is configured to present the prompt information of the focusing object in the current image.
Further, the calculating module 62 is specifically configured to obtain, according to the data, a gray value of an edge point of the focusing object in the current image by using a Sobel edge detection algorithm.
Further, the determining module 63 is specifically configured to determine whether an absolute value of a difference between the gray value of the edge point and the preset peak value is smaller than or equal to the preset threshold, and if the absolute value of the difference between the gray value of the edge point and the preset peak value is smaller than or equal to the preset threshold, determine the prompt information of the focusing object according to a rule corresponding to the gray value and the preset peak value;
and if the absolute value of the difference value between the gray value of the edge point and the preset peak value is greater than the preset threshold value, determining the prompt information of the focusing object according to the default rule of the gray value and the preset peak value.
Further, the determining module 63 is further specifically configured to determine the thickness degree of the prompt line of the focusing object and/or the depth degree of the prompt line color according to the relationship that, as the gray value approaches the preset peak value, the prompt line of the focusing object grows fuller and/or its color grows deeper.
Further, the determining module 63 is further specifically configured to determine, according to the default correspondence between the gray value and the preset peak value, that the thickness degree of the prompt line of the focusing object is the initial thickness degree and/or the depth degree of the prompt line color is the initial depth degree.
The apparatus of this embodiment may be configured to implement the technical solutions of the above-described method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
In practical applications, the obtaining module 61, the calculating module 62, the determining module 63, and the presenting module 64 may be implemented by a Central Processing Unit (CPU), a microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like, which are located in the device 06 for prompting the focusing object.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (4)

1. An apparatus for prompting a focusing object, the apparatus comprising:
an acquisition module, configured to acquire data of a current image during manual focusing;
a calculation module, configured to obtain, from the data, the gray values of the edge points of the focusing object in the current image by means of an edge detection technique;
a determining module, configured to determine prompt information for the focusing object according to the gray value of each edge point, a preset threshold, a preset peak value and a preset rule, the prompt information comprising the thickness of a prompt line and/or the color depth of the prompt line;
the determining module being further configured to, when the absolute value of the difference between the gray value of an edge point and the preset peak value is less than or equal to the preset threshold, determine the thickness of the prompt line and/or the color depth of the prompt line according to the rule that, as the gray value approaches the preset peak value, the prompt line of the focusing object becomes progressively thicker and/or its color progressively deeper;
the determining module being further configured to, when the absolute value of the difference between the gray value of an edge point and the preset peak value is greater than the preset threshold, set the thickness of the prompt line to an initial thickness and/or the color depth of the prompt line to an initial color depth according to a default correspondence between the gray value and the preset peak value; and
a presentation module, configured to present the prompt information of the focusing object in the current image.
2. The apparatus according to claim 1, wherein the calculation module is specifically configured to obtain the gray values of the edge points of the focusing object in the current image from the data by using a Sobel edge detection algorithm.
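The Sobel step named in claim 2 can be sketched as follows. This is an illustrative NumPy implementation of the standard 3×3 Sobel operator producing a gradient-magnitude map, whose values play the role of the edge-point gray values in the claims; the function name, replicate-border padding, and demo image are our illustrative choices, not details taken from the patent.

```python
import numpy as np

# Standard 3x3 Sobel kernels (horizontal and vertical gradients).
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_edge_magnitude(gray):
    """Return the gradient-magnitude map of a 2-D grayscale image.

    Border pixels are handled by replicating the edge rows/columns.
    """
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += KX[i, j] * patch
            gy += KY[i, j] * patch
    return np.hypot(gx, gy)

# Demo: a vertical brightness step; columns next to the step score high,
# flat regions score zero.
step = np.zeros((5, 5))
step[:, 3:] = 255.0
mag = sobel_edge_magnitude(step)
```

Pixels straddling the step get the maximum magnitude (here 4 × 255 = 1020), while uniform regions stay at zero, which is why these values are a usable proxy for how sharply an edge of the focusing object is resolved.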
3. A method of prompting a focusing object, the method comprising:
acquiring data of a current image during manual focusing;
obtaining, from the data, the gray values of the edge points of the focusing object in the current image by means of an edge detection technique;
determining prompt information for the focusing object according to the gray value of each edge point, a preset threshold, a preset peak value and a preset rule, the prompt information comprising the thickness of a prompt line and/or the color depth of the prompt line; when the absolute value of the difference between the gray value of an edge point and the preset peak value is less than or equal to the preset threshold, determining the thickness of the prompt line and/or the color depth of the prompt line according to the rule that, as the gray value approaches the preset peak value, the prompt line of the focusing object becomes progressively thicker and/or its color progressively deeper; when the absolute value of the difference between the gray value of an edge point and the preset peak value is greater than the preset threshold, setting the thickness of the prompt line to an initial thickness and/or the color depth of the prompt line to an initial color depth according to a default correspondence between the gray value and the preset peak value; and
presenting the prompt information of the focusing object in the current image.
4. The method of claim 3, wherein obtaining, from the data, the gray values of the edge points of the focusing object in the current image by means of an edge detection technique comprises:
obtaining, from the data, the gray values of the edge points of the focusing object in the current image by using a Sobel edge detection algorithm.
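The determination rule of claim 3 can be illustrated with a small sketch: inside the threshold band around the preset peak, line thickness and color depth scale with how close the gray value is to the peak; outside the band, both fall back to initial defaults. The peak, threshold, and the initial/maximum width and depth constants below are arbitrary example values, not values disclosed in the patent.

```python
def prompt_info(gray_value, peak=255, threshold=60,
                init_width=1, max_width=5,
                init_depth=0.2, max_depth=1.0):
    """Map an edge point's gray value to (line width, color depth).

    Within the threshold band the prompt line grows thicker and its
    color deepens as gray_value approaches the preset peak; outside
    the band both revert to their initial (default) values.
    All numeric constants here are illustrative.
    """
    diff = abs(gray_value - peak)
    if diff <= threshold:
        closeness = 1.0 - diff / threshold  # 0 at band edge, 1 at the peak
        width = init_width + closeness * (max_width - init_width)
        depth = init_depth + closeness * (max_depth - init_depth)
        return round(width, 2), round(depth, 2)
    # Default correspondence: initial thickness and color depth.
    return float(init_width), init_depth
```

For example, a gray value exactly at the peak yields the thickest, deepest line, a value at the edge of the band yields the initial line, and a value far from the peak likewise stays at the defaults.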
CN201610952598.3A 2016-10-17 2016-11-02 Device and method for prompting focusing object Active CN106412324B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610907296 2016-10-17
CN2016109072964 2016-10-17
CN201610941145 2016-10-25
CN2016109411450 2016-10-25

Publications (2)

Publication Number Publication Date
CN106412324A CN106412324A (en) 2017-02-15
CN106412324B true CN106412324B (en) 2020-02-14

Family

ID=57892765

Family Applications (11)

Application Number Title Priority Date Filing Date
CN201610958157.4A Pending CN106504280A (en) 2016-10-17 2016-11-02 A kind of method and terminal for browsing video
CN201610947314.1A Active CN106572303B (en) 2016-10-17 2016-11-02 Picture processing method and terminal
CN201610946494.1A Active CN106572302B (en) 2016-10-17 2016-11-02 A kind of image information processing method and equipment
CN201610958160.6A Pending CN106572249A (en) 2016-10-17 2016-11-02 Region enlargement method and apparatus
CN201610952598.3A Active CN106412324B (en) 2016-10-17 2016-11-02 Device and method for prompting focusing object
CN201610944748.6A Active CN106453924B (en) 2016-10-17 2016-11-02 A kind of image capturing method and device
CN201610947313.7A Pending CN106534675A (en) 2016-10-17 2016-11-02 Method and terminal for microphotography background blurring
CN201610945947.9A Pending CN106375595A (en) 2016-10-17 2016-11-02 Auxiliary focusing apparatus and method
CN201610946035.3A Pending CN106534674A (en) 2016-10-17 2016-11-02 Method for displaying focus area and mobile terminal
CN201610946668.4A Active CN106502693B (en) 2016-10-17 2016-11-02 A kind of image display method and device
CN201610946623.7A Active CN106375596B (en) 2016-10-17 2016-11-02 Device and method for prompting focusing object

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN201610958157.4A Pending CN106504280A (en) 2016-10-17 2016-11-02 A kind of method and terminal for browsing video
CN201610947314.1A Active CN106572303B (en) 2016-10-17 2016-11-02 Picture processing method and terminal
CN201610946494.1A Active CN106572302B (en) 2016-10-17 2016-11-02 A kind of image information processing method and equipment
CN201610958160.6A Pending CN106572249A (en) 2016-10-17 2016-11-02 Region enlargement method and apparatus

Family Applications After (6)

Application Number Title Priority Date Filing Date
CN201610944748.6A Active CN106453924B (en) 2016-10-17 2016-11-02 A kind of image capturing method and device
CN201610947313.7A Pending CN106534675A (en) 2016-10-17 2016-11-02 Method and terminal for microphotography background blurring
CN201610945947.9A Pending CN106375595A (en) 2016-10-17 2016-11-02 Auxiliary focusing apparatus and method
CN201610946035.3A Pending CN106534674A (en) 2016-10-17 2016-11-02 Method for displaying focus area and mobile terminal
CN201610946668.4A Active CN106502693B (en) 2016-10-17 2016-11-02 A kind of image display method and device
CN201610946623.7A Active CN106375596B (en) 2016-10-17 2016-11-02 Device and method for prompting focusing object

Country Status (1)

Country Link
CN (11) CN106504280A (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909274B (en) * 2017-02-27 2020-12-15 南京车链科技有限公司 Image display method and device
CN106973164B (en) * 2017-03-30 2019-03-01 维沃移动通信有限公司 A kind of take pictures weakening method and the mobile terminal of mobile terminal
CN107145285B (en) * 2017-05-12 2019-12-03 维沃移动通信有限公司 A kind of information extracting method and terminal
CN107222676B (en) * 2017-05-26 2020-06-02 Tcl移动通信科技(宁波)有限公司 Blurred picture generation method, storage device and mobile terminal
CN107247535B (en) * 2017-05-31 2021-11-30 北京小米移动软件有限公司 Intelligent mirror adjusting method and device and computer readable storage medium
WO2019014861A1 (en) * 2017-07-18 2019-01-24 Hangzhou Taruo Information Technology Co., Ltd. Intelligent object tracking
CN107613202B (en) * 2017-09-21 2020-03-10 维沃移动通信有限公司 Shooting method and mobile terminal
CN107807770A (en) * 2017-09-27 2018-03-16 阿里巴巴集团控股有限公司 A kind of screenshot method, device and electronic equipment
WO2019113746A1 (en) * 2017-12-11 2019-06-20 深圳市大疆创新科技有限公司 Manual-focus prompt method, control apparatus, photography device, and controller
CN109963200A (en) * 2017-12-25 2019-07-02 上海全土豆文化传播有限公司 Video broadcasting method and device
CN108536364A (en) * 2017-12-28 2018-09-14 努比亚技术有限公司 A kind of image pickup method, terminal and computer readable storage medium
CN108093181B (en) * 2018-01-16 2021-03-30 奇酷互联网络科技(深圳)有限公司 Picture shooting method and device, readable storage medium and mobile terminal
CN108471524B (en) * 2018-02-28 2020-08-07 北京小米移动软件有限公司 Focusing method and device and storage medium
CN108495029B (en) 2018-03-15 2020-03-31 维沃移动通信有限公司 Photographing method and mobile terminal
CN110349223B (en) * 2018-04-08 2021-04-30 中兴通讯股份有限公司 Image processing method and device
CN108876782A (en) * 2018-06-27 2018-11-23 Oppo广东移动通信有限公司 Recall video creation method and relevant apparatus
CN108989674A (en) * 2018-07-26 2018-12-11 努比亚技术有限公司 A kind of browsing video method, terminal and computer readable storage medium
CN109525888A (en) * 2018-09-28 2019-03-26 Oppo广东移动通信有限公司 Image display method, device, electronic equipment and storage medium
CN109816485B (en) * 2019-01-17 2021-06-15 口碑(上海)信息技术有限公司 Page display method and device
CN109648568B (en) * 2019-01-30 2022-01-04 深圳镁伽科技有限公司 Robot control method, system and storage medium
CN110333813A (en) * 2019-05-30 2019-10-15 平安科技(深圳)有限公司 Method, electronic device and the computer readable storage medium of invoice picture presentation
CN111355998B (en) * 2019-07-23 2022-04-05 杭州海康威视数字技术股份有限公司 Video processing method and device
CN110908558B (en) * 2019-10-30 2022-10-18 维沃移动通信(杭州)有限公司 Image display method and electronic equipment
CN112770042B (en) * 2019-11-05 2022-11-15 RealMe重庆移动通信有限公司 Image processing method and device, computer readable medium, wireless communication terminal
CN110896451B (en) * 2019-11-20 2022-01-28 维沃移动通信有限公司 Preview picture display method, electronic device and computer readable storage medium
CN111026316A (en) * 2019-11-25 2020-04-17 维沃移动通信有限公司 Image display method and electronic equipment
CN113132618B (en) * 2019-12-31 2022-09-09 华为技术有限公司 Auxiliary photographing method and device, terminal equipment and storage medium
CN111182211B (en) * 2019-12-31 2021-09-24 维沃移动通信有限公司 Shooting method, image processing method and electronic equipment
CN111526425B (en) * 2020-04-26 2022-08-09 北京字节跳动网络技术有限公司 Video playing method and device, readable medium and electronic equipment
CN111722775A (en) * 2020-06-24 2020-09-29 维沃移动通信(杭州)有限公司 Image processing method, device, equipment and readable storage medium
CN112188260A (en) * 2020-10-26 2021-01-05 咪咕文化科技有限公司 Video sharing method, electronic device and readable storage medium
CN112584043B (en) * 2020-12-08 2023-03-24 维沃移动通信有限公司 Auxiliary focusing method and device, electronic equipment and storage medium
CN114666490B (en) * 2020-12-23 2024-02-09 北京小米移动软件有限公司 Focusing method, focusing device, electronic equipment and storage medium
CN116055869B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Video processing method and terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103988489A (en) * 2011-09-30 2014-08-13 富士胶片株式会社 Imaging device, imaging method, recording medium and program
CN104038699A (en) * 2014-06-27 2014-09-10 深圳市中兴移动通信有限公司 Focusing state prompting method and shooting device

Family Cites Families (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN87200129U (en) * 1987-01-08 1988-01-27 李传琪 Multifunction enlarger
JP4233624B2 (en) * 1997-12-26 2009-03-04 カシオ計算機株式会社 Electronic camera device
JP2003143144A (en) * 2001-11-01 2003-05-16 Matsushita Electric Ind Co Ltd Transmission system and method for detecting delay amount of signal propagation
JP2004064259A (en) * 2002-07-26 2004-02-26 Kyocera Corp System for confirming focus of digital camera
JP4012015B2 (en) * 2002-08-29 2007-11-21 キヤノン株式会社 Image forming apparatus
JP2006295242A (en) * 2005-04-05 2006-10-26 Olympus Imaging Corp Digital camera
CN101202873B (en) * 2006-12-13 2012-07-25 株式会社日立制作所 Method and device for information record reproduction
JP4678603B2 (en) * 2007-04-20 2011-04-27 富士フイルム株式会社 Imaging apparatus and imaging method
JP4961282B2 (en) * 2007-07-03 2012-06-27 キヤノン株式会社 Display control apparatus and control method thereof
CN101398527B (en) * 2007-09-27 2011-09-21 联想(北京)有限公司 Method for implementing zooming-in function on photo terminal and photo terminal thereof
JP5173453B2 (en) * 2008-01-22 2013-04-03 キヤノン株式会社 Imaging device and display control method of imaging device
CN101247489A (en) * 2008-03-20 2008-08-20 南京大学 Method for detail real-time replay of digital television
JP2010041175A (en) * 2008-08-01 2010-02-18 Olympus Imaging Corp Image reproducing apparatus, image reproducing method, and program
CN101778214B (en) * 2009-01-09 2011-08-31 华晶科技股份有限公司 Digital image pick-up device having brightness and focusing compensation function and image compensation method thereof
JP5361528B2 (en) * 2009-05-15 2013-12-04 キヤノン株式会社 Imaging apparatus and program
CN101895723A (en) * 2009-05-22 2010-11-24 深圳市菲特数码技术有限公司 Monitoring device
JP5460173B2 (en) * 2009-08-13 2014-04-02 富士フイルム株式会社 Image processing method, image processing apparatus, image processing program, and imaging apparatus
JP5538992B2 (en) * 2010-04-27 2014-07-02 キヤノン株式会社 Imaging apparatus and control method thereof
CN102289336A (en) * 2010-06-17 2011-12-21 昆达电脑科技(昆山)有限公司 picture management system and method
JP5640155B2 (en) * 2011-09-30 2014-12-10 富士フイルム株式会社 Stereoscopic image pickup apparatus and in-focus state confirmation image display method
JP2013093819A (en) * 2011-10-05 2013-05-16 Sanyo Electric Co Ltd Electronic camera
JP5936404B2 (en) * 2012-03-23 2016-06-22 キヤノン株式会社 Imaging apparatus, control method thereof, and program
CN103366352B (en) * 2012-03-30 2017-09-22 北京三星通信技术研究有限公司 Apparatus and method for producing the image that background is blurred
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN103049175B (en) * 2013-01-22 2016-08-10 华为终端有限公司 Preview screen rendering method, device and terminal
CN103135927B (en) * 2013-01-25 2015-09-30 广东欧珀移动通信有限公司 A kind of mobile terminal rapid focus photographic method and system
CN104104787B (en) * 2013-04-12 2016-12-28 上海果壳电子有限公司 Photographic method, system and handheld device
CN103211621B (en) * 2013-04-27 2015-07-15 上海市杨浦区中心医院 Ultrasound directed texture quantitative measuring instrument and method thereof
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN103595919B (en) * 2013-11-15 2015-08-26 努比亚技术有限公司 Manual focus method and filming apparatus
CN103631599B (en) * 2013-12-11 2017-12-12 Tcl通讯(宁波)有限公司 One kind is taken pictures processing method, system and mobile terminal
CN104731494B (en) * 2013-12-23 2019-05-31 中兴通讯股份有限公司 A kind of method and apparatus of preview interface selection area amplification
JP6151176B2 (en) * 2013-12-27 2017-06-21 株式会社 日立産業制御ソリューションズ Focus control apparatus and method
CN103777865A (en) * 2014-02-21 2014-05-07 联想(北京)有限公司 Method, device, processor and electronic device for displaying information
CN104333689A (en) * 2014-03-05 2015-02-04 广州三星通信技术研究有限公司 Method and device for displaying preview image during shooting
CN103929596B (en) * 2014-04-30 2016-09-14 努比亚技术有限公司 Guide the method and device of shooting composition
CN104023172A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Shooting method and shooting device of dynamic image
CN104243825B (en) * 2014-09-22 2017-11-14 广东欧珀移动通信有限公司 A kind of mobile terminal Atomatic focusing method and system
CN104243827A (en) * 2014-09-23 2014-12-24 深圳市中兴移动通信有限公司 Shooting method and device
CN105512136A (en) * 2014-09-25 2016-04-20 中兴通讯股份有限公司 Method and device for processing based on layer
EP3018892A1 (en) * 2014-10-31 2016-05-11 Canon Kabushiki Kaisha Image capturing apparatus and control method thereof
CN104618627B (en) * 2014-12-31 2018-06-08 小米科技有限责任公司 Method for processing video frequency and device
CN105872349A (en) * 2015-01-23 2016-08-17 中兴通讯股份有限公司 Photographing method, photographing device and mobile terminal
CN104660913B (en) * 2015-03-18 2016-08-24 努比亚技术有限公司 Focus adjustment method and apparatus
CN104702846B (en) * 2015-03-20 2018-05-08 惠州Tcl移动通信有限公司 Mobile terminal camera preview image processing method and system
CN104754227A (en) * 2015-03-26 2015-07-01 广东欧珀移动通信有限公司 Method and device for shooting video
CN104836956A (en) * 2015-05-09 2015-08-12 陈包容 Processing method and device for cellphone video
CN104883619B (en) * 2015-05-12 2018-02-09 广州酷狗计算机科技有限公司 Audio-video frequency content commending system, method and device
CN104954672B (en) * 2015-06-10 2020-06-02 惠州Tcl移动通信有限公司 Manual focusing method of mobile terminal and mobile terminal
CN105100615B (en) * 2015-07-24 2019-02-26 青岛海信移动通信技术股份有限公司 A kind of method for previewing of image, device and terminal
CN105141858B (en) * 2015-08-13 2018-10-12 上海斐讯数据通信技术有限公司 The background blurring system and method for photo
CN105611145A (en) * 2015-09-21 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Multi-graphic layer shooting method, multi-graphic layer shooting apparatus and terminal
CN105578275A (en) * 2015-12-16 2016-05-11 小米科技有限责任公司 Video display method and apparatus
CN105843501B (en) * 2016-02-03 2019-11-29 维沃移动通信有限公司 A kind of method of adjustment and mobile terminal of parameter of taking pictures
CN105979165B (en) * 2016-06-02 2019-02-05 Oppo广东移动通信有限公司 Blur photograph generation method, device and mobile terminal


Also Published As

Publication number Publication date
CN106453924A (en) 2017-02-22
CN106572303B (en) 2020-02-18
CN106534675A (en) 2017-03-22
CN106534674A (en) 2017-03-22
CN106453924B (en) 2019-11-15
CN106412324A (en) 2017-02-15
CN106504280A (en) 2017-03-15
CN106375596A (en) 2017-02-01
CN106572249A (en) 2017-04-19
CN106572303A (en) 2017-04-19
CN106375595A (en) 2017-02-01
CN106502693B (en) 2019-07-19
CN106502693A (en) 2017-03-15
CN106572302A (en) 2017-04-19
CN106375596B (en) 2020-04-24
CN106572302B (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN106412324B (en) Device and method for prompting focusing object
CN106454121B (en) Double-camera shooting method and device
US8780258B2 (en) Mobile terminal and method for generating an out-of-focus image
CN106909274B (en) Image display method and device
WO2017067526A1 (en) Image enhancement method and mobile terminal
CN106530241B (en) Image blurring processing method and device
WO2017045650A1 (en) Picture processing method and terminal
WO2017050115A1 (en) Image synthesis method
US20170255382A1 (en) Mobile terminal and operation method thereof and computer storage medium
CN106445352B (en) Edge touch device and method of mobile terminal
CN105468158B (en) Color adjustment method and mobile terminal
CN105956999B (en) Thumbnail generation device and method
WO2017020836A1 (en) Device and method for processing depth image by blurring
CN106569709B (en) Apparatus and method for controlling mobile terminal
US10740946B2 (en) Partial image processing method, device, and computer storage medium
CN106713716B (en) Shooting control method and device for double cameras
CN106657782B (en) Picture processing method and terminal
CN106651867B (en) Method, device and terminal for realizing interactive image segmentation
CN106911881B (en) Dynamic photo shooting device and method based on double cameras and terminal
CN106851125B (en) Mobile terminal and multiple exposure shooting method
CN106993134B (en) Image generation device and method and terminal
CN106851114B (en) Photo display device, photo generation device, photo display method, photo generation method and terminal
CN106713656B (en) Shooting method and mobile terminal
CN106791119B (en) Photo processing method and device and terminal
CN105554285B (en) Processing method for taking person photo and intelligent mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200115

Address after: 518000 502, 5 / F, building 2, Guanghui Technology Park, Minqing Road, Fukang community, Longhua street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Apexel Technology Co., Ltd.

Address before: 518000 Guangdong Province, Shenzhen high tech Zone of Nanshan District City, No. 9018 North Central Avenue's innovation building A, 6-8 layer, 10-11 layer, B layer, C District 6-10 District 6 floor

Applicant before: Nubian Technologies Ltd.

GR01 Patent grant