CN110049253A - Focusing control method, device, and computer-readable storage medium - Google Patents
Focusing control method, device, and computer-readable storage medium
- Publication number: CN110049253A
- Application number: CN201910473348.5A
- Authority
- CN
- China
- Prior art keywords
- state
- candidate
- wearable device
- shooting
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof; H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
This application discloses a focusing control method, a device, and a computer-readable storage medium. The method comprises: determining, according to a focusing state under the current shooting state of a wearable device, other associated regions of a shooting preview region, and dividing the associated regions into focus-object candidate regions; then extracting image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in turn into the focus-object candidate regions; and finally monitoring the motion state of the wearable device, selecting a target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized, so that the user can adjust the focus object more conveniently while shooting with a wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device.
Description
Technical field
This application relates to the field of mobile communications, and in particular to a focusing control method, a device, and a computer-readable storage medium.
Background art
In the prior art, with the rapid development of intelligent terminals, wearable devices different from conventional smart phones have appeared, for example smart watches and smart bracelets. Because a wearable device differs from a traditional smart phone in aspects such as its hardware and software environment, operating mode, and operating environment, directly transplanting the control scheme of a traditional smart phone to a wearable device may cause inconvenience to the user and result in a poor user experience.
Summary of the invention
In order to solve the above technical deficiencies in the prior art, the present invention proposes a focusing control method, the method comprising:
identifying a focusing state under a current shooting state of a wearable device;
determining, according to the focusing state, other associated regions of a shooting preview region, and dividing the associated regions into focus-object candidate regions;
extracting image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in turn into the focus-object candidate regions;
monitoring a motion state of the wearable device, selecting a target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state.
Optionally, identifying the focusing state under the current shooting state of the wearable device comprises:
detecting the current shooting state of the wearable device, wherein the shooting state includes a shooting operation region and a shooting preview region;
monitoring a focus control signal in the shooting operation region, and monitoring image information in the shooting preview region.
Optionally, determining, according to the focusing state, the other associated regions of the shooting preview region, and dividing the associated regions into focus-object candidate regions, comprises:
dividing a plurality of associated regions from the regions adjacent to the shooting preview region;
determining area attributes of the associated regions, wherein the area attributes include a subject attribute, a background attribute, a static attribute, and a dynamic attribute.
Optionally, determining, according to the focusing state, the other associated regions of the shooting preview region, and dividing the associated regions into focus-object candidate regions, further comprises:
extracting object categories of the shooting objects, wherein the object categories include a subject object, a background object, a static object, and a dynamic object;
determining a correspondence between the object categories and the area attributes, associating the subject attribute with the subject object, the background attribute with the background object, the static attribute with the static object, and the dynamic attribute with the dynamic object.
Optionally, extracting the image information under the shooting state, parsing the image information to obtain the candidate objects, and placing the candidate objects in turn into the focus-object candidate regions, comprises:
obtaining and parsing the image information of the shooting preview region;
parsing and identifying the shooting objects in the image information.
Optionally, extracting the image information under the shooting state, parsing the image information to obtain the candidate objects, and placing the candidate objects in turn into the focus-object candidate regions, further comprises:
performing classified extraction on the shooting objects by object category to obtain the candidate objects;
placing the candidate objects in turn into the focus-object candidate regions in accordance with the correspondence between the object categories and the area attributes.
Optionally, monitoring the motion state of the wearable device, selecting the target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state, comprises:
monitoring the motion state of the wearable device, wherein the motion state includes a rotational motion state;
parsing the rotational motion state to determine a rotation direction and a rotation angle.
Optionally, monitoring the motion state of the wearable device, selecting the target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state, further comprises:
determining, according to the rotation direction, a collection region of a corresponding plurality of object candidate regions;
selecting a target candidate region within the collection region according to the rotation angle, and taking the candidate object in the target candidate region as the focus object under the current shooting state.
The present invention further provides a focusing control device, the device comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the method according to any one of the above embodiments.
The present invention further provides a computer-readable storage medium having a focusing control program stored thereon; the focusing control program, when executed by a processor, implements the steps of the focusing control method according to any one of the above embodiments.
The beneficial effects of the present invention are as follows. The focusing state under the current shooting state of the wearable device is identified; other associated regions of the shooting preview region are then determined according to the focusing state, and the associated regions are divided into focus-object candidate regions; next, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in turn into the focus-object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected from the focus-object candidate regions according to the motion state, and the candidate object in the target candidate region is taken as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized, so that the user can adjust the focus object more conveniently while shooting with a wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
Brief description of the drawings
The drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the invention.
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is obvious that, for those of ordinary skill in the art, other drawings may be obtained from these drawings without any creative effort.
Fig. 1 is a schematic diagram of the hardware structure of an embodiment of a wearable device provided by an embodiment of the present invention;
Fig. 2 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 3 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 4 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 5 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 6 is a flowchart of the first embodiment of the focusing control method of the present invention;
Fig. 7 is a flowchart of the second embodiment of the focusing control method of the present invention;
Fig. 8 is a flowchart of the third embodiment of the focusing control method of the present invention;
Fig. 9 is a flowchart of the fourth embodiment of the focusing control method of the present invention;
Fig. 10 is a flowchart of the fifth embodiment of the focusing control method of the present invention;
Fig. 11 is a flowchart of the sixth embodiment of the focusing control method of the present invention;
Fig. 12 is a flowchart of the seventh embodiment of the focusing control method of the present invention;
Fig. 13 is a flowchart of the eighth embodiment of the focusing control method of the present invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
The wearable device provided in the embodiments of the present invention includes mobile terminals such as smart bracelets, smart watches, and smart phones. With the continuous development of screen technology, screen forms such as flexible screens and folding screens have appeared, so that mobile terminals such as smart phones may also be used as wearable devices. The wearable device provided in the embodiments of the present invention may include components such as an RF (Radio Frequency) unit, a WiFi module, an audio output unit, an A/V (audio/video) input unit, a sensor, a display unit, a user input unit, an interface unit, a memory, a processor, and a power supply.
In the following description, a wearable device is taken as an example. Referring to Fig. 1, which is a schematic diagram of the hardware structure of a wearable device implementing various embodiments of the present invention, the wearable device 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the wearable device structure shown in Fig. 1 does not constitute a limitation on the wearable device; the wearable device may include more or fewer components than shown, combine certain components, or arrange the components differently.
The components of the wearable device are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during the transmission and reception of information or during a call. Specifically, the radio frequency unit 101 may send uplink information to a base station, and may also receive downlink information sent by the base station and deliver it to the processor 110 of the wearable device for processing. The downlink information sent by the base station to the radio frequency unit 101 may be generated in response to the uplink information sent by the radio frequency unit 101, or may be actively pushed to the radio frequency unit 101 after an information update concerning the wearable device is detected. For example, after detecting that the geographical location of the wearable device has changed, the base station may send a message notification of the change of geographical location to the radio frequency unit 101 of the wearable device; after receiving the message notification, the radio frequency unit 101 may deliver it to the processor 110 of the wearable device for processing, and the processor 110 of the wearable device may control the message notification to be displayed on the display panel 1061 of the wearable device. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with the network and other devices through wireless communication, which may specifically include communicating with a server in a network system. For example, the wearable device may download file resources from the server through wireless communication, such as downloading an application program; after the wearable device finishes downloading a certain application program, if the file resource corresponding to that application program on the server is updated, the server may push a resource-update message notification to the wearable device through wireless communication to remind the user to update the application program. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
In one embodiment, the wearable device 100 may access an existing communication network by inserting a SIM card.
In another embodiment, the wearable device 100 may access an existing communication network by being provided with an eSIM card (Embedded-SIM); by adopting an eSIM card, the internal space of the wearable device can be saved and its thickness reduced.
It can be understood that although Fig. 1 shows the radio frequency unit 101, the radio frequency unit 101 is not an essential component of the wearable device and may be omitted entirely as needed within the scope that does not change the essence of the invention. The wearable device 100 may instead establish a communication connection with other devices or a communication network solely through the WiFi module 102; the embodiments of the present invention are not limited in this regard.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the wearable device can help the user send and receive e-mail, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the wearable device and may be omitted as needed within the scope that does not change the essence of the invention.
When the wearable device 100 is in a mode such as a call-signal reception mode, a call mode, a recording mode, a speech recognition mode, or a broadcast reception mode, the audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the wearable device 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in operating modes such as a telephone call mode, a recording mode, or a speech recognition mode, and may process such sound into audio data. In the telephone call mode, the processed audio (voice) data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and then output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
In one embodiment, the wearable device 100 includes one or more cameras. By turning on a camera, images can be captured, realizing functions such as photographing and video recording. The position of the camera can be set as needed.
The wearable device 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the wearable device 100 is moved close to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the device (such as landscape/portrait switching, related games, magnetometer posture calibration), vibration-recognition related functions (such as a pedometer or tapping), and the like.
In one embodiment, the wearable device 100 further includes a proximity sensor; by means of the proximity sensor, the wearable device can realize contactless operation, providing additional operation modes.
In one embodiment, the wearable device 100 further includes a heart rate sensor, which, when the device is worn close to the user, can detect the heart rate.
In one embodiment, the wearable device 100 may further include a fingerprint sensor, which can realize functions such as security verification by reading a fingerprint.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in a form such as a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display.
In one embodiment, the display panel 1061 uses a flexible display screen; when a wearable device using a flexible display screen is worn, the screen can be bent so as to fit the wearer better. Optionally, the flexible display screen may use an OLED screen body or a graphene screen body; in other embodiments, the flexible display screen may also be made of other display materials, and this embodiment is not limited thereto.
In one embodiment, the display panel 1061 of the wearable device may be rectangular so as to wrap around when worn. In other embodiments, other shapes may also be adopted.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the wearable device. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects touch operations of the user on or near it (for example, operations of the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drives the corresponding connecting apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like, which are not specifically limited here.
In one embodiment, one or more buttons may be provided on the side of the wearable device 100. The buttons can be operated in various ways such as short press, long press, and rotation, so as to realize a variety of operation effects. The number of buttons may be more than one, and different buttons may be used in combination to realize a variety of operation functions.
Further, the touch panel 1071 may cover the display panel 1061. After the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent components for realizing the input and output functions of the wearable device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the wearable device, which is not specifically limited here. For example, when receiving a message notification of a certain application program through the radio frequency unit 101, the processor 110 may control the message notification to be displayed in a certain preset area of the display panel 1061, the preset area corresponding to a certain area of the touch panel 1071; by performing a touch operation on that area of the touch panel 1071, the message notification displayed in the corresponding area of the display panel 1061 can be controlled.
The interface unit 108 serves as an interface through which at least one external device can be connected to the wearable device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input from an external device (for example, data information or power) and transmit the received input to one or more elements in the wearable device 100, or may be used to transmit data between the wearable device 100 and an external device.
In one embodiment, the interface unit 108 of the wearable device 100 adopts a contact structure and is connected with corresponding other devices through the contacts to realize functions such as charging and data connection. The contacts may also be made waterproof.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system, application programs required for at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the device (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The processor 110 is the control center of the wearable device. It connects all parts of the entire wearable device through various interfaces and lines, and executes the various functions of the wearable device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the wearable device as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The wearable device 100 may further include a power supply 111 (such as a battery) that supplies power to all components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
Although not shown in Fig. 1, the wearable device 100 may also include a Bluetooth module and the like, which will not be described in detail here. Through Bluetooth, the wearable device 100 can connect with other terminal devices to realize communication and the exchange of information.
Referring to Fig. 2 to Fig. 4, these are schematic structural diagrams of an embodiment of a wearable device provided by an embodiment of the present invention. The wearable device in the embodiment of the present invention includes a flexible screen. When the wearable device is unfolded, the flexible screen is strip-shaped; when the wearable device is in the worn state, the flexible screen is bent into a ring shape. Fig. 2 and Fig. 3 show schematic structural diagrams of the wearable device with the screen unfolded, and Fig. 4 shows a schematic structural diagram of the wearable device with the screen bent.
Based on the above embodiments, it can be seen that when the device is a watch, a bracelet, or another wearable device, the screen of the device may or may not cover the strap region of the device. Here, the present application proposes an optional embodiment in which the device may be a watch, a bracelet, or another wearable device, and the device includes a screen and a connecting part. The screen may be a flexible screen, and the connecting part may be a strap. Optionally, the screen of the device, or the display area of the screen, may partly or entirely cover the strap of the device. As shown in Fig. 5, which is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application, the screen of the device extends to both sides and partly covers the strap of the device. In other embodiments, the screen of the device may also entirely cover the strap of the device; the embodiments of the present application are not limited in this regard.
Embodiment one
Fig. 6 is a flowchart of the first embodiment of the focusing control method of the present invention. A focusing control method, the method comprising:
S1, identifying a focusing state under a current shooting state of a wearable device;
S2, determining, according to the focusing state, other associated regions of a shooting preview region, and dividing the associated regions into focus-object candidate regions;
S3, extracting image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in turn into the focus-object candidate regions;
S4, monitoring a motion state of the wearable device, selecting a target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state.
Specifically, in this embodiment, first, the focusing state under the current shooting state of the wearable device is identified; then, other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus-object candidate regions; next, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in turn into the focus-object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected from the focus-object candidate regions according to the motion state, and the candidate object in the target candidate region is taken as the focus object under the current shooting state.
Optionally, the focusing state under the current shooting state of the wearable device is identified. The wearable device has one or more cameras, and when the wearable device starts the shooting function, the one or more cameras are started to obtain image information. It can be understood that, in this embodiment, at least one camera is used to capture the shooting object, while the same camera or another camera may also be used to assist in determining the motion state of the wearable device. In this embodiment, when the wearable device is in the shooting state, a shooting preview image is displayed in a preset area of the wearable device, and during the shooting preview stage the focusing state of the current shooting object is obtained in real time, wherein the focusing state includes user operations such as focusing and selecting a focus point, and also includes a pre-focus object currently obtained by a shooting algorithm;
Optionally, in this embodiment, other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus-object candidate regions, wherein the shooting preview region is contained within the display range of the wearable device. It can be understood that the display range of the wearable device is relatively wide, but when the wearable device is in the worn state the user's visual range relative to the wearable device is limited; therefore, the shooting preview region is determined within the user's visual range relative to the wearable device, and, in order to leave other visual range available, in this embodiment the shooting preview region is smaller than the display area within that visual range, and the regions within the visual range other than the shooting preview region are taken as the other associated regions of this embodiment. It can be understood that when the wearing state of the wearable device changes during the shooting preview, the user's visual range relative to the wearable device changes correspondingly, and consequently the other associated regions of this embodiment also change. It can be understood that when the other associated regions change, the arrangement of the focus-object candidate regions before and after the change may be adjusted accordingly, so as to facilitate viewing and operation by the user;
Optionally, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in turn into the focus-object candidate regions. In this embodiment, the image information under the shooting state is obtained in real time and then parsed to obtain the candidate objects. It can be understood that the candidate objects include objects of different types, for example one or more shooting subjects likely to be the subject of the current shot and one or more shooting backgrounds likely to be the background of the current shot. The candidate objects are then extracted; it can be understood that each object is extracted by an edge-analysis scheme, and the image of each object is then placed, in the form of a thumbnail, into one of the focus-object candidate regions divided as described above. It can be understood that the display scales of the above shooting subjects and shooting backgrounds within the preview image may differ; however, in this embodiment, to facilitate viewing by the user, the thumbnails of the above shooting subjects and/or shooting backgrounds are adjusted to the same size and then placed into the focus-object candidate regions divided as described above;
Optionally, the motion state of the wearable device is monitored, a target candidate region is selected from the focus-object candidate regions according to the motion state, and the candidate object in the target candidate region is taken as the focus object under the current shooting state, wherein the motion state of the wearable device is a motion state of a preset specific kind. For example, during the shooting preview stage, if the wearable device is perpendicular to the ground, the rotation angle of the wearable device in the vertical direction is obtained; as another example, during the shooting preview stage, if the wearable device is parallel to the ground, the rotation angle of the wearable device in the horizontal direction is obtained. The target candidate region is then selected from the focus-object candidate regions according to the rotation angle. It can be understood that, during the rotation, the candidate region corresponding to the current rotation angle is displayed, highlighted, or brightened in real time; when the rotation stops, or the rotation remains stable for a preset time, the candidate region currently landed on is taken as the target candidate region of this embodiment, and the candidate object in the target candidate region is taken as the focus object under the current shooting state.
The beneficial effect of this embodiment is as follows: the focusing state under the current shooting state of the wearable device is identified; other associated regions of the shooting preview region are then determined according to the focusing state, and the associated regions are divided into focus-object candidate regions; next, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in turn into the focus-object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected from the focus-object candidate regions according to the motion state, and the candidate object in the target candidate region is taken as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
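To make the flow of steps S1-S4 concrete, the following is a minimal, self-contained Python sketch. The data classes, the placement of objects by zipping them with the candidate regions, and the 15-degree step per region are illustrative assumptions made for this sketch; the application does not prescribe any particular data structures or angular step.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    category: str                     # "subject", "background", "static" or "dynamic"

@dataclass
class CandidateRegion:
    attribute: str                    # area attribute of the region
    held: Optional[Candidate] = None  # thumbnail currently shown in the region

def run_preview_cycle(regions, parsed_objects, rotation_angle, degrees_per_region=15.0):
    """S2-S4 in miniature: fill the candidate regions, then pick one by rotation angle."""
    # S3: place each parsed candidate object, in turn, into a candidate region
    for region, obj in zip(regions, parsed_objects):
        region.held = obj
    # S4: map the accumulated rotation angle onto one region (step size is assumed)
    index = int(rotation_angle // degrees_per_region) % len(regions)
    return regions[index].held        # becomes the focus object for the current shot

# Example: two candidate regions outside the preview, two parsed objects,
# and a 20-degree wrist rotation that lands on the second region.
regions = [CandidateRegion("subject"), CandidateRegion("background")]
objects = [Candidate("person", "subject"), Candidate("tree line", "background")]
print(run_preview_cycle(regions, objects, rotation_angle=20.0).name)   # -> tree line
```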
Embodiment two
Fig. 7 is a flowchart of the second embodiment of the focusing control method of the present invention. Based on the above embodiment, identifying the focusing state under the current shooting state of the wearable device comprises:
S11, detecting the current shooting state of the wearable device, wherein the shooting state includes a shooting operation region and a shooting preview region;
S12, monitoring a focus control signal in the shooting operation region, and monitoring image information in the shooting preview region.
Specifically, in this embodiment, first, the current shooting state of the wearable device is detected, wherein the shooting state includes a shooting operation region and a shooting preview region; then, the focus control signal in the shooting operation region is monitored, and the image information in the shooting preview region is monitored.
Optionally, the current shooting state of the wearable device is detected, wherein the shooting state is associated with the current wearing state of the wearable device. For example, the wearing state includes a wearing position and a wearing orientation; specifically, the wearing position includes wrist wearing, and the wearing orientation includes the camera facing inward and the camera facing outward. The shooting operation region and the shooting preview region of this embodiment are determined according to the different wearing states in combination with the camera being used;
Optionally, the focus control signal in the shooting operation region is monitored. For example, a single tap by the user is received, the touch signal generated at the tap position is determined, and the focus control signal corresponding to the touch signal is determined in combination with the image information corresponding to the tap position; meanwhile, the image information in the shooting preview region is monitored.
The beneficial effect of this embodiment is as follows: the current shooting state of the wearable device is detected, wherein the shooting state includes a shooting operation region and a shooting preview region; then the focus control signal in the shooting operation region is monitored, and the image information in the shooting preview region is monitored. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
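As a hedged sketch of how the wearing state might map to a shooting operation region and a shooting preview region, and how a single tap might become a focus control signal: the 30/70 split, the orientation names, and the normalised-coordinate signal below are assumptions of this sketch, since the application only states that both regions depend on the wearing state and the camera in use.

```python
def split_screen(wearing_orientation, screen_w, screen_h):
    """Return (operation_region, preview_region) as (x, y, w, h) rectangles."""
    op_h = int(screen_h * 0.3)                       # 30/70 split is assumed
    if wearing_orientation == "camera_outward":
        operation = (0, 0, screen_w, op_h)
        preview = (0, op_h, screen_w, screen_h - op_h)
    else:  # "camera_inward"
        preview = (0, 0, screen_w, screen_h - op_h)
        operation = (0, screen_h - op_h, screen_w, op_h)
    return operation, preview

def tap_to_focus_signal(tap_x, tap_y, operation_region):
    """Turn a single tap inside the shooting operation region into a focus control signal."""
    x, y, w, h = operation_region
    if x <= tap_x < x + w and y <= tap_y < y + h:
        # normalised coordinates stand in for the image information at the tap position
        return {"type": "focus", "u": (tap_x - x) / w, "v": (tap_y - y) / h}
    return None   # taps outside the operation region produce no focus control signal

operation, preview = split_screen("camera_outward", screen_w=320, screen_h=640)
print(tap_to_focus_signal(100, 50, operation))   # a tap inside the operation region
```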
Embodiment three
Fig. 8 is a flowchart of the third embodiment of the focusing control method of the present invention. Based on the above embodiments, determining, according to the focusing state, the other associated regions of the shooting preview region, and dividing the associated regions into focus-object candidate regions, comprises:
S21, dividing a plurality of associated regions from the regions adjacent to the shooting preview region;
S22, determining area attributes of the associated regions, wherein the area attributes include a subject attribute, a background attribute, a static attribute, and a dynamic attribute.
Specifically, in this embodiment, first, a plurality of associated regions are divided from the regions adjacent to the shooting preview region; then, the area attributes of the associated regions are determined, wherein the area attributes include a subject attribute, a background attribute, a static attribute, and a dynamic attribute.
Optionally, since the display area of the wearable device is strip-shaped and the shooting preview region has a specific or preset aspect ratio, the upper area and/or the lower area of the shooting preview region are generally free areas; in this embodiment, a plurality of associated regions are divided in the upper area and/or the lower area of the shooting preview region;
Optionally, the area attributes of the associated regions are determined, wherein the area attributes include a subject attribute, a background attribute, a static attribute, and a dynamic attribute. It can be understood that the different area attributes are used to hold objects of different attributes or categories, so that the user can browse them by category, which improves operating efficiency.
The beneficial effect of this embodiment is as follows: a plurality of associated regions are divided from the regions adjacent to the shooting preview region; then the area attributes of the associated regions are determined, wherein the area attributes include a subject attribute, a background attribute, a static attribute, and a dynamic attribute. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
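The division of the free bands above and below the preview into attribute-tagged candidate regions could be sketched as below. The two-regions-per-band layout, the attribute ordering, and the concrete pixel sizes are assumptions of this sketch rather than values given by the application.

```python
def divide_candidate_regions(screen_w, screen_h, preview_h, per_band=2):
    """Divide the free bands above and below the preview into candidate regions
    and tag each with an area attribute (S21-S22)."""
    attributes = ["subject", "background", "static", "dynamic"]
    band_h = (screen_h - preview_h) // 2          # height of the upper and lower free bands
    regions = []
    for band_index, band_y in enumerate((0, band_h + preview_h)):
        for i in range(per_band):
            regions.append({
                "rect": (i * screen_w // per_band, band_y, screen_w // per_band, band_h),
                "attribute": attributes[band_index * per_band + i],
            })
    return regions

for region in divide_candidate_regions(screen_w=320, screen_h=640, preview_h=400):
    print(region["attribute"], region["rect"])
```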
Example IV
Fig. 9 is a flowchart of the fourth embodiment of the focusing control method of the present invention. Based on the above embodiments, determining, according to the focusing state, the other associated regions of the shooting preview region, and dividing the associated regions into focus-object candidate regions, further comprises:
S23, extracting object categories of the shooting objects, wherein the object categories include a subject object, a background object, a static object, and a dynamic object;
S24, determining a correspondence between the object categories and the area attributes, associating the subject attribute with the subject object, the background attribute with the background object, the static attribute with the static object, and the dynamic attribute with the dynamic object.
Specifically, in this embodiment, first, the object categories of the shooting objects are extracted, wherein the object categories include a subject object, a background object, a static object, and a dynamic object; then, the correspondence between the object categories and the area attributes is determined, associating the subject attribute with the subject object, the background attribute with the background object, the static attribute with the static object, and the dynamic attribute with the dynamic object.
Optionally, the object categories of the shooting objects are extracted, wherein the object categories include a subject object, a background object, a static object, and a dynamic object. As in the above embodiment, the area attributes are used to hold objects of different attributes or categories so that the user can browse them by category, which improves operating efficiency; therefore, in this embodiment, the object categories of the shooting objects are extracted, the object categories including a subject object, a background object, a static object, and a dynamic object;
Optionally, the correspondence between the object categories and the area attributes is determined, associating the subject attribute with the subject object, the background attribute with the background object, the static attribute with the static object, and the dynamic attribute with the dynamic object, so that after image recognition the objects can be placed into the corresponding regions.
The beneficial effect of this embodiment is as follows: the object categories of the shooting objects are extracted, wherein the object categories include a subject object, a background object, a static object, and a dynamic object; then the correspondence between the object categories and the area attributes is determined, associating the subject attribute with the subject object, the background attribute with the background object, the static attribute with the static object, and the dynamic attribute with the dynamic object. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
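One possible representation of the one-to-one correspondence of S24 is a simple lookup table, as sketched below; the key and attribute names are assumptions of this sketch, since the application only states which category is associated with which attribute.

```python
# Correspondence between object categories and area attributes (S24).
CATEGORY_TO_ATTRIBUTE = {
    "subject_object": "subject",
    "background_object": "background",
    "static_object": "static",
    "dynamic_object": "dynamic",
}

def region_for(object_category, regions):
    """Pick the candidate region whose area attribute matches the object category."""
    wanted = CATEGORY_TO_ATTRIBUTE[object_category]
    return next((r for r in regions if r["attribute"] == wanted), None)

regions = [{"attribute": a} for a in ("subject", "background", "static", "dynamic")]
print(region_for("dynamic_object", regions))   # -> {'attribute': 'dynamic'}
```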
Embodiment five
Fig. 10 is a flowchart of the fifth embodiment of the focusing control method of the present invention. Based on the above embodiments, extracting the image information under the shooting state, parsing the image information to obtain the candidate objects, and placing the candidate objects in turn into the focus-object candidate regions, comprises:
S31, obtaining and parsing the image information of the shooting preview region;
S32, parsing and identifying the shooting objects in the image information.
Specifically, in this embodiment, first, the image information of the shooting preview region is obtained and parsed; then, the shooting objects in the image information are parsed and identified.
Optionally, the image information of the shooting preview region is obtained and parsed in real time; alternatively, after the camera is switched, the image information of the shooting preview region is obtained and parsed again;
Optionally, the shooting objects in the image information are parsed and identified in real time; alternatively, after the currently used camera is switched, the shooting objects in the image information are parsed and identified again.
The beneficial effect of this embodiment is as follows: the image information of the shooting preview region is obtained and parsed, and the shooting objects in the image information are then parsed and identified. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
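A minimal sketch of the preview-parsing behaviour described above is given below: frames are parsed as they arrive, and the preview is re-acquired whenever the active camera changes. The frame list, camera labels, and the toy detector are assumptions of this sketch; real object recognition is outside its scope.

```python
def preview_loop(frames, camera_ids, detect):
    """Parse each preview frame as it arrives and re-acquire the preview whenever
    the active camera changes (real-time capture is simulated by the frame list)."""
    last_camera = None
    for frame, camera in zip(frames, camera_ids):
        if camera != last_camera:
            print(f"camera switched to {camera}: re-acquiring and re-parsing the preview")
            last_camera = camera
        yield detect(frame)

# A toy "detector" that merely labels the frame.
frames = ["frame0", "frame1", "frame2", "frame3"]
cameras = ["front", "front", "rear", "rear"]
for detected in preview_loop(frames, cameras, detect=lambda f: [f + ":person"]):
    print(detected)
```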
Embodiment six
Fig. 11 is a flowchart of the sixth embodiment of the focusing control method of the present invention. Based on the above embodiments, extracting the image information under the shooting state, parsing the image information to obtain the candidate objects, and placing the candidate objects in turn into the focus-object candidate regions, further comprises:
S33, performing classified extraction on the shooting objects by object category to obtain the candidate objects;
S34, placing the candidate objects in turn into the focus-object candidate regions in accordance with the correspondence between the object categories and the area attributes.
Specifically, in this embodiment, first, classified extraction is performed on the shooting objects by object category to obtain the candidate objects; then, the candidate objects are placed in turn into the focus-object candidate regions in accordance with the correspondence between the object categories and the area attributes.
Optionally, classified extraction is performed on the shooting objects by object category to obtain the candidate objects, wherein the classification scheme for the candidate objects is determined according to the complexity of the current image information or the number of object categories. For example, when the complexity of the image information is low, the candidate objects are only divided into dynamic objects and static objects, or only into subject objects and background objects;
Optionally, the candidate objects are placed in turn into the focus-object candidate regions in accordance with the correspondence between the object categories and the area attributes, wherein when the shooting preview image changes, the above focus-object candidate regions are adjusted in real time.
The beneficial effect of this embodiment is as follows: classified extraction is performed on the shooting objects by object category to obtain the candidate objects; then the candidate objects are placed in turn into the focus-object candidate regions in accordance with the correspondence between the object categories and the area attributes. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
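A hedged sketch of the complexity-dependent classified extraction of S33 follows. The complexity threshold, the object fields, and the bucket names are assumptions of this sketch; the application only states that the scheme depends on the complexity of the image information or the number of object categories.

```python
def classify_candidates(shooting_objects, complexity, threshold=0.3):
    """Classified extraction (S33): a simple scene is split only into subject/background,
    a more complex one is additionally split into static/dynamic."""
    keys = ("subject", "background") if complexity < threshold else (
        "subject", "background", "static", "dynamic")
    buckets = {k: [] for k in keys}
    for obj in shooting_objects:
        buckets["subject" if obj["is_subject"] else "background"].append(obj["name"])
        if "static" in buckets:
            buckets["dynamic" if obj["moving"] else "static"].append(obj["name"])
    return buckets

shooting_objects = [
    {"name": "person", "is_subject": True, "moving": True},
    {"name": "bench", "is_subject": False, "moving": False},
]
print(classify_candidates(shooting_objects, complexity=0.1))   # subject/background only
print(classify_candidates(shooting_objects, complexity=0.6))   # full four-way split
```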
Embodiment seven
Fig. 12 is a flowchart of the seventh embodiment of the focusing control method of the present invention. Based on the above embodiments, monitoring the motion state of the wearable device, selecting the target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state, comprises:
S41, monitoring the motion state of the wearable device, wherein the motion state includes a rotational motion state;
S42, parsing the rotational motion state to determine a rotation direction and a rotation angle.
Specifically, in this embodiment, first, the motion state of the wearable device is monitored, wherein the motion state includes a rotational motion state; then, the rotational motion state is parsed to determine the rotation direction and the rotation angle.
Optionally, the motion state of the wearable device is monitored, wherein the motion state includes a rotational motion state and the motion state of the wearable device is a motion state of a preset specific kind. For example, during the shooting preview stage, if the wearable device is perpendicular to the ground, the rotation angle of the wearable device in the vertical direction is obtained; as another example, during the shooting preview stage, if the wearable device is parallel to the ground, the rotation angle of the wearable device in the horizontal direction is obtained;
Optionally, when the wearable device is neither perpendicular nor parallel to the ground, the rotation angle of the vertical component or the horizontal component of the motion state may be determined in advance.
The beneficial effect of this embodiment is as follows: the motion state of the wearable device is monitored, wherein the motion state includes a rotational motion state; then the rotational motion state is parsed to determine the rotation direction and the rotation angle. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
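The reduction of a raw rotation reading to the (direction, angle) pair of S42 could look roughly like the sketch below. The 45-degree cut-off between the "parallel to the ground" and "perpendicular to the ground" cases and the cosine/sine projection onto the nearer axis are assumptions of this sketch; the application only says that the vertical or horizontal component of the motion is used.

```python
import math

def parse_rotation(delta_yaw, delta_pitch, tilt_deg):
    """Reduce a raw rotation reading to (direction, angle)."""
    tilt = math.radians(tilt_deg)
    if tilt_deg < 45:                     # device roughly parallel to the ground
        angle = delta_yaw * math.cos(tilt)
        direction = "right" if angle >= 0 else "left"
    else:                                 # device roughly perpendicular to the ground
        angle = delta_pitch * math.sin(tilt)
        direction = "up" if angle >= 0 else "down"
    return direction, abs(angle)

print(parse_rotation(delta_yaw=12.0, delta_pitch=-3.0, tilt_deg=10))    # horizontal case
print(parse_rotation(delta_yaw=2.0, delta_pitch=-18.0, tilt_deg=80))    # vertical case
```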
Embodiment eight
Fig. 13 is a flowchart of the eighth embodiment of the focusing control method of the present invention. Based on the above embodiments, monitoring the motion state of the wearable device, selecting the target candidate region from the focus-object candidate regions according to the motion state, and taking the candidate object in the target candidate region as the focus object under the current shooting state, further comprises:
S43, determining, according to the rotation direction, a collection region of a corresponding plurality of object candidate regions;
S44, selecting a target candidate region within the collection region according to the rotation angle, and taking the candidate object in the target candidate region as the focus object under the current shooting state.
Specifically, in this embodiment, first, the collection region of the corresponding plurality of object candidate regions is determined according to the rotation direction; then, the target candidate region is selected within the collection region according to the rotation angle, and the candidate object in the target candidate region is taken as the focus object under the current shooting state.
Optionally, the collection region of the corresponding plurality of object candidate regions is determined according to the rotation direction. It can be understood that if candidate regions are set respectively at the upper and lower ends of the shooting area, the two rotation directions correspond respectively to the two collection regions at the upper and lower ends;
Optionally, according to the rotation angle, the target candidate region within the collection region is selected in a highlighted or brightened manner, and the candidate object in the target candidate region is taken as the focus object under the current shooting state; meanwhile, the focus object is highlighted or brightened in the shooting preview region.
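Steps S43-S44 could be sketched as below: the rotation direction picks the upper or lower collection of candidate regions, and the rotation angle picks one region inside it. The "up"/"down" mapping, the angular step per region, and the highlight flag are assumptions of this sketch.

```python
def select_target(direction, angle, upper_regions, lower_regions, degrees_per_region=15.0):
    """S43-S44: direction selects a collection region, angle selects one region within it."""
    collection = upper_regions if direction == "up" else lower_regions
    if not collection:
        return None
    index = min(int(angle // degrees_per_region), len(collection) - 1)
    target = collection[index]
    target["highlighted"] = True      # shown highlighted while the rotation continues
    return target                     # its held object becomes the focus object

upper = [{"name": "person"}, {"name": "dog"}]
lower = [{"name": "sky"}, {"name": "buildings"}]
print(select_target("down", 20.0, upper, lower))   # -> the second lower region
```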
The beneficial effect of this embodiment is as follows: the collection region of the corresponding plurality of object candidate regions is determined according to the rotation direction; then the target candidate region is selected within the collection region according to the rotation angle, and the candidate object in the target candidate region is taken as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized: the user can adjust the focus object more conveniently while shooting with the wearable device, avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency, and enhancing the user experience.
Embodiment nine
Based on the above embodiments, the present invention also provides a focusing control device, which includes: a memory, a processor, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the method described in any one of the above embodiments.
Specifically, in the present embodiment, the focusing state of the wearable device under the current shooting state is first identified; then, other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus object candidate regions; subsequently, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in sequence into the focus object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected among the focus object candidate regions according to the motion state, and the candidate object in the target candidate region is used as the focus object under the current shooting state.
Optionally, the focusing state of the wearable device under the current shooting state is identified, wherein the wearable device has one or more cameras, and when the wearable device starts the shooting function, one or more of the cameras are started to obtain image information. It can be understood that, in the present embodiment, at least one camera is used to capture the shooting object, and the same or another camera may also be used at the same time to assist in determining the motion state of the wearable device. In the present embodiment, when the wearable device is in the shooting state, a shooting preview image is displayed in a preset area of the wearable device; meanwhile, in the shooting preview stage, the focusing state of the current shooting object is obtained in real time, wherein the focusing state includes user operations such as focusing and selecting a focus point, and further includes a pre-focus object currently obtained by the shooting algorithm;
Optionally, in the present embodiment, other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus object candidate regions, wherein the shooting preview region is contained within the display range of the wearable device. It can be understood that the display range of the wearable device is relatively broad, but when the wearable device is in the worn state, the user's visible range of the wearable device is limited; therefore, the shooting preview region is determined within the user's visible range of the wearable device. Meanwhile, in order to ensure that part of the visible range remains available, in the present embodiment the shooting preview region is smaller than the display area within this visible range, and the regions of the visible range other than the shooting preview region serve as the other associated regions of the present embodiment. It can be understood that when the wearing state of the wearable device changes during shooting preview, the user's visible range of the wearable device changes correspondingly, and as a result the other associated regions of the present embodiment change with it. It can also be understood that when the other associated regions change, the arrangement of the focus object candidate regions before and after the change may be adjusted accordingly, so as to facilitate the user's viewing and operation;
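Purely as an illustration of one possible layout, the following sketch splits the user's visible range into a shooting preview region and two associated regions taken as horizontal strips above and below it; the embodiment leaves the exact geometry open, so the ratio and strip placement are assumptions:

```python
def split_visible_range(visible_w, visible_h, preview_ratio=0.7):
    """Split the visible range into a preview region and associated regions.

    Returns (preview, associated) where each region is an (x, y, w, h) tuple
    in pixels within the visible range.
    """
    preview_h = int(visible_h * preview_ratio)
    margin = (visible_h - preview_h) // 2
    preview = (0, margin, visible_w, preview_h)
    associated = [
        (0, 0, visible_w, margin),                                            # strip above the preview
        (0, margin + preview_h, visible_w, visible_h - margin - preview_h),   # strip below the preview
    ]
    return preview, associated
```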
Optionally, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in sequence into the focus object candidate regions. In the present embodiment, the image information under the shooting state is obtained in real time, and the image information is then parsed to obtain the candidate objects. It can be understood that the candidate objects include objects of different types, for example, one or more shooting subjects that may become the subject of this shot and one or more shooting backgrounds that may become the background of this shot. The candidate objects are then extracted; it can be understood that each object is extracted by an edge analysis scheme, and the image of each object is then placed in the form of a thumbnail into the focus object candidate regions divided as described above. It can be understood that the one or more shooting subjects and the one or more shooting backgrounds that may become part of this shot may have different display scales in the preview image; however, in the present embodiment, to facilitate viewing by the user, the thumbnails of the above shooting subjects and/or shooting backgrounds are adjusted to the same size and then placed into the divided focus object candidate regions;
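As a minimal sketch of the edge-analysis extraction and uniform-thumbnail step described above, assuming OpenCV as the image library (the embodiment names only an "edge analysis scheme", not a particular library, and the thresholds and thumbnail size here are illustrative):

```python
import cv2

def extract_candidate_thumbnails(frame, thumb_size=(64, 64), min_area=500):
    """Extract candidate objects by simple edge analysis and return
    equally sized thumbnails, one per detected object."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # OpenCV 4: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    thumbnails = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:       # skip small edge noise
            continue
        x, y, w, h = cv2.boundingRect(contour)
        crop = frame[y:y + h, x:x + w]
        thumbnails.append(cv2.resize(crop, thumb_size))  # same size for every candidate
    return thumbnails
```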
Optionally, the motion state of the wearable device is monitored, a target candidate region is selected among the focus object candidate regions according to the motion state, and the candidate object in the target candidate region is used as the focus object under the current shooting state, wherein the motion state of the wearable device is a motion state of a preset specific manner. For example, in the shooting preview stage, if the wearable device is perpendicular to the ground, the rotational angle of the wearable device in the vertical direction is obtained; in another example, in the shooting preview stage, if the wearable device is parallel to the ground, the rotational angle of the wearable device in the horizontal direction is obtained. Then, a target candidate region is selected among the focus object candidate regions according to the rotational angle. It can be understood that during rotation, the candidate region corresponding to the current operating angle is displayed, highlighted or emphasized in real time, and when the rotation stops or remains stable for a preset time, the candidate region currently pointed to is taken as the target candidate region of the present embodiment, and the candidate object in the target candidate region is used as the focus object under the current shooting state.
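A minimal sketch of the dwell rule described above (confirm the region once the rotation has remained stable for a preset time); `read_angle` and `angle_to_region` are hypothetical callbacks supplied by the device, and the dwell and polling intervals are assumptions:

```python
import time

def confirm_target_region(read_angle, angle_to_region, dwell_seconds=1.0, poll=0.05):
    """Track the candidate region under the current rotation angle and return it
    as the target candidate region once the selection stays stable long enough."""
    last_region = None
    stable_since = time.monotonic()

    while True:
        region = angle_to_region(read_angle())     # region under the current angle
        if region != last_region:
            last_region = region
            stable_since = time.monotonic()        # rotation moved on; restart the timer
        elif time.monotonic() - stable_since >= dwell_seconds:
            return region                          # stable for the preset time: this is the target
        time.sleep(poll)
```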
The beneficial effect of the present embodiment is that the focusing state of the wearable device under the current shooting state is identified; then other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus object candidate regions; subsequently, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in sequence into the focus object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected among the focus object candidate regions according to the motion state, and the candidate object in the target candidate region is used as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized, so that the user can adjust the focus object more conveniently while shooting with the wearable device, thereby avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency and enhancing the user experience.
Embodiment ten
Based on the above embodiments, the present invention also provides a computer-readable storage medium on which a focusing control program is stored, and when executed by a processor, the focusing control program implements the steps of the focusing control method described in any one of the above embodiments.
By implementing the focusing control method, device and computer-readable storage medium of the present invention, the focusing state of the wearable device under the current shooting state is identified; then other associated regions of the shooting preview region are determined according to the focusing state, and the associated regions are divided into focus object candidate regions; subsequently, the image information under the shooting state is extracted, the image information is parsed to obtain candidate objects, and the candidate objects are placed in sequence into the focus object candidate regions; finally, the motion state of the wearable device is monitored, a target candidate region is selected among the focus object candidate regions according to the motion state, and the candidate object in the target candidate region is used as the focus object under the current shooting state. A user-friendly focusing control scheme is thereby realized, so that the user can adjust the focus object more conveniently while shooting with the wearable device, thereby avoiding the operational inconvenience caused by the relatively narrow display area of the wearable device, improving operating efficiency and enhancing the user experience.
It should be noted that, in this document, the terms "comprising", "including" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner or a network device, etc.) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention and without departing from the scope protected by the purpose of the present invention and the claims, those skilled in the art can also derive many further forms, all of which fall within the protection of the present invention.
Claims (10)
1. A focusing control method, characterized in that the method comprises:
identifying a focusing state of a wearable device under a current shooting state;
determining other associated regions of a shooting preview region according to the focusing state, and dividing the associated regions into focus object candidate regions;
extracting image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in sequence into the focus object candidate regions;
monitoring a motion state of the wearable device, selecting a target candidate region among the focus object candidate regions according to the motion state, and using the candidate object in the target candidate region as a focus object under the current shooting state.
2. The focusing control method according to claim 1, characterized in that the identifying the focusing state of the wearable device under the current shooting state comprises:
detecting the current shooting state of the wearable device, wherein the shooting state includes a shooting operation region and a shooting preview region;
monitoring a focus control signal in the shooting operation region, and monitoring image information in the shooting preview region.
3. The focusing control method according to claim 2, characterized in that the determining other associated regions of the shooting preview region according to the focusing state and dividing the associated regions into focus object candidate regions comprises:
dividing a plurality of associated regions in the neighborhood of the shooting preview region;
determining a region attribute of each associated region, wherein the region attribute includes a subject attribute, a background attribute, a static attribute and a dynamic attribute.
4. The focusing control method according to claim 3, characterized in that the determining other associated regions of the shooting preview region according to the focusing state and dividing the associated regions into focus object candidate regions further comprises:
extracting an object type of each object, wherein the object type includes a subject object, a background object, a static object and a dynamic object;
determining a correspondence between the object type and the region attribute, associating the subject attribute with the subject object, associating the background attribute with the background object, associating the static attribute with the static object, and associating the dynamic attribute with the dynamic object.
5. The focusing control method according to claim 4, characterized in that the extracting the image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in sequence into the focus object candidate regions comprises:
obtaining and parsing the image information in the shooting preview region;
parsing and identifying shooting objects in the image information.
6. The focusing control method according to claim 5, characterized in that the extracting the image information under the shooting state, parsing the image information to obtain candidate objects, and placing the candidate objects in sequence into the focus object candidate regions further comprises:
performing classified extraction on the shooting objects according to the object type to obtain the candidate objects;
placing the candidate objects in sequence into the focus object candidate regions in accordance with the correspondence between the object type and the region attribute.
7. The focusing control method according to claim 6, characterized in that the monitoring the motion state of the wearable device, selecting a target candidate region among the focus object candidate regions according to the motion state, and using the candidate object in the target candidate region as the focus object under the current shooting state comprises:
monitoring the motion state of the wearable device, wherein the motion state includes a rotational motion state;
parsing the rotational motion state to determine a rotation direction and a rotational angle.
8. The focusing control method according to claim 7, characterized in that the monitoring the motion state of the wearable device, selecting a target candidate region among the focus object candidate regions according to the motion state, and using the candidate object in the target candidate region as the focus object under the current shooting state further comprises:
determining a collection region corresponding to multiple object candidate regions according to the rotation direction;
selecting a target candidate region within the collection region according to the rotational angle, and using the candidate object in the target candidate region as the focus object under the current shooting state.
9. A focusing control device, characterized in that the device comprises:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
when executed by the processor, the computer program implements the steps of the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a focusing control program is stored on the computer-readable storage medium, and when executed by a processor, the focusing control program implements the steps of the focusing control method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910473348.5A CN110049253B (en) | 2019-05-31 | 2019-05-31 | Focusing control method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110049253A true CN110049253A (en) | 2019-07-23 |
CN110049253B CN110049253B (en) | 2021-12-17 |
Family
ID=67284402
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910473348.5A Active CN110049253B (en) | 2019-05-31 | 2019-05-31 | Focusing control method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110049253B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113766140A (en) * | 2021-09-30 | 2021-12-07 | 北京蜂巢世纪科技有限公司 | Image shooting method and image shooting device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101340520A (en) * | 2007-07-03 | 2009-01-07 | 佳能株式会社 | Image data management apparatus and method, and recording medium |
CN104679394A (en) * | 2013-11-26 | 2015-06-03 | 中兴通讯股份有限公司 | Method and device for amplifying selected region of preview interface |
CN106412431A (en) * | 2016-09-30 | 2017-02-15 | 珠海市魅族科技有限公司 | Image display method and device |
US20170064188A1 (en) * | 2015-08-26 | 2017-03-02 | Canon Kabushiki Kaisha | Image capturing apparatus and auto focus control method therefor |
CN109799937A (en) * | 2019-02-25 | 2019-05-24 | 努比亚技术有限公司 | A kind of input control method, equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110049253B (en) | 2021-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110109725A (en) | A kind of interface color method of adjustment and wearable device | |
CN109982179A (en) | Audio frequency signal output, device, wearable device and storage medium | |
CN110099218A (en) | Interaction control method, equipment and computer readable storage medium in a kind of shooting process | |
CN110012258A (en) | Best audio-video perception point acquisition methods, system, wearable device and storage medium | |
CN110308883A (en) | Method for splitting, wearable device and the computer readable storage medium of screen | |
CN110362368A (en) | Picture customization display methods, relevant device and the storage medium of wearable device | |
CN110069132A (en) | Application control method, intelligent wearable device and computer readable storage medium | |
CN110187769A (en) | A kind of preview image inspection method, equipment and computer readable storage medium | |
CN110225282A (en) | A kind of video record control method, equipment and computer readable storage medium | |
CN110177209A (en) | A kind of video parameter regulation method, equipment and computer readable storage medium | |
CN110083289A (en) | A kind of button display methods, wearable device and computer readable storage medium | |
CN110086563A (en) | A kind of method of controlling rotation, equipment and computer readable storage medium | |
CN110213810A (en) | Wearable device control method, wearable device and computer readable storage medium | |
CN110139270A (en) | Wearable device matching method, wearable device and computer readable storage medium | |
CN110086929A (en) | Breath screen display methods, mobile phone, wearable device and computer readable storage medium | |
CN109947524A (en) | Interface display method, wearable device and computer readable storage medium | |
CN110177208A (en) | A kind of association control method of video record, equipment and computer readable storage medium | |
CN110198411A (en) | Depth of field control method, equipment and computer readable storage medium during a kind of video capture | |
CN110072071A (en) | A kind of video record interaction control method, equipment and computer readable storage medium | |
CN110113529A (en) | A kind of acquisition parameters regulation method, equipment and computer readable storage medium | |
CN110083205A (en) | Page switching method, wearable device and computer readable storage medium | |
CN110069200A (en) | Wearable device input control method, wearable device and storage medium | |
CN110109721A (en) | A kind of wearable device control method, system, wearable device and storage medium | |
CN110049253A (en) | A kind of focusing control method, equipment and computer readable storage medium | |
CN110191282A (en) | A kind of acquisition parameters regulation method, equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||