CN107124560A - A selfie system, medium and method - Google Patents

A selfie system, medium and method

Info

Publication number
CN107124560A
Authority
CN
China
Prior art keywords
image
information
shooting
self
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710464940.XA
Other languages
Chinese (zh)
Inventor
张志勇
郭盛勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aiyouwei Software Development Co Ltd
Original Assignee
Shanghai Aiyouwei Software Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aiyouwei Software Development Co Ltd
Priority to CN201710464940.XA
Publication of CN107124560A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a selfie method in the technical field of image processing. The method includes: obtaining head image information and environment information; determining the shooting mode; determining an angle parameter; adjusting the shooting angle; and generating a first image. The selfie method of the present application makes selfie images more attractive and improves the user's selfie results.

Description

A selfie system, medium and method
Technical field
The present application relates to the technical field of image processing, and in particular to a selfie system, medium and method.
Background technology
With the rapid development of mobile communication, mobile terminals are used ever more frequently, and the functions they provide and the applications installed on them grow ever richer. Taking selfies with a mobile phone has become one of the functions users rely on most, but when shooting with the front camera, the rear (main) camera or a selfie stick, users often find it hard to judge the shooting angle and cannot obtain a good selfie result.
It is therefore desirable to provide a selfie method that adjusts the shooting angle to improve the selfie result.
Summary of the invention
According to a first aspect of some embodiments of the present application, a selfie method is provided, applied in a terminal (for example, an electronic device). The method may include: obtaining head image information and environment information; determining the shooting mode; determining an angle parameter; adjusting the shooting angle; and generating a first image.
In some embodiments, the method further comprises: obtaining beautification parameters; and generating a second image.
In some embodiments, generating the second image includes: analyzing the head image information and the first image to determine the beautification parameters; analyzing the environment information and the first image to determine environment parameters; and optimizing the first image to obtain the second image.
In some embodiments, the method further comprises: analyzing the head image information to obtain recognition information.
In some embodiments, the shooting mode includes a frontal shooting mode and a side shooting mode.
In some embodiments, determining the angle parameter includes: determining a first angle parameter according to the frontal shooting mode; and determining a second angle parameter according to the side shooting mode.
In some embodiments, adjusting the shooting angle includes: obtaining the angle parameter; determining whether to adjust automatically; if so, automatically adjusting the camera shooting angle; if not, prompting the user to adjust the shooting angle.
In some embodiments, prompting the user to adjust the shooting angle includes one or a combination of voice prompts, picture prompts and animated prompts.
According to a second aspect of some embodiments of the present application, a system is provided, including: a memory configured to store data and instructions; and a processor in communication with the memory, wherein, when executing the instructions in the memory, the processor is configured to: obtain head image information and environment information; determine the shooting mode; determine an angle parameter; adjust the shooting angle; and generate a first image.
According to a third aspect of some embodiments of the present application, a permanent computer-readable medium carrying a computer program is provided. The computer program includes instructions executable by at least one processor to implement a method, the method including: obtaining head image information and environment information; determining the shooting mode; determining an angle parameter; adjusting the shooting angle; and generating a first image.
Therefore, the selfie method according to some embodiments of the present application makes selfie images more attractive and improves the user's selfie results.
Brief description of the drawings
For a better understanding and illustration of some embodiments of the present application, the embodiments are described below with reference to the accompanying drawings, in which the same reference numerals denote corresponding parts.
Fig. 1 is an exemplary schematic diagram of a network environment system provided according to some embodiments of the present application.
Fig. 2 is an exemplary schematic diagram of the functional units of an electronic device provided according to some embodiments of the present application.
Fig. 3 is an exemplary flowchart of a selfie method provided according to some embodiments of the present application.
Fig. 4 is an exemplary flowchart of determining the angle parameter provided according to some embodiments of the present application.
Fig. 5 is an exemplary flowchart of adjusting the shooting angle provided according to some embodiments of the present application.
Fig. 6 is an exemplary flowchart of generating the second image provided according to some embodiments of the present application.
Detailed description of the embodiments
Various embodiments of the present application as defined by the claims and their equivalents are described below with reference to the accompanying drawings to facilitate a thorough understanding. These embodiments include various specific details to aid understanding, but they are to be regarded as merely exemplary. Those skilled in the art will therefore appreciate that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. In addition, for brevity and clarity, descriptions of well-known functions and structures are omitted.
The terms and phrases used in the following description and claims are not limited to their literal meanings, but are used only to enable a clear and consistent understanding of the application. Therefore, those skilled in the art should understand that the descriptions of the various embodiments of the application are provided for illustration only and do not limit the application as defined by the appended claims and their equivalents.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings of some embodiments of the application. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the application without creative effort fall within the scope of protection of the application.
It should be noted that the terms used in the embodiments of the present application are only for the purpose of describing specific embodiments and are not intended to limit the application. The singular forms "a", "an", "the" and "said" used in the embodiments and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The expressions "first", "second", "the first" and "the second" are used to modify the respective elements without regard to order or importance; they are used only to distinguish one element from another and do not limit the respective elements.
A terminal according to some embodiments of the present application may be an electronic device, and the electronic device may include one or a combination of a smartphone, a personal computer (PC, such as a tablet computer, desktop computer, notebook, netbook or palmtop PDA), a mobile phone, an e-book reader, a portable multimedia player (PMP), an audio/video player (MP3/MP4), a video camera, a virtual reality device (VR), a wearable device, and the like. According to some embodiments of the application, the wearable device may be of an accessory type (such as a watch, ring, bracelet, glasses or head-mounted device (HMD)), an integrated type (such as electronic clothing), a decorative type (such as a skin pad, a tattoo or a built-in electronic device), or a combination of several of these. In some embodiments of the application, the electronic device may be flexible, is not limited to the above devices, and may be one or a combination of several of the above devices. In this application, the term "user" may denote a person using the electronic device, or a device (such as an artificial-intelligence electronic device) using the electronic device.
An embodiment of the present application provides a selfie method. To facilitate understanding of the embodiments of the application, they are described in detail below with reference to the accompanying drawings.
Fig. 1 is an exemplary schematic diagram of a network environment system 100 provided according to some embodiments of the present application. As shown in Fig. 1, the network environment system 100 may include an electronic device 110, a network 120, a server 130 and so on. The electronic device 110 may include a bus 111, a processor 112, a memory 113, an input/output module 114, a display 115, a communication module 116, physical keys 117 and the like. In some embodiments of the application, the electronic device 110 may omit one or more elements, or may further include one or more other elements.
The bus 111 may include circuitry. The circuitry may interconnect one or more elements of the electronic device 110 (for example, the bus 111, the processor 112, the memory 113, the input/output module 114, the display 115, the communication module 116 and the physical keys 117). The circuitry may also enable communication (for example, obtaining and/or sending information) among one or more elements of the electronic device 110.
The processor 112 may include one or more co-processors, application processors (AP) and communication processors (CP). As an example, the processor 112 may perform control of and/or data processing for one or more elements of the electronic device 110 (for example, operations such as analyzing information).
The memory 113 may store data. The data may include instructions or data related to one or more other elements of the electronic device 110. For example, the data may include raw data before processing by the processor 112, intermediate data and/or processed data. The memory 113 may include volatile memory and/or non-volatile memory.
According to some embodiments of the application, the memory 113 may store software and/or programs. The programs may include a kernel, middleware, an application programming interface (API) and/or applications (or "apps").
At least part of the kernel, the middleware or the API may constitute an operating system (OS). As an example, the kernel may control or manage the system resources (for example, the bus 111, the processor 112, the memory 113 and so on) used to perform operations or functions implemented in other programs (for example, the middleware, the API and the applications). In addition, the kernel may provide an interface through which the middleware, the API or the applications can access individual elements of the electronic device 110 to control or manage the system resources.
The middleware may act as an intermediate layer for data transfer. The data transfer allows the API or the applications to communicate with the kernel to exchange data. As an example, the middleware may process one or more task requests obtained from the applications. For example, the middleware may assign priorities for using the system resources of the electronic device 110 (for example, the bus 111, the processor 112, the memory 113 and so on) to one or more applications, and process the one or more task requests. The API may be an interface through which the applications control functions provided by the kernel or the middleware. The API may also include one or more interfaces or functions (for example, instructions). The functions may be used for security control, communication control, document control, window control, text control, image processing, information processing and so on.
The input/output module 114 may send instructions or data input from a user or an external device to the other elements of the electronic device 110. The input/output module 114 may also output instructions or data obtained from the other elements of the electronic device 110 to the user or the external device. In some embodiments, the input/output module 114 may include an input unit through which the user can input information or instructions.
The display 115 may display content. The content may be presented to the user in various forms (for example, text, images, video, icons and/or symbols, or a combination of several of these). The display 115 may include a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a micro-electro-mechanical systems (MEMS) display or an electronic paper display, or a combination of several of these. The display 115 may include a touch screen. In some embodiments, the display 115 may display virtual keys, and the touch screen may capture input on the virtual keys. The display 115 may obtain input through the touch screen. The input may include touch input, gesture input, motion input, proximity input, and input with an electronic pen or a part of the user's body (for example, hovering input).
The communication module 116 may configure communication between devices. In some embodiments, the network environment system 100 may further include an electronic device 140. As an example, the communication between devices may include communication between the electronic device 110 and other devices (for example, the server 130 or the electronic device 140). For example, the communication module 116 may be connected to the network 120 by wireless or wired communication to communicate with other devices (for example, the server 130 or the electronic device 140).
The wireless communication may include microwave communication and/or satellite communication. The wireless communication may include cellular communication, for example Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), third-generation mobile communication (3G), fourth-generation mobile communication (4G), fifth-generation mobile communication (5G), Long Term Evolution (LTE), LTE-Advanced (LTE-A), Wideband Code Division Multiple Access (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro) and so on, or a combination of several of these. According to some embodiments of the application, the wireless communication may include wireless local area network (WiFi), Bluetooth, Bluetooth Low Energy (BLE), ZigBee, Near Field Communication (NFC), magnetic secure transmission, radio frequency and body area network (BAN), or a combination of several of these. According to some embodiments of the application, the wireless communication may further include a global navigation satellite system (GLONASS/GNSS), the Global Positioning System (GPS), the BeiDou navigation satellite system or Galileo (the European global positioning system). The wired communication may include Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Recommended Standard 232 (RS-232) and/or Plain Old Telephone Service (POTS), or a combination of several of these.
The physical keys 117 may be used for user interaction. The physical keys 117 may include one or more entity keys. In some embodiments, the user can customize the functions of the physical keys 117. As an example, the physical keys 117 may send instructions. The instructions may include starting the selfie mode, switching the selfie adjustment prompt mode, starting image optimization and so on, or a combination of several of these.
In some embodiments, the electronic device 110 may further include sensors. The sensors may include, but are not limited to, light sensors, acoustic sensors, gas sensors, chemical sensors, pressure sensors, temperature sensors, fluid sensors, biosensors, laser sensors, Hall sensors, smart sensors and so on, or a combination of several of these.
In some embodiments, the electronic device 110 may further include an infrared device, an image acquisition device and so on. As an example, the infrared device may obtain eye information by infrared transmission and techniques such as blink recognition and gaze recognition. For example, the infrared device may authenticate user information by capturing the user's blink action. As an example, the image acquisition device may include a camera, an iris device and so on. The camera may implement functions such as eye tracking. The iris device may perform identity authentication (for example, authenticating user information) using iris recognition technology.
The network 120 may include a communication network. The communication network may include a computer network (for example, a local area network (LAN) or a wide area network (WAN)), the Internet and/or a telephone network, or a combination of several of these. The network 120 may send information to other devices in the network environment system 100 (for example, the electronic device 110, the server 130, the electronic device 140 and so on). The information may include selfie backgrounds, selfie images and so on. The selfie backgrounds include scenery pictures, animation pictures, design pictures and so on, or a combination of several of these.
The server 130 may connect to other devices in the network environment system 100 (for example, the electronic device 110, the electronic device 140 and so on) through the network 120. For example, the server 130 may send selfie backgrounds and the like to the electronic device 110 through the network 120.
The electronic device 140 may be of the same type as or a different type from the electronic device 110. According to some embodiments of the application, part or all of the operations performed in the electronic device 110 may be performed in another device or in multiple devices (for example, the electronic device 140 and/or the server 130). In some embodiments, when the electronic device 110 performs one or more functions and/or services automatically or in response to a request, the electronic device 110 may request other devices (for example, the electronic device 140 and/or the server 130) to perform the functions and/or services on its behalf. In some embodiments, in addition to performing the functions or services, the electronic device 110 further performs one or more related functions. In some embodiments, the other devices (for example, the electronic device 140 and/or the server 130) may perform the requested functions or the other related functions and send the execution results to the electronic device 110. The electronic device 110 may use the results as they are, or process them further, to provide the requested functions or services. As an example, the electronic device 110 may use cloud computing, distributed computing and/or client-server computing, or a combination of several of these. In some embodiments, depending on the nature of the cloud computing service, the cloud computing may include a public cloud, a private cloud, a hybrid cloud and so on. For example, the electronic device 110 may send selfie images and the like to the electronic device 140.
It should be noted that the above description of the network environment system 100 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual elements may be combined or formed into subsystems connected with other elements without departing from that principle, and various modifications and changes in form and detail may be made to the fields in which the above method and system are applied. For example, the network environment system 100 may further include a database. Such variations fall within the scope of protection of the application.
Fig. 2 is an exemplary block diagram of the functional units of an electronic device provided according to some embodiments of the present application. As shown in Fig. 2, the processor 112 may include a processing module 200, and the processing module 200 may include an acquisition unit 210, an analysis unit 220, a determination unit 230, a control unit 240 and a generation unit 250.
According to some embodiments of the application, the acquisition unit 210 may obtain information. In some embodiments, the information may include, but is not limited to, text, pictures, audio, video, motions, gestures, sound, eyes (for example, iris information), breath, light and so on, or a combination of several of these. In some embodiments, the information may include, but is not limited to, input information, system information and/or communication information. As an example, the acquisition unit 210 may obtain the input information of the electronic device 110 through the input/output module 114, the touch screen of the display 115, the physical keys 117 and/or the sensors. The input information may include input from other devices (for example, the electronic device 140) and/or from the user, such as key input, touch input, gesture input, motion input, remote input, transmitted input, eye input, sound input, breath input, light input and so on, or a combination of several of these. The components that capture the input information may include, but are not limited to, the infrared device, the image acquisition device, the sensors and so on, or a combination of several of these. As an example, the acquisition unit 210 may obtain head image information, environment information and so on. For example, the acquisition unit 210 may obtain the head image information through the image acquisition device. As another example, the acquisition unit 210 may obtain the environment information through the sensors.
In some embodiments, the acquisition unit 210 may obtain the communication information through the network 120. The communication information may include application software information, communication signals (for example, voice signals, video signals and so on), short messages and so on. In some embodiments, the acquisition unit 210 may obtain system information through the network 120, the memory 113 and/or the sensors. The system information may include, but is not limited to, the system status of the electronic device 110, preset information, information stored in the memory 113 (for example, history information and so on), or a combination of several of these. As an example, when the user presses a physical key (for example, a shortcut key), the electronic device 110 may start the selfie mode.
In some embodiments, the information may include instructions. The instructions include user instructions and/or system instructions, or a combination of several of these. The instructions may include trigger instructions, authentication instructions, filling instructions and so on, or a combination of several of these. The trigger instructions may include starting the selfie mode and so on. The authentication instructions may include authenticating user information. The user information may be authenticated by password recognition or biometric recognition. The password recognition includes numeric passwords and gesture passwords. The biometric recognition includes iris recognition, fingerprint recognition, face recognition and voiceprint recognition.
According to some embodiments of the application, the analysis unit 220 may analyze information. In some embodiments, the analysis unit 220 may analyze the head image information to obtain recognition information. As an example, the analysis unit 220 may use a face recognition system to analyze the face shape, hairstyle and so on in the head image information. The face recognition system may integrate technologies such as artificial intelligence, machine recognition, machine learning, model theory, expert systems and computer vision. The face recognition system may include face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition. In some embodiments, the analysis unit 220 may analyze the environment information to obtain environment parameters. As an example, the analysis unit 220 may use ray-tracing techniques to analyze the lighting and other aspects of the environment information. The ray-tracing techniques may include sampling techniques, projection views, viewing systems, depth of field, non-linear projection, stereo vision, lighting and materials, specular reflection, glossy reflection, global illumination, transparency, shadows, ambient occlusion, area lighting, ray intersection computation, object transformation, inter-object grid techniques and mapping techniques and so on, or a combination of several of these. In some embodiments, the analysis unit 220 may analyze the head image information, the environment information and so on in the selfie image, or a combination of several of these. In some embodiments, the analysis unit 220 may use face detection techniques to determine the shooting mode. The shooting mode may include a frontal shooting mode, a side shooting mode and so on. The frontal shooting mode may include group selfies of several people.
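The application does not tie the analysis unit 220 to any particular detector, so the following is a minimal illustrative sketch only: the cascade files, parameters and priority rule are assumptions, not the claimed implementation. It shows one way the frontal/side distinction described above could be approximated with the stock OpenCV Haar cascades.

import cv2

# Illustrative sketch only; cascade choice and detection thresholds are assumptions.
frontal_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

def detect_shooting_mode(image_bgr):
    """Return ('frontal' | 'side' | 'unknown', detected face rectangles)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    frontal_faces = frontal_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    profile_faces = profile_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(frontal_faces) > 0:
        # One or more frontal faces (including group selfies) -> frontal mode.
        return "frontal", frontal_faces
    if len(profile_faces) > 0:
        return "side", profile_faces
    return "unknown", []

A production implementation would more likely use a landmark-based detector, but the sketch marks the point in the pipeline where the analysis unit 220 could plug in.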
According to some embodiments of the application, the determination unit 230 may determine information. In some embodiments, the determination unit 230 may determine the angle parameter and so on according to the shooting mode. As an example, when the shooting mode is the frontal shooting mode, the determination unit 230 may determine a first angle parameter. The first angle parameter can keep the frontal face area within a first range when shooting frontally. As another example, when the shooting mode is the side shooting mode, the determination unit 230 may determine a second angle parameter. The second angle parameter can keep the side face area within a second range when shooting from the side. In some embodiments, the determination unit 230 may determine whether to adjust the shooting angle automatically, and so on.
According to some embodiments of the application, the control unit 240 may control the electronic device. In some embodiments, the control unit 240 may control one or more functions of the electronic device 110 and/or one or more applications according to user instructions or system instructions. The user instructions or system instructions may be obtained through one or a combination of the physical keys, virtual keys, sensors and so on. In some embodiments, the control unit 240 may adjust the shooting angle according to the angle parameter. As an example, the control unit 240 may adjust the shooting angle according to the shooting parameters determined by the determination unit 230. For example, when automatic adjustment is determined, the control unit 240 may automatically adjust the camera shooting angle. As another example, when non-automatic adjustment is determined, the control unit 240 may prompt the user to adjust the shooting angle. The prompt to the user may include voice prompts, picture prompts, animated prompts and so on, or a combination of several of these. The animated prompt may display an animated track of the face outline on the display 115 to prompt the user to follow the animated track and adjust the shooting angle.
According to some embodiments of the application, the generation unit 250 may generate selfie images. In some embodiments, the generation unit 250 may generate an original selfie image and/or an optimized selfie image. For example, the generation unit 250 may generate a first image, which may be the original image obtained from the user's selfie. As another example, the generation unit 250 may generate a second image, which may be the selfie image generated by optimizing the first image according to information such as the beautification parameters and/or the environment parameters.
It should be noted that the above description of the units in the processing module 200 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual units may be combined or formed into submodules connected with other units without departing from that principle, and various modifications and changes in form and detail may be made to the functions of the above modules and units. For example, the determination unit 230 and the analysis unit 220 of the processing module 200 may be merged into a single functional unit that performs the corresponding functions. Such variations fall within the scope of protection of the application.
Fig. 3 is an exemplary flowchart of a selfie method provided according to some embodiments of the present application. As shown in Fig. 3, the flow 300 may be implemented by the processing module 200. The selfie method may be started by an instruction. The instruction may include a user instruction, a system instruction, an action instruction and so on, or a combination of several of these. As an example, the system instruction may be generated from information obtained by the sensors. The user instruction may include voice, gestures, motions, the physical keys 117 and/or virtual keys and so on, or a combination of several of these.
At 310, head image information and environment information are obtained. Operation 310 may be implemented by the acquisition unit 210 of the processing module 200. The head image information may include face shape information, hairstyle information, skin color information, eye information and so on. The environment information may include light information, color information, scenery information, weather information and so on.
At 320, the shooting mode is determined. Operation 320 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may analyze the head image information using a face detection method to determine the shooting mode. The face detection method may include knowledge-based methods, feature-invariant methods, template matching methods, appearance-based methods and so on. As an example, when two eyes are detected in one face region, the analysis unit 220 may determine that the shooting mode is the frontal shooting mode. As another example, when multiple face regions are detected, the analysis unit 220 may determine that the shooting mode is the frontal shooting mode. As yet another example, when the detected face region area is smaller than a preset face area threshold, the analysis unit 220 may determine that the shooting mode is the side shooting mode. It should be noted that the determination of the shooting mode may rely on, but is not limited to, facial-feature information such as the eyes, nose, eyebrows, mouth and ears, tooth information and/or face contour information.
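For illustration, the decision rule just described (two eyes in one face region, multiple face regions, or a face region smaller than a preset area threshold) can be written down directly. The FaceRegion structure and the threshold value below are hypothetical, introduced only for this sketch.

from dataclasses import dataclass
from typing import List

@dataclass
class FaceRegion:
    x: int
    y: int
    w: int
    h: int
    eyes_detected: int  # number of eyes found inside this region

def judge_shooting_mode(faces: List[FaceRegion], min_frontal_area: int = 160 * 160) -> str:
    """Sketch of operation 320: classify the shot as frontal or side."""
    if len(faces) > 1:
        return "frontal"  # several face regions -> group selfie, treated as frontal mode
    if len(faces) == 1:
        face = faces[0]
        if face.eyes_detected >= 2:
            return "frontal"  # both eyes visible in one region -> frontal shot
        if face.w * face.h < min_frontal_area:
            return "side"  # small or partial face area -> side shot
    return "unknown"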
At 330, the angle parameter is determined. Operation 330 may be implemented by the determination unit 230 of the processing module 200. In some embodiments, the determination unit 230 may determine the angle parameter using a face recognition algorithm. As an example, the determination unit 230 may determine the angle parameter using the symmetry of the face.
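The text does not spell out how facial symmetry is converted into an angle parameter. A minimal sketch, assuming eye-center and nose-tip landmarks are already available from the face recognition step and using a small-angle approximation, could estimate a horizontal (yaw) offset like this:

import math

def estimate_yaw_offset(left_eye, right_eye, nose_tip):
    """All arguments are (x, y) pixel coordinates. Returns degrees to turn the camera
    (sign convention assumed: positive = turn right) so the face appears symmetric."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    if eye_dist == 0:
        return 0.0
    # Normalized offset of the nose tip from the eye midline; roughly
    # proportional to head yaw for small angles.
    asymmetry = (nose_tip[0] - eye_mid_x) / eye_dist
    return math.degrees(math.asin(max(-1.0, min(1.0, asymmetry))))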
At 340, the shooting angle is adjusted. Operation 340 may be implemented by the control unit 240 of the processing module 200. In some embodiments, the control unit 240 may control the image acquisition device to adjust the shooting angle automatically. As another example, the control unit 240 may prompt the user to adjust the shooting angle. The prompt to the user may include voice prompts, picture prompts, animated prompts and so on, or a combination of several of these.
At 350, the first image is generated. Operation 350 may be implemented by the generation unit 250 of the processing module 200. In some embodiments, the generation unit 250 may generate the selfie image. The first image may include the selfie image originally produced by the image acquisition device.
At 360, beautification parameters are obtained. Operation 360 may be implemented by the acquisition unit 210 of the processing module 200. In some embodiments, the acquisition unit 210 may obtain the beautification parameters by having the analysis unit 220 analyze the head image information, the environment information, the first image and so on. The beautification parameters may include information such as eye enhancement, face slimming, makeup, skin tone and wrinkle removal.
At 370, the second image is generated. Operation 370 may be implemented by the generation unit 250 of the processing module 200. In some embodiments, the generation unit 250 may generate the second image after optimizing the first image according to the beautification parameters.
It should be noted that the above description of the flow 300 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual operations may be combined, or sub-flows may be combined with other operations, without departing from that principle, and various modifications and changes in form and detail may be made to the functions of the above flow and operations. For example, the flow 300 may skip operations 360 and 370 and directly generate the first image. Such variations fall within the scope of protection of the application.
Fig. 4 is an exemplary flowchart of determining the angle parameter provided according to some embodiments of the present application. As shown in Fig. 4, the flow 400 may be implemented by the processing module 200.
At 410, head image information and environment information are obtained. Operation 410 may be implemented by the acquisition unit 210 of the processing module 200. The head image information may include face shape information, hairstyle information, skin color information, eye information and so on. The environment information may include light information, color information, scenery information, weather information and so on.
At 420, the head image information is analyzed to obtain recognition information. Operation 420 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may analyze the head image information using a face detection method and further obtain recognition information. The recognition information may include recognized user features, for example face shape, facial features, hairstyle and so on.
At 430, the shooting mode is determined. Operation 430 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may determine the current user's shooting mode using a face detection method. For example, when two symmetric eyes are detected, the analysis unit 220 may determine that the current mode is the frontal shooting mode. As another example, when two asymmetric eyes or only one eye is detected, the analysis unit 220 may determine that the current mode is the side shooting mode. It should be noted that judging the shooting mode by the symmetry of the eyes is merely exemplary; the shooting mode may be judged by other means, for example the symmetry of the ears, the contour of the nose, the contour of the lips and so on, or a combination of several of these.
If the shooting mode is the frontal shooting mode, at 440, a first angle parameter is determined. Operation 440 may be implemented by the determination unit 230 of the processing module 200. In some embodiments, the determination unit 230 may determine the angle parameter using a face recognition algorithm. As an example, according to the recognition information and/or the environment information, the determination unit 230 may determine the first angle parameter, with which a frontal image of optimal effect can be obtained. The optimal effect may include matching the user's hairstyle, making the user's face look slimmer, highlighting the user's best facial features, weakening flaws of the user's face shape or features, and so on.
If the shooting mode is the side shooting mode, at 450, a second angle parameter is determined. Operation 450 may be implemented by the determination unit 230 of the processing module 200. In some embodiments, according to the recognition information and/or the environment information, the determination unit 230 may determine the second angle parameter, with which a side-shot image of optimal effect can be obtained.
It should be noted that the above description of the flow 400 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual operations may be combined, or sub-flows may be combined with other operations, without departing from that principle, and various modifications and changes in form and detail may be made to the functions of the above flow and operations. For example, the flow 400 may skip operation 420 and determine the shooting mode directly from the head image information. Such variations fall within the scope of protection of the application.
Fig. 5 is an exemplary flowchart of adjusting the shooting angle provided according to some embodiments of the present application. As shown in Fig. 5, the flow 500 may be implemented by the processing module 200.
At 510, the angle parameter is obtained. Operation 510 may be implemented by the acquisition unit 210 of the processing module 200. The angle parameter may include the first angle parameter, the second angle parameter and so on.
At 520, whether to adjust automatically is determined. Operation 520 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may determine whether to adjust automatically according to the angle parameter. For example, when the image acquisition device can adjust automatically according to the angle parameter, the adjustment is performed automatically. As another example, when the image acquisition device cannot be adjusted to the angle parameter, the user is prompted to make the adjustment.
If automatic adjustment is chosen, at 530, the camera shooting angle is adjusted automatically. Operation 530 may be implemented by the control unit 240 of the processing module 200. In some embodiments, the control unit 240 may automatically adjust the shooting angle of the image acquisition device (for example, the camera) according to the angle parameter.
If not, at 540, the user is prompted to adjust the shooting angle. Operation 540 may be implemented by the control unit 240 of the processing module 200. In some embodiments, the control unit 240 may prompt the user to adjust the shooting angle according to the angle parameter. The prompt to the user includes voice prompts, picture prompts, animated prompts and so on. For example, when the user takes a selfie from a distance using the front camera on a selfie stick, or uses the rear (main) camera, the user is some distance from the display screen or cannot see it at all and thus cannot adjust the shooting angle according to picture or animated prompts; the control unit 240 can then help the user adjust the shooting angle through voice prompts. The voice prompts may include "head up", "head down", "turn left", "turn right", "tilt left", "tilt right" and so on. In some embodiments, the voice prompts may further indicate the amplitude of the adjustment, for example "raise your head slightly", "lower your head slightly", "tilt your head toward your left shoulder" and so on. As another example, when the user takes a selfie at close range with the front camera and can see the display screen clearly, the control unit 240 may prompt the user to adjust the shooting angle through picture prompts and/or animated effects. The picture prompts may include a downward arrow prompting the user to lower the head, an upward arrow prompting the user to raise the head, a left-turn arrow, a right-turn arrow and so on. The animated prompt may extract the user's head outline and show the motion track of the head outline on the display screen, prompting the user to follow the motion track of the head outline to adjust the shooting angle.
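As an illustration of how operation 540 might turn an angle deviation into the voice prompts listed above, the sketch below assumes the deviation is already expressed as yaw and pitch offsets in degrees; the thresholds and prompt wording are assumptions.

def voice_prompts(yaw_deg: float, pitch_deg: float, fine_threshold: float = 5.0) -> list:
    """Map angle offsets to spoken prompts such as 'turn left' or 'head up slightly'."""
    prompts = []
    if abs(yaw_deg) >= 1.0:
        direction = "turn right" if yaw_deg > 0 else "turn left"
        prompts.append(direction if abs(yaw_deg) > fine_threshold else direction + " slightly")
    if abs(pitch_deg) >= 1.0:
        direction = "head up" if pitch_deg > 0 else "head down"
        prompts.append(direction if abs(pitch_deg) > fine_threshold else direction + " slightly")
    return prompts or ["hold still"]

# Example: voice_prompts(8.0, -3.0) -> ['turn right', 'head down slightly']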
It should be noted that the above description of the flow 500 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual operations may be combined, or sub-flows may be combined with other operations, without departing from that principle, and various modifications and changes in form and detail may be made to the functions of the above flow and operations. For example, the flow 500 may further include an operation of choosing the prompt mode according to the user's selfie scene. Such variations fall within the scope of protection of the application.
Fig. 6 is an exemplary flowchart of generating the second image provided according to some embodiments of the present application. As shown in Fig. 6, the flow 600 may be implemented by the processing module 200.
At 610, the head image information and the first image are analyzed. Operation 610 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may analyze the user's head image information to obtain the user's recognition information. In some embodiments, the analysis unit 220 may further analyze the first image originally produced by the image acquisition device to obtain first related information of the first image, for example information such as skin color, gender and age.
At 620, the beautification parameters are determined. Operation 620 may be implemented by the determination unit 230 of the processing module 200. In some embodiments, the determination unit 230 may determine the beautification parameters according to the recognition information, the first related information of the first image and so on. The beautification parameters may include skin tone adjustment, face shape adjustment, eye treatment (for example, eye enhancement, removing eye wrinkles) and so on.
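The mapping from recognized attributes to concrete beautification parameters is not fixed by the text. A hypothetical sketch, with assumed attribute names, value ranges and rules, might look like this:

def determine_beauty_params(attrs: dict) -> dict:
    """Sketch of operation 620; every attribute name and threshold here is an assumption."""
    params = {
        "skin_smoothing": 0.3,   # 0.0 (off) .. 1.0 (strong)
        "skin_brightening": 0.0,
        "face_slimming": 0.0,
        "eye_enhance": False,
    }
    if attrs.get("wrinkle_score", 0.0) > 0.5:
        params["skin_smoothing"] = 0.6
    if attrs.get("underexposed_skin", False):
        params["skin_brightening"] = 0.4
    if attrs.get("face_shape") == "round":
        params["face_slimming"] = 0.2
    if attrs.get("small_eyes", False):
        params["eye_enhance"] = True
    return params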
At 630, the environment information and the first image are analyzed. Operation 630 may be implemented by the analysis unit 220 of the processing module 200. In some embodiments, the analysis unit 220 may analyze the environment information (for example, light, color, scenery, weather and so on) and second related information of the first image originally produced by the image acquisition device (for example, light, color, scenery and so on). In some embodiments, the environment information may be obtained directly by the image acquisition device and/or the sensors. The second related information of the first image may be the environment information contained in the first image. In some embodiments, the environment information may be the real environment information, whereas the environment information in the first image is the environment information shown in the generated image.
At 640, the environment parameters are determined. Operation 640 may be implemented by the determination unit 230 of the processing module 200. In some embodiments, the determination unit 230 may determine the environment parameters according to the environment information, the second related information of the first image and so on. In some embodiments, the environment information contained in the first image may deviate from the real environment information. The determination unit 230 may determine the environment parameters from the deviation analyzed by the analysis unit 220, so as to improve the accuracy of the environment parameters.
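One way to realize the deviation idea of operation 640, sketched here under the assumptions that the sensor reports ambient illuminance in lux and that the image-side environment information is summarized by the mean gray level, is to derive a brightness-correction gain:

import cv2
import numpy as np

def determine_environment_params(first_image_bgr, ambient_lux: float) -> dict:
    """Sketch: compare sensed ambient light with the brightness actually in the first image."""
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    image_brightness = float(np.mean(gray))            # 0..255
    # Assumed mapping from ambient illuminance to a desired image brightness, capped.
    target_brightness = min(80.0 + 0.2 * ambient_lux, 200.0)
    gain = target_brightness / max(image_brightness, 1.0)
    return {
        "image_brightness": image_brightness,
        "target_brightness": target_brightness,
        "brightness_gain": float(np.clip(gain, 0.5, 2.0)),
    }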
At 650, the first image is optimized. Operation 650 may be implemented by the control unit 240 of the processing module 200. In some embodiments, the control unit 240 may optimize the first image according to the beautification parameters and/or the environment parameters.
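Continuing the sketches above, operation 650 could apply those parameters with standard OpenCV filters. The choice of bilateral smoothing and a linear brightness gain is an assumption made for illustration, not the claimed optimization method.

import cv2

def optimize_first_image(first_image_bgr, beauty_params: dict, env_params: dict):
    """Sketch of operation 650 using the parameter dictionaries from the earlier sketches."""
    result = first_image_bgr
    smoothing = beauty_params.get("skin_smoothing", 0.0)
    if smoothing > 0:
        # Edge-preserving smoothing as a simple stand-in for skin beautification.
        sigma = int(25 + 50 * smoothing)
        result = cv2.bilateralFilter(result, d=9, sigmaColor=sigma, sigmaSpace=sigma)
    gain = env_params.get("brightness_gain", 1.0)
    if gain != 1.0:
        result = cv2.convertScaleAbs(result, alpha=gain, beta=0)
    return result  # the optimized image from which the second image is generated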
At 660, the second image is generated. Operation 660 may be implemented by the generation unit 250 of the processing module 200. In some embodiments, the generation unit 250 may generate the second image from the optimized first image.
It should be noted that the above description of the flow 600 is provided only for convenience of description and does not limit the application to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, based on the principle of the system, individual operations may be combined, or sub-flows may be combined with other operations, without departing from that principle, and various modifications and changes in form and detail may be made to the functions of the above flow and operations. For example, the flow 600 may proceed to operation 650 after performing operations 610 and 620. As another example, the flow 600 may skip operations 610 and 620. Such variations fall within the scope of protection of the application.
In summary, the selfie method according to the embodiments of the present application makes selfie images more attractive and improves the user's selfie results.
It should be noted that the above embodiments are intended merely as examples; the application is not limited to such examples, and various changes may be made.
It should be noted that, in this specification, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
Finally, it should also be noted that the above series of processes includes not only processes performed in time sequence in the order described here, but also processes performed in parallel or individually rather than in time sequence.
Those of ordinary skill in the art can understand that all or part of the flows in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM) or the like.
What is disclosed above is only some preferred embodiments of the application, which cannot be used to limit the scope of the claims of the application. Those of ordinary skill in the art can understand that implementations realizing all or part of the flows of the above embodiments, and equivalent variations made according to the claims of the application, still fall within the scope covered by the invention.

Claims (10)

1. A selfie method, characterized by comprising:
obtaining head image information and environment information;
determining a shooting mode;
determining an angle parameter;
adjusting a shooting angle;
generating a first image.
2. The selfie method according to claim 1, characterized by further comprising:
obtaining beautification parameters;
generating a second image.
3. The selfie method according to claim 2, characterized in that generating the second image comprises:
analyzing the head image information and the first image to determine the beautification parameters;
analyzing the environment information and the first image to determine environment parameters;
optimizing the first image to obtain the second image.
4. The selfie method according to claim 1, characterized by further comprising:
analyzing the head image information to obtain recognition information.
5. The selfie method according to claim 1, characterized in that the shooting mode comprises a frontal shooting mode and a side shooting mode.
6. The selfie method according to claim 5, characterized in that determining the angle parameter comprises:
determining a first angle parameter according to the frontal shooting mode;
determining a second angle parameter according to the side shooting mode.
7. The selfie method according to claim 1 or 6, characterized in that adjusting the shooting angle comprises:
obtaining the angle parameter;
determining whether to adjust automatically;
if so, automatically adjusting the camera shooting angle;
if not, prompting a user to adjust the shooting angle.
8. The selfie method according to claim 7, characterized in that prompting the user to adjust the shooting angle comprises one or a combination of voice prompts, picture prompts and animated prompts.
9. A system, characterized by comprising:
a memory configured to store data and instructions; and
a processor in communication with the memory, wherein, when executing the instructions in the memory, the processor is configured to:
obtain head image information and environment information;
determine a shooting mode;
determine an angle parameter;
adjust a shooting angle;
generate a first image.
10. A permanent computer-readable medium carrying a computer program, characterized in that the computer program comprises instructions executable by at least one processor to implement a method, the method comprising:
obtaining head image information and environment information;
determining a shooting mode;
determining an angle parameter;
adjusting a shooting angle;
generating a first image.
CN201710464940.XA 2017-06-19 2017-06-19 A selfie system, medium and method Pending CN107124560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710464940.XA CN107124560A (en) A selfie system, medium and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710464940.XA CN107124560A (en) A selfie system, medium and method

Publications (1)

Publication Number Publication Date
CN107124560A true CN107124560A (en) 2017-09-01

Family

ID=59719089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710464940.XA Pending CN107124560A (en) A selfie system, medium and method

Country Status (1)

Country Link
CN (1) CN107124560A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103458219A (en) * 2013-09-02 2013-12-18 小米科技有限责任公司 Method, device and terminal device for adjusting face in video call
CN103841324A (en) * 2014-02-20 2014-06-04 小米科技有限责任公司 Shooting processing method and device and terminal device
CN105227832A (en) * 2015-09-09 2016-01-06 厦门美图之家科技有限公司 A kind of self-timer method based on critical point detection, self-heterodyne system and camera terminal
CN106231178A (en) * 2016-07-22 2016-12-14 维沃移动通信有限公司 A kind of self-timer method and mobile terminal
CN106341607A (en) * 2016-10-24 2017-01-18 深圳市金立通信设备有限公司 Method and terminal to remind user of adjusting their shooting angle
CN106803894A (en) * 2017-03-20 2017-06-06 上海与德科技有限公司 Auto heterodyne reminding method and device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107845057A (en) * 2017-09-25 2018-03-27 维沃移动通信有限公司 One kind is taken pictures method for previewing and mobile terminal
CN107835367A (en) * 2017-11-14 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN108492266A (en) * 2018-03-18 2018-09-04 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
US10769416B2 (en) 2018-03-18 2020-09-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, electronic device and storage medium
CN108492266B (en) * 2018-03-18 2020-10-09 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN108462770A (en) * 2018-03-21 2018-08-28 北京松果电子有限公司 Rear camera self-timer method, device and electronic equipment
CN108629339A (en) * 2018-06-15 2018-10-09 Oppo广东移动通信有限公司 Image processing method and related product
CN108629339B (en) * 2018-06-15 2022-10-18 Oppo广东移动通信有限公司 Image processing method and related product
CN109873950A (en) * 2019-01-30 2019-06-11 努比亚技术有限公司 A kind of image correcting method, terminal and computer readable storage medium
CN112446832A (en) * 2019-08-31 2021-03-05 华为技术有限公司 Image processing method and electronic equipment

Similar Documents

Publication Publication Date Title
CN107124560A (en) A selfie system, medium and method
CA3043230C (en) Face liveness detection method and apparatus, and electronic device
CN107425579A (en) An intelligent charging method and system
CN109472122A (en) A multimedia message reminding method and system
CN107315681A (en) Application self-starting test system, medium and method
CN107358179A (en) A living management system, medium and method based on iris verification
CN107423585A (en) A method and system for hiding applications
CN107786979A (en) A multi-terminal shared communication method and system
CN107862518A (en) A payment method and system based on terminal location
CN107368793A (en) A color iris collection method and system
CN108810401A (en) Method and system for guiding photographing
CN107018153A (en) A secure login method
CN107220531A (en) An information processing method for convenient login
CN108897479A (en) A terminal touch control method and system
CN107451564A (en) A face action control method and system
CN109857504A (en) An interface drawing control method and system
CN108536409A (en) A terminal display adjustment method and system
CN108010519A (en) An information search method and system
CN107704843A (en) A single-eye iris verification method and system
CN107623736A (en) A device connection method and system
CN107566978A (en) A terminal tracking method and system based on an intelligent neural network
CN107835117A (en) An instant messaging method and system
CN108229383A (en) An iris wiring method and system
CN108765324A (en) An infrared-based image processing method and system
CN108184248A (en) A data processing method and system for a terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170901