CN110336945A - Intelligent assisted photography composition method and system - Google Patents

Intelligent assisted photography composition method and system

Info

Publication number
CN110336945A
Authority
CN
China
Prior art keywords
information
scene
scene information
composition
target scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910618687.8A
Other languages
Chinese (zh)
Inventor
彭礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI TAIDA BUILDING TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI TAIDA BUILDING TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI TAIDA BUILDING TECHNOLOGY Co Ltd filed Critical SHANGHAI TAIDA BUILDING TECHNOLOGY Co Ltd
Priority to CN201910618687.8A priority Critical patent/CN110336945A/en
Publication of CN110336945A publication Critical patent/CN110336945A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an intelligent assisted photography composition method applied on a mobile terminal, comprising: collecting scene information to be captured; identifying and matching the collected scene information to obtain target scene information with a high matching degree; composing the scene to be captured according to the composition of the target scene information; and outputting the composed scene information to complete shooting. With this technology, a layperson can learn composition by following a reference photo and quickly acquire composition skills from existing target scenes. Facing a scene, the mobile terminal can automatically judge what composition looks "professional", so the user only needs to follow the reference to greatly improve the composition; composition thus becomes a foolproof operation, reducing the shooting difficulty for the user. For famous scenic spots popular with photographers, where many people take pictures and the data volume is large, the method of the present technology can well meet users' needs.

Description

Intelligent auxiliary photography composition method and system
Technical Field
The invention relates to an intelligent auxiliary composition technology for photography, in particular to an intelligent auxiliary photography composition method and system.
Background
With the continuous increase of camera resolution in mobile terminals, mobile terminals have largely replaced traditional cameras for capturing images, and taking pictures with a mobile terminal has become an indispensable part of users' daily lives. At present, when a mobile terminal shoots an image, shooting parameters such as exposure, focal length, color and aperture size are mostly set automatically according to the current ambient brightness, color temperature and other conditions to assist imaging, so that a good shooting effect can be obtained. However, the most important element of a good photo is not its color, brightness or style, but its composition, and composition still depends on the photographer's subjective judgment. Non-professionals do not find composition easy to master; a large amount of professional study and practice is needed to master certain composition methods and skills. It is therefore very difficult for a non-professional to take a beautiful picture: even at a beautiful scene, the user can only mechanically shoot an image that meets basic standards, and has to adjust shooting parameters and angles many times, so the operation is cumbersome and the proportion of usable shots is low.
Based on the above problems, the invention provides an intelligent auxiliary photography composition method and system that solve the problem of how a non-professional can take a well-composed picture.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method that uses artificial intelligence to assist the photographer with composition, so as to address the poor composition results of non-professionals during shooting and the problems described in the background art.
To solve the above problems, the present invention provides an intelligent auxiliary photography composition method, which is applied to a mobile terminal and comprises:
collecting scene information to be shot;
identifying and matching the collected scene information to obtain target scene information with a high matching degree;
composing the scene to be shot according to the composition of the target scene information;
and outputting the composed scene information to complete shooting.
The collected scene information to be shot comprises the type of the scene, the type of the subject and the real-time environmental conditions of the shoot (such as geographical position, weather and lighting), so as to obtain real-time scene information about the place to be shot and bring the subsequent composition closer to the user's requirements.
Identifying and matching the collected scene information comprises: determining the scene type and subject type of the image to be shot from the scene information to be shot, and matching the determined scene type and subject type against candidate scenes to determine primary target scene information; and performing secondary matching between the real-time environmental conditions of the shoot and the determined primary target scene information to obtain secondary target scene information, wherein the scene type and subject type can be determined in a conventional manner.
Obtaining the target scene information with a high matching degree further requires a final screening evaluation of the obtained secondary target scene information to determine the target scene to be used; the evaluation criteria include the popularity of the target scene, the similarity between the target scene and the scene to be shot, and the like.
The finally used target scene information comprises at least one target scene, depending on the type and number of subjects to be shot.
The composition of the scene to be shot is carried out according to the composition of the target scene information, and the composition of the scene to be shot is required to be consistent with that of the target scene information.
Scene information from completed shots can in turn be used as target scene information for subsequent shooting.
The invention also provides an intelligent auxiliary photography composition system, which comprises an information acquisition unit, an information processing unit, a network connection unit, a cloud server and an information output unit; wherein,
the information acquisition unit acquires scene information to be shot to form image information, and the image information is transmitted to the information processing unit through the network connection unit;
the information processing unit processes the image information acquired by the information acquisition unit to form digital information, and the digital information forms a data source and is transmitted to a cloud server through the network connection unit;
the cloud server is provided with a big data storage and a processing module; the processing module identifies and compares the data source transmitted by the information processing unit against the information held in the big data storage, screens out the required target scene information, and transmits the screened target scene to the information output unit through the network connection unit;
the information output unit displays the target scene information transmitted by the cloud server, and the target scene information is used as a shooting reference of the mobile terminal to assist in completing shooting;
the information acquisition unit, the information processing unit and the information output unit are all applied to the mobile terminal, and the network connection unit is connected with the mobile terminal and the cloud server.
The information acquisition unit at least comprises a camera module (such as a camera), a positioning module (such as GPS positioning and Beidou positioning) and a light sensing module of the mobile terminal so as to acquire the current and local image information of a scene to be shot.
The information output unit at least comprises a shooting display interface of the mobile terminal, and finally selected target scene information serving as a reference and other reference information are arranged on the display interface.
The processing module of the cloud server is provided with a processing program, performs data integration and classification on the information stored in the big data, and performs sorting according to specific indexes, wherein the specific indexes comprise popularity, reference use times, shooting quality scores and the like.
Scene information of the final picture taken by the camera module can be transmitted to the big data storage of the cloud server through the network connection unit to serve as new target scene information, enriching the big data storage and providing big data for subsequent shooting.
The invention also provides a computer-readable storage medium, applied to a mobile terminal and storing a computer program; when the computer program is executed by the mobile terminal, the intelligent auxiliary photography composition system implements the intelligent auxiliary photography composition method.
By implementing the intelligent auxiliary photography composition method and the intelligent auxiliary photography composition system provided by the invention, the following technical effects are achieved:
(1) non-professionals can learn composition from existing target scenes by following a reference picture, quickly acquiring composition skills and composition patterns;
(2) facing a scene, the mobile terminal can automatically judge what composition looks refined, beautiful and popular, so the user only needs to imitate it; composition becomes a foolproof operation and the user's shooting difficulty is reduced;
(3) for famous scenic spots popular with photographers, where many people take pictures and the data volume is large, the method of this technology can well meet users' needs.
Drawings
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that the objects, features and effects of the invention can be fully understood.
FIG. 1 is a diagram illustrating the general steps of an intelligent assisted photography composition method according to an embodiment of the present invention;
FIG. 2 is a detailed flow chart of the general steps of FIG. 1;
FIG. 3 is a schematic diagram of an intelligent assisted photography composition system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solution of the present invention will be described in detail with specific embodiments.
The intelligent auxiliary photography composition method shown in FIG. 1 is applied to a mobile terminal and comprises the following steps:
S1: collecting scene information to be shot (a data-structure sketch of this information follows this list);
S2: identifying and matching the collected scene information to obtain target scene information with a high matching degree;
S3: composing the scene to be shot according to the composition of the target scene information;
S4: outputting the composed scene information to complete shooting.
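For illustration only, the following is a minimal Python sketch of what the scene information collected in S1 (and detailed further in S10 below) might look like as a data structure; the class name, field names and example values are assumptions made for this sketch and are not defined by the patent.
```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SceneInfo:
    """Scene information collected in step S1/S10 (illustrative fields only)."""
    scene_type: str                       # e.g. "landscape", "portrait", "architecture"
    subject_type: str                     # e.g. "person", "building", "food"
    geolocation: Tuple[float, float]      # (latitude, longitude) from GPS / BeiDou positioning
    weather: str                          # e.g. "sunny", "overcast", "rain"
    light_level: float                    # reading from the light sensing module
    subject_gender: Optional[str] = None  # optional details mentioned in the detailed flow S10
    subject_clothing: Optional[str] = None

# Example: a landscape scene with one person, collected on a sunny afternoon.
scene = SceneInfo(
    scene_type="landscape",
    subject_type="person",
    geolocation=(31.2304, 121.4737),
    weather="sunny",
    light_level=12000.0,
)
```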
Based on the above steps, the present embodiment adopts a specific operation flow as shown in fig. 2:
S10: collecting scene information to be shot, including the scene type, the subject type and the real-time environmental conditions of the shoot (such as geographical position, weather, lighting, and the subject's gender and clothing), so as to obtain real-time scene information about the place to be shot and bring the subsequent composition closer to requirements;
S20: determining the scene type and subject type of the image to be shot from the scene information to be shot, and matching the determined scene type and subject type against candidate scenes to determine primary target scene information;
S21: performing secondary matching between the real-time environmental conditions of the shoot and the determined primary target scene information to obtain secondary target scene information, wherein the scene type and subject type can be determined in a conventional manner;
S22: performing a final screening evaluation of the obtained secondary target scene information according to specific indexes, such as the popularity of the target scene and its similarity to the scene to be shot, to determine the target scene with a high matching degree for final use;
S30: composing the scene to be shot according to the composition of the target scene information, requiring that the composition of the scene to be shot is consistent with that of the target scene information;
S40: outputting the composed scene information to complete one shot.
Based on the above steps, at least one piece of target scene information is finally used, depending on the type and number of subjects to be shot; when a plurality of target scenes are selected as shooting references, steps S20-S22 are repeated until the final shot is completed.
Scene information from completed shots can in turn serve as target scene information for subsequent use in steps S20-S22; a minimal sketch of the matching and screening flow is given below.
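As a rough illustration only, the following Python sketch shows one way the S20-S22 matching and screening could work, assuming stored target scenes are represented as dictionaries with scene type, subject type, environment fields and a popularity value; the function names, thresholds and score weights are assumptions for this sketch, not the patent's algorithm.
```python
from typing import Dict, List

def primary_match(scene: Dict, candidates: List[Dict]) -> List[Dict]:
    """S20: keep candidate target scenes whose scene type and subject type match."""
    return [c for c in candidates
            if c["scene_type"] == scene["scene_type"]
            and c["subject_type"] == scene["subject_type"]]

def secondary_match(scene: Dict, primary: List[Dict]) -> List[Dict]:
    """S21: keep candidates whose real-time environment (weather, light) is similar."""
    return [c for c in primary
            if c["weather"] == scene["weather"]
            and abs(c["light_level"] - scene["light_level"]) < 5000]

def final_screening(scene: Dict, secondary: List[Dict], top_n: int = 3) -> List[Dict]:
    """S22: rank the remaining candidates by popularity and similarity, keep the best."""
    def score(c: Dict) -> float:
        similarity = 1.0 if c["weather"] == scene["weather"] else 0.5
        return 0.7 * c["popularity"] + 0.3 * similarity
    return sorted(secondary, key=score, reverse=True)[:top_n]

# Example run with two stored target scenes.
scene = {"scene_type": "landscape", "subject_type": "person",
         "weather": "sunny", "light_level": 12000.0}
candidates = [
    {"scene_type": "landscape", "subject_type": "person", "weather": "sunny",
     "light_level": 11000.0, "popularity": 0.9, "photo_id": "A"},
    {"scene_type": "landscape", "subject_type": "person", "weather": "rain",
     "light_level": 3000.0, "popularity": 0.7, "photo_id": "B"},
]
targets = final_screening(scene, secondary_match(scene, primary_match(scene, candidates)))
print([t["photo_id"] for t in targets])  # -> ['A']
```
In this toy run, only candidate 'A' survives the secondary environment match, so it is returned as the target scene reference.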
The intelligent auxiliary photography composition system as shown in fig. 3 comprises an information acquisition unit, an information processing unit, a network connection unit, a cloud server and an information output unit; wherein,
the information acquisition unit acquires scene information to be shot to form image information, and the image information is transmitted to the information processing unit through the network connection unit;
the information processing unit processes the image information acquired by the information acquisition unit to form digital information, and the digital information forms a data source and is transmitted to the cloud server through the network connection unit;
the cloud server is provided with a big data storage and a processing module; the processing module identifies and compares the data source transmitted by the information processing unit against the information held in the big data storage, screens out the required target scene information, and transmits the screened target scene to the information output unit through the network connection unit;
the information output unit displays the target scene information transmitted by the cloud server, and the target scene information is used as a shooting reference of the mobile terminal to assist in completing shooting;
the mobile terminal comprises an information acquisition unit, an information processing unit and an information output unit, wherein the information acquisition unit, the information processing unit and the information output unit are all applied to the mobile terminal, and the network connection unit is connected with the mobile terminal and the cloud server.
The information acquisition unit at least comprises a camera module (such as a camera), a positioning module (such as GPS positioning and Beidou positioning) and a light sensing module of the mobile terminal so as to acquire the current and local image information of a scene to be shot.
The information output unit at least comprises a display interface of the mobile terminal, on which the finally selected target scene information and other reference information are displayed for reference.
The processing module of the cloud server is provided with a processing program, performs data integration and classification on the information stored in the big data, and performs sorting according to specific indexes, wherein the specific indexes comprise popularity, reference use times, shooting quality scores and the like.
It should be noted that the processing module may also search for and use public photo information from the Internet or from specialized photography websites.
Scene information of the final picture taken by the camera module can be transmitted to the big data storage of the cloud server through the network connection unit to serve as new target scene information, enriching the big data storage and providing big data for subsequent shooting, as illustrated in the sketch below.
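The sketch below illustrates, under assumed field names and weights, how the cloud server's processing module might sort stored photos by the specific indexes mentioned above (popularity, reference usage count, shooting quality score) and how a finished photo could be fed back into the big data storage; it is an illustrative sketch, not the patent's implementation.
```python
from typing import Dict, List

BIG_DATA_STORE: List[Dict] = [
    {"photo_id": "A", "popularity": 0.9, "reference_count": 120, "quality_score": 8.5},
    {"photo_id": "B", "popularity": 0.6, "reference_count": 40,  "quality_score": 9.1},
]

def rank_by_indexes(store: List[Dict]) -> List[Dict]:
    """Sort stored target scenes by a combination of the specific indexes (assumed weights)."""
    def score(photo: Dict) -> float:
        return (0.5 * photo["popularity"]
                + 0.3 * min(photo["reference_count"] / 100.0, 1.0)
                + 0.2 * photo["quality_score"] / 10.0)
    return sorted(store, key=score, reverse=True)

def add_finished_photo(store: List[Dict], photo: Dict) -> None:
    """Feed a finished shot back into the big data storage as new target scene information."""
    store.append(photo)

# A newly taken photo becomes a candidate target scene for later users.
add_finished_photo(BIG_DATA_STORE,
                   {"photo_id": "C", "popularity": 0.0, "reference_count": 0,
                    "quality_score": 7.8})
print([p["photo_id"] for p in rank_by_indexes(BIG_DATA_STORE)])  # -> ['A', 'B', 'C']
```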
Based on the above steps and system, the actual user operation is as follows: with a mobile phone as the mobile terminal, when the user opens the camera, the phone senses the user's position through positioning systems such as GPS and BeiDou, senses conditions such as the weather and light at that moment through its photographing module (e.g. the camera), packs this information into source data through its information processing unit, and uploads the source data to the cloud server through a wireless network.
After receiving the data, the cloud server searches its data through the processing module, screens photos in the big data storage whose geographic position, weather, light and other information are similar to those of the source data, and evaluates the popularity (or reference usage count, shooting quality score and the like) of these photos; the cloud processing system then scores and sorts the photos according to popularity (or reference usage count or shooting quality score) and scene matching degree, and pushes several photos to the user's phone in descending order of score.
It should be noted that the photos from the cloud server may match not only the current scene and weather, but even the gender, clothing or stature of the person being photographed.
After receiving the photos, the phone displays them at a designated position on its display interface; the user then only needs to imitate the composition of the "target scene information" (possibly including the same pose of the person being photographed) and press the shutter, and by simply copying the reference the phone can obtain a high-quality photo.
The user can also upload their own high-quality photos to the cloud server, rate them, and so continuously enrich the cloud server's big data storage, providing big data for subsequent photographers; a rough sketch of this client-side flow follows.
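Purely as an illustration of this scenario, the sketch below packs the sensed position, weather and light into a source-data payload and then "uploads" a user-rated photo back to the cloud; fake_upload, the destination strings and the payload fields are hypothetical stand-ins, since the patent does not define a concrete transport format or endpoint.
```python
import json
from typing import Dict

def build_source_data(lat: float, lon: float, weather: str, light_level: float) -> str:
    """Pack the sensed conditions into the source data uploaded to the cloud server."""
    payload: Dict = {
        "geolocation": {"lat": lat, "lon": lon},  # from GPS / BeiDou positioning
        "weather": weather,                       # sensed or looked up at shoot time
        "light_level": light_level,               # from the light sensing module
    }
    return json.dumps(payload)

def fake_upload(destination: str, body: str) -> None:
    """Stand-in for the wireless upload over the network connection unit."""
    print(f"uploading {len(body)} bytes to {destination}")

# 1. Upload the source data when the camera is opened.
fake_upload("cloud-server/source-data",
            build_source_data(31.2304, 121.4737, "sunny", 12000.0))

# 2. Later, upload the user's own rated photo to enrich the big data storage.
finished_photo = {"photo_id": "my_shot_001", "user_rating": 5, "weather": "sunny"}
fake_upload("cloud-server/big-data-store", json.dumps(finished_photo))
```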
It is to be understood that unless otherwise defined, technical or scientific terms used herein have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any uses or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the present invention is not limited to the structures that have been described above and shown in the drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An intelligent auxiliary photography composition method, applied to a mobile terminal, characterized by comprising the following steps:
collecting scene information to be shot;
identifying and matching the collected scene information to obtain target scene information with a high matching degree;
composing the scene to be shot according to the composition of the target scene information;
and outputting the composed scene information to complete shooting.
2. The intelligent auxiliary photography composition method according to claim 1, wherein the collected scene information to be shot comprises the type of the scene, the type of the subject and the real-time environmental conditions of the shoot, so as to obtain real-time scene information about the place to be shot and bring the subsequent composition closer to the user's requirements.
3. The intelligent auxiliary photography composition method according to claim 2, wherein identifying and matching the collected scene information comprises: determining the scene type and subject type of the image to be shot from the scene information to be shot, and matching the determined scene type and subject type against candidate scenes to determine primary target scene information; and performing secondary matching between the real-time environmental conditions of the shoot and the determined primary target scene information to obtain secondary target scene information, wherein the scene type and subject type are determined in a conventional manner;
obtaining the target scene information with a high matching degree further requires a final screening evaluation of the obtained secondary target scene information according to specific indexes, to determine the target scene for final use.
4. The intelligent auxiliary photography composition method according to claim 3, wherein scene information from completed shots can be used as target scene information for subsequent shooting.
5. An intelligent auxiliary photography composition system for implementing the intelligent auxiliary photography composition method of any one of claims 1 to 4, comprising an information acquisition unit, an information processing unit, a network connection unit, a cloud server and an information output unit; characterized in that,
the information acquisition unit acquires scene information to be shot to form image information, and the image information is transmitted to the information processing unit through the network connection unit;
the information processing unit processes the image information acquired by the information acquisition unit to form digital information, and the digital information forms a data source and is transmitted to a cloud server through the network connection unit;
the cloud server is provided with a big data storage and a processing module; the processing module identifies and compares the data source transmitted by the information processing unit against the information held in the big data storage, screens out the required target scene information, and transmits the screened target scene to the information output unit through the network connection unit;
the information output unit displays the target scene information transmitted by the cloud server, and the target scene information is used as a shooting reference of the mobile terminal to assist in completing shooting;
the information acquisition unit, the information processing unit and the information output unit are all applied to the mobile terminal, and the network connection unit is connected with the mobile terminal and the cloud server.
6. The intelligent auxiliary photography composition system according to claim 5, wherein the information acquisition unit comprises at least a camera module, a positioning module and a light sensing module of the mobile terminal to obtain the current and local image information of the scene to be photographed.
7. The intelligent auxiliary photography composition system according to claim 5, wherein the information output unit at least comprises a shooting display interface of the mobile terminal, on which the finally selected target scene information and other reference information are displayed for reference.
8. The intelligent auxiliary photography composition system as claimed in claim 5, wherein the processing module of the cloud server is configured with a processing program, and performs data integration and classification on the information stored in the big data, and performs sorting according to specific indexes, wherein the specific indexes include popularity, reference usage times, shooting quality scores and the like.
9. The intelligent auxiliary photography composition system according to claim 6, wherein scene information of the final picture taken by the camera module is transmitted to the big data storage of the cloud server through the network connection unit and used as new target scene information, enriching the big data storage and providing big data for subsequent shooting.
10. A computer-readable storage medium applied to the mobile terminal of any one of claims 5 to 9, having a computer program stored thereon, wherein when the computer program is executed by the mobile terminal, the intelligent auxiliary photography composition system of any one of claims 5 to 9 implements the intelligent auxiliary photography composition method of any one of claims 1 to 4.
CN201910618687.8A 2019-07-09 2019-07-09 Intelligent assisted photography composition method and system Pending CN110336945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910618687.8A CN110336945A (en) 2019-07-09 2019-07-09 Intelligent assisted photography composition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910618687.8A CN110336945A (en) 2019-07-09 2019-07-09 Intelligent assisted photography composition method and system

Publications (1)

Publication Number Publication Date
CN110336945A true CN110336945A (en) 2019-10-15

Family

ID=68146122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910618687.8A Pending CN110336945A (en) Intelligent assisted photography composition method and system

Country Status (1)

Country Link
CN (1) CN110336945A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011223599A (en) * 2011-05-31 2011-11-04 Casio Comput Co Ltd Photographing apparatus and program
JP2015103968A (en) * 2013-11-25 2015-06-04 オリンパス株式会社 Image processing apparatus, image processing method and image processing program
JP2017184021A (en) * 2016-03-30 2017-10-05 株式会社Nttドコモ Content providing device and content providing program
CN109639964A (en) * 2018-11-26 2019-04-16 北京达佳互联信息技术有限公司 Image processing method, processing unit and computer readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110958384A (en) * 2019-11-06 2020-04-03 苏州佳世达光电有限公司 Intelligent auxiliary photographing system and method
CN111343382A (en) * 2020-03-09 2020-06-26 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium
CN111343382B (en) * 2020-03-09 2021-09-10 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium
CN111885296A (en) * 2020-06-16 2020-11-03 联想企业解决方案(新加坡)有限公司 Dynamic processing method of visual data and electronic equipment
CN113643376A (en) * 2021-07-13 2021-11-12 杭州群核信息技术有限公司 Camera view angle generation method and device, computing device and storage medium
CN113643376B (en) * 2021-07-13 2024-05-03 杭州群核信息技术有限公司 Camera view angle generation method, device, computing equipment and storage medium
CN113905174A (en) * 2021-09-18 2022-01-07 咪咕文化科技有限公司 Photographing gesture recommendation method, device, equipment and computer readable storage medium
CN113905174B (en) * 2021-09-18 2024-07-19 咪咕文化科技有限公司 Shooting gesture recommendation method, device, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110336945A (en) Intelligent assisted photography composition method and system
CN109379572B (en) Image conversion method, image conversion device, electronic equipment and storage medium
CN108933899B (en) Panorama shooting method, device, terminal and computer readable storage medium
CN108805198B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US8036430B2 (en) Image-processing device and image-processing method, image-pickup device, and computer program
US9013589B2 (en) Digital image processing apparatus and digital image processing method capable of obtaining sensibility-based image
US8212911B2 (en) Imaging apparatus, imaging system, and imaging method displaying recommendation information
US8477993B2 (en) Image taking apparatus and image taking method
CN108307120B (en) Image shooting method and device and electronic terminal
JP4829186B2 (en) Imaging device
JP4577275B2 (en) Imaging apparatus, image recording method, and program
CN107040726B (en) Double-camera synchronous exposure method and system
CN107948538B (en) Imaging method, imaging device, mobile terminal and storage medium
WO2006028108A1 (en) Image processing system and method, and terminal and server used for the same
US20090034953A1 (en) Object-oriented photographing control method, medium, and apparatus
WO2016197734A1 (en) Image capturing method, terminal and server
KR20150078342A (en) Photographing apparatus and method for sharing setting values, and a sharing system
CN112887610A (en) Shooting method, shooting device, electronic equipment and storage medium
WO2023071933A1 (en) Camera photographing parameter adjustment method and apparatus and electronic device
CN110830712A (en) Autonomous photographing system and method
CN108093170B (en) User photographing method, device and equipment
CN110581950B (en) Camera, system and method for selecting camera settings
CN106657798A (en) Photographing method for intelligent terminal
CN109472230B (en) Automatic athlete shooting recommendation system and method based on pedestrian detection and Internet
TWI466034B (en) Methods to Improve Face Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20191015)