CN113704526A - Shooting composition guiding method and terminal - Google Patents
Shooting composition guiding method and terminal
- Publication number
- Publication number: CN113704526A (application number CN202110863897.0A)
- Authority
- CN
- China
- Prior art keywords
- composition
- shooting
- scheme
- user
- evaluation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/54 — Information retrieval of still image data; browsing and visualisation therefor
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Neural-network learning methods
- G06N5/04 — Inference or reasoning models
- G06V10/26 — Segmentation of patterns in the image field
- G06V10/82 — Image or video recognition using neural networks
- G06V20/62 — Text, e.g. of license plates, overlay texts or captions on TV images
- H04N23/631 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632 — GUIs for displaying or modifying preview images prior to image capturing
- H04N23/64 — Computer-aided capture of images, e.g. advice or proposal for image composition
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a shooting composition guidance method and terminal. The method determines a shooting subject according to a user selection, acquires picture information from a shooting preview interface, and performs semantic recognition on the picture information to obtain subject semantic data of the picture; it then performs inference with a generative adversarial network (GAN) model on the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm; according to the composition scheme, a composition guide is loaded onto the shooting preview interface, and after the user completes the composition by following the guide, the composition effect is evaluated with the composition index evaluation algorithm and an evaluation score is given, so that the user can adjust the composition or shoot according to the score. By generating the composition scheme from semantic recognition and providing composition guidance, a photographer obtains a comparatively suitable shooting composition scheme; the difficulty of composing a shot is reduced, composition recommendation, guidance, and evaluation are realized, and shooting efficiency is improved.
Description
Technical Field
The invention relates to the technical field of photography, and in particular to a shooting composition guidance method and terminal.
Background
Composition before shooting is the orderly arrangement and combination of the useful elements in a picture; it is the photographer's choice of how the shooting subject is presented in the frame. The subject and object, or the foreground and background, are set against each other, and changes of light and shadow convey the emotional theme of the shot. Composition also serves to highlight the subject, attract the eye, simplify clutter, and balance and harmonize the picture.
Composition is supported by a complete body of artistic knowledge, which underlines its importance in the art field. A certain level of composition skill can be acquired through systematic study and shooting practice, but composition is usually carried out before shooting on the basis of an individual's understanding, feeling, and experience; there is no complete system of composition recommendation, guidance, and evaluation to assist in composing a shot.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a shooting composition guidance method and terminal that realize composition recommendation, guidance, and evaluation and improve shooting efficiency.
To solve the above technical problem, the invention adopts the following technical scheme:
a shooting composition guidance method, comprising:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a shooting composition guidance terminal comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
The invention has the following beneficial effects: a composition scheme is generated from semantic recognition and composition guidance is provided, so that a photographer obtains a comparatively suitable shooting composition scheme; the difficulty of composing a shot is thereby reduced, composition recommendation, guidance, and evaluation are realized, and shooting efficiency is improved.
Drawings
FIG. 1 is a flowchart of a photographing composition guidance method according to an embodiment of the present invention;
fig. 2 is a structural diagram of a photographing composition guidance terminal according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a photographing composition guiding method according to an embodiment of the present invention;
description of reference numerals:
1. a photographing composition guidance terminal; 2. a processor; 3. a memory.
Detailed Description
To explain the technical content, objects, and effects of the present invention in detail, the following description is given with reference to the accompanying drawings and embodiments.
Referring to fig. 1 and 3, a photographing composition guiding method includes:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
From the above description, the beneficial effects of the invention are as follows: a composition scheme is generated from semantic recognition and composition guidance is provided, so that a photographer obtains a comparatively suitable shooting composition scheme; the difficulty of composing a shot is thereby reduced, composition recommendation, guidance, and evaluation are realized, and shooting efficiency is improved.
Further, step S2 specifically comprises:
S21, displaying the recognized subject semantic data so that the user can confirm it, manually modify it, or request re-recognition;
S22, parsing and packaging the shooting subject and the subject semantic data confirmed or modified by the user to obtain structured composition data;
S23, starting a composition scheme engine, performing generative adversarial network model inference, uniformly quantifying the composition techniques of the composition knowledge system, and generating a group of composition schemes and corresponding composition index evaluation algorithms.
As can be seen from the above description, inference is performed with a generative adversarial network model and the techniques of the composition knowledge system are uniformly quantified, from which the composition index evaluation algorithm is generated.
Further, step S2 also includes:
S24, traversing the composition metadata in each composition scheme, and fitting and drawing the picture content according to the composition metadata to form a scheme preview;
S25, packaging and displaying the composition schemes, scheme previews, and composition index evaluation algorithms, and obtaining the final composition scheme and composition index evaluation algorithm according to the user's selection.
As can be seen from the above description, the invention also fits the picture content to generate a scheme preview, so that the user can understand the composition scheme more clearly.
Further, step S3 specifically comprises:
S31, loading a composition guide or voice guidance onto the shooting preview interface according to the final composition scheme;
S32, evaluating the composition effect of the picture information of the current shooting preview interface according to the final composition scheme, scoring the visual weight, size ratio, placement position, image fit, and picture symmetry of the subject and auxiliary objects, and normalizing the scores to obtain and display the evaluation score.
The user is guided to compose by loading a composition guide, or directly by voice guidance, on the shooting page, which is more convenient; at the same time, the score displayed for the picture information of the current shooting preview interface lets the user see more intuitively whether the current picture conforms to the composition rules.
Further, the method also comprises the following step:
S4, photographing the picture or reselecting a composition scheme according to the user's request, and returning to step S3.
As can be seen from the above description, the user can choose to reselect the composition scheme or to shoot depending on whether the current scheme is satisfactory, which is more flexible and user-friendly.
Referring to fig. 2, a shooting composition guidance terminal includes a processor, a memory, and a computer program stored in the memory and operable on the processor, and when the processor executes the computer program, the processor implements the following steps:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
From the above description, the beneficial effects of the invention are as follows: a composition scheme is generated from semantic recognition and composition guidance is provided, so that a photographer obtains a comparatively suitable shooting composition scheme; the difficulty of composing a shot is thereby reduced, composition recommendation, guidance, and evaluation are realized, and shooting efficiency is improved.
Further, step S2 specifically comprises:
S21, displaying the recognized subject semantic data so that the user can confirm it, manually modify it, or request re-recognition;
S22, parsing and packaging the shooting subject and the subject semantic data confirmed or modified by the user to obtain structured composition data;
S23, starting a composition scheme engine, performing generative adversarial network model inference, uniformly quantifying the composition techniques of the composition knowledge system, and generating a group of composition schemes and corresponding composition index evaluation algorithms.
As can be seen from the above description, inference is performed with a generative adversarial network model and the techniques of the composition knowledge system are uniformly quantified, from which the composition index evaluation algorithm is generated.
Further, when the processor executes the computer program, step S2 also includes:
S24, traversing the composition metadata in each composition scheme, and fitting and drawing the picture content according to the composition metadata to form a scheme preview;
S25, packaging and displaying the composition schemes, scheme previews, and composition index evaluation algorithms, and obtaining the final composition scheme and composition index evaluation algorithm according to the user's selection.
As can be seen from the above description, the invention also fits the picture content to generate a scheme preview, so that the user can understand the composition scheme more clearly.
Further, step S3 specifically comprises:
S31, loading a composition guide or voice guidance onto the shooting preview interface according to the final composition scheme;
S32, evaluating the composition effect of the picture information of the current shooting preview interface according to the final composition scheme, scoring the visual weight, size ratio, placement position, image fit, and picture symmetry of the subject and auxiliary objects, and normalizing the scores to obtain and display the evaluation score.
The user is guided to compose by loading a composition guide, or directly by voice guidance, on the shooting page, which is more convenient; at the same time, the score displayed for the picture information of the current shooting preview interface lets the user see more intuitively whether the current picture conforms to the composition rules.
Further, when the processor executes the computer program, the following step is also implemented:
S4, photographing the picture or reselecting a composition scheme according to the user's request, and returning to step S3.
As can be seen from the above description, the user can choose to reselect the composition scheme or to shoot depending on whether the current scheme is satisfactory, which is more flexible and user-friendly.
Referring to fig. 1 and fig. 3, a first embodiment of the present invention is:
a shooting composition guidance method, comprising:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
In this embodiment, targeted shooting subject categories, such as jewelry shooting and portrait shooting, are preset in the system according to the different shooting scene categories. Before shooting, the user selects the shooting subject category for the shot so that the system can load the corresponding image processing engine for that category. According to the selected category, the corresponding image semantic segmentation model inference engine is enabled to perform semantic recognition on the image in the shooting picture, obtaining the image semantic data of the picture, which includes the background and the category, position, and area of each object. The image semantic segmentation model is a pre-trained neural network model.
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
Step S2 specifically comprises:
S21, displaying the recognized subject semantic data so that the user can confirm it, manually modify it, or request re-recognition;
In this embodiment, the recognized image semantic data is marked on the camera mobile application interface so that the user can confirm whether the recognition is correct. If something is missed or misrecognized, the user can modify the recognition result by manual correction or trigger re-recognition.
S22, parsing and packaging the shooting subject and the subject semantic data confirmed or modified by the user to obtain structured composition data;
In this embodiment, after the user confirms the recognition result, the composition evaluation application executes a semantic parsing algorithm that parses and packages the semantic data of the current image, finally obtaining structured composition data composed of the subject category, the background, the subject information, and the auxiliary object information; each kind of object may occur multiple times in the data.
S23, starting a composition scheme engine, performing generative adversarial network model inference, uniformly quantifying the composition techniques of the composition knowledge system, and generating a group of composition schemes and corresponding composition index evaluation algorithms;
In this embodiment, according to the structured composition data obtained for the shot picture, inference is performed with a generative adversarial network model through the image processing engine to obtain a group of candidate composition schemes and corresponding composition index evaluation algorithms. The composition index evaluation algorithm uniformly quantifies the various techniques of the composition knowledge system and, for each subject category, scores factors such as object visual weight, the size ratio of subject to auxiliary objects, image fit, and picture symmetry, then normalizes the scores to obtain the evaluation index.
S24, traversing the composition metadata in each composition scheme, and fitting and drawing the picture content according to the composition metadata to form a scheme preview;
S25, packaging and displaying the composition schemes, scheme previews, and composition index evaluation algorithms, and obtaining the final composition scheme and composition index evaluation algorithm according to the user's selection.
In this embodiment, the composition scheme engine traverses the composition metadata in each composition scheme and fits the image objects to the metadata to draw a preview. The composition index evaluation algorithm, the composition scheme, and the fitted picture are packaged as a candidate composition scheme recommended to the user; on receiving the schemes recommended by the composition scheme engine, the user can preview the composition effect on the application interface and select the preferred composition scheme.
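One way to picture the fitting step is to pair each recognized object with a target anchor point from the scheme. The patent does not define the metadata format; the rule-of-thirds anchors and nearest-anchor assignment below are illustrative assumptions standing in for the scheme's metadata.

```python
# Hedged sketch: fit each object to the nearest rule-of-thirds anchor so the
# preview can draw a guide frame at the target position. THIRDS and the
# nearest-anchor rule are assumptions, not the patent's actual metadata.

THIRDS = [(1/3, 1/3), (2/3, 1/3), (1/3, 2/3), (2/3, 2/3)]

def fit_preview(objects):
    """objects: list of (category, (x, y)) current centroids, normalized to [0, 1]."""
    guides = []
    for category, (x, y) in objects:
        target = min(THIRDS, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        guides.append({"category": category, "current": (x, y), "target": target})
    return guides
```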
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score;
Step S3 specifically comprises:
S31, loading a composition guide or voice guidance onto the shooting preview interface according to the final composition scheme;
S32, evaluating the composition effect of the picture information of the current shooting preview interface according to the final composition scheme, scoring the visual weight, size ratio, placement position, image fit, and picture symmetry of the subject and auxiliary objects, and normalizing the scores to obtain and display the evaluation score.
In this embodiment, after the user selects a composition scheme on the interface, the application starts composition shooting guidance and displays the guide frame of the selected scheme on the functional interface, marking the outlines and placement positions of the main and auxiliary objects together with a real-time effect evaluation score. The engine analyzes whether factors such as object visual weight, the size ratio of subject to auxiliary objects, image fit, and picture symmetry conform to the composition rules, scores them, and normalizes the scores to obtain the evaluation score. Alternatively, the user can tap a step-by-step placement button to be guided through placing the objects with placement prompts and voice prompts.
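The real-time guidance loop above can be sketched as a comparison between each object's current position and its target in the selected scheme, emitting an adjustment prompt for on-screen or voice output. The tolerance value and the prompt wording are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: turn the offset between the current centroid and the
# scheme's target position into a placement prompt. tol is an assumed
# tolerance in normalized frame coordinates.

def guidance(current, target, tol=0.05):
    """current/target: normalized (x, y); returns a prompt string."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    hints = []
    if abs(dx) > tol:
        hints.append("move right" if dx > 0 else "move left")
    if abs(dy) > tol:
        hints.append("move down" if dy > 0 else "move up")
    return ", ".join(hints) if hints else "hold - composition matches"
```

In an actual terminal this would run per preview frame, feeding the prompt to the guide overlay or the voice prompt described in the text.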
S4, photographing the picture or reselecting a composition scheme according to the user's request, and returning to step S3.
In this embodiment, the user consults the composition score according to the shooting requirement and judges whether the composition effect is satisfactory. If so, the picture can be shot; if not, the arrangement can be further adjusted, or a composition scheme can be selected again. After shooting is finished, a composition effect evaluation program can be executed to score the result.
Referring to fig. 2, the second embodiment of the present invention is:
A shooting composition guidance terminal 1 comprises a processor 2, a memory 3, and a computer program stored in the memory 3 and executable on the processor 2; when executing the computer program, the processor 2 implements the steps of the first embodiment.
In summary, the shooting composition guidance method and terminal provided by the invention generate a composition scheme from semantic recognition and provide composition guidance, so that a photographer obtains a comparatively suitable shooting composition scheme; the difficulty of composing a shot is thereby reduced, composition recommendation, guidance, and evaluation are realized, and shooting efficiency is improved. Through image understanding of the picture, the invention recognizes image semantic information such as the background, the specific main and auxiliary targets, the positional relations between the targets, and the relations between the background and the targets; a composition scheme and composition evaluation index are obtained by inference, and the photographing composition is guided in the preview interface with information prompts or voice prompts, so as to achieve a better composition effect, simplify the composition process, raise the level of photography, and compensate for the inability to present a complete picture theme caused by a lack of systematic command of photographic knowledge.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the invention. All equivalent changes made using the contents of the specification and drawings, whether applied directly or indirectly in related technical fields, are likewise included in the scope of the invention.
Claims (10)
1. A shooting composition guidance method, comprising:
S1, determining a shooting subject according to a user selection, acquiring picture information from a shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data of the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading a composition guide onto the shooting preview interface according to the composition scheme, evaluating the composition effect according to the composition index evaluation algorithm after the user completes the composition by following the guide, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
2. The shooting composition guiding method according to claim 1, wherein step S2 specifically comprises:
S21, displaying the identified subject semantic data for the user to confirm, manually modify, or request re-recognition;
S22, parsing and packaging the shooting subject and the subject semantic data confirmed or modified by the user to obtain composition structured data;
S23, starting a composition scheme engine, performing generative adversarial network model inference, uniformly quantifying the composition methods in the composition system, and generating a set of composition schemes and corresponding composition index evaluation algorithms.
3. The shooting composition guiding method according to claim 2, wherein step S2 further comprises:
S24, traversing the composition metadata in each composition scheme, and fitting and drawing the picture content according to the composition metadata to form a scheme preview;
S25, fitting, packaging, and displaying the composition schemes, scheme previews, and composition index evaluation algorithms, and obtaining the final composition scheme and composition index evaluation algorithm according to the user's selection.
4. The shooting composition guiding method according to claim 1, wherein step S3 specifically comprises:
S31, loading composition guidance or voice guidance on the shooting preview interface according to the final composition scheme;
S32, performing composition effect evaluation on the picture information of the current shooting preview interface according to the final composition scheme, scoring the visual weight, size ratio, placement, image fit, and picture balance of the main subject and auxiliary objects, and normalizing the scores to obtain and display the evaluation score.
5. The shooting composition guiding method according to claim 1, further comprising the step of:
S4, capturing the picture or reselecting a composition scheme according to the user's request, and performing step S3 again.
6. A shooting composition guidance terminal comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1, determining a shooting subject according to the user's selection, acquiring picture information from the shooting preview interface, and performing semantic recognition on the picture information to obtain subject semantic data in the picture;
S2, performing inference with a generative adversarial network model according to the shooting subject and the subject semantic data to obtain a composition scheme and a composition index evaluation algorithm;
S3, loading composition guidance on the shooting preview interface according to the composition scheme, performing composition effect evaluation according to the composition index evaluation algorithm after the user completes the composition following the guidance, and giving an evaluation score, so that the user can adjust the composition or shoot according to the evaluation score.
7. The shooting composition guidance terminal according to claim 6, wherein step S2 specifically comprises:
S21, displaying the identified subject semantic data for the user to confirm, manually modify, or request re-recognition;
S22, parsing and packaging the shooting subject and the subject semantic data confirmed or modified by the user to obtain composition structured data;
S23, starting a composition scheme engine, performing generative adversarial network model inference, uniformly quantifying the composition methods in the composition system, and generating a set of composition schemes and corresponding composition index evaluation algorithms.
8. The shooting composition guidance terminal according to claim 7, wherein, when executing the computer program, the processor further implements in step S2:
S24, traversing the composition metadata in each composition scheme, and fitting and drawing the picture content according to the composition metadata to form a scheme preview;
S25, fitting, packaging, and displaying the composition schemes, scheme previews, and composition index evaluation algorithms, and obtaining the final composition scheme and composition index evaluation algorithm according to the user's selection.
9. The shooting composition guidance terminal according to claim 6, wherein step S3 specifically comprises:
S31, loading composition guidance or voice guidance on the shooting preview interface according to the final composition scheme;
S32, performing composition effect evaluation on the picture information of the current shooting preview interface according to the final composition scheme, scoring the visual weight, size ratio, placement, image fit, and picture balance of the main subject and auxiliary objects, and normalizing the scores to obtain and display the evaluation score.
10. The shooting composition guidance terminal according to claim 6, wherein, when executing the computer program, the processor further implements the step of:
S4, capturing the picture or reselecting a composition scheme according to the user's request, and performing step S3 again.
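The weighted scoring and normalization step described in claims 4 and 9 (scoring criteria such as size ratio, placement, image fit, and picture balance, then normalizing) might look like the following minimal sketch; the criterion names, the 0-1 raw-score range, and the equal default weights are illustrative assumptions, not taken from the patent:

```python
def evaluate_composition(metrics: dict, weights: dict = None) -> float:
    """Combine per-criterion scores into one normalized evaluation score.

    `metrics` maps a criterion name (e.g. "size_ratio", "placement",
    "image_fit", "balance") to a raw score assumed to lie in [0, 1].
    """
    if weights is None:
        weights = {k: 1.0 for k in metrics}       # equal weighting by default
    total = sum(weights[k] * metrics[k] for k in metrics)
    max_total = sum(weights[k] for k in metrics)  # best case: every raw score is 1.0
    return round(100 * total / max_total, 1)      # normalize to a 0-100 scale
```

For example, `evaluate_composition({"size_ratio": 0.9, "placement": 0.7})` yields 80.0, which could then be shown on the preview interface as the evaluation score.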
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310932356.8A CN116955680A (en) | 2021-07-29 | 2021-07-29 | Shooting composition scheme generation method and terminal |
CN202110863897.0A CN113704526B (en) | 2021-07-29 | 2021-07-29 | Shooting composition guiding method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110863897.0A CN113704526B (en) | 2021-07-29 | 2021-07-29 | Shooting composition guiding method and terminal |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310932356.8A Division CN116955680A (en) | 2021-07-29 | 2021-07-29 | Shooting composition scheme generation method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113704526A true CN113704526A (en) | 2021-11-26 |
CN113704526B CN113704526B (en) | 2023-08-04 |
Family
ID=78650899
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110863897.0A Active CN113704526B (en) | 2021-07-29 | 2021-07-29 | Shooting composition guiding method and terminal |
CN202310932356.8A Pending CN116955680A (en) | 2021-07-29 | 2021-07-29 | Shooting composition scheme generation method and terminal |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310932356.8A Pending CN116955680A (en) | 2021-07-29 | 2021-07-29 | Shooting composition scheme generation method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113704526B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100329552A1 (en) * | 2009-06-24 | 2010-12-30 | Samsung Electronics Co., Ltd. | Method and apparatus for guiding user with suitable composition, and digital photographing apparatus |
CN103929596A (en) * | 2014-04-30 | 2014-07-16 | 深圳市中兴移动通信有限公司 | Method and device for guiding shooting picture composition |
CN105991925A (en) * | 2015-03-06 | 2016-10-05 | 联想(北京)有限公司 | Scene composition indicating method and indicating device |
CN109218609A (en) * | 2018-07-23 | 2019-01-15 | 麒麟合盛网络技术股份有限公司 | Image composition method and device |
CN111327829A (en) * | 2020-03-09 | 2020-06-23 | Oppo广东移动通信有限公司 | Composition guiding method, composition guiding device, electronic equipment and storage medium |
CN111757012A (en) * | 2020-07-16 | 2020-10-09 | 盐城工学院 | Image processing method based on combination of individual and photographic aesthetics |
- 2021-07-29 CN CN202110863897.0A patent/CN113704526B/en active Active
- 2021-07-29 CN CN202310932356.8A patent/CN116955680A/en active Pending
Non-Patent Citations (1)
Title |
---|
ZUO Xiangmei, WU Liang: "Scene Object Recognition and Matching Based on Depth Image Segmentation", Engineering and Technological Research, vol. 4, no. 17, pp. 219-221 *
Also Published As
Publication number | Publication date |
---|---|
CN113704526B (en) | 2023-08-04 |
CN116955680A (en) | 2023-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7259785B2 (en) | Digital imaging method and apparatus using eye-tracking control | |
CN104346788B (en) | Image splicing method and device | |
US20080025616A1 (en) | Fast multiple template matching using a shared correlation map | |
CN108628061B (en) | Self-adaptive automatic focusing method and device for industrial camera | |
CN107423306B (en) | Image retrieval method and device | |
CN109445465A (en) | Method for tracing, system, unmanned plane and terminal based on unmanned plane | |
CN108846807A (en) | Light efficiency processing method, device, terminal and computer readable storage medium | |
CN106651755A (en) | Panoramic image processing method and device for terminal and terminal | |
CN106815803B (en) | Picture processing method and device | |
WO2016200734A1 (en) | Optimizing capture of focus stacks | |
CN114241277A (en) | Attention-guided multi-feature fusion disguised target detection method, device, equipment and medium | |
CN112446322A (en) | Eyeball feature detection method, device, equipment and computer-readable storage medium | |
CN113704526A (en) | Shooting composition guiding method and terminal | |
US20150085159A1 (en) | Multiple image capture and processing | |
CN117095006B (en) | Image aesthetic evaluation method, device, electronic equipment and storage medium | |
CN111081286B (en) | Video editing system for artificial intelligence teaching | |
CN112241940B (en) | Fusion method and device for multiple multi-focus images | |
KR20190114739A (en) | method AND DEVICE for processing Image | |
CN108965743A (en) | Image synthesizing method, device and readable storage medium storing program for executing based on the segmentation of front and back scape | |
CN116311515A (en) | Gesture recognition method, device, system and storage medium | |
CN112449115A (en) | Shooting method and device and electronic equipment | |
CN106506971A (en) | A kind of focusing method and mobile terminal | |
CN105893578A (en) | Method and device for selecting photos | |
CN109151314B (en) | Camera blurring processing method and device for terminal, storage medium and terminal | |
Cao et al. | Automatic image cropping via the novel saliency detection algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||