CN113643376A - Camera view angle generation method and device, computing device and storage medium

Info

Publication number
CN113643376A
Authority
CN
China
Prior art keywords
camera
camera view
scene
view angle
template
Prior art date
Legal status
Granted
Application number
CN202110790390.7A
Other languages
Chinese (zh)
Other versions
CN113643376B (en)
Inventor
黄茜茜
陈丰
何剑丰
邓曦澄
何迅
蔡文彬
Current Assignee
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202110790390.7A
Publication of CN113643376A
Application granted
Publication of CN113643376B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation

Abstract

The invention discloses a camera view angle generation method and apparatus, a computing device, and a storage medium. The method comprises the following steps: acquiring a scene effect image, and extracting and recording core subject information and camera view angle information from the scene effect image; determining the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screening and classifying according to the relative positions and camera view angles to determine camera view angle styles, and associating each camera view angle style with a corresponding effect image as a camera view angle template; performing spatial region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result; and matching the scene segmentation result against the template segmentation result to obtain a matching result, and selecting, according to the matching result, a camera view angle template with a high matching rate and its corresponding camera view angle style as a recommendation. A suitable camera view angle can thus be generated automatically according to the design scene type, improving design efficiency.

Description

Camera view angle generation method and device, computing device and storage medium
Technical Field
The invention belongs to the technical field of scene design, and particularly relates to a camera view angle generation method and apparatus, a computing device, and a storage medium.
Background
A scene design process is one in which a designer builds a target scene with design tool software. During design, the designer judges the design effect by observing the target scene from a certain angle, and an effect image can be obtained through the view-finding function provided by the design tool. So-called view finding places a virtual camera in the scene and captures an effect image of the current view angle through snapshot, rendering, or similar techniques. The current camera view-finding process has the following problems:
Problem 1: non-professional designers are often unsure which composition and framing best represent a scene, so they must master many composition skills, which incurs learning costs and a certain barrier to entry.
Problem 2: even after a professional designer determines an approximate view angle, he or she must spend time repeatedly fine-tuning within that view to highlight the core subject, which is inefficient and consumes a large amount of adjustment cost.
Problem 3: designers generally have personal habits in composition and framing, and the resulting compositions do not always match the aesthetics of different user groups. Even when a designer actively learns new techniques and styles, the range of personalized results remains limited by time and energy. Meeting a wide variety of composition and framing needs is therefore also valuable under the big-data internet platform trend.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide a camera view angle generation method and apparatus, a computing device, and a storage medium, which automatically generate a suitable camera view angle according to the design scene type and thereby improve design efficiency.
In a first aspect, an embodiment provides a camera view angle generation method, including the following steps:
acquiring a scene effect image, and extracting and recording core subject information and camera view angle information from the scene effect image;
determining the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screening and classifying according to the relative positions and camera view angles to determine camera view angle styles, and associating each camera view angle style with a corresponding effect image as a camera view angle template;
performing spatial region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
and matching the scene segmentation result against the template segmentation result to obtain a matching result, and selecting, according to the matching result, a camera view angle template with a high matching rate and its corresponding camera view angle style as a recommendation.
In a second aspect, an embodiment provides a camera view angle generation apparatus, including:
an acquisition module, configured to acquire a scene effect image, and extract and record core subject information and camera view angle information from the scene effect image;
a generation module, configured to determine the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screen and classify according to the relative positions and camera view angles to establish camera view angle styles, and associate each camera view angle style with a corresponding effect image as a camera view angle template;
a segmentation module, configured to perform spatial region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
and a matching recommendation module, configured to match the scene segmentation result against the template segmentation result to obtain a matching result, and select, according to the matching result, a camera view angle template with a high matching rate and its corresponding camera view angle style as a recommendation.
In a third aspect, an embodiment provides a computing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the camera view angle generation method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the camera view angle generation method according to the first aspect.
The technical solutions provided by the embodiments have at least the following beneficial effects:
various camera view angle styles and corresponding camera view angle templates are constructed by extracting core subject information and camera view angle information from scene effect images, ensuring the accuracy and diversity of camera view angles; the camera view angle is determined and recommended by matching the segmentation results of the target scene and the camera view angle template, which improves the design efficiency of camera view angles and gives the method wide applicability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a camera view angle generation method according to an embodiment;
FIGS. 2-4 are schematic diagrams of camera view angle style determination according to an embodiment;
FIG. 5 is a flowchart of a camera view angle generation method according to an embodiment;
FIG. 6 is a schematic structural diagram of a camera view angle generation apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to illustrate the invention and not to limit its scope.
In order to improve the universality and efficiency of design, embodiments of the invention provide a camera view angle generation method and apparatus, a computing device, and a storage medium. FIG. 1 is a flowchart of the camera view angle generation method according to an embodiment. As shown in FIG. 1, the method includes the following steps:
s110, acquiring a scene effect graph, and extracting and recording core main body information and camera view angle information of the scene effect graph.
Design platforms in different industries, such as interior design, commercial decoration, and architectural design, store large numbers of effect images of indoor and outdoor scenes designed by skilled designers. These effect images may be renderings and already contain finished core subject information and camera view angle information, so effect images submitted by designers can be extracted from a design platform periodically as the basic data for constructing camera view angle styles. For each obtained scene effect image, an image recognition technique or a rendering-scene reconstruction algorithm can be used to extract and record the core subject information and the camera view angle information.
In the embodiments, entities that are generally essential to a scene and large in size are used as core subjects, serving as the core or basic materials of an effect image. The choice of core subject depends on the scene type: for a living room scene, a large sofa may be the core subject; for a bedroom scene, the bed may be the core subject. The core subject may also simply be the target subject of primary interest in the design. In general, the target subject of interest or a representative large entity can serve as the core subject. Core subject information includes the three-dimensional shape, size, and spatial position of the core subject, and is associated with the subject's business information; that is, the core subject information is recorded under the business name the core subject represents. For example, a large cuboid placed against a wall of a living room is associated with the business name "sofa". In an embodiment, the relative positional relationship between core subjects is determined from their spatial positions, and this relationship can serve as a reference for determining the camera view angle style.
In an embodiment, camera view angle information is the information containing camera attribute parameters, and may include the camera position, camera view, composition ratio, cropping, and the like. The relative position within the scene space contour is determined from the camera position and the spatial position of the core subject, and this relative position is used as the basic data for determining the camera view angle style.
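As an illustrative, non-limiting sketch, the extracted records can be pictured as the following Python structures; all field names here are assumptions chosen for illustration, not the data model of the embodiments.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CoreSubjectInfo:
    # Hypothetical fields: the embodiments record the core subject's
    # three-dimensional shape, size, spatial position, and business name.
    business_name: str                       # e.g. "sofa" for a living room
    size: Tuple[float, float, float]         # width, depth, height (meters)
    position: Tuple[float, float, float]     # center point in scene coordinates

@dataclass
class CameraViewInfo:
    # Hypothetical fields mirroring the attributes listed above.
    position: Tuple[float, float, float]       # camera position in the scene
    view_direction: Tuple[float, float, float]
    composition_ratio: float                   # e.g. subject area / frame area
    crop: Tuple[float, float, float, float]    # left, top, right, bottom
```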
S120, determining the camera view angle style according to the core subject information and the camera view angle information, and associating the corresponding camera view angle template.
In the embodiments, the core subject information and the camera view angle information are modeled; records that show similar features are grouped into a camera view angle style through screening and classification, and a typical effect image matching each view angle style is used as its camera view angle template.
In one embodiment, screening and classifying according to the relative positions and camera view angles to establish the camera view angle style comprises the following three cases (a rule-based sketch of this classification follows the three cases below):
as shown in fig. 2, when the camera and the core subject are both located at the relative center line position of the scene space, and the camera view angle is directly opposite to the core subject, the scene is classified as a spatially symmetric style. The relative center line position is a position having a certain constraint distance from the center line of the scene space, and the constraint distance can be customized, for example, if the constraint distance is defined as 10cm, the positions within 10cm from the center line are considered to be the relative center line positions. When the camera and the core main body are located at the position of the relative central line of the scene space, and the camera view angle is over against the core main body, the camera is considered to be in a space symmetry style, and when the camera and the core main body are arranged in the space symmetry style, the central point of the short side of the space is used as the central point of the camera view target to carry out layout design.
As shown in fig. 3, when the camera is located on the relative centerline of the scene space, the core subject lies to either side of the relative centerline, and the camera view angle directly faces the core subject, the scene is classified as an oblique angle style. In an oblique angle layout, the target space is framed from the oblique direction while the entire core area is kept within the field of view.
As shown in fig. 4, when the camera coincides with the center of the core subject and the camera view angle is in the horizontal or vertical direction, the scene is classified as a subject style. In a subject style layout, the center point of the target furniture is selected as the camera center, and the layout is made in the vertical or horizontal direction.
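Read together, the three cases amount to a small rule-based classifier. The following Python sketch is one possible, non-limiting reading on a 2D floor plan; the tolerance value, the facing test, and the function signature are illustrative assumptions rather than the implementation of the embodiments.

```python
import math

def classify_view_style(camera_xy, camera_dir, subject_xy,
                        centerline_x, tolerance=0.10):
    """Classify a camera/core-subject layout into one of the three styles.

    camera_xy, subject_xy: (x, y) floor-plan positions.
    camera_dir: unit vector of the camera view direction.
    centerline_x: x coordinate of the scene space centerline.
    tolerance: constraint distance in meters; 10 cm follows the example
               constraint distance given above.
    """
    cam_on_center = abs(camera_xy[0] - centerline_x) <= tolerance
    subj_on_center = abs(subject_xy[0] - centerline_x) <= tolerance

    # "Directly facing": the view direction points roughly at the subject.
    dx, dy = subject_xy[0] - camera_xy[0], subject_xy[1] - camera_xy[1]
    dist = math.hypot(dx, dy)
    facing = dist > 0 and (dx * camera_dir[0] + dy * camera_dir[1]) / dist > 0.95

    if cam_on_center and subj_on_center and facing:
        return "spatially symmetric"
    if cam_on_center and not subj_on_center and facing:
        return "oblique angle"
    # Camera coincides with the subject center and the view is axis-aligned.
    if dist <= tolerance and (abs(camera_dir[0]) > 0.99 or abs(camera_dir[1]) > 0.99):
        return "subject"
    return "unclassified"
```

For example, a camera on the centerline looking straight at a sofa that is also on the centerline returns "spatially symmetric", matching fig. 2.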
S130, performing spatial region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result.
To enable fast matching between the target scene and the camera view angle template, both are first segmented into spatial regions. A k-d tree algorithm can be used for this segmentation, and the resulting target scene segmentation result and template segmentation result are each managed by a k-d tree.
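A minimal sketch of this step, using scipy.spatial.KDTree as a stand-in (the embodiments do not name a library); reducing each scene to the 3D center points of its entities is likewise an assumption made for illustration.

```python
import numpy as np
from scipy.spatial import KDTree

# Hypothetical inputs: each scene reduced to the 3D center points of its
# entities, one row per entity.
target_scene_points = np.array([
    [2.0, 1.5, 0.4],   # sofa center
    [2.0, 3.0, 0.3],   # coffee table center
    [0.1, 2.0, 1.0],   # television on the wall
])
template_points = np.array([
    [1.8, 1.4, 0.4],
    [1.9, 2.8, 0.3],
    [0.0, 1.9, 1.1],
])

# Each segmentation result is "managed by a k-d tree": the tree recursively
# splits its point set along alternating axes into spatial regions.
scene_tree = KDTree(target_scene_points)
template_tree = KDTree(template_points)

# Nearest-neighbor queries against the template tree give, for every entity
# in the target scene, the distance to its closest counterpart in the template.
distances, indices = template_tree.query(target_scene_points)
print(distances.mean())   # one crude ingredient for the matching step below
```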
S140, matching the scene segmentation result against the template segmentation result to obtain a matching result, and selecting, according to the matching result, a camera view angle template with a high matching rate and its corresponding camera view angle style as a recommendation.
In an embodiment, matching the scene segmentation result and the template segmentation result includes: matching along three dimensions, namely the core subject matching degree, the average spatial distance difference, and the spatial position of the core subject, based on the scene segmentation result and the template segmentation result, and taking the weighted sum of the three dimensions' matching results as the final matching result. The scene and template segmentation results, both managed directly by k-d trees, are matched along the three dimensions and then weighted; the weighting factors are configurable. The camera view angle styles corresponding to the top n camera view angle templates by final weighted score are then selected as recommendations.
For the core subject matching degree, the degree of match between the core subject categories of the target scene and those of the camera view angle template is calculated as the matching result. This result mainly describes how well the core subjects match. Specifically, the number of core subject categories of the same type can be counted from the business information associated with the core subjects and used as the matching degree. For example, when both the target scene and the camera view angle template contain a sofa and a television, the two categories are considered matched and the matching degree may be 2.
For the average spatial distance difference, the distance between the space contour of the target scene and the space contour of the camera view angle template is calculated as the matching result. This result mainly describes how well the spaces match. For example, if the target scene is a bedroom and the camera view angle template is a dining room, the Euclidean distance between the bedroom's polygon contour and the dining room's contour can be calculated as the matching result.
For the spatial position of the core subject, the diff value between the (top, bottom, left, right) distance group of the target scene's core subject from the target scene's space contour and the (top, bottom, left, right) distance group of the template's core subject from the template's space contour is calculated as the matching result. This result mainly describes how well the core subjects match in spatial position.
In the embodiments, the matching degree between the target scene and the camera view angle template is calculated comprehensively from the three dimensions above, so the matching result used as the basis for recommendation is more accurate. After the matching results are obtained, the leading camera view angle templates, ordered by matching rate from high to low, are selected for recommendation.
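A non-limiting sketch of the weighted scoring over the three dimensions; the weight values, the dict keys, and the sign conventions (distances negated so that larger scores are better) are illustrative assumptions.

```python
def match_score(scene, template, contour_distance, weights=(0.5, 0.25, 0.25)):
    """Weighted sum over the three matching dimensions.

    scene / template: dicts with hypothetical keys
      "categories":      set of core-subject business names, e.g. {"sofa", "tv"}
      "subject_offsets": (top, bottom, left, right) distances of the core
                         subject from the space contour
    contour_distance: mean Euclidean distance between the two space contours,
                      computed beforehand (e.g. via the k-d tree queries above).
    """
    w_cat, w_space, w_pos = weights

    # Dimension 1: count of shared core-subject categories (higher is better).
    category_match = len(scene["categories"] & template["categories"])

    # Dimension 2: average spatial distance difference (lower is better).
    space_term = -contour_distance

    # Dimension 3: diff between the two (top, bottom, left, right) groups.
    pos_diff = sum(abs(a - b) for a, b in
                   zip(scene["subject_offsets"], template["subject_offsets"]))

    return w_cat * category_match + w_space * space_term + w_pos * (-pos_diff)
```

Candidate templates would then be sorted by this score and the top n recommended.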
In practical applications, to improve computational efficiency and reduce overhead, a multi-level caching technique is used to cache the template segmentation results for the next match against a scene segmentation result.
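One possible, non-limiting reading of the multi-level cache is a small in-process cache in front of a slower shared store; the embodiments do not specify the cache design, so the structure below is an assumption.

```python
from collections import OrderedDict

class TemplateSegmentationCache:
    """In-process LRU cache (level 1) for template segmentation results.

    On a miss it falls through to `compute`, a stand-in for the slower
    path (a level-2 shared cache, or re-running the k-d tree segmentation).
    """
    def __init__(self, compute, capacity=256):
        self._compute = compute
        self._capacity = capacity
        self._entries = OrderedDict()

    def get(self, template_id):
        if template_id in self._entries:
            self._entries.move_to_end(template_id)   # mark most recently used
            return self._entries[template_id]
        result = self._compute(template_id)          # slow path on a miss
        self._entries[template_id] = result
        if len(self._entries) > self._capacity:
            self._entries.popitem(last=False)        # evict least recently used
        return result
```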
As shown in fig. 5, the camera view angle generation method of the embodiment further includes:
S150, collecting usage statistics of the camera view angle templates, making personalized camera view angle template recommendations to the user according to each template's usage, and at the same time increasing the template's weighting in the next matching pass.
In the embodiments, the usage of the camera view angle templates is monitored, collated, and collected; a template's usage volume is treated as the user's preference value, and personalized template recommendations are made according to that preference. In the next matching pass, the weighting factor of templates in the user's preferred camera view angle styles is increased, achieving dynamic, personalized recommendation.
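A minimal sketch of the usage-driven weight update; the boost constant, the cap, and the dict-based bookkeeping are illustrative assumptions rather than the mechanism of the embodiments.

```python
def update_style_weight(style_weights, used_style, boost=0.1, cap=2.0):
    """Raise the weighting factor of the style the user just applied.

    style_weights: dict mapping camera view angle style -> weighting factor,
                   used as a multiplier on the match score in the next pass.
    """
    current = style_weights.get(used_style, 1.0)
    style_weights[used_style] = min(current + boost, cap)

# Example: the user applies two templates of the oblique angle style.
weights = {"spatially symmetric": 1.0, "oblique angle": 1.0, "subject": 1.0}
update_style_weight(weights, "oblique angle")
update_style_weight(weights, "oblique angle")
print(weights["oblique angle"])   # 1.2 -- boosted for the next matching pass
```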
In the camera view angle generation method provided by the embodiments, various camera view angle styles and corresponding templates are constructed by extracting core subject information and camera view angle information from scene effect images, ensuring the accuracy and diversity of camera view angles. Camera view angles are determined and recommended by matching the segmentation results of the target scene and the camera view angle templates, which improves design efficiency and enables per-user personalized pushing of camera view angles. The method also adapts to individual designers: it infers each designer's preferred composition styles from their behavior and dynamically updates the recommended templates. It therefore has strong applicability, suiting not only interior decoration but also outdoor, commercial decoration, spatial architecture, and other fields, with broad practical value.
Fig. 6 is a schematic structural diagram of a camera view angle generation apparatus according to an embodiment. As shown in fig. 6, an embodiment provides a camera view angle generation apparatus 600, including:
an acquisition module 610, configured to acquire a scene effect image, and extract and record core subject information and camera view angle information from the scene effect image;
a generation module 620, configured to determine the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screen and classify according to the relative positions and camera view angles to determine camera view angle styles, and associate each camera view angle style with a corresponding effect image as a camera view angle template;
a segmentation module 630, configured to perform spatial region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
and a matching recommendation module 640, configured to match the scene segmentation result against the template segmentation result to obtain a matching result, and select, according to the matching result, the camera view angle style corresponding to a camera view angle template with a high matching rate as a recommendation.
It should be noted that the division into the functional modules above is only an example for when the camera view angle generation apparatus automatically generates a camera view angle; in practice, the functions may be distributed among different functional modules as needed, that is, the internal structure of the terminal or server may be divided into different functional modules to complete all or part of the functions described above. In addition, the camera view angle generation apparatus and the camera view angle generation method provided by the embodiments belong to the same concept; their specific implementation is detailed in the method embodiments and is not repeated here.
An embodiment also provides a computing device including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above camera view angle generation method when executing the computer program, including the following steps:
s110, acquiring a scene effect graph, and extracting and recording core main body information and camera visual angle information of the scene effect graph;
s120, determining the relative positions of the camera and the core main body in the scene space contour according to the scene space contour, the core main body information and the camera view angle information, screening and classifying according to the relative positions and the camera view angles to determine the camera view angle style, and associating an effect image corresponding to the camera view angle style as a camera view angle template;
s130, carrying out space region segmentation on the target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
s140, matching the scene segmentation result and the template segmentation result, obtaining a matching result, and screening a camera view angle template with a high matching rate and a corresponding camera view angle style as recommendations according to the matching result;
s150, collecting the using condition of the camera view angle template, carrying out personalized recommendation on the camera view angle template for the user according to the using amount of the camera view angle template, and meanwhile, improving the using weight of the camera view angle template in the next matching process.
An embodiment also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the camera view angle generation method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments, are not intended to limit the invention, and any modifications, supplements, equivalents, and the like made within the spirit and principles of the invention should be included in its scope of protection.

Claims (11)

1. A camera view angle generation method, comprising the steps of:
acquiring a scene effect image, and extracting and recording core subject information and camera view angle information from the scene effect image;
determining the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screening and classifying according to the relative positions and camera view angles to determine camera view angle styles, and associating each camera view angle style with a corresponding effect image as a camera view angle template;
performing spatial region segmentation on a target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
and matching the scene segmentation result against the template segmentation result to obtain a matching result, and selecting, according to the matching result, a camera view angle template with a high matching rate and its corresponding camera view angle style as a recommendation.
2. The camera view angle generation method according to claim 1, wherein the core subject information includes the three-dimensional spatial shape, size, and spatial position of the core subject, and the relative positional relationship between core subjects is determined from their spatial positions;
the camera view angle information includes a camera position, a camera view, a composition ratio, and cropping, and the relative position within the scene space contour is determined from the camera position and the spatial position of the core subject.
3. The camera view angle generation method according to claim 1, wherein the screening and classifying according to the relative positions and camera view angles to determine the camera view angle style comprises:
when the camera and the core subject are both located on the relative centerline of the scene space and the camera view angle directly faces the core subject, classifying the scene as a spatially symmetric style;
when the camera is located on the relative centerline of the scene space, the core subject lies to either side of the relative centerline, and the camera view angle directly faces the core subject, classifying the scene as an oblique angle style;
when the camera coincides with the center of the core subject and the camera view angle is in the horizontal or vertical direction, classifying the scene as a subject style.
4. The camera view angle generation method according to claim 1, wherein a k-d tree algorithm is used to perform the spatial region segmentation on the target scene and the camera view angle template, and the obtained target scene segmentation result and template segmentation result are each managed by a k-d tree.
5. The camera view angle generation method according to claim 1, wherein the matching of the scene segmentation result and the template segmentation result comprises: matching along three dimensions, namely the core subject matching degree, the average spatial distance difference, and the spatial position of the core subject, based on the scene segmentation result and the template segmentation result, and taking the weighted sum of the three dimensions' matching results as the final matching result.
6. The camera view angle generation method according to claim 5, wherein, for the core subject matching degree, the degree of match between the core subject categories of the target scene and those of the camera view angle template is calculated as the matching result;
for the average spatial distance difference, the distance between the space contour of the target scene and the space contour of the camera view angle template is calculated as the matching result;
and for the spatial position of the core subject, the diff value between the (top, bottom, left, right) distance group of the target scene's core subject from the target scene's space contour and the (top, bottom, left, right) distance group of the template's core subject from the template's space contour is calculated as the matching result.
7. The camera view angle generation method according to claim 1, wherein a multi-level caching technique is employed to cache the template segmentation result for the next matching with a scene segmentation result.
8. The camera view angle generation method according to any one of claims 1 to 7, further comprising: collecting usage statistics of the camera view angle templates, making personalized camera view angle template recommendations to the user according to each template's usage, and at the same time increasing the template's weighting in the next matching pass.
9. A camera view angle generation apparatus, comprising:
an acquisition module, configured to acquire a scene effect image, and extract and record core subject information and camera view angle information from the scene effect image;
a generation module, configured to determine the relative positions of the camera and the core subject within the scene space contour according to the scene space contour, the core subject information, and the camera view angle information, screen and classify according to the relative positions and camera view angles to establish camera view angle styles, and associate each camera view angle style with a corresponding effect image as a camera view angle template;
a segmentation module, configured to perform spatial region segmentation on a target scene and the camera view angle template to obtain a target scene segmentation result and a template segmentation result;
and a matching recommendation module, configured to match the scene segmentation result against the template segmentation result to obtain a matching result, and select, according to the matching result, the camera view angle style corresponding to a camera view angle template with a high matching rate as a recommendation.
10. A computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the camera view angle generation method of any one of claims 1 to 7 when executing the computer program.
11. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the camera view angle generation method of any one of claims 1 to 7.

Priority Applications (1)

Application Number: CN202110790390.7A (granted as CN113643376B)
Priority Date: 2021-07-13
Title: Camera view angle generation method, device, computing equipment and storage medium

Publications (2)

Publication Number: CN113643376A, published 2021-11-12
Publication Number: CN113643376B, published 2024-05-03


Citations (9)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN101854482B * | 2009-03-11 | 2012-09-05 | 索尼公司 | Image pickup apparatus and control method for the same
US20170243352A1 * | 2016-02-18 | 2017-08-24 | Intel Corporation | 3-dimensional scene analysis for augmented reality operations
WO2018086262A1 * | 2016-11-08 | 2018-05-17 | 华为技术有限公司 | Method for acquiring photographing reference data, mobile terminal and server
US20180213145A1 * | 2017-01-25 | 2018-07-26 | International Business Machines Corporation | Preferred picture taking
CN108513073A * | 2018-04-13 | 2018-09-07 | 朱钢 | Implementation method for a mobile phone photographing function with photographer's composition awareness
CN108989670A * | 2018-07-18 | 2018-12-11 | 奇酷互联网络科技(深圳)有限公司 | Method and apparatus for a mobile terminal to guide a user in taking pictures
CN110336945A * | 2019-07-09 | 2019-10-15 | 上海泰大建筑科技有限公司 | Intelligent assisted composition method and system
CN110430359A * | 2019-07-31 | 2019-11-08 | 北京迈格威科技有限公司 | Shooting assistance method and apparatus, computer device and storage medium
CN111343382A * | 2020-03-09 | 2020-06-26 | Oppo广东移动通信有限公司 | Photographing method and apparatus, electronic device and storage medium



Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant