CN114697545B - Mobile photographing system and photographing composition control method - Google Patents


Info

Publication number: CN114697545B (earlier publication: CN114697545A)
Application number: CN202110470014.XA
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, processing device, interest, region, image data
Inventors: 陈国睿, 林子扬, 郭慧冰
Assignee: Industrial Technology Research Institute (ITRI)
Legal status: Active (granted)


Classifications

    • H04N23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/611 — Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N23/695 — Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N5/76 — Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)

Abstract

The present application provides a mobile photography system comprising a carrier, an image capturing device, a storage device, and a processing device. The image capturing device is mounted on the carrier and generates a first image. The storage device stores a plurality of image data. The processing device obtains feature information of a target object in the first image and compares the first image with the plurality of image data according to the feature information, so as to select a reference image from the plurality of image data. The processing device then generates movement information according to the first image and the reference image, and the carrier moves according to the movement information to adjust the shooting position of the image capturing device and generate a second image.

Description

Mobile photographing system and photographing composition control method
Technical Field
The present application relates to a photography composition control technology, and more particularly, to a mobile photography system and a photography composition control method.
Background
With the rapid progress of technology, cameras have become standard equipment on mobile phones, and photography scenarios have diversified accordingly; social networking sites, self-portraits (selfies), and live streaming are all popular trends. To achieve a better self-portrait, many people use a selfie stick to assist in shooting, but a selfie stick is limited by the reach of the lens, which causes problems such as the subject being shot at too close a range, distortion at the edges of the frame, the selfie stick itself appearing in the shot, and the inability to fit all subjects into the frame. Alternatively, a tripod can be used to solve the distance problem, but a tripod is restricted to a fixed shooting angle that cannot be adjusted at any time during shooting.
To avoid being limited by distance and space while shooting, more and more people have in recent years used unmanned aerial vehicles (drones) for self-portraits. Conventionally, however, a user taking a self-portrait with a drone must spend considerable time manually adjusting the shooting position and angle to obtain a satisfactory composition. How to use a drone for self-portraits more effectively and achieve an ideal composition is therefore an active research topic.
Disclosure of Invention
In view of the above-described problems of the prior art, embodiments of the present application provide a mobile photographing system and a photographing composition control method.
According to an embodiment of the present application, a mobile photography system is provided. The mobile photography system comprises a carrier, an image capturing device, a storage device, and a processing device. The image capturing device is mounted on the carrier and generates a first image. The storage device stores a plurality of image data. The processing device obtains feature information of a target object in the first image and compares the first image with the plurality of image data according to the feature information, so as to select a reference image from the plurality of image data. The processing device then generates movement information according to the first image and the reference image, and the carrier moves according to the movement information to adjust the shooting position of the image capturing device and generate a second image.
In some embodiments, the characteristic information may include human characteristic information, salient characteristic information, or environmental characteristic information.
In some embodiments, the processing device obtains the human feature information according to a pedestrian detection algorithm, a face detection algorithm, or a skeleton detection algorithm.
In some embodiments, the processing device obtains the salient feature information according to a salient detection algorithm.
In some embodiments, the processing device obtains the environmental characteristic information according to an environmental detection algorithm.
In some embodiments, the processing device calculates, according to the feature information, the similarity between a plurality of skeletons of the target object and the corresponding skeletons in the plurality of image data, thereby comparing the first image with the plurality of image data, and selects the image data with the highest similarity as the reference image. In this embodiment, the skeletons may correspond to different weight values.
In some embodiments, the processing device calculates an area of the region of interest of the first image and an area of the region of interest of the reference image, and generates the movement information based on the area of the region of interest of the first image and the area of the region of interest of the reference image.
In some embodiments, the processing device adjusts the size of the second image according to the reference image.
According to an embodiment of the present application, a photographic composition control method suitable for a mobile photography system is provided. The photographic composition control method comprises the following steps: generating a first image by an image capturing device of the mobile photography system, wherein the image capturing device is mounted on a carrier; obtaining, by a processing device of the mobile photography system, feature information of a target object in the first image; comparing, by the processing device, the first image with a plurality of image data stored in a storage device of the mobile photography system according to the feature information, so as to select a reference image from the plurality of image data; generating, by the processing device, movement information according to the first image and the reference image; and moving the carrier according to the movement information to adjust the shooting position of the image capturing device and generate a second image.
Other features and advantages of the present application will be apparent to those of ordinary skill in the art from the disclosure above; the mobile photography system and photographic composition control method disclosed herein may be modified without departing from the spirit and scope of the application.
Drawings
FIG. 1 shows a block diagram of a mobile camera system 100 according to an embodiment of the application.
Fig. 2A-2B are schematic diagrams of a reference image and a first image according to an embodiment of the application.
Fig. 3 shows a schematic diagram of a human skeleton according to an embodiment of the application.
Fig. 4A-4C are schematic diagrams illustrating a reference image, a first image, and a second image according to an embodiment of the application.
Fig. 5 is a flowchart of a photographing composition control method according to an embodiment of the present application.
[ reference numerals description ]
100: mobile photographic system
110: carrier tool
120: image capturing apparatus
130: storage device
140: processing device
B1 to B14: skeleton frame
S1: first image
S2: reference image
S3: second image
P1: target object
P2: similar objects
S510-S550: step (a)
Detailed Description
The present application will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present application more apparent.
The description of the preferred embodiments in this section is for the purpose of illustrating the spirit of the application and is not to be construed as limiting the scope of the application, which is defined in the appended claims.
Fig. 1 shows a block diagram of a mobile camera system 100 according to an embodiment of the application. As shown in fig. 1, the mobile photographing system 100 may include a carrier 110, an image capturing device 120, a storage device 130, and a processing device 140. It is noted that the block diagram shown in fig. 1 is merely for convenience of illustrating an embodiment of the present application; the application is not limited to fig. 1, and the mobile camera system 100 may also include other components and devices. According to an embodiment of the application, the storage device 130 and the processing device 140 may both be disposed in the carrier 110. According to another embodiment of the present application, only the processing device 140 may be disposed in the carrier 110.
According to an embodiment of the application, the carrier 110 may be an unmanned aerial vehicle, a robot arm, or another device capable of three-dimensional movement, but the application is not limited thereto. The carrier 110 may carry the image capturing device 120 so as to adjust the shooting position of the image capturing device 120.
According to an embodiment of the application, the image capturing device 120 may be a camera. The image capturing device 120 may include a charge-coupled device (CCD) sensor, a complementary metal-oxide-semiconductor (CMOS) sensor, or another photosensitive device to capture images and videos.
According to the embodiment of the application, the storage device 130 may be a volatile memory (e.g., random access memory (RAM)), a non-volatile memory (e.g., flash memory or read-only memory (ROM)), a hard disk, or a combination thereof. In addition, according to another embodiment of the present application, the storage device 130 may be a cloud database. The storage device 130 may be used to store a plurality of image data. In one embodiment, the processing device 140 may obtain the plurality of image data directly from the storage device 130. In another embodiment, a communication device (not shown) of the mobile photographing system 100 may obtain the plurality of image data from the storage device 130, and the processing device 140 may then obtain them from the communication device.
According to the embodiment of the application, the processing device 140 may be a microprocessor, a microcontroller or an image processing chip, but the application is not limited thereto. The processing device 140 may be disposed on the carrier 110 or in a back-end computer (not shown).
According to an embodiment of the present application, when a user wants to capture a well-composed image, the image capturing device 120 mounted on the carrier 110 can first capture a target object to generate a first image. According to the embodiment of the application, the target object may be a portrait, a salient object in the viewfinder, or a scene, but the application is not limited thereto.
After the first image is generated, the processing device 140 may obtain the feature information of the object in the first image through a suitable feature extraction algorithm. That is, the processing device 140 may employ different feature extraction algorithms according to the attribute of the target object. According to an embodiment of the present application, the feature information may include human feature information, salient feature information, or environmental feature information.
According to an embodiment of the application, when the target object is a portrait, the processing device 140 may use a pedestrian detection algorithm (e.g., the Histogram of Oriented Gradients (HOG) algorithm or the YOLO (You Only Look Once) algorithm, but the application is not limited thereto), a face detection algorithm (e.g., the SSR-Net (Soft Stagewise Regression Network) algorithm, but the application is not limited thereto), or a skeleton detection algorithm (e.g., the OpenPose algorithm or the Move Mirror algorithm, but the application is not limited thereto) to obtain the feature information (i.e., human body feature information) of the target object in the first image (i.e., the portrait in the first image).
According to another embodiment of the present application, when the target object is a salient object in the viewfinder, the processing device 140 may use a salient object detection algorithm (e.g., a deep-learning method such as BASNet (Boundary-Aware Salient Object Detection) or U2-Net (going deeper with nested U-structure for salient object detection), but the application is not limited thereto) to obtain the feature information (i.e., salient object feature information) of the target object in the first image (i.e., the most salient object in the first image).
According to another embodiment of the present application, when the object is a scene, the processing device 140 may use an environment detection algorithm (e.g. scene analysis of a deep learning method such as PSANet (Point-wise Spatial Attention Network for Scene Parsing) or OCNet (Object Context Network for Scene Parsing), but the present application is not limited thereto) to obtain the feature information (i.e. the environmental feature information) of the object in the first image (i.e. the scene contained in the first image, e.g. mountain, sea or building, but the present application is not limited thereto).
According to an embodiment of the application, after the processing device 140 obtains the feature information of the object in the first image, the processing device 140 can compare the first image with each image data stored in the storage device 130 according to the feature information of the object, so as to select a reference image from the plurality of image data. Specifically, the processing device 140 can compare the object in the first image with the similar object corresponding to the object in each image data according to the feature information of the object, so as to obtain the similarity between the first image and each image data, and select the image data with the highest similarity with the first image as the reference image. Taking fig. 2A-2B as an example, the processing device 140 may select the reference image S2 (fig. 2A) with the highest similarity to the first image according to the feature information of the object P1 of the first image S1 (fig. 2B). That is, the similar object P2 corresponding to the object P1 in the reference image S2 has the most similar posture to the object P1.
According to an embodiment of the present application, if the processing device 140 uses a skeleton detection algorithm to obtain the feature information (e.g., skeleton information of the object) of the object in the first image, the processing device 140 calculates the similarity between each skeleton of the object in the first image and each skeleton of the similar object in each image data. The following description will take fig. 3 as an example. As shown in fig. 3, according to an embodiment of the present application, the skeleton of the human body can be divided into 14 parts, but the present application is not limited thereto. The processing device 140 can calculate the similarity between the 14 skeletons B1 to B14 of the object and each skeleton of the similar object in each image data according to a similarity formula. The similarity formula is as follows:
$$\text{similarity}=\sum_{n=1}^{14}\mu_n\,\frac{\vec{v}_n\cdot\vec{u}_n}{\lVert\vec{v}_n\rVert\,\lVert\vec{u}_n\rVert},\qquad \mu_n\propto\frac{1}{\lVert S_n-M\rVert}$$

wherein: μ_n represents the weight value corresponding to the nth skeleton of the target object; v⃗_n represents the vector value corresponding to the nth skeleton coordinates of the target object; u⃗_n represents the vector value of the nth skeleton coordinates of the similar object; M represents the coordinates of the center of the backbone of the target object; and S_n represents the coordinates of the nth skeleton of the target object. As the formula above shows, the closer a skeleton is to the center of the backbone, the larger its weight value. This embodiment takes the skeleton detection algorithm only as an example; the application is not limited thereto.
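As an illustrative sketch (not part of the original disclosure), a weighted skeleton similarity of this kind could be computed as follows. The function name, the cosine comparison of bone vectors, and the 1/(1 + d) weighting are assumptions chosen only to match the description that skeletons nearer the backbone center receive larger weights:

```python
import numpy as np

def skeleton_similarity(target_bones, ref_bones, target_joints, spine_center):
    """Weighted cosine similarity between two sets of skeleton vectors.

    target_bones / ref_bones: (N, 2) arrays of bone vectors (e.g. N = 14).
    target_joints: (N, 2) array of the target's skeleton coordinates S_n.
    spine_center: (2,) coordinate M of the target's backbone center.
    """
    # Bones closer to the backbone center get larger weights (assumed form).
    dist = np.linalg.norm(target_joints - spine_center, axis=1)
    weights = 1.0 / (1.0 + dist)
    weights /= weights.sum()  # normalize so the weights sum to 1
    # Cosine similarity of each corresponding pair of bone vectors.
    cos = np.sum(target_bones * ref_bones, axis=1) / (
        np.linalg.norm(target_bones, axis=1) * np.linalg.norm(ref_bones, axis=1))
    return float(np.sum(weights * cos))
```

With normalized weights, two identical poses score 1.0 and mirrored bone vectors score −1.0, so the image data maximizing this score would be chosen as the reference image.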
According to another embodiment of the present application, if the processing device 140 uses a salient object detection algorithm to obtain the feature information (i.e., salient object feature information) of the target object in the first image, the processing device 140 calculates the similarity between the salient object in the first image and the salient object in each image data, and selects the image data with the highest similarity to the first image as the reference image. In this embodiment, the processing device 140 may calculate the similarity according to a saliency difference formula. The saliency difference formula is as follows:

$$D=\sum_{i=1}^{p}\lVert a_i-b_i\rVert$$

wherein: p represents the total number of salient feature points, and the p salient feature points together constitute the complete salient feature information; a_i represents the coordinates of the ith salient feature point in the first image; and b_i represents the coordinates of the ith salient feature point in the image data. This embodiment takes the calculation of the difference between salient objects only as an example; the application is not limited thereto.
According to another embodiment of the present application, if the processing device 140 uses the environment detection algorithm to obtain the feature information (i.e. the environmental feature information) of the object (i.e. the scene included in the first image, such as a mountain, a sea or a building, but not limited thereto), the processing device 140 calculates the similarity between the scene included in the first image and the scene included in each image data, and selects the image data with the highest similarity to the first image as the reference image.
According to an embodiment of the application, after the processing device 140 obtains the reference image, the processing device 140 may obtain movement information according to the coordinates of the region of interest (ROI) of the first image and the region of interest of the reference image. In this embodiment, the region of interest of the first image may represent the target object in the first image, and the region of interest of the reference image may represent the corresponding similar object in the reference image. According to an embodiment of the present application, a spatial rectangular coordinate system O-XYZ is established, and the processing device 140 can compare the X-axis and Y-axis coordinates of the region of interest of the first image with those of the region of interest of the reference image to calculate the areas of the two regions of interest, and then calculate the amount of change between them (i.e., the movement information) from these areas. According to an embodiment of the application, the processing device 140 may calculate the area of the region of interest of the first image according to the following formula:

$$S_a=\frac{1}{2}\left|\sum_{i=0}^{q-1}\bigl(x_i\,y_{(i+1)\bmod q}-x_{(i+1)\bmod q}\,y_i\bigr)\right|$$

wherein: S_a represents the area of the region of interest of the first image; (x_0, y_0), (x_1, y_1), …, (x_{q−1}, y_{q−1}) are the coordinate points on the peripheral outline of the region of interest of the first image; and q represents the total number of such coordinate points, which together constitute the complete region-of-interest information of the first image. The area of the region of interest of the reference image is calculated in the same manner and is not described again here. After the processing device 140 obtains the two areas, the following formula can be used to calculate the amount of change in the Z-axis (i.e., the movement information) between the region of interest of the first image and the region of interest of the reference image:
dz=Sa/Sb
wherein: sb denotes an area of the region of interest of the reference image, dz denotes an amount of change in the Z-axis between the region of interest of the first image and the region of interest of the reference image (i.e., movement information).
After the processing device 140 generates the movement information, the carrier 110 can move according to the movement information to adjust the shooting position of the image capturing device 120 (e.g., its shooting angle, shooting height, and shooting distance, but the application is not limited thereto). After the shooting position is adjusted, the image capturing device 120 can capture a second image whose composition is similar to that of the reference image.
According to an embodiment of the present application, the processing device 140 may further determine, according to the coordinates of the region of interest of the second image and the region of interest of the reference image, whether the composition of the second image matches that of the reference image. If not, the processing device 140 of the mobile photographing system 100 can recalculate the areas of the regions of interest of the second image and the reference image, and derive new movement information from those areas. The carrier 110 then moves according to the new movement information to adjust the shooting position of the image capturing device 120 again.
According to an embodiment of the application, the processing device 140 may adjust the size of the second image according to the reference image and the first image. That is, the size of the second image may differ from the size of the first image. Taking fig. 4A-4C as an example, the processing device 140 may determine the size of the second image S3 (fig. 4C) according to the size of the target object P1 in the first image S1 (fig. 4B), the size of the reference image S2, and the size of the corresponding similar object P2 in the reference image (fig. 4A). In fig. 4A, w denotes the width of the reference image S2, h denotes its height, (x1, y1) denotes the upper-left corner coordinates of the similar object P2, and (x2, y2) denotes its lower-right corner coordinates. In fig. 4B, (x'1, y'1) denotes the upper-left corner coordinates of the target object P1, and (x'2, y'2) denotes its lower-right corner coordinates.
In one embodiment, the processing device 140 can calculate the size of the second image S3 according to the following formula:
aspect ratio of the reference image = w / h;
height ratio of the similar object P2 = h / (y2 − y1);
height of the second image S3 = (y'2 − y'1) × height ratio of the similar object P2;
width of the second image S3 = height of the second image S3 × aspect ratio of the reference image.
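The size computation above, written out as a short Python sketch (illustrative only; parameter names are assumptions following the figure labels — primed coordinates are passed as `y1p`/`y2p`):

```python
def second_image_size(w, h, y1, y2, y1p, y2p):
    """Width and height of the second image S3.

    w, h     : width and height of the reference image S2.
    y1, y2   : top and bottom y-coordinates of the similar object P2 in S2.
    y1p, y2p : top and bottom y-coordinates (y'1, y'2) of the target object P1.
    """
    aspect_ratio = w / h                # aspect ratio of the reference image
    height_ratio = h / (y2 - y1)        # frame height relative to P2's height
    out_h = (y2p - y1p) * height_ratio  # scale P1's height by the same ratio
    out_w = out_h * aspect_ratio        # keep the reference aspect ratio
    return out_w, out_h
```

For example, with a 400×300 reference image in which P2 spans y = 100…250, a target object spanning y' = 50…200 yields a 400×300 second image, preserving both the reference framing ratio and aspect ratio.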
In another embodiment, the processing device 140 can calculate the size of the second image S3 according to the following formula:
distance between the object P1 and the right boundary of the second image S3
=(x’2-x’1)*(w-x2)/(x2-x1)
Distance between the object P1 and the left boundary of the second image S3
=(x’2-x’1)*(x1-0)/(x2-x1)
Distance between the object P1 and the upper boundary of the second image S3
=(y’2-y’1)*(y1-0)/(y2-y1)
Distance between the object P1 and the lower boundary of the second image S3
=(y’2-y’1)*(h-y2)/(y2-y1)
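The four boundary distances can be sketched as one function (illustrative only; names are assumptions — each margin of P2 inside the reference image is rescaled by the size ratio between P1 and P2):

```python
def object_margins(x1, y1, x2, y2, x1p, y1p, x2p, y2p, w, h):
    """Distances from the target object P1 to the four borders of S3.

    (x1, y1)-(x2, y2)     : bounding box of the similar object P2 in S2.
    (x1p, y1p)-(x2p, y2p) : bounding box (primed coords) of P1 in the first image.
    w, h                  : width and height of the reference image S2.
    """
    sx = (x2p - x1p) / (x2 - x1)  # horizontal scale between P1 and P2
    sy = (y2p - y1p) / (y2 - y1)  # vertical scale between P1 and P2
    left = sx * (x1 - 0)          # margin to the left boundary
    right = sx * (w - x2)         # margin to the right boundary
    top = sy * (y1 - 0)           # margin to the upper boundary
    bottom = sy * (h - y2)        # margin to the lower boundary
    return left, right, top, bottom
```

This reproduces, in S3, the same relative placement P2 had inside the reference frame.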
According to another embodiment of the present application, the user can directly upload a reference image to the storage device 130 for the processing device 140 to use in subsequent operations. That is, in this embodiment, the processing device 140 can analyze the composition of the uploaded reference image and directly move the carrier 110 accordingly, so as to adjust the shooting position of the image capturing device 120 (e.g., its shooting angle, shooting height, and shooting distance, but the application is not limited thereto).
Fig. 5 is a flowchart of a photographing composition control method according to an embodiment of the present application. The photography composition control method may be applied to the movable photography system 100. As shown in fig. 5, in step S510, the image capturing device of the mobile camera system 100 generates a first image, wherein the image capturing device is mounted on a carrier.
In step S520, a processing device of the mobile photographing system 100 obtains feature information of a target object in the first image.
In step S530, the processing device of the mobile photographing system 100 compares the first image with the plurality of image data stored in the storage device of the mobile photographing system 100 according to the feature information of the target object, so as to select a reference image from the plurality of image data.
In step S540, the processing device of the mobile camera system 100 generates movement information according to the first image and the reference image.
In step S550, the carrier moves according to the movement information to adjust the shooting position of the image capturing device and generate a second image.
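Steps S510–S550 above can be sketched as one control pass (illustrative only; the components are passed in as callables, and all names are assumptions — the patent does not prescribe an implementation):

```python
def composition_control(capture, extract_features, select_reference,
                        movement_from_images, move_carrier):
    """One pass of steps S510-S550 of the composition control method."""
    first_image = capture()                                  # S510
    features = extract_features(first_image)                 # S520
    reference = select_reference(first_image, features)      # S530
    movement = movement_from_images(first_image, reference)  # S540
    move_carrier(movement)                                   # S550: move carrier
    return capture()                                         # the second image
```

In practice this pass could be repeated, as described below, until the second image's composition matches the reference image.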
According to an embodiment of the present application, in the photographing composition control method, the feature information may include human body feature information, salient feature information, or environmental feature information.
According to an embodiment of the present application, in the photo composition control method, the processing device of the mobile photo system 100 obtains the human body characteristic information according to the pedestrian detection algorithm, the face detection algorithm or the skeleton detection algorithm. According to another embodiment of the present application, the processing device of the mobile photographing system 100 obtains the salient feature information according to the salient detection algorithm. According to another embodiment of the present application, the processing device of the mobile camera system 100 obtains the environmental characteristic information according to the environmental detection algorithm.
According to an embodiment of the application, in step S530 of the photographing composition control method, the processing device of the mobile photographing system 100 may calculate, according to the feature information, the similarity between a plurality of skeletons of the target object and the plurality of image data, so as to compare the first image with the plurality of image data, and select the image data with the highest similarity as the reference image. In this embodiment, the skeletons may correspond to different weight values.
According to an embodiment of the present application, in step S540 of the photography composition control method, the processing device of the mobile photography system 100 may calculate the area of the region of interest of the first image and the area of the region of interest of the reference image, and generate the movement information according to the area of the region of interest of the first image and the area of the region of interest of the reference image.
According to an embodiment of the present application, after step S550, the photographing composition control method further includes the processing device of the mobile photographing system 100 determining, according to the coordinates of the region of interest of the second image and the region of interest of the reference image, whether the composition of the second image matches that of the reference image. If not, the processing device can calculate the area of the region of interest of the second image and the area of the region of interest of the reference image, and generate new movement information from these areas. The carrier then moves according to the new movement information to adjust the shooting position of the image capturing device again.
According to an embodiment of the present application, the photographing composition control method further includes adjusting, by the processing device of the mobile photographing system 100, the size of the second image according to the reference image and the first image.
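One plausible reading of this size adjustment is matching the fraction of the frame occupied by the region of interest to that of the reference image. The following scale rule is purely an illustrative assumption:

```python
def resize_scale(roi_area_second, frame_area_second, roi_area_ref, frame_area_ref):
    """Linear scale factor that makes the ROI occupy the same fraction of the
    frame as in the reference image (areas scale with the square of length)."""
    frac_second = roi_area_second / frame_area_second
    frac_ref = roi_area_ref / frame_area_ref
    return (frac_ref / frac_second) ** 0.5

def resized_dims(width, height, scale):
    """New frame dimensions after applying the linear scale factor."""
    return round(width * scale), round(height * scale)
```

For example, if the ROI fills 1% of the second image but 4% of the reference image, the linear scale factor is 2.0.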
According to the mobile photographing system and the photographing composition control method provided by the application, the shooting position of the image capturing device can be adjusted automatically by referring to the composition of a reference image, so as to generate an image whose composition is similar to that of the reference image. The mobile photographing system and the photographing composition control method thereby save the time otherwise required for manual operation by the user while producing an image with an ideal composition.
Ordinal terms in the specification and claims, such as "first" and "second", are for convenience of description only and do not imply any sequential relationship.
The steps of a method or algorithm disclosed in this specification may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module (including execution instructions and associated data) and other data may reside in data memory such as random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a buffer, a hard disk, a portable hard disk, a compact disc read-only memory (CD-ROM), a DVD, or any other form of computer-readable storage medium known in the art. A storage medium may be coupled to a machine such as a computer/processor (referred to as a processor herein for convenience of description) so that the processor can read information (e.g., program code) from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC), and the ASIC may reside in a user device. Alternatively, the processor and the storage medium may reside as discrete components in a user device. Furthermore, in some embodiments, any suitable computer program product comprises a readable storage medium containing program code associated with one or more of the disclosed embodiments.
Those skilled in the art will appreciate that the features recited in the various embodiments of the application can be combined in various ways, even if such combinations are not explicitly recited in the present application. In particular, the features described in the various embodiments may be combined without departing from the spirit and teachings of the application, and all such combinations fall within the scope of the application.
While the foregoing is directed to embodiments of the present application, it should be understood that the description is merely illustrative and is not intended to limit the scope of the application; any modifications, equivalent substitutions, or improvements made within the spirit and principles of the application shall be included within its scope.

Claims (16)

1. A mobile photography system comprising:
a carrier;
an image capturing device, carried on the carrier, for generating a first image;
a storage device for storing a plurality of image data;
a processing device for acquiring characteristic information of a target object in the first image, and comparing the first image with the plurality of image data according to the characteristic information to select a reference image from the plurality of image data;
the processing device generates movement information according to the first image and the reference image, and the carrier moves according to the movement information so as to adjust the shooting position of the image capturing device and generate a second image;
the processing device calculates the similarity between a plurality of skeletons of the target object and the plurality of image data according to the characteristic information, compares the first image with the plurality of image data, and selects the image data corresponding to the highest similarity as the reference image.
2. The mobile camera system of claim 1, wherein the characteristic information includes human body characteristic information, salient feature information, or environmental feature information.
3. The mobile camera system according to claim 2, wherein the processing device obtains the human body characteristic information according to a pedestrian detection algorithm, a face detection algorithm, or a skeleton detection algorithm.
5. The mobile camera system of claim 2, wherein the processing device obtains the salient feature information based on a saliency detection algorithm.
5. The mobile camera system of claim 2, wherein the processing device obtains the environmental characteristic information based on an environmental detection algorithm.
6. The mobile camera system of claim 1, wherein the plurality of skeletons correspond to different weight values.
7. The mobile camera system of claim 1, wherein the processing device calculates an area of the region of interest of the first image and an area of the region of interest of the reference image, and generates the movement information based on the area of the region of interest of the first image and the area of the region of interest of the reference image.
9. The mobile camera system of claim 1, wherein the processing device adjusts the size of the second image based on the reference image and the first image.
10. A photographing composition control method, applicable to a mobile photographing system, comprising:
generating a first image by an image capturing device of the mobile photographing system, wherein the image capturing device is carried on a carrier;
acquiring, by a processing device of the mobile photographing system, feature information of a target object in the first image;
comparing, by the processing device, the first image with a plurality of image data stored in a storage device of the mobile photographing system according to the feature information, so as to select a reference image from the plurality of image data;
generating movement information according to the first image and the reference image by the processing device; and
moving the carrier according to the movement information to adjust the shooting position of the image capturing device and generate a second image;
calculating, by the processing device, a similarity between a plurality of skeletons of the target object and the plurality of image data according to the feature information, so as to compare the first image with the plurality of image data; and
selecting the image data corresponding to the highest similarity as the reference image.
10. The method of claim 9, wherein the characteristic information includes human body characteristic information, salient feature information, or environmental characteristic information.
11. The method according to claim 10, wherein the processing device obtains the human feature information according to a pedestrian detection algorithm, a face detection algorithm, or a skeleton detection algorithm.
12. The method according to claim 10, wherein the processing device obtains the salient feature information based on a saliency detection algorithm.
13. The method according to claim 10, wherein the processing device obtains the environmental characteristic information based on an environmental detection algorithm.
14. The method of claim 9, wherein the plurality of skeletons correspond to different weight values.
15. The photographing composition control method according to claim 9, further comprising:
calculating, by the processing device, an area of the region of interest of the first image and an area of the region of interest of the reference image; and
generating, by the processing device, the movement information according to the area of the region of interest of the first image and the area of the region of interest of the reference image.
16. The photographing composition control method according to claim 9, further comprising:
adjusting, by the processing device, the size of the second image according to the reference image and the first image.
CN202110470014.XA 2020-12-29 2021-04-28 Mobile photographing system and photographing composition control method Active CN114697545B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW109146701 2020-12-29
TW109146701A TWI768630B (en) 2020-12-29 2020-12-29 Movable photographing system and photography composition control method

Publications (2)

Publication Number Publication Date
CN114697545A CN114697545A (en) 2022-07-01
CN114697545B true CN114697545B (en) 2023-10-13

Family

ID=82135600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110470014.XA Active CN114697545B (en) 2020-12-29 2021-04-28 Mobile photographing system and photographing composition control method

Country Status (2)

Country Link
CN (1) CN114697545B (en)
TW (1) TWI768630B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102612634A (en) * 2010-09-13 2012-07-25 株式会社理光 A calibration apparatus, a distance measurement system, a calibration method and a calibration program
CN104702852A (en) * 2013-12-09 2015-06-10 英特尔公司 Techniques for disparity estimation using camera arrays for high dynamic range imaging
JP2018044897A (en) * 2016-09-15 2018-03-22 株式会社五合 Information processing device, camera, mobile body, mobile body system, information processing method, and program
WO2019164275A1 (en) * 2018-02-20 2019-08-29 (주)휴톰 Method and device for recognizing position of surgical instrument and camera

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017197174A1 (en) * 2016-05-11 2017-11-16 H4 Engineering, Inc. Apparatus and method for automatically orienting a camera at a target
CN106096573A (en) * 2016-06-23 2016-11-09 乐视控股(北京)有限公司 Method for tracking target, device, system and long distance control system
CN106078670B (en) * 2016-07-08 2018-02-13 深圳市飞研智能科技有限公司 A kind of self-timer robot and self-timer method
US10250801B2 (en) * 2017-04-13 2019-04-02 Institute For Information Industry Camera system and image-providing method
WO2018191840A1 (en) * 2017-04-17 2018-10-25 英华达(上海)科技有限公司 Interactive photographing system and method for unmanned aerial vehicle


Also Published As

Publication number Publication date
TW202226161A (en) 2022-07-01
CN114697545A (en) 2022-07-01
TWI768630B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
US7583858B2 (en) Image processing based on direction of gravity
CN109313799B (en) Image processing method and apparatus
US10127639B2 (en) Image processing device having depth map generating unit, image processing method and non-transitory computer readable recording medium
US10121229B2 (en) Self-portrait enhancement techniques
US9679394B2 (en) Composition determination device, composition determination method, and program
US20060078215A1 (en) Image processing based on direction of gravity
US20120307000A1 (en) Image Registration Using Sliding Registration Windows
CN109474780B (en) Method and device for image processing
WO2020237565A1 (en) Target tracking method and device, movable platform and storage medium
WO2002037179A2 (en) Method and apparatus for tracking an object using a camera in a hand-held processing device
CN109981972B (en) Target tracking method of robot, robot and storage medium
CN112083403B (en) Positioning tracking error correction method and system for virtual scene
CN113630549B (en) Zoom control method, apparatus, electronic device, and computer-readable storage medium
KR101745493B1 (en) Apparatus and method for depth map generation
CN111654624B (en) Shooting prompting method and device and electronic equipment
CN112333468B (en) Image processing method, device, equipment and storage medium
CN114697545B (en) Mobile photographing system and photographing composition control method
Dasari et al. A joint visual-inertial image registration for mobile HDR imaging
CN111684488A (en) Image cropping method and device and shooting device
CN111225144A (en) Video shooting method and device, electronic equipment and computer storage medium
JP2008236015A (en) Image processor, imaging apparatus and its program
Campbell et al. Leveraging limited autonomous mobility to frame attractive group photos
US11445121B2 (en) Movable photographing system and photography composition control method
CN113034345B (en) Face recognition method and system based on SFM reconstruction
KR102628714B1 (en) Photography system for surpporting to picture for mobile terminal and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant