CN114930798A - Shooting object switching method and device, and image processing method and device - Google Patents
- Publication number
- CN114930798A (application number CN201980103307.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- distance
- shooting
- target
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The application provides a shooting object switching method and apparatus and an image processing method and apparatus. The shooting object switching method comprises: acquiring a first distance between a first shooting object and a shooting device; determining, according to the first distance, a second shooting object to be switched to; and switching the shooting object from the first shooting object to the second shooting object. The image processing method comprises: acquiring a target image; and reconstructing the target image to obtain a reconstructed image. These methods and apparatuses obtain images intelligently, reducing labor cost and saving time.
Description
The present application relates to the field of image-related technologies, and in particular, to a shot object switching method and apparatus, and an image processing method and apparatus.
In the prior art, the acquisition of some images still requires human operation and control, and such images are difficult to acquire intelligently: for example, a follow-shot image, or a processed image obtained from an original image.
When obtaining a follow-shot image, follow shooting generally requires manual control; for example, the user directly holds the camera to follow the subject. To switch between different shooting objects, the user can only manually aim the camera at the new object. This approach requires direct or indirect human participation: the camera can neither select the object to be shot on demand nor switch shooting objects automatically and intelligently.
When obtaining a processed image based on an original image, a person usually has to modify and adjust the original image manually according to the desired result. This approach is limited by the operator's expertise and is often time-consuming.
Disclosure of Invention
The embodiment of the application aims to provide a shot object switching method and device, and an image processing method and device, so as to solve the problem that images are difficult to obtain intelligently in the prior art.
In a first aspect, an embodiment of the present application provides a shot object switching method, including:
acquiring a first distance between a first shooting object and a shooting device; determining, according to the first distance, a second shooting object to be switched to; and switching the shooting object from the first shooting object to the second shooting object.
In this implementation, the second shooting object is determined according to the distance between the first shooting object and the shooting device, so the device can select the object to switch to based on distance alone. The switching target does not need to be selected manually, the shooting object can be switched flexibly, and follow-shot images or video can be obtained intelligently, which effectively reduces labor cost and saves time.
In a second aspect, an embodiment of the present application provides a photographic subject switching apparatus, including:
the distance acquisition module is used for acquiring a first distance between a first shooting object and the shooting equipment;
the object determining module is used for determining a second shooting object needing to be switched according to the first distance;
and the object switching module is used for switching the shooting object from the first shooting object to the second shooting object.
In a third aspect, an embodiment of the present application provides an image processing method, including:
acquiring a target image;
and carrying out reconstruction processing on the target image to obtain a reconstructed image.
In this implementation, the target image is reconstructed to obtain a reconstructed image, which is itself the processed image. The original image therefore does not need to be modified and adjusted manually, the method is not limited by the operator's expertise, and the processed image is obtained intelligently, which greatly reduces labor cost and saves time.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for acquiring a target image;
and the image reconstruction module is used for reconstructing the target image to obtain a reconstructed image.
In a fifth aspect, an embodiment of the present application provides a shooting device, including a memory and a processor, where the memory is used to store a computer program, and the processor runs the computer program to make the shooting device execute the above-mentioned shooting object switching method and/or the above-mentioned image processing method.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program used in the above-mentioned shooting apparatus.
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a shot object switching method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a shooting area according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a switching of a photographic subject according to an embodiment of the present application;
fig. 4 is a schematic diagram of another object switching provided in an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a distance between a photographic object and a photographic device according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a photographic subject switching apparatus according to a first embodiment of the present application;
fig. 7 is a schematic flowchart of an image processing method according to a second embodiment of the present application;
fig. 8 is a schematic diagram of a target image according to a second embodiment of the present application;
fig. 9 is a schematic diagram of a painting image according to a second embodiment of the present application;
fig. 10 is a schematic diagram of style transition images obtained based on fig. 8 and 9 according to the second embodiment of the present application;
fig. 11 is a block diagram of an image processing apparatus according to a second embodiment of the present application.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
The embodiment of the application provides a shooting object switching method that determines a second shooting object to be switched to according to the distance between a first shooting object and a shooting device. The shooting object can thus be switched flexibly without manually selecting the switching target, and a follow-shot image or video can be obtained intelligently, reducing labor cost and saving time.
It should be understood that the capturing device in the present application may be any device, instrument or machine having computing processing capability and capturing function, for example, the capturing device in the present application may include but is not limited to: cameras, video cameras, unmanned aerial vehicles, unmanned ships, unmanned submarines, handheld DVs, and monitoring equipment.
Referring to fig. 1, fig. 1 is a flowchart of a method for switching a shot object according to an embodiment of the present application, where the method includes the following steps:
step S110: a first distance between a first photographic subject and a photographic device is acquired.
The first shooting object may be one selected by the user when the shooting device starts shooting. For example, the user may select any object in the shooting region of the device as the first shooting object; the device then acquires the relevant features of that object so as to lock onto it and follow-shoot it.
Alternatively, the first shooting object may be an object in the shooting region that meets a condition set by the device. For example, the device may select the object nearest to itself, or a specific person in the shooting region, as the first shooting object.
It can be understood that the shooting device may select any object within the shooting region as the first shooting object according to actual needs.
In addition, the shooting region in this embodiment may be the field of view of the shooting device, a region arbitrarily cut out of that field of view, a region generated by the device as needed, or a region set manually. It may be square, circular, fan-shaped, polygonal, and/or irregular; for example, the shooting region may combine a circular region and a polygonal region. Alternatively, if the first shooting object is an object within the field of view, then once it is selected the shooting region may be a circular or rectangular region of a certain size centered on the first shooting object. As shown in fig. 2, object a is the first shooting object, and the shooting region is a circle centered on object a with a preset radius r. The preset radius r may equal the first distance between the first shooting object and the device, may be set manually, or may be smaller or larger than that first distance.
It can be understood that before the first shooting object is determined, the shooting region may be a manually set region or a region generated by the device; after the first shooting object is determined, the shooting region may be a circle centered on the first shooting object with a preset radius.
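The circular shooting region described above reduces to a simple point-in-circle membership test. The following Python sketch illustrates it; the function name and the 2-D coordinates are illustrative assumptions, not taken from the patent:

```python
import math

def in_circular_region(subject_xy, center_xy, radius):
    """Return True if a subject lies inside a circular shooting region
    centered on the first shooting object with preset radius r."""
    dx = subject_xy[0] - center_xy[0]
    dy = subject_xy[1] - center_xy[1]
    return math.hypot(dx, dy) <= radius

# Object a at the origin defines a region with preset radius r = 5.0;
# an object at (3, 4) is exactly 5 units away and thus inside the region.
print(in_circular_region((3.0, 4.0), (0.0, 0.0), 5.0))  # True
print(in_circular_region((6.0, 0.0), (0.0, 0.0), 5.0))  # False
```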
In this embodiment, automatic switching of shooting objects is based on the distance between the first shooting object and the shooting device: the device selects an object within a suitable distance as the object to switch to. The device therefore first needs to acquire the first distance between the first shooting object and itself, and then selects the second shooting object according to that distance. The first distance may be measured, for example, by an infrared ranging module installed in the device.
Alternatively, the shooting device may include a monocular, binocular, or RGBD camera and obtain a disparity image of the shooting object. Because a disparity image encodes the distance information of the scene, the distance between the shooting object and the device can be derived from it; the specific implementation follows well-known techniques and is not described in detail here.
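As a rough illustration of why a disparity image encodes distance, the standard pinhole stereo relation Z = f·B/d can be used (depth equals focal length times baseline divided by disparity). The patent does not commit to this exact formula, and the focal length, baseline, and disparity values below are invented for the example:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation Z = f * B / d: depth in meters from
    disparity (pixels), focal length (pixels), and baseline (meters)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: focal length 700 px, baseline 0.12 m.
# A disparity of 28 px corresponds to a subject 3 m from the camera.
print(depth_from_disparity(28.0, 700.0, 0.12))  # 3.0
```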
Step S120: and determining a second shooting object needing to be switched according to the first distance.
The second shooting object may be any object within the shooting region that meets a certain condition; for example, it may be an object within the first distance. The shooting device may detect, in real time, other shooting objects within the first distance inside its shooting region, and determine any such object as the second shooting object. If the first shooting object is itself still within the first distance while moving, and the device detects that another object is also within the first distance, the first shooting object and the other object may together serve as the second shooting object (that is, the second shooting object may include the first), or only the other object may be used as the second shooting object.
Still alternatively, the shooting device may generate a fan-shaped (sector) area centered on itself, with the first shooting object on the arc of the sector; the sector lies within the shooting region. The device can then monitor the sector in real time and take any object that appears in it as the second shooting object.
It should be understood that there may be more than one second shooting object, and likewise more than one first shooting object; each may be one object or two or more. For example, at the start of shooting the user may select two or more objects as first shooting objects, or the device may select two or more qualifying objects, or, if two or more objects qualify, the device may pick any one of them. Similarly, when determining the second shooting object, if two or more objects qualify, all of them may be set as second shooting objects, or any one object other than the first shooting object may be selected.
Step S130: switching the subject from the first subject to the second subject.
After the second shooting object is determined, the shooting device switches to it. That is, the device initially selects the first shooting object and follow-shoots it while detecting, in real time and based on the first distance, a second shooting object that meets the condition; once switched, the device follow-shoots the second shooting object.
In this implementation, the second shooting object is determined according to the distance between the first shooting object and the device, so the device can select the switching target by distance alone. No manual selection is needed, the shooting object can be switched flexibly, and follow-shot images or video can be obtained intelligently, effectively reducing labor cost and saving time.
In addition, when determining the second shooting object based on the first distance, the device may first detect at least one other shooting object (besides the first) in its shooting region, obtain the second distance between each such object and the device, and then determine the second shooting object from among them according to the first distance and the second distance(s).
As shown in fig. 3, object a is the first shooting object, and the circular area serves as the shooting region containing objects a, b, and c. The device obtains, in real time, the second distances of objects b and c to itself, and determines the second shooting object from among them according to the first distance and these second distances.
Since objects a, b, and c may be moving, their distances to the device may change. In that case, the first distance may refer to the distance between the first shooting object's initial position and the device, i.e., a fixed value that does not change as the first shooting object moves; or it may be a changing value, i.e., the real-time distance between the first shooting object and the device.
For example, if the first distance is fixed, the device determines another object as the second shooting object when it detects that the object's second distance is less than or equal to the first distance. If the first distance is 5 meters, the device switches to any other object whose second distance is 5 meters or less.
In fig. 3, after obtaining second distance b (object b to the device) and second distance c (object c to the device), the device compares each with the first distance (5 meters). If second distance b is less than or equal to the first distance, object b is determined as the second shooting object to switch to. If second distance c also qualifies, objects b and c may both be taken as second shooting objects, i.e., the device shoots both. Likewise, if objects a, b, and c are all within the first distance, all three are follow-shot as second shooting objects.
If the first distance is a changing value, then, as in fig. 3, when the first shooting object (object a) moves to the original position of object b and all three objects change position, the first distance is the current distance between object a's position and the device; if the distances of objects b and c to the device are both less than or equal to this first distance, both are determined as second shooting objects.
It is understood that, regardless of whether the first distance is a fixed value or a variable value, a target distance that is less than or equal to the first distance may be determined from the obtained second distances between the other photographic subjects and the photographic apparatus, and then a target photographic subject corresponding to the target distance may be determined as the second photographic subject to be switched.
In the above example there are multiple other shooting objects. If the second distances of at least two of them are less than or equal to the first distance, all of those objects may be taken as second shooting objects, or any one of them may be selected. If there is only one other shooting object and its second distance is within the first distance, it becomes the second shooting object.
If the device detects no object whose second distance is less than or equal to the first distance, it does not switch and continues to follow-shoot the first shooting object.
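The switching condition described above amounts to a simple distance filter. The sketch below illustrates it in Python; the object names and distance values are hypothetical:

```python
def second_subjects(first_distance, other_distances):
    """Return every other shooting object whose second distance to the
    camera is less than or equal to the first distance (the switch rule
    above). `other_distances` maps object name -> distance in meters."""
    return [name for name, d in other_distances.items() if d <= first_distance]

# With a first distance of 5 m, only object b (4.2 m) qualifies, so the
# camera would switch to b. If no object qualifies, the list is empty and
# the camera keeps following the first shooting object.
print(second_subjects(5.0, {"b": 4.2, "c": 6.1}))  # ['b']
print(second_subjects(5.0, {"c": 6.1}))            # []
```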
To follow-shoot an object, the device may acquire its relevant feature information. If the object is a person, the device may use the person's facial features as the basis for shooting, detecting those features in real time during follow shooting to confirm that the person is the followed object.
In the implementation process, the shooting device can select the object within the first distance as the second shooting object, so that the shooting device can clearly shoot the object at a short distance.
As another alternative embodiment, the adjustment range when switching should be kept small, to avoid an unstable picture caused by the device rotating too far when switching from the first shooting object to the second. If there are multiple other shooting objects, multiple second distances are obtained; that is, the at least one second distance includes a plurality of second distances. A target distance less than or equal to the first distance is determined from these second distances. If at least two target shooting objects correspond to such target distances, the third distance between each target shooting object and the first shooting object is obtained, and the target shooting object with the smallest third distance to the first shooting object is determined as the second shooting object to switch to.
For example, if the distances of objects b and c to the device are both within the first distance, the device may measure, via its ranging module, the third distance between object b and object a and between object c and object a, and compare them. If object b is closest to object a, object b is the target shooting object and is determined as the second shooting object, and the device switches to shooting object b.
In another embodiment, the device may instead determine the target distances less than or equal to the first distance from the plurality of second distances and, if there are at least two, determine the target shooting object corresponding to the smallest target distance as the second shooting object; that is, the device selects the object closest to itself. For example, if objects b and c are both within the first distance, the device compares their distances to itself; if object b is closer than object c, object b is determined as the second shooting object and the device switches to shooting object b.
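The two tie-breaking policies above (smallest third distance to the first shooting object, minimizing camera rotation; or smallest second distance to the camera) can be sketched as follows. Object names and distances are illustrative only:

```python
def closest_to_first_subject(targets, third_distances):
    """Among candidate targets, pick the one with the smallest third
    distance to the first shooting object (smallest rotation needed)."""
    return min(targets, key=lambda t: third_distances[t])

def closest_to_camera(targets, second_distances):
    """Alternative policy: pick the candidate nearest the camera."""
    return min(targets, key=lambda t: second_distances[t])

# Objects b and c both qualify; b is 2 m from object a while c is 7 m away,
# so the rotation-minimizing policy switches to b. By the nearest-to-camera
# policy, c (3.8 m) wins over b (4.5 m).
print(closest_to_first_subject(["b", "c"], {"b": 2.0, "c": 7.0}))  # b
print(closest_to_camera(["b", "c"], {"b": 4.5, "c": 3.8}))         # c
```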
As another optional embodiment, the device may monitor a fixed position, namely the position of the first shooting object, and follow-shoot whatever object is at that position. The device first acquires the position of the first shooting object, then determines the second shooting object according to that position and the first distance between the first shooting object and the device.
For example, in fig. 3, the device acquires the position of object a; when, as the objects move, it detects that object b has arrived at where object a originally was, it switches the shooting object to object b and follow-shoots object b.
Specifically, the device may determine a shooting position according to the position of the first shooting object and the first distance, detect a target shooting object (other than the first) appearing at that position, and determine it as the second shooting object to switch to; if the first shooting object has not left the shooting position at that time, the target shooting object and the first shooting object may both be determined as second shooting objects.
For example, the device may acquire the coordinates of the first shooting object, determine the shooting position according to the first distance, and monitor that position in real time while follow-shooting the first shooting object. As shown in fig. 4, the first shooting object (object a) starts at point a and moves to point b over time; when object b then appears at point a, object b is the target shooting object and is determined as the second shooting object, and the device switches to follow-shooting object b.
However, if the object a does not leave the point a, and the object B appears at the point a, both the first object and the object B can be taken as the second object, and the shooting device is switched to follow-up shooting of the first object and the object B.
Alternatively, even if the object a has left the point a when the object B appears there, the object a and the object B may still be taken together as the second shooting object; that is, the shooting device may also perform follow-up shooting on both the object a and the object B.
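The position-based switching logic above can be sketched as follows; the function name, coordinate representation, and tolerance radius are illustrative assumptions, not part of the patent:

```python
from math import hypot

def select_switch_targets(shoot_pos, first_obj, objects, tolerance=0.5):
    """Return the objects to follow next: every object other than the
    first that has entered the shooting position, plus the first object
    itself if it has not left that position. All names are illustrative."""
    def at_position(pos):
        return hypot(pos[0] - shoot_pos[0], pos[1] - shoot_pos[1]) <= tolerance

    targets = [name for name, pos in objects.items()
               if name != first_obj and at_position(pos)]
    # If the first object is still at the shooting position, keep it too.
    if first_obj in objects and at_position(objects[first_obj]):
        targets.append(first_obj)
    return sorted(targets)
```

For example, with the shooting position at the origin, an object B arriving there becomes the switch target, and the first object A is kept only while it remains at that position.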
In addition, when the photographing apparatus determines the first photographing object at an initial stage, at least one photographing object within the photographing region may be acquired first, then a fourth distance between each photographing object and the photographing apparatus may be acquired, and the first photographing object may be determined from the at least one photographing object according to the at least one fourth distance.
For example, if the photographic subject includes a plurality of photographic subjects, a plurality of fourth distances may be obtained, and the photographing apparatus may select a target distance having a minimum distance from among the plurality of fourth distances and then determine the photographic subject corresponding to the target distance as the first photographic subject. It can be understood that if there are an object a, an object B, and an object C in the shooting area, the distance measuring module measures the fourth distance between each of the three objects and the shooting device, and if the distance between the object a and the shooting device is the smallest, the object a is determined as the first shooting object, that is, the shooting device selects the object a for follow-up shooting.
Of course, the shooting device may also select a second closest object as the first shooting object, for example, the distance between the object a and the shooting device is smaller than the distance between the object B and the shooting device, and the distance between the object B and the shooting device is smaller than the distance between the object C and the shooting device, and in this case, the object B may be taken as the first shooting object for follow-up shooting.
Still alternatively, a target distance smaller than a preset threshold may be selected from the plurality of fourth distances, and the photographic subject corresponding to the target distance may then be determined as the first photographic subject. The preset threshold may be set according to actual requirements, or may be learned autonomously by the shooting device from user habits. If the distance between the object a and the shooting device is smaller than the preset threshold, the object a is determined as the first shooting object; if the distances from both the object a and the object B to the shooting device are smaller than the preset threshold, both objects are taken as the first shooting object, or one of them is arbitrarily selected as the first shooting object.
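The first-object selection strategies above (nearest object, or every object within a preset threshold) can be sketched as follows; the function name and the list return convention are illustrative assumptions:

```python
def select_first_subject(distances, threshold=None):
    """distances: mapping of object name -> measured fourth distance.
    With no threshold, pick the single nearest object; with a threshold,
    pick every object closer than it (the caller may then keep all of
    them as the first shooting object, or choose one arbitrarily)."""
    if threshold is None:
        return [min(distances, key=distances.get)]
    return sorted(name for name, d in distances.items() if d < threshold)
```

With objects A, B, C at distances 2.0, 3.5 and 6.0, the nearest-object rule selects A, while a threshold of 4.0 selects both A and B.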
Of course, when there is one photographic subject, the photographic apparatus may directly take the photographic subject as the first photographic subject, or when a fourth distance between the photographic subject and the photographic apparatus is less than a preset threshold, determine that the photographic subject is the first photographic subject.
Still alternatively, the photographing apparatus may detect a photographing region, and take a first subject appearing in the photographing region as the first photographing subject.
In the implementation process, the shooting device can flexibly select the first shooting object according to actual requirements.
In addition, as shown in fig. 5, when the first photographic object includes at least two objects and the distance between the first photographic object and the photographic apparatus is to be acquired, the center position of the objects may be calculated from the coordinates of each object, and the distance between that center position and the photographic apparatus may then be taken as the first distance between the first photographic object and the photographic apparatus.
For example, if the first photographic subject includes a subject a and a subject C, the center position between the subject a and the subject C may be acquired, and then the distance between the center position thereof and the photographic apparatus may be taken as the first distance between the first photographic subject and the photographic apparatus.
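A minimal sketch of this centroid-based first-distance computation; the coordinate representation and names are illustrative, since the patent does not prescribe an implementation:

```python
from math import dist  # Python 3.8+

def first_distance(object_coords, camera_pos):
    """First distance when the first shooting object comprises several
    objects: the distance from the centroid of their coordinates to the
    shooting device."""
    n = len(object_coords)
    center = tuple(sum(c[i] for c in object_coords) / n
                   for i in range(len(camera_pos)))
    return dist(center, camera_pos)
```

For objects at (0, 0) and (4, 0) and a device at (2, 3), the centroid is (2, 0) and the first distance is 3.0.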
In addition, in order to accurately obtain the distance between the photographic subject and the photographic equipment, the first distance between the first photographic subject and the photographic equipment can also be obtained through a first deep learning model trained in advance.
The first deep learning model may be a convolutional neural network model or another model, such as a long short-term memory network model, and the process of obtaining the first distance through the first deep learning model may be as follows: a plurality of shot images of the first shooting object are acquired and input into the first deep learning model; the first deep learning model obtains a disparity map of the first shooting object by using the pixel position relation among the plurality of shot images; and the first distance between the first shooting object and the shooting device is then determined according to the disparity map and the position and posture information of the shooting device.
For example, after the shooting device determines the first shooting object, it records the position information of the first shooting object and the position and posture information of the shooting device at that moment; if the shooting device comprises a holder, the position and posture information is the posture angle information of the holder. The shooting device then shoots continuous multi-frame RGB images of the first shooting object and inputs them into the convolutional neural network model, and the convolutional neural network model learns the disparity map of the first shooting object according to the pixel position relationship between two adjacent frames of images. Then, based on the position and posture information of the shooting device, the disparity map of the two-dimensional image plane can be converted into a three-dimensional space with the shooting device as the coordinate origin, and the spatial distance between the first shooting object and the shooting device can be estimated; that is, the distance between the shooting device and the first shooting object in the three-dimensional space is obtained and used as the first distance between the first shooting object and the shooting device.
The plurality of shot images of the first shot object may refer to images having a sequential relationship, such as several continuous frames of images. The two continuous frames of images may be collected by a binocular camera, for example, each of the two cameras in the binocular camera collects one frame of image; alternatively, the two frames of images may be collected by a monocular camera moving from left to right or from top to bottom.
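For intuition, the classic pinhole-stereo relationship between disparity and camera-frame distance can be sketched as follows. This is a textbook stand-in for illustration, not the patent's learned model, and all parameter names (focal lengths, principal point, baseline) are assumptions:

```python
def pixel_to_camera_distance(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project one pixel with known disparity into the camera frame:
    depth Z = fx * baseline / disparity, then (X, Y, Z) from the pinhole
    model; return the Euclidean distance from the camera origin."""
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x * x + y * y + z * z) ** 0.5
```

At the principal point the distance reduces to the depth itself: with fx = 500 px, a 0.1 m baseline and a disparity of 10 px, the point lies 5.0 m from the camera.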
It should be noted that, the disparity map may be replaced by a depth map, and the first distance between the first photographic subject and the photographic device may also be determined according to the depth map and the position and orientation information of the photographic device.
In order to make the prediction effect of the first deep learning model better, a large number of samples may be used to train the first deep learning model. The training process is as follows: a plurality of images containing an object to be shot are taken as the input of the first deep learning model, the disparity map of the object to be shot is taken as the output of the first deep learning model, and the network parameters in the first deep learning model are trained; when the preset loss function is smaller than a preset value, the training is determined to be finished, and the trained first deep learning model is obtained.
The process of training the first deep learning model is similar to the process of obtaining the distance between the first shooting object and the shooting device, and is not repeated here. If the first deep learning model is a convolutional neural network model, the predetermined loss function may be based on the output-layer error δ^(n) = (y − a^(n)) · f′(z^(n)), where n is the nth output layer, y is a predetermined desired value, a^(n) is the output value of the nth output layer, f′(z^(n)) is the derivative of the excitation function f at z^(n), and δ^(n) is the error. When the value of the loss function is smaller than the preset value, it is indicated that the training of the convolutional neural network model is completed. Of course, the preset loss functions corresponding to different first deep learning models are different, and for simplicity of description, these are not illustrated one by one.
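As an illustration of the quantities described here, a quadratic loss and the corresponding output-layer error term can be sketched with NumPy. The quadratic form is a common convention assumed for illustration only, not the patent's exact loss function:

```python
import numpy as np

def quadratic_loss(y, a):
    """L = 0.5 * ||y - a||^2, with y the desired value and a the
    output value of the output layer (an assumed convention)."""
    return 0.5 * float(np.sum((y - a) ** 2))

def output_layer_error(y, a, z, f_prime):
    """delta = (y - a) * f'(z): the output-layer error under the
    quadratic loss, where f_prime is the derivative of the excitation
    function evaluated at the pre-activation z."""
    return (y - a) * f_prime(z)
```

Training would stop once `quadratic_loss` falls below the preset value.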
In the implementation process, the disparity map of the shot image is obtained through the first deep learning model, so that the distance between the shot object and the shooting device can be more accurately obtained on the basis of the disparity map and the position and posture information of the shooting device.
It should be noted that the first distance and the fourth distance may be obtained based on the disparity map corresponding to the photographic subject and the position and orientation information of the photographic device, and the second distance and the third distance may be measured by a distance measuring sensor, such as an infrared sensor, a monocular vision sensor, or a binocular vision sensor. Of course, the above-mentioned manners of determining each distance are only examples; in practical applications, each distance may also be obtained in other manners, and it should be understood that other manners of obtaining each distance also fall within the protection scope of the present invention.
In addition, in the process that the shooting device shoots the second shooting objects, if the number of the second shooting objects is at least two, after the shooting device is switched to the second shooting objects, the shooting positions for shooting the at least two second shooting objects can be determined, and then the shooting parameters of the shooting device can be adjusted according to the shooting positions.
As shown in fig. 5, if the second shooting object includes an object a and an object C, when follow-up shooting is performed on the two objects, the center position between the two objects can be used as the shooting position, so that the shooting effect is better. The shooting device determines shooting parameters according to the center position, which may include parameters such as the moving direction, angle, and focal length of the shooting device, and then the shooting device may adjust itself based on the shooting parameters so that it can shoot the second shooting object.
When there are a plurality of second shooting objects, the center position is their spatial center position, which may be calculated from the coordinates of the plurality of objects, or may be an approximately middle position set by the user. For a better shooting effect, the shooting position may be the center position; of course, the shooting position may also be another position between the two second shooting objects, such as a position near the object a or near the object C. For example, the point on the connecting line between the two objects at 1/3 of its length from one of them may be used as the shooting position, in which case the objects appear to the left or right of center in the shot image. Therefore, the shooting position can be flexibly set according to actual requirements.
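The candidate shooting positions above (the center, or a point 1/3 of the way along the connecting line) are linear interpolations between the two objects' coordinates; a minimal sketch with illustrative names:

```python
def shooting_position(p1, p2, t=0.5):
    """Point on the line between two second shooting objects:
    t = 0.5 gives the center position, t = 1/3 a position nearer p1."""
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

For objects at (0, 0) and (6, 0), t = 0.5 yields the center (3, 0) and t = 1/3 yields a position at (2, 0), nearer the first object.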
The shooting device can adjust the shooting parameters of the shooting device in real time according to the movement of the shot object in the following shooting process, so that the shot object can be positioned in the middle of the shot picture, a better shooting effect is achieved, and the specific following shooting process is not introduced too much.
Referring to fig. 6, fig. 6 is a block diagram of a device 200 for switching a shot object according to an embodiment of the present disclosure, where the device 200 may be a module, a program segment, or a code on a shooting device. It should be understood that the apparatus 200 corresponds to the above-mentioned embodiment of the method of fig. 1, and can perform various steps related to the embodiment of the method of fig. 1, and the specific functions of the apparatus 200 can be referred to the above description, and the detailed description is appropriately omitted here to avoid redundancy.
Optionally, the apparatus 200 comprises:
a distance obtaining module 210, configured to obtain a first distance between a first photographic object and a photographic device;
an object determining module 220, configured to determine a second shooting object to be switched according to the first distance;
an object switching module 230, configured to switch the photographic subject from the first photographic subject to the second photographic subject.
Optionally, the object determining module 220 is specifically configured to detect at least one other photographic object in a photographic area except for the first photographic object; acquiring a second distance between each of the at least one other photographic subject and the photographic device; and determining a second shooting object to be switched from the at least one other shooting object according to the first distance and the at least one second distance.
Optionally, the object determining module 220 is further configured to determine a target distance smaller than or equal to the first distance from at least one of the second distances; and determining the target shooting object corresponding to the target distance as a second shooting object needing to be switched.
Optionally, if the at least one second distance includes a plurality of second distances, the object determining module 220 is further configured to determine a target distance smaller than or equal to the first distance from the plurality of second distances; and if the target distances are at least two, determining the target shooting object corresponding to the minimum target distance in the at least two target distances as the second shooting object to be switched.
Optionally, if the at least one second distance includes a plurality of second distances, the object determining module 220 is further configured to determine a target distance smaller than or equal to the first distance from the plurality of second distances; if there are at least two target shooting objects corresponding to the target distance, acquire a third distance between each target shooting object and the first shooting object; and determine the target shooting object corresponding to the minimum third distance as the second shooting object to be switched.
Optionally, the object determining module 220 is further configured to obtain a position where the first shooting object is located; and determining a second shooting object to be switched according to the position of the first shooting object and the first distance.
Optionally, the object determining module 220 is further configured to determine a shooting position according to the position of the first shooting object and the first distance; detecting a target photographic subject other than the first photographic subject appearing at the photographic position; determining the target shooting object as a second shooting object needing to be switched; or determining the target shooting object and the first shooting object as a second shooting object needing to be switched.
Optionally, the object determining module 220 is further configured to acquire at least one shooting object in the shooting area; acquiring a fourth distance between each shooting object and the shooting device; and determining the first photographic object from the at least one photographic object according to at least one fourth distance.
Optionally, if the at least one object includes a plurality of objects, the object determining module 220 is further configured to select a target distance with a minimum distance from the plurality of fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
Optionally, if the at least one shooting object includes a plurality of shooting objects, the object determining module 220 is further configured to select a target distance with a distance smaller than a preset threshold from the plurality of fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
Optionally, the shooting area is a circular area and/or a polygonal area.
Optionally, if there are at least two second objects, the apparatus 200 further includes:
the adjusting module is used for determining shooting positions for shooting the at least two second shooting objects; and adjusting the shooting parameters of the shooting equipment according to the shooting position.
Optionally, the shooting position is a central position between the at least two second photographic subjects.
Optionally, the distance obtaining module 210 is specifically configured to obtain a first distance between the first shooting object and the shooting device through a trained first deep learning model.
Optionally, the distance obtaining module 210 is specifically configured to obtain multiple captured images of the first captured object; inputting the multiple shot images into the first deep learning model, and obtaining a disparity map of the first shot object by using a pixel position relation among the multiple shot images through the first deep learning model; and determining a first distance between the first shooting object and the shooting device according to the disparity map and the position and posture information of the shooting device.
Optionally, the distance obtaining module 210 is specifically configured to convert the disparity map into a three-dimensional space with the shooting device as a coordinate origin based on the position and posture information of the shooting device; acquiring a first distance between the shooting device and the first shooting object in the three-dimensional space.
Optionally, the apparatus 200 further comprises:
the model training module is used for taking a plurality of images containing an object to be shot as the input of the first deep learning model, taking the disparity map of the object to be shot as the output of the first deep learning model and training network parameters in the first deep learning model; and when the preset loss function is smaller than the preset value, determining that the training is finished, and obtaining a trained first deep learning model.
It should be noted that, in the embodiment of the present application, the division of the modules is schematic and is only one logical function division; in actual implementation, another division manner may be used. For example, any one of the modules may be further divided into more sub-modules, a plurality of the modules may be combined into one module, or one module may also implement the functions of one or more other modules. In addition, the functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module.
The embodiment of the application provides a shooting device, which may include: at least one processor, such as a CPU, at least one communication interface, at least one memory, and at least one communication bus. The communication bus is used for realizing direct connection communication among these components. The communication interface of the device in the embodiment of the present application is used for signaling or data communication with other node devices. The memory may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory may optionally be at least one storage device located remotely from the processor. The memory stores computer readable instructions which, when executed by the processor, cause the photographing apparatus to perform the method procedure shown in fig. 1 described above. For example, the memory may be used to store the distances between the respective photographic subjects and the photographing apparatus, and the processor may be used to perform the respective steps of the photographic subject switching method.
It is understood that the camera device may also include more or fewer components, or other different configurations, for example, the camera device may also include camera components such as a camera, a pan-tilt, etc. The components of the photographing apparatus may be implemented by hardware, software, or a combination thereof.
An embodiment of the present application provides a readable storage medium storing a computer program which, when executed by a processor, performs the method processes performed by the shooting device in the method embodiment shown in fig. 1.
The present embodiments disclose a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above-described method embodiments, for example, comprising: acquiring a first distance between a first shooting object and shooting equipment; determining a second shooting object to be switched according to the first distance; switching the subject from the first subject to the second subject.
In summary, the embodiment of the application provides a shot object switching method, a shot object switching device, a shooting device, and a readable storage medium, and a second shot object to be switched is determined according to a distance between a first shot object and the shooting device, so that the shooting device can select the shot object to be switched according to the distance, and the shot object can be flexibly switched without manually selecting the switching object, so that a following shot image or video can be intelligently obtained, labor cost can be effectively reduced, and time can be saved.
Example two
The embodiment of the application provides an image processing method, which is used for reconstructing a target image to obtain a reconstructed image, wherein the reconstructed image is a processed image.
The execution subject of the image processing method in the embodiment of the present application may be an application device, and the application device may be a shooting device, and it should be understood that the shooting device in the present application may be any device, apparatus, or machine having a computing processing capability and a shooting function, for example, the shooting device in the present application may include but is not limited to: cameras, video cameras, unmanned aerial vehicles, unmanned ships, unmanned submarines, handheld DVs, and monitoring equipment; besides, the application device can be a mobile phone, a tablet computer, a notebook computer, a desktop computer and a server.
Referring to fig. 7, fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application, where the method includes the following steps:
step S310: and acquiring a target image.
The target image may be a captured image, a drawn image, a frame image of a video, etc., and the captured image, the drawn image, the frame image of the video may be a landscape image, an image containing a target object (e.g., a person or an animal), etc.
Alternatively, the target image may be selected by the user, or may be obtained by the application device automatically according to a preset image obtaining rule, for example, the application device may automatically obtain an image stored by the application device or the application device may automatically obtain an image in an album of the application device.
Step S320: and carrying out reconstruction processing on the target image to obtain a reconstructed image.
It can be understood that the target image is subjected to reconstruction processing, that is, the target image is modified and adjusted.
Alternatively, the target image may be subjected to reconstruction processing, such as changing the style of the target image, changing the background of the target image, changing a target object in the target image, or adding an image element to the target image.
Optionally, the target image may be reconstructed according to a user operation instruction, or according to a preset image reconstruction rule, wherein reconstructing the target image according to the user operation instruction may be reconstructing it according to an image reconstruction mode selected by the user.
The user operation instruction may be a text input instruction, a voice input instruction, a touch instruction, a gesture instruction, or the like.
In the implementation process, the reconstructed image is obtained by reconstructing the target image, and the reconstructed image is the processed image. Therefore, the original image does not need to be modified and adjusted manually, professional skill is not required, and the processed image is obtained intelligently, which can greatly reduce labor cost and save time.
As an optional implementation manner, when the target image is subjected to the reconstruction processing to obtain the reconstructed image, the style migration processing may be performed on the target image by using the painting style image as the style migration template to obtain the reconstructed image.
Understandably, the painting style image is an image having a painting style; performing style migration processing on the target image means converting the style of the target image; and the reconstructed image obtained by performing style migration processing on the target image with the painting style image as the style migration template is the style migration image.
Reference may be made to fig. 8 to fig. 10, where fig. 8 is a schematic diagram of a target image provided in an embodiment of the present application, fig. 9 is a schematic diagram of a painting image provided in an embodiment of the present application, and fig. 10 is a schematic diagram of a style transition image obtained based on fig. 8 and fig. 9 and provided in an embodiment of the present application.
Optionally, the painting style image may be preset or selected by the user.
If the painting style image is selected by the user, it may be selected from painting style images recommended by the application device, and the painting style images recommended by the application device may be obtained according to the target image.
Optionally, when the target image is subjected to style migration processing by using the painting image as a style migration template to obtain a reconstructed image, the target image and the painting image may be input to a trained second deep learning model to be subjected to style migration processing to obtain the reconstructed image. The second deep learning model is used for style migration processing of the image.
The second deep learning model includes, but is not limited to, one or more of a CNN neural network model, a DNN neural network model, an RNN neural network model, a DBN neural network model, and the like.
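The style migration itself is performed by the trained second deep learning model; as a rough illustrative stand-in only, per-channel statistic matching (in the spirit of adaptive instance normalization) can be sketched with NumPy. The function and approach are assumptions for illustration, not the patent's model:

```python
import numpy as np

def match_channel_stats(content, style, eps=1e-5):
    """Shift each channel of `content` (H x W x C, float) to the mean
    and std of the corresponding channel of `style` - a crude,
    illustrative form of style transfer, not a deep learning model."""
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    s_mean = style.mean(axis=(0, 1), keepdims=True)
    s_std = style.std(axis=(0, 1), keepdims=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean
```

The output keeps the content image's spatial structure while adopting the style image's per-channel color statistics.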
In the implementation process, the style of the reconstructed image is adjusted relative to the style of the target image, and the processed images with different styles can be intelligently obtained through the method.
As an optional implementation manner, when the target image is subjected to the reconstruction processing to obtain the reconstructed image, the target image may be segmented to obtain a first segmented image and a second segmented image; performing style migration processing on the first segmentation image by taking the painting image as a style migration template to obtain a style migration image; and fusing the style migration image and the second segmentation image to obtain a reconstructed image.
Optionally, the target image may be segmented according to a user operation instruction, or may be segmented according to a preset segmentation rule.
The target image is segmented according to the user operation instruction, and may be segmented according to a segmentation mode selected by the user, or segmented according to a target object (e.g., a person) selected by the user, or segmented according to a segmentation track drawn by the user.
When the target object is a person, the target image is segmented to obtain a first segmented image and a second segmented image, and the target image is segmented through human skeleton information to obtain the first segmented image and the second segmented image.
The target image is segmented through the human body skeleton information, so that the segmentation precision of the target image can be better improved, and the obtained reconstructed image has better effect.
Optionally, when the style migration template is a painting image, the style migration processing is performed on the first segmentation image to obtain a style migration image, and the first segmentation image and the painting image may be input to a trained second deep learning model to perform the style migration processing to obtain the style migration image.
Optionally, after the target image is segmented to obtain a first segmented image and a second segmented image, the method may further obtain position information of the first segmented image in the target image; when the style transition image and the second divided image are fused to obtain a reconstructed image, the style transition image and the second divided image can be fused to obtain the reconstructed image according to the position information.
The style migration image and the second segmentation image are fused through the position information of the first segmentation image in the target image, so that the fusion accuracy of the style migration image and the second segmentation image is effectively improved, and the obtained reconstructed image is better in effect.
In the implementation process, the style migration processing is carried out on the first segmentation image by taking the painting image as a style migration template to obtain a style migration image; the style migration image and the second segmentation image are fused, the obtained reconstructed image is the image of the target image with the style migration of part of the content, and the method can be used for carrying out the style migration of the background of the target object (such as a person) in the target image so as to intelligently obtain the image of the target object under different style backgrounds.
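The position-based fusion step can be sketched as pasting the style-migrated first segmentation image back into the remainder image at the recorded position; the array layout (row, column) and names are illustrative assumptions:

```python
import numpy as np

def fuse_by_position(style_region, background, top_left):
    """Paste the style-migrated first segmentation image back into the
    second segmentation image at the recorded (row, col) position."""
    out = background.copy()
    r, c = top_left
    h, w = style_region.shape[:2]
    out[r:r + h, c:c + w] = style_region
    return out
```

Recording where the first segmentation image came from is what makes this paste land in the right place, which is why the position information is acquired at segmentation time.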
As an optional embodiment, when the target image is subjected to the reconstruction processing to obtain the reconstructed image, the target template image may be used as a reference, and the corresponding target modification object in the target image may be subjected to the modification processing to obtain the reconstructed image.
It is to be understood that the target template image may be a background image, a person image, or the like. The target template image corresponds to the target modification object; the correspondence may be that the two contain the same kind of image element, for example, the target template image and the target modification object are both a background or both a person, or the correspondence may be an attachment relationship between image elements, for example, the target template image is a hat and the target modification object is a person or the head of a person.
Alternatively, the target template image may be preset or user-selected.
Alternatively, the modification processing on the corresponding target modification object in the target image may be to modify the background of the target image, modify the target object in the target image, add an image element in the target image, or the like.
In the implementation process, the obtained reconstructed image is an image in which the target modification object corresponding to the target template image has been changed. This approach can be used to change the background in the target image or the target object in it, so as to intelligently obtain an image in which the target modification object has been changed.
Optionally, when a target template image is taken as a reference and a corresponding target change object in the target image is subjected to change processing to obtain a reconstructed image, the target image may be segmented to obtain a foreground segmented image and a background segmented image; and fusing the foreground segmentation image to a target template image to obtain a reconstructed image, wherein the target template image is a background template image.
In the implementation process, the obtained reconstructed image is the target image with its background changed. Changing the background through image segmentation and fusion effectively reduces the difficulty of the operation and ensures the effect of the obtained reconstructed image.
If the object corresponding to the foreground segmentation image in the target image is a human body, the target image can be segmented through human body skeleton information to obtain a foreground segmentation image and a background segmentation image.
Segmenting the target image through human body skeleton information improves the segmentation precision of the target image, so that the obtained reconstructed image has a better effect.
Optionally, when the foreground segmentation image is fused to the target template image to obtain a reconstructed image, the foreground segmentation image and the target template image may be input to a trained third deep learning model for fusion to obtain the reconstructed image. The third deep learning model is used for fusing the foreground segmentation image and the target template image (background template image).
The third deep learning model includes, but is not limited to, one or more of a CNN neural network model, a DNN neural network model, an RNN neural network model, a DBN neural network model, and the like.
In addition, when a corresponding target change object in the target image is subjected to change processing with a target template image as a reference to obtain a reconstructed image, the target image may be segmented with an actual shooting object as a reference to obtain a foreground segmented image and a background segmented image; the target template image is then fused to the target change position of the actual shooting object in the foreground segmented image to obtain a changed image; and the changed image and the background segmented image are fused to obtain the reconstructed image.
It is to be understood that the actual shooting object may be a person, an animal, and/or a moving object, among others. The target change position of the actual shooting object may be a certain part of it; for example, if the actual shooting object is a person, the target change position may be the face, a piece of clothing, or a certain part of the face. Taking a person as an example, fusing the target template image into the target change position of the actual shooting object in the foreground segmented image may mean changing the face of the actual shooting object, changing its clothing, or adding decoration (for example, a hat or earrings) to its face.
When the target change position of the actual shooting object is a certain part of the human face, the face of the actual shooting object can be detected, and the position information of key parts of the human face, such as the coordinates of the eyes, the eyebrows, the mouth, the nose, the ears and the like, can be acquired, so that the target template image can be conveniently fused into the target change position of the actual shooting object in the foreground segmentation image.
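As a rough illustration of placing a template at a position derived from face keypoints, the sketch below hard-codes hypothetical eye coordinates and a tiny "hat" template; a real system would obtain the keypoint coordinates (eyes, eyebrows, mouth, nose, ears) from a face detector as described above.

```python
# Sketch of fusing a template (e.g. a hat) at a position derived from
# detected face-keypoint coordinates. The keypoints are hard-coded,
# illustrative values; a real system would run face detection first.

def paste_template(image, template, top, left):
    """Copy the template into the image with its top-left corner at
    (top, left), clipping at the image borders."""
    out = [row[:] for row in image]
    for i, trow in enumerate(template):
        for j, px in enumerate(trow):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = px
    return out

# Hypothetical keypoints: eye coordinates as (row, col).
left_eye, right_eye = (4, 2), (4, 6)
hat = [[9, 9, 9]]                          # a 1x3 "hat" template
hat_top = left_eye[0] - 3                  # place the hat above the eyes
hat_left = (left_eye[1] + right_eye[1]) // 2 - len(hat[0]) // 2

image = [[0] * 8 for _ in range(8)]
result = paste_template(image, hat, hat_top, hat_left)
print(result[1][3:6])  # [9, 9, 9]
```

The anchoring rule (how far above the eyes, how the template is scaled) would depend on the decoration being added; only the keypoint-driven placement is shown here.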
In the implementation process, this approach can be used to change the face of a person in the target image, change the person's clothing, or add decoration to the person's face, so as to intelligently obtain an image in which the target change position of the actual shooting object in the target image has been changed.
Optionally, when a target template image is fused to a target change position of an actual shooting object in the foreground segmentation image to obtain a change image, the foreground segmentation image and the target template image may be input to a trained fourth deep learning model, so that the target template image is fused to the target change position of the actual shooting object in the foreground segmentation image to obtain the change image. And the fourth deep learning model is used for fusing the target template image to the target change position of the actual shooting object in the foreground segmentation image.
The fourth deep learning model includes, but is not limited to, one or more of a CNN neural network model, a DNN neural network model, an RNN neural network model, a DBN neural network model, and the like.
In this embodiment, if the target image is a frame image of the target video, the method may further include: acquiring a plurality of reconstructed images, wherein the plurality of reconstructed images are a plurality of frame images corresponding to the same target video; and carrying out video synthesis processing on the plurality of reconstructed images to obtain a reconstructed video.
In the implementation process, the reconstructed video is the processed video, and the mode of reconstructing the video can be applied to scenes shot by the video in real time, namely the target video is the real-time shot video.
In addition, as an optional implementation manner, before the acquiring the target image, the method may further include:
acquiring a first distance between a first shooting object and shooting equipment; determining a second shooting object to be switched according to the first distance; switching a photographic subject from the first photographic subject to the second photographic subject; taking the second shooting object as an actual shooting object, and shooting to obtain a target image or a target video; when the second shooting object is taken as an actual shooting object and a target video is obtained through shooting, the obtaining of the target image comprises the following steps: and acquiring a target image from the target video.
In the implementation process, the second shooting object to be switched is determined according to the distance between the first shooting object and the shooting equipment, and the second shooting object is used as an actual shooting object to obtain the target image, so that the shooting equipment can select the shooting object to be switched according to the distance to obtain the target image, the shooting object can be flexibly switched without manually selecting the switching object, the labor cost can be effectively reduced, and the time is saved.
It should be noted that, for the specific contents of "acquiring a first distance between a first shooting object and shooting equipment; determining a second shooting object to be switched according to the first distance; and switching the shooting object from the first shooting object to the second shooting object" in the method of the embodiment of the present application, reference may be made to the contents of the first embodiment; details are not repeated in this embodiment.
Referring to fig. 11, fig. 11 is a block diagram of an image processing apparatus 400 according to an embodiment of the present disclosure, where the apparatus 400 may be a module, a program segment, or a code on a shooting device. It should be understood that the apparatus 400 corresponds to the above-mentioned embodiment of the method of fig. 7, and can perform various steps related to the embodiment of the method of fig. 7, and the specific functions of the apparatus 400 can be referred to the above description, and the detailed description is appropriately omitted here to avoid redundancy.
Optionally, the apparatus 400 comprises:
an image acquisition module 410 for acquiring a target image;
and the image reconstruction module 420 is configured to perform reconstruction processing on the target image to obtain a reconstructed image.
Optionally, the image reconstruction module 420 is specifically configured to perform style migration processing on the target image by using the painting style image as a style migration template, so as to obtain a reconstructed image.
Optionally, when the image reconstruction module 420 uses the painting style image as a style migration template to perform style migration processing on the target image to obtain a reconstructed image, the target image and the painting style image are input to the trained second deep learning model to perform style migration processing to obtain the reconstructed image.
Optionally, the image reconstructing module 420 is specifically configured to segment the target image to obtain a first segmented image and a second segmented image; performing style migration processing on the first segmentation image by taking the painting image as a style migration template to obtain a style migration image; and fusing the style migration image and the second segmentation image to obtain a reconstructed image.
Optionally, when the image reconstruction module 420 segments the target image to obtain a first segmented image and a second segmented image, the target image is segmented by human skeleton information to obtain the first segmented image and the second segmented image.
Optionally, when the image reconstruction module 420 takes the painting image as a style migration template and performs style migration processing on the first segmented image to obtain a style migration image, it inputs the first segmented image and the painting image to the trained second deep learning model for style migration processing to obtain the style migration image.
Optionally, the image reconstruction module 420 is further configured to obtain position information of the first segmented image in the target image; when fusing the style migration image and the second segmented image to obtain a reconstructed image, the image reconstruction module 420 fuses them according to the position information.
Optionally, the image reconstructing module 420 is specifically configured to perform modification processing on a corresponding target modification object in the target image by using the target template image as a reference, so as to obtain a reconstructed image.
Optionally, when the image reconstruction module 420 performs modification processing on a corresponding target modification object in the target image with the target template image as a reference to obtain a reconstructed image, it segments the target image to obtain a foreground segmented image and a background segmented image, and fuses the foreground segmented image to a target template image to obtain the reconstructed image, where the target template image is a background template image.
Optionally, when the image reconstruction module 420 segments the target image to obtain a foreground segmented image and a background segmented image, the target image is segmented by human skeleton information to obtain the foreground segmented image and the background segmented image.
Optionally, when the foreground segmentation image is fused to a target template image to obtain a reconstructed image, where the target template image is a background template image, the image reconstruction module 420 inputs the foreground segmentation image and the target template image to a trained third deep learning model for fusion to obtain the reconstructed image.
Optionally, when the image reconstruction module 420 performs modification processing on a corresponding target modification object in the target image with the target template image as a reference to obtain a reconstructed image, it segments the target image with an actual shooting object as a reference to obtain a foreground segmented image and a background segmented image; fuses the target template image to the target change position of the actual shooting object in the foreground segmented image to obtain a changed image; and fuses the changed image and the background segmented image to obtain the reconstructed image.
Optionally, when the target template image is fused to the target change position of the actual shooting object in the foreground segmentation image to obtain a change image, the image reconstruction module 420 inputs the foreground segmentation image and the target template image to a trained fourth deep learning model, so that the target template image is fused to the target change position of the actual shooting object in the foreground segmentation image to obtain the change image.
Optionally, the image obtaining module 410 is further configured to obtain a plurality of reconstructed images, where the plurality of reconstructed images correspond to a plurality of frame images of the same target video; the device further comprises: and the video synthesis module is used for carrying out video synthesis processing on the plurality of reconstructed images to obtain reconstructed videos.
Optionally, the apparatus further comprises: the distance acquisition module is used for acquiring a first distance between a first shooting object and the shooting equipment; the object determining module is used for determining a second shooting object needing to be switched according to the first distance; the object switching module is used for switching the shooting object from the first shooting object to the second shooting object; the shooting module is used for shooting to obtain a target image or a target video by taking the second shooting object as an actual shooting object; the image obtaining module 410 is further configured to obtain a target image from the target video.
Optionally, the object determining module is specifically configured to detect at least one other photographic object in the photographic area except the first photographic object; acquiring a second distance between each of the at least one other photographic subject and the photographic device; and determining a second shooting object to be switched from the at least one other shooting object according to the first distance and the at least one second distance.
Optionally, the object determining module is further configured to determine a target distance smaller than or equal to the first distance from at least one of the second distances; and determining the target shooting object corresponding to the target distance as a second shooting object needing to be switched.
Optionally, if the at least one second distance includes a plurality of second distances, the object determining module is further configured to determine a target distance smaller than or equal to the first distance from the plurality of second distances; and if there are at least two target distances, determine the target shooting object corresponding to the minimum target distance among the at least two target distances as the second shooting object to be switched.
Optionally, if the at least one second distance includes a plurality of second distances, the object determining module is further configured to determine a target distance smaller than or equal to the first distance from the plurality of second distances; if there are at least two target shooting objects corresponding to the target distance, acquire a third distance between each target shooting object and the first shooting object; and determine the target shooting object corresponding to the smallest third distance to the first shooting object as the second shooting object to be switched.
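The selection rule described above (keep candidates whose distance to the device does not exceed the first subject's distance, prefer the smallest such distance, and break ties by the distance to the first subject) can be sketched as follows. The positions and names are illustrative values, not part of the original disclosure.

```python
import math

# Sketch of the distance-based switching rule described above: among
# the other subjects, keep those at least as close to the device as the
# first subject, prefer the smallest such second distance, and break
# ties by the third distance (to the first subject). Positions are
# illustrative (x, y) coordinates in metres.

def pick_second_subject(first_pos, candidates, device_pos=(0.0, 0.0)):
    """candidates: dict name -> (x, y). Returns the name of the second
    photographic subject to switch to, or None if no candidate is at
    least as close to the device as the first subject."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    first_distance = dist(first_pos, device_pos)
    eligible = {n: p for n, p in candidates.items()
                if dist(p, device_pos) <= first_distance}
    if not eligible:
        return None
    # Smallest second distance first; ties broken by the third distance.
    return min(eligible,
               key=lambda n: (dist(eligible[n], device_pos),
                              dist(eligible[n], first_pos)))

subjects = {"B": (3.0, 0.0), "C": (0.0, 3.0), "D": (10.0, 0.0)}
print(pick_second_subject((4.0, 0.0), subjects))  # B (ties with C, but closer to the first subject)
```

Here B and C are both 3 m from the device (both closer than the first subject at 4 m), so the tie is broken by their distance to the first subject, selecting B.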
Optionally, the object determining module is further configured to obtain a position where the first shooting object is located; and determining a second shooting object to be switched according to the position of the first shooting object and the first distance.
Optionally, the object determining module is further configured to determine a shooting position according to the position of the first shooting object and the first distance; detecting a target photographic subject other than the first photographic subject appearing at the photographic position; determining the target shooting object as a second shooting object needing to be switched; or determining the target shooting object and the first shooting object as a second shooting object needing to be switched.
Optionally, the object determination module is further configured to acquire at least one shooting object in a shooting area; acquiring a fourth distance between each shooting object and the shooting device; and determining the first photographic object from the at least one photographic object according to at least one fourth distance.
Optionally, if the at least one photographic subject includes a plurality of photographic subjects, the subject determining module is further configured to select a target distance with a minimum distance from the plurality of fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
Optionally, if the at least one photographic subject includes a plurality of photographic subjects, the subject determining module is further configured to select a target distance having a distance smaller than a preset threshold value from among the plurality of fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
Optionally, the shooting area is a circular area and/or a polygonal area.
Optionally, if there are at least two second photographic subjects, the apparatus further includes:
the adjusting module is used for determining shooting positions for shooting the at least two second shooting objects; and adjusting the shooting parameters of the shooting equipment according to the shooting position.
Optionally, the photographing position is a center position between the at least two second photographic subjects.
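A minimal sketch of taking the center position between two second photographic subjects follows; the coordinates are illustrative, and a real device would go on to adjust shooting parameters (such as orientation or zoom) toward this position.

```python
# Sketch of the shooting position described above: the centre position
# between two second photographic subjects, given as (x, y) coordinates.
# The example coordinates are illustrative values only.

def center_position(pos_a, pos_b):
    return ((pos_a[0] + pos_b[0]) / 2.0, (pos_a[1] + pos_b[1]) / 2.0)

print(center_position((2.0, 0.0), (6.0, 4.0)))  # (4.0, 2.0)
```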
Optionally, the distance obtaining module is specifically configured to obtain a first distance between the first photographic object and the photographic device through a trained first deep learning model.
Optionally, the distance obtaining module is specifically configured to obtain a plurality of captured images of the first captured object; inputting the multiple shot images into the first deep learning model, and obtaining a disparity map of the first shot object by using a pixel position relation among the multiple shot images through the first deep learning model; and determining a first distance between the first shooting object and the shooting device according to the disparity map and the position and posture information of the shooting device.
Optionally, the distance obtaining module is specifically configured to convert the disparity map into a three-dimensional space with the shooting device as a coordinate origin based on the position and posture information of the shooting device; acquiring a first distance between the shooting device and the first shooting object in the three-dimensional space.
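The disparity-to-distance conversion described above can be illustrated with the classic pinhole/stereo relations, taking the shooting device as the coordinate origin. The intrinsics below (focal length, baseline, principal point) are made-up values; in the described method, the first deep learning model supplies the disparity map itself.

```python
import math

# Sketch of turning a disparity value into a camera-frame 3-D point and
# then a distance, using the classic stereo relations Z = f*B/d,
# X = (u - cx)*Z/f, Y = (v - cy)*Z/f. The intrinsics are illustrative.

def point_from_disparity(u, v, disparity, focal, baseline, cx, cy):
    z = focal * baseline / disparity   # depth from disparity
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal
    return (x, y, z)

def distance_to_device(point):
    # The shooting device is the coordinate origin, as described above.
    return math.sqrt(sum(c * c for c in point))

p = point_from_disparity(u=320, v=240, disparity=16.0,
                         focal=800.0, baseline=0.1, cx=320.0, cy=240.0)
print(p)                      # (0.0, 0.0, 5.0)
print(distance_to_device(p))  # 5.0
```

If the device's position and posture differ from the coordinate origin, the point would additionally be transformed by the device's pose before measuring the distance; that step is omitted here.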
Optionally, the apparatus further comprises:
the model training module is used for taking a plurality of images containing an object to be shot as the input of the first deep learning model, taking the disparity map of the pair to be shot as the output of the first deep learning model and training network parameters in the first deep learning model; and when the preset loss function is smaller than the preset value, determining that the training is finished, and obtaining a trained first deep learning model.
It should be noted that the division of modules in the embodiment of the present application is schematic and is only a logical function division; in actual implementation, there may be other division manners. For example, any one of the modules may be further divided into more sub-modules, multiple modules may be combined into one module, or one module may also implement the functions of one or more other modules. In addition, functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module.
The embodiment of the application provides an application device, which may be a shooting device. The application device may include: at least one processor (for example, a CPU), at least one communication interface, at least one memory, and at least one communication bus, where the communication bus is used to realize direct connection and communication among these components. The communication interface of the device in the embodiment of the present application is used for signaling or data communication with other node devices. The memory may be a high-speed RAM memory or a non-volatile memory (for example, at least one disk memory), and may optionally be at least one storage device located remotely from the processor. The memory stores computer-readable instructions; when the computer-readable instructions are executed by the processor, the application device executes the method process shown in fig. 7. For example, the memory may be used to store the distance between each photographic subject and the photographic device, and the processor may be used to execute each step in the image processing method.
It will be appreciated that the application device may also include more or fewer components, or other different configurations. The components of the application device may be implemented in hardware, software, or a combination thereof.
The embodiment of the present application provides a readable storage medium storing a computer program; when the computer program is executed by a processor, it performs the method processes performed by the application device in the method embodiment shown in fig. 7.
The present embodiments disclose a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method provided by the above method embodiments, for example: acquiring a target image; and performing reconstruction processing on the target image to obtain a reconstructed image.
In summary, the embodiments of the present application provide an image processing method, an image processing apparatus, an application device, and a readable storage medium, where a target image is reconstructed to obtain a reconstructed image, which is a processed image, and in this way, people do not need to modify or adjust an original image manually, and the method is not limited by a professional, and a processed image is obtained intelligently, so that labor cost can be greatly reduced, and time can be saved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The shot object switching method and device and the image processing method and device are used for intelligently obtaining images, reducing labor cost and saving time.
Claims (53)
- A photographic subject switching method, characterized by comprising: acquiring a first distance between a first shooting object and shooting equipment; determining a second shooting object to be switched according to the first distance; and switching the subject from the first subject to the second subject.
- The photographic subject switching method according to claim 1, wherein the determining a second photographic subject to be switched according to the first distance includes: detecting at least one other photographic subject other than the first photographic subject in a photographic region; acquiring a second distance between each of the at least one other photographic object and the photographic equipment; and determining a second shooting object to be switched from the at least one other shooting object according to the first distance and the at least one second distance.
- The method according to claim 2, wherein the determining a second subject to be switched from the at least one other subject based on the first distance and the at least one second distance includes: determining a target distance from at least one of the second distances that is less than or equal to the first distance; and determining the target shooting object corresponding to the target distance as a second shooting object needing to be switched.
- The method according to claim 2, wherein if the at least one second distance includes a plurality of second distances, the determining a second object to be switched from the at least one other object according to the first distance and the at least one second distance includes: determining a target distance from the plurality of second distances that is less than or equal to the first distance; and if there are at least two target distances, determining the target shooting object corresponding to the minimum target distance in the at least two target distances as the second shooting object to be switched.
- The method according to claim 2, wherein if the at least one second distance includes a plurality of second distances, the determining a second object to be switched from the at least one other object according to the first distance and the at least one second distance includes: determining a target distance from the plurality of second distances that is less than or equal to the first distance; if there are at least two target shooting objects corresponding to the target distance, acquiring a third distance between each target shooting object and the first shooting object; and determining the target shooting object corresponding to the smallest third distance to the first shooting object as the second shooting object to be switched.
- The photographic subject switching method according to claim 1, wherein the determining a second photographic subject to be switched according to the first distance includes: acquiring the position of the first shooting object; and determining a second shooting object to be switched according to the position of the first shooting object and the first distance.
- The method for switching the photographic objects according to claim 6, wherein the determining the second photographic object to be switched according to the position of the first photographic object and the first distance comprises: determining a shooting position according to the position of the first shooting object and the first distance; detecting a target photographic subject other than the first photographic subject appearing at the photographic position; and determining the target shooting object as a second shooting object needing to be switched; or determining the target shooting object and the first shooting object as a second shooting object to be switched.
- The photographic subject switching method according to claim 1, wherein before the acquiring the first distance between the first photographic subject and the photographic apparatus, the method further comprises: acquiring at least one shooting object in a shooting area; acquiring a fourth distance between each shooting object and the shooting device; and determining the first shooting object from the at least one shooting object according to at least one fourth distance.
- The method for switching the photographic subject according to claim 8, wherein if the at least one photographic subject includes a plurality of photographic subjects, the determining the first photographic subject from the at least one photographic subject according to the at least one fourth distance comprises: selecting a target distance with the smallest distance from a plurality of the fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
- The method of claim 8, wherein if the at least one object includes a plurality of objects, the determining the first object from the at least one object according to the at least one fourth distance comprises: selecting a target distance with a distance smaller than a preset threshold value from the plurality of fourth distances; and determining the shooting object corresponding to the target distance as the first shooting object.
- The photographic subject switching method according to claim 8, wherein the photographic region is a circular region and/or a polygonal region.
- The method of claim 1, wherein, when there are at least two second objects, after the object is switched from the first object to the second object, the method further comprises: determining shooting positions for shooting the at least two second shooting objects; and adjusting the shooting parameters of the shooting equipment according to the shooting position.
- The photographic subject switching method according to claim 12, characterized in that the photographing position is a center position between the at least two second photographic subjects.
- The photographic subject switching method according to any one of claims 1 to 13, wherein the acquiring the first distance between the first photographic subject and the photographing device comprises: acquiring the first distance between the first photographic subject and the photographing device through a trained first deep learning model.
- The photographic subject switching method according to claim 14, wherein the acquiring of the first distance between the first photographic subject and the photographing device through the trained first deep learning model comprises: acquiring a plurality of captured images of the first photographic subject; inputting the plurality of captured images into the first deep learning model, which obtains a disparity map of the first photographic subject from the pixel position relations among the plurality of captured images; and determining the first distance between the first photographic subject and the photographing device according to the disparity map and the position and orientation information of the photographing device.
- The photographic subject switching method according to claim 15, wherein the determining the first distance between the first photographic subject and the photographing device according to the disparity map and the position and orientation information of the photographing device comprises: converting the disparity map into a three-dimensional space with the photographing device as the coordinate origin, based on the position and orientation information of the photographing device; and acquiring the first distance between the photographing device and the first photographic subject in the three-dimensional space.
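Claims 15-16 describe converting a disparity map into a three-dimensional space centered on the photographing device and reading off the subject's distance there. A hedged sketch of that step for a single pixel, assuming a rectified stereo pair with standard pinhole parameters (focal lengths fx, fy, principal point cx, cy, and baseline) — none of which are specified by the patent:

```python
import math

def disparity_to_distance(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project the subject's pixel (u, v) into the camera frame and
    return its Euclidean distance from the photographing device."""
    depth = fx * baseline / disparity      # stereo relation: Z = f * B / d
    x = (u - cx) * depth / fx              # pinhole back-projection to 3D
    y = (v - cy) * depth / fy
    return math.sqrt(x * x + y * y + depth * depth)
```

For a subject at the image center the distance reduces to the stereo depth itself, and a larger disparity always yields a smaller distance.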
- The photographic subject switching method according to claim 14, wherein the first deep learning model is trained by: taking a plurality of images containing a subject to be photographed as the input of the first deep learning model and a disparity map of the subject to be photographed as its output, and training the network parameters of the first deep learning model; and determining that training is finished when a preset loss function falls below a preset value, thereby obtaining the trained first deep learning model.
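Claim 17's stopping rule — update the network parameters until a preset loss function falls below a preset value — can be illustrated with a deliberately tiny stand-in model. The one-parameter linear fit, the data, the learning rate, and the thresholds below are all illustrative assumptions, not the patent's model:

```python
# Toy stand-in for the training loop: update the parameter until the
# preset loss function drops below the preset value, then training is done.
data = [(x, 3.0 * x) for x in range(1, 11)]  # samples whose true weight is 3.0

w = 0.0              # "network parameter" being trained
preset_value = 1e-8  # preset value for the loss
lr = 0.01

loss = float("inf")
while loss >= preset_value:
    # preset loss function: mean squared error over the samples
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad   # gradient-descent update of the parameter
```

The loop terminates once the loss of the current parameter is below the preset value, at which point `w` has converged to the true weight.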
- An image processing method, comprising: acquiring a target image; and performing reconstruction processing on the target image to obtain a reconstructed image.
- The image processing method according to claim 18, wherein the performing reconstruction processing on the target image to obtain a reconstructed image comprises: performing style migration processing on the target image, with a painting image as the style migration template, to obtain the reconstructed image.
- The image processing method according to claim 19, wherein the performing style migration processing on the target image with the painting image as the style migration template to obtain the reconstructed image comprises: inputting the target image and the painting image into a trained second deep learning model for style migration processing to obtain the reconstructed image.
- The image processing method according to claim 18, wherein the performing reconstruction processing on the target image to obtain a reconstructed image comprises: segmenting the target image to obtain a first segmented image and a second segmented image; performing style migration processing on the first segmented image, with a painting image as the style migration template, to obtain a style migration image; and fusing the style migration image and the second segmented image to obtain the reconstructed image.
- The image processing method according to claim 21, wherein the segmenting the target image to obtain a first segmented image and a second segmented image comprises: segmenting the target image according to human skeleton information to obtain the first segmented image and the second segmented image.
- The image processing method according to claim 21, wherein the performing style migration processing on the first segmented image with the painting image as the style migration template to obtain a style migration image comprises: inputting the first segmented image and the painting image into a trained second deep learning model for style migration processing to obtain the style migration image.
- The image processing method according to claim 21, further comprising, after the segmenting the target image to obtain a first segmented image and a second segmented image: acquiring position information of the first segmented image in the target image; wherein the fusing the style migration image and the second segmented image to obtain the reconstructed image comprises: fusing the style migration image and the second segmented image according to the position information to obtain the reconstructed image.
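Claims 21 and 24 together describe segmenting out a region, stylizing it, and fusing it back into the rest of the image at its recorded position. A minimal sketch of that position-based fusion, using nested lists as a stand-in for image arrays and a constant fill in place of the style-migration model (all names here are hypothetical):

```python
def segment(image, top, left, height, width):
    """Cut out the first segmented image and record its position info."""
    region = [row[left:left + width] for row in image[top:top + height]]
    return region, (top, left)

def fuse(image, region, position):
    """Fuse the (stylized) region back into a copy of the image at the
    recorded position, leaving the rest of the image untouched."""
    top, left = position
    out = [row[:] for row in image]  # copy so the input stays unchanged
    for i, row in enumerate(region):
        out[top + i][left:left + len(row)] = row
    return out

image = [[0] * 4 for _ in range(4)]              # stand-in 4x4 target image
region, pos = segment(image, 1, 1, 2, 2)
stylized = [[9 for _ in row] for row in region]  # placeholder style migration
result = fuse(image, stylized, pos)
```

Only the 2x2 window cut out at (1, 1) is altered in `result`; everything outside it keeps its original pixel values.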
- The image processing method according to claim 18, wherein the performing reconstruction processing on the target image to obtain a reconstructed image comprises: performing change processing on a corresponding target change object in the target image, with a target template image as a reference, to obtain the reconstructed image.
- The image processing method according to claim 25, wherein the performing change processing on the corresponding target change object in the target image with the target template image as a reference to obtain the reconstructed image comprises: segmenting the target image to obtain a foreground segmented image and a background segmented image; and fusing the foreground segmented image onto the target template image to obtain the reconstructed image, wherein the target template image is a background template image.
- The image processing method according to claim 26, wherein the segmenting the target image to obtain a foreground segmented image and a background segmented image comprises: segmenting the target image according to human body skeleton information to obtain the foreground segmented image and the background segmented image.
- The image processing method according to claim 26, wherein the fusing the foreground segmented image onto the target template image to obtain the reconstructed image, the target template image being a background template image, comprises: inputting the foreground segmented image and the target template image into a trained third deep learning model for fusion to obtain the reconstructed image.
- The image processing method according to claim 25, wherein the performing change processing on the corresponding target change object in the target image with the target template image as a reference to obtain the reconstructed image comprises: segmenting the target image with the actual photographic subject as a reference to obtain a foreground segmented image and a background segmented image; fusing the target template image into the target change position of the actual photographic subject in the foreground segmented image to obtain a changed image; and fusing the changed image and the background segmented image to obtain the reconstructed image.
- The image processing method according to claim 29, wherein the fusing the target template image into the target change position of the actual photographic subject in the foreground segmented image to obtain a changed image comprises: inputting the foreground segmented image and the target template image into a trained fourth deep learning model, so that the target template image is fused into the target change position of the actual photographic subject in the foreground segmented image, to obtain the changed image.
- The image processing method according to any one of claims 18 to 30, wherein the target image is a frame image of a target video, and the method further comprises: acquiring a plurality of reconstructed images, the plurality of reconstructed images being a plurality of frame images corresponding to the same target video; and performing video synthesis processing on the plurality of reconstructed images to obtain a reconstructed video.
- The image processing method according to any one of claims 18 to 30, wherein the image processing method is performed by an application device, the application device being a camera, a video camera, a drone, an unmanned vehicle, an unmanned ship, an unmanned submarine, a handheld DV, or a surveillance device.
- The image processing method according to claim 18, further comprising, before the acquiring of the target image: acquiring a first distance between a first photographic subject and a photographing device; determining a second photographic subject to be switched according to the first distance; switching the photographic subject from the first photographic subject to the second photographic subject; and photographing, with the second photographic subject as the actual photographic subject, to obtain a target image or a target video; wherein, when the target video is obtained by photographing with the second photographic subject as the actual photographic subject, the acquiring of the target image comprises: acquiring the target image from the target video.
- The image processing method according to claim 33, wherein the determining the second photographic subject to be switched according to the first distance comprises: detecting at least one other photographic subject, other than the first photographic subject, in a photographing region; acquiring a second distance between each of the at least one other photographic subject and the photographing device; and determining the second photographic subject to be switched from the at least one other photographic subject according to the first distance and the at least one second distance.
- The image processing method according to claim 34, wherein the determining the second photographic subject to be switched from the at least one other photographic subject according to the first distance and the at least one second distance comprises: determining, from the at least one second distance, a target distance that is less than or equal to the first distance; and determining the target photographic subject corresponding to the target distance as the second photographic subject to be switched.
- The image processing method according to claim 34, wherein, if the at least one second distance includes a plurality of second distances, the determining the second photographic subject to be switched from the at least one other photographic subject according to the first distance and the at least one second distance comprises: determining, from the plurality of second distances, target distances that are less than or equal to the first distance; and, if there are at least two target distances, determining the target photographic subject corresponding to the smallest of the at least two target distances as the second photographic subject to be switched.
- The image processing method according to claim 34, wherein, if the at least one second distance includes a plurality of second distances, the determining the second photographic subject to be switched from the at least one other photographic subject according to the first distance and the at least one second distance comprises: determining, from the plurality of second distances, target distances that are less than or equal to the first distance; if there are at least two target photographic subjects corresponding to the target distances, acquiring a third distance between each target photographic subject and the first photographic subject; and determining the target photographic subject corresponding to the smallest third distance to the first photographic subject as the second photographic subject to be switched.
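Claims 35-37 above refine one selection rule: keep the other subjects whose distance to the photographing device does not exceed the first subject's distance, take the nearest of them, and break ties using the third distance to the first subject. A hedged sketch of that rule (the function name and the tuple layout are assumptions, not the patent's):

```python
def pick_second_subject(first_distance, others):
    """others: list of (subject_id, second_distance, third_distance), where
    third_distance is that subject's distance to the first subject.
    Returns the id of the subject to switch to, or None if no candidate."""
    candidates = [o for o in others if o[1] <= first_distance]
    if not candidates:
        return None
    # nearest to the device; ties broken by distance to the first subject
    best = min(candidates, key=lambda o: (o[1], o[2]))
    return best[0]

others = [("B", 2.0, 5.0), ("C", 2.0, 1.0), ("D", 4.0, 0.5)]
```

With a first distance of 3.0, subjects "B" and "C" qualify and tie at 2.0, so "C" wins on the smaller third distance; with a first distance of 1.0 no subject qualifies and nothing is switched.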
- The image processing method according to claim 33, wherein the determining the second photographic subject to be switched according to the first distance comprises: acquiring the position of the first photographic subject; and determining the second photographic subject to be switched according to the position of the first photographic subject and the first distance.
- The image processing method according to claim 38, wherein the determining the second photographic subject to be switched according to the position of the first photographic subject and the first distance comprises: determining a shooting position according to the position of the first photographic subject and the first distance; detecting a target photographic subject other than the first photographic subject appearing at the shooting position; and determining the target photographic subject as the second photographic subject to be switched, or determining both the target photographic subject and the first photographic subject as second photographic subjects to be switched.
- The image processing method according to claim 33, further comprising, before the acquiring of the first distance between the first photographic subject and the photographing device: acquiring at least one photographic subject in a photographing region; acquiring a fourth distance between each photographic subject and the photographing device; and determining the first photographic subject from the at least one photographic subject according to the at least one fourth distance.
- The image processing method according to claim 40, wherein, if the at least one photographic subject includes a plurality of photographic subjects, the determining the first photographic subject from the at least one photographic subject according to the at least one fourth distance comprises: selecting, from the plurality of fourth distances, the target distance that is smallest; and determining the photographic subject corresponding to the target distance as the first photographic subject.
- The image processing method according to claim 40, wherein, if the at least one photographic subject includes a plurality of photographic subjects, the determining the first photographic subject from the at least one photographic subject according to the at least one fourth distance comprises: selecting, from the plurality of fourth distances, a target distance smaller than a preset threshold; and determining the photographic subject corresponding to the target distance as the first photographic subject.
- The image processing method according to claim 40, wherein the photographing region is a circular region and/or a polygonal region.
- The image processing method according to claim 33, wherein, if there are at least two second photographic subjects, after the photographic subject is switched from the first photographic subject to the second photographic subjects, the method further comprises: determining a photographing position for photographing the at least two second photographic subjects; and adjusting photographing parameters of the photographing device according to the photographing position.
- The image processing method according to claim 44, characterized in that the photographing position is a central position between the at least two second photographing objects.
- The image processing method according to any one of claims 33 to 45, wherein the acquiring the first distance between the first photographic subject and the photographing device comprises: acquiring the first distance between the first photographic subject and the photographing device through a trained first deep learning model.
- The image processing method according to claim 46, wherein the acquiring of the first distance between the first photographic subject and the photographing device through the trained first deep learning model comprises: acquiring a plurality of captured images of the first photographic subject; inputting the plurality of captured images into the first deep learning model, which obtains a disparity map of the first photographic subject from the pixel position relations among the plurality of captured images; and determining the first distance between the first photographic subject and the photographing device according to the disparity map and the position and orientation information of the photographing device.
- The image processing method according to claim 47, wherein the determining the first distance between the first photographic subject and the photographing device according to the disparity map and the position and orientation information of the photographing device comprises: converting the disparity map into a three-dimensional space with the photographing device as the coordinate origin, based on the position and orientation information of the photographing device; and acquiring the first distance between the photographing device and the first photographic subject in the three-dimensional space.
- The image processing method according to claim 46, wherein the first deep learning model is trained by: taking a plurality of images containing a subject to be photographed as the input of the first deep learning model and a disparity map of the subject to be photographed as its output, and training the network parameters of the first deep learning model; and determining that training is finished when a preset loss function falls below a preset value, thereby obtaining the trained first deep learning model.
- A photographic subject switching apparatus, comprising: a distance acquisition module configured to acquire a first distance between a first photographic subject and a photographing device; an object determination module configured to determine a second photographic subject to be switched according to the first distance; and an object switching module configured to switch the photographic subject from the first photographic subject to the second photographic subject.
- An image processing apparatus, comprising: an image acquisition module configured to acquire a target image; and an image reconstruction module configured to perform reconstruction processing on the target image to obtain a reconstructed image.
- A photographing apparatus comprising a memory for storing a computer program and a processor that runs the computer program to cause the photographing apparatus to perform the photographic subject switching method according to any one of claims 1 to 17 and/or the image processing method according to any one of claims 18 to 49.
- A computer-readable storage medium storing a computer program for use in the photographing apparatus of claim 52.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/130136 WO2021134311A1 (en) | 2019-12-30 | 2019-12-30 | Method and apparatus for switching object to be photographed, and image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114930798A (en) | 2022-08-19 |
Family
ID=76686184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980103307.5A Pending CN114930798A (en) | 2019-12-30 | 2019-12-30 | Shooting object switching method and device, and image processing method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114930798A (en) |
WO (1) | WO2021134311A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117751580A (en) * | 2021-12-30 | 2024-03-22 | 深圳市大疆创新科技有限公司 | Unmanned aerial vehicle control method and device, unmanned aerial vehicle and storage medium |
CN115223028B (en) * | 2022-06-02 | 2024-03-29 | 支付宝(杭州)信息技术有限公司 | Scene reconstruction and model training method, device, equipment, medium and program product |
CN115223022B (en) * | 2022-09-15 | 2022-12-09 | 平安银行股份有限公司 | Image processing method, device, storage medium and equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110075019A1 (en) * | 2009-09-29 | 2011-03-31 | Canon Kabushiki Kaisha | Lens apparatus to be mounted onto camera and camera system with lens apparatus |
CN104660904A (en) * | 2015-03-04 | 2015-05-27 | 深圳市欧珀通信软件有限公司 | Shooting subject recognition method and device |
CN107580181A (en) * | 2017-08-28 | 2018-01-12 | 努比亚技术有限公司 | A kind of focusing method, equipment and computer-readable recording medium |
CN108805803A (en) * | 2018-06-13 | 2018-11-13 | 衡阳师范学院 | A kind of portrait style moving method based on semantic segmentation Yu depth convolutional neural networks |
CN109029363A (en) * | 2018-06-04 | 2018-12-18 | 泉州装备制造研究所 | A kind of target ranging method based on deep learning |
CN109190478A (en) * | 2018-08-03 | 2019-01-11 | 北京猎户星空科技有限公司 | The switching method of target object, device and electronic equipment during focus follows |
CN109889727A (en) * | 2019-03-14 | 2019-06-14 | 睿魔智能科技(深圳)有限公司 | Unmanned photographic subjects switching method and system, unmanned cameras and storage medium |
CN110310222A (en) * | 2019-06-20 | 2019-10-08 | 北京奇艺世纪科技有限公司 | A kind of image Style Transfer method, apparatus, electronic equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108063859B (en) * | 2017-10-30 | 2021-03-12 | 努比亚技术有限公司 | Automatic photographing control method, terminal and computer storage medium |
CN207491089U (en) * | 2017-11-09 | 2018-06-12 | 张超 | A kind of unmanned machine head device for intelligently switching |
CN108629747B (en) * | 2018-04-25 | 2019-12-10 | 腾讯科技(深圳)有限公司 | Image enhancement method and device, electronic equipment and storage medium |
CN110248081A (en) * | 2018-10-12 | 2019-09-17 | 华为技术有限公司 | Image capture method and electronic equipment |
CN110473141B (en) * | 2019-08-02 | 2023-08-18 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
2019
- 2019-12-30 CN application CN201980103307.5A (CN114930798A, en): active, Pending
- 2019-12-30 WO application PCT/CN2019/130136 (WO2021134311A1, en): active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2021134311A1 (en) | 2021-07-08 |
Similar Documents
Publication | Title |
---|---|
KR102362544B1 | Method and apparatus for image processing, and computer readable storage medium |
CN108764091B | Living body detection method and apparatus, electronic device, and storage medium |
CN107852533B | Three-dimensional content generation device and three-dimensional content generation method thereof |
WO2019149061A1 | Gesture- and gaze-based visual data acquisition system |
KR101893047B1 | Image processing method and image processing device |
US10559062B2 | Method for automatic facial impression transformation, recording medium and device for performing the method |
US11138432B2 | Visual feature tagging in multi-view interactive digital media representations |
KR20170031733A | Technologies for adjusting a perspective of a captured image for display |
CN110688914A | Gesture recognition method, intelligent device, storage medium and electronic device |
CN114930798A | Shooting object switching method and device, and image processing method and device |
JP7490784B2 | Augmented Reality Map Curation |
US9979894B1 | Modifying images with simulated light sources |
US20120194513A1 | Image processing apparatus and method with three-dimensional model creation capability, and recording medium |
US11138743B2 | Method and apparatus for a synchronous motion of a human body model |
US20230217001A1 | System and method for generating combined embedded multi-view interactive digital media representations |
CN112749611A | Face point cloud model generation method and device, storage medium and electronic equipment |
CN111935389B | Shot object switching method and device, shooting equipment and readable storage medium |
CN113822174B | Sight line estimation method, electronic device and storage medium |
CN111028318A | Virtual face synthesis method, system, device and storage medium |
CN110177216A | Image processing method, device, mobile terminal and storage medium |
US11954905B2 | Landmark temporal smoothing |
WO2012153868A1 | Information processing device, information processing method and information processing program |
CN112106347A | Image generation method, image generation equipment, movable platform and storage medium |
WO2022110059A1 | Video processing method, scene recognition method, terminal device, and photographic system |
CN112183155B | Method and device for establishing action posture library, generating action posture and identifying action posture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20220819 |