WO2019000427A1 - Image processing method and apparatus, and electronic device

Image processing method and apparatus, and electronic device

Info

Publication number
WO2019000427A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
target environment
foreground
background
Application number
PCT/CN2017/091245
Other languages
French (fr)
Chinese (zh)
Inventor
苏冠华
刘昂
毛曙源
胡骁
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2017/091245 (WO2019000427A1)
Priority to CN201780004688.2A (CN108521823A)
Publication of WO2019000427A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and electronic device.
  • Micro-motion photography is a kind of photography between photos and videos.
  • the pictures obtained by micro-motion photography are presented in the form of moving pictures.
  • The local area in the picture is dynamic while the other parts are static, which achieves the artistic effect of highlighting the dynamic area.
  • For example, one micro-motion map may show a road on which only the protagonist walks while all other people and objects are still, which highlights the protagonist.
  • In the prior art, micro-motion photography requires manually recording a video and then manually smearing a region of the recorded video.
  • The smeared region is dynamic while the other regions are static; after editing, the dynamic and static regions are saved together as an image file, which is the micro-motion map.
  • The shooting procedure in this approach is complicated, the micro-motion map cannot be generated as soon as shooting is completed, and the user has to process the captured video manually.
  • Such manual processing is time-consuming and imprecise, so the synthesis efficiency of micro-motion maps is low.
  • the embodiment of the invention discloses an image processing method, device and electronic device, which can automatically generate a micro-motion map and improve the synthesis efficiency of the micro-motion map.
  • the first aspect of the embodiment of the present invention discloses an image processing method, including:
  • acquiring a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images; extracting, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image; splicing the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and generating a micro-motion map, wherein each frame image in the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.
  • the second aspect of the embodiment of the present invention discloses an image processing apparatus, including:
  • An acquisition module configured to acquire a collection of images acquired by the aircraft for a target environment when flying along a specific trajectory, the image collection comprising a plurality of images;
  • An extracting module configured to extract, from the image set, a foreground portion of the target environment and a position of the foreground portion in a corresponding image
  • a splicing module configured to splice a background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment;
  • a synthesizing module configured to generate a micro-motion image, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image.
  • a third aspect of the embodiments of the present invention discloses an electronic device, including: a processor and a memory,
  • the memory is configured to store program instructions
  • the processor is configured to execute the program instructions stored by the memory, when the program instructions are executed, the processor is configured to:
  • acquire a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images; extract, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image; splice the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and generate a micro-motion map, wherein each frame image in the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.
  • a fourth aspect of an embodiment of the present invention discloses a computer program product, wherein the image processing method is executed when an instruction in the computer program product is executed by a processor.
  • a fifth aspect of an embodiment of the present invention discloses a storage medium, wherein when an instruction in the storage medium is executed by a processor of the electronic device, the electronic device is enabled to execute the image processing method described above.
  • In the embodiments of the present invention, the set of images collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its position in each corresponding image are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images included in the set; and finally the micro-motion map is generated.
  • A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
  • FIG. 1 is a schematic flow chart of a first embodiment of an image processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an image processing process disclosed in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of another image processing process disclosed in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of still another image processing process disclosed in an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of a second embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • Referring to FIG. 1, which is a schematic flowchart of the first embodiment of an image processing method according to an embodiment of the present invention.
  • the image processing method described in this embodiment includes but is not limited to the following steps:
  • the target environment is a user-specified shooting area for collecting image material for making a micro-motion image.
  • The image set includes multiple images of the target environment, and these images are correlated: the set comprises images continuously collected by the aircraft for the target environment, and images collected by the aircraft at adjacent positions, or at adjacent gimbal angles, partially overlap, so a panorama of the target environment, that is, an image of the whole target environment area, can be obtained from the multiple images in the set.
  • the plurality of images in the image set may be a plurality of photos collected by the aircraft for the target environment, or may be a plurality of frames in the video captured by the aircraft for the target environment, which is not limited in the embodiment of the present invention.
  • the image set may include two image subsets, and the two image subsets may be a first image set and a second image set.
  • The first image set and the second image set may be sets of images collected for the target environment during two flights of the aircraft: the first image set is collected for the target environment while the aircraft flies along a first trajectory for the first time, and the second image set is collected for the target environment while the aircraft flies along a second trajectory for the second time.
  • the image collected by the aircraft for the target environment includes a foreground portion and a background portion of the target environment.
  • The foreground portion of the target environment is the target object that embodies the subject of the shot when images are captured of the target environment, that is, the moving target corresponding to the dynamic area to be highlighted in the micro-motion map; the background portion of the target environment is the rest of the target environment.
  • Each image in the first image set does not include the foreground portion of the target environment.
  • At least some images in the second image set include the foreground portion of the target environment: either only some of the images in the second image set include it,
  • or all of the images in the second image set include the foreground portion of the target environment.
  • Note that, of the extra content that an image in the second image set contains relative to the corresponding image in the first image set, all of it may be the foreground portion of the target environment;
  • alternatively, only part of the extra content may be the foreground portion, and the remaining extra content may be the background portion of the target environment.
  • the first trajectory is the same as the second trajectory, that is, the flight path when the aircraft acquires the first image set is consistent with the flight path when the aircraft acquires the second image set.
  • the first image set and the second image set are taken by the aircraft using the same shooting strategy during the execution of the flight, and the shooting strategy includes at least one of a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  • the aircraft collects photos or videos through the shooting equipment carried on the gimbal.
  • the collection of images collected by the aircraft can be collected by a shooting device carried by the aircraft or by multiple shooting devices carried by the aircraft.
  • The first image set may be a set of images collected of the background portion of the target environment according to a preset shooting strategy while the aircraft flies along a preset route, so that the first image set includes only the background portion of the target environment;
  • the second image set may be a set of images acquired for the target environment according to the preset shooting strategy when the aircraft performs flight again according to the preset route, and the second image set includes a foreground of the target environment Part and background section.
  • For example, suppose the scene captured by the aircraft, that is, the target environment,
  • is a road on which a car is driving slowly while the other objects around it are stationary, and the generated micro-motion map needs to show the car moving while the rest of the scene stays still.
  • the first image set may also be a collection of images acquired for the background portion in the target environment according to the photographing instruction input by the user when the aircraft performs the flight task according to the flight control instruction manually input by the user.
  • the aircraft records parameters such as the trajectory of the flight of the aircraft, the position of the captured image, and the shooting angle of the gimbal.
  • The second image set is a set of images collected by the aircraft for the target environment according to the recorded parameters. Specifically, the second image set is collected for the target environment while the aircraft flies along the same flight trajectory used when collecting the first image set,
  • at the same shooting positions and with the same gimbal shooting angles as when the first image set was collected.
  • In other words, the shooting position and gimbal shooting angle of each image in the second image set match those of the corresponding image in the first image set.
  • A first image in the second image set may cover the same captured area as its corresponding image (for example, a second image) in the first image set; this can be understood as the first image and
  • the second image being essentially identical, except that the first image may contain the foreground portion of the target environment that the second image lacks, and the second image may show background of the target environment in the region that the foreground occupies in the first image.
  • The parameters used when capturing the first image and the second image, namely the shooting position and gimbal shooting angle, are the same, so the first image corresponds to the second image.
  • the terminal extracts, from the image set, a foreground portion of the target environment and a position of the foreground portion in a corresponding image.
  • A first image in the second image set may be differenced against the corresponding second image in the first image set to extract the foreground portion of the target environment in the first image.
  • Specifically, the difference of the first image relative to the second image may be obtained by subtracting the corresponding pixels of the first image and the second image; the difference is then filtered by connected-domain analysis to remove the parts of the difference that are not the foreground of the target environment, yielding the foreground portion of the target environment in the first image.
  • The position of the foreground portion of the target environment in the first image may be determined from the positions of the pixels belonging to the difference, and from that
  • the corresponding location of the foreground portion in the second image can be determined, as sketched below.
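The differencing and connected-domain filtering described above can be illustrated with a short sketch. The following Python/OpenCV snippet is a minimal illustration, not the patent's implementation; the function name, the threshold value of 25, and the minimum component area are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_foreground(background_img, subject_img, min_area=500):
    """Difference a background-only image against the corresponding image that
    contains the subject, then filter small connected components."""
    # Pixel-wise absolute difference between the two corresponding images
    diff = cv2.absdiff(cv2.cvtColor(subject_img, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY))
    # Non-zero (above-threshold) pixels are candidate foreground
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Connected-domain filtering: keep only components large enough to be
    # the moving subject, discarding small spurious differences
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    clean = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    # Foreground pixels and their position (bounding box) in the image
    ys, xs = np.nonzero(clean)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())) if xs.size else None
    foreground = cv2.bitwise_and(subject_img, subject_img, mask=clean)
    return foreground, clean, bbox
```

The same bounding box gives the corresponding location in the background-only image, since the two images cover the same captured area.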
  • the terminal splices a background portion of the target environment in at least part of the image of the image set to generate a panoramic view of a background portion of the target environment.
  • Stitching only part of the images can improve the efficiency of image stitching to some extent; although the resulting panorama of the background portion may then miss part of the background, as long as the missing part
  • does not noticeably affect the panorama of the background portion, this is within the range allowed by the embodiments of the present invention.
  • Specifically, the gimbal shooting attitude used when the aircraft captured each image in the first image set may be taken into account, and at least part of the images in the first image set may be stitched using a preset image stitching method of the terminal to generate a panorama of the background portion of the target environment.
  • the pan/tilt shooting posture includes a lateral angle, a longitudinal angle, a deflection angle, and the like.
  • The position of each image of the first image set within the generated background panorama may be determined from the gimbal shooting attitude recorded when the aircraft collected that image, which avoids having to analyze each image in the first image set to determine its position in the panorama and can further improve the efficiency of image stitching.
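A minimal stitching sketch is given below, assuming OpenCV's generic feature-based stitcher is an acceptable stand-in for the terminal's preset stitching method; the patent additionally suggests using the gimbal attitude to place images, which this generic sketch does not use.

```python
import cv2

def stitch_background(first_image_set):
    """Stitch the background-only images from the first image set into a
    panorama of the background portion of the target environment."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(first_image_set)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```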
  • the terminal generates a micromotion map.
  • each frame image in the micro-motion map is synthesized according to the panoramic view of the background portion and the position of the foreground portion of the target environment in the corresponding image.
  • Take a first image, an image in the second image set that includes the foreground portion of the target environment, as an example: first, according to the position of the foreground portion of the target environment in the first image, the corresponding position of that foreground portion in the second image corresponding to the first image is determined; then, according to that corresponding position in the second image,
  • the location area in the panorama of the background portion that corresponds to the foreground portion of the first image is determined; finally, the foreground portion of the target environment in the first image is inserted into the panorama of the background portion, at the location area corresponding to that foreground portion, to obtain a first target image.
  • The position in the panorama of the background portion corresponding to the foreground portion of the first image is the position, in the second image, of the background that the foreground portion of the first image occupies.
  • After each frame image of the micro-motion map is obtained, the frames are combined in a moving-image format to obtain the micro-motion map; in the resulting micro-motion map, the foreground portion of the target environment moves while the background portion of the target environment remains stationary. A compositing sketch follows the format note below.
  • the animation format includes but is not limited to a Graphics Interchange Format (GIF).
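The per-frame compositing and moving-image export might look like the following sketch. The data layout of `foregrounds` (patch, mask, and panorama-space position) and the use of `imageio` for GIF encoding are assumptions for illustration, not the patent's stated implementation.

```python
import imageio

def compose_micromotion(panorama, foregrounds, out_path="micromotion.gif", frame_seconds=0.1):
    """Paste each extracted foreground into its location area in the background
    panorama and save the resulting frames as an animated GIF.

    `foregrounds` is a list of (patch, mask, (x, y)) tuples: the foreground
    pixels, a binary mask of the same size, and the patch's top-left corner
    expressed in panorama coordinates."""
    frames = []
    for patch, mask, (x, y) in foregrounds:
        frame = panorama.copy()
        h, w = patch.shape[:2]
        roi = frame[y:y + h, x:x + w]
        # Overwrite only the foreground pixels; the background stays still.
        roi[mask > 0] = patch[mask > 0]
        frames.append(frame[:, :, ::-1].copy())  # BGR -> RGB for the GIF encoder
    imageio.mimsave(out_path, frames, duration=frame_seconds)
    return out_path
```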
  • For the first flight, the aircraft automatically flies along a specific trajectory and uses a specific shooting strategy to collect multiple images of the background portion of the target environment; alternatively, while the aircraft flies according to flight control commands input by the user, it captures
  • multiple images of the background portion of the target environment according to shooting instructions input by the user. If the aircraft flies according to flight control commands input by the user, the flight trajectory, the positions at which images are collected, and the gimbal shooting angles used when collecting them are recorded.
  • the second flight of the aircraft automatically flies along the trajectory of the first flight.
  • the foreground part of the target environment moves during the process of capturing images by the aircraft.
  • During the second flight, the aircraft collects multiple images of the target environment using the same parameters, such as the shooting positions and gimbal shooting angles, recorded during the first flight.
  • each image in the first image set is arranged in the order of shooting time.
  • the aircraft photographed the path, the hero, and the surrounding environment when the hero walked on the trail, and obtained a second image set including the foreground portion of the target environment.
  • The protagonist is the foreground portion of the target environment, that is, the dynamic target in the micro-motion map once it is generated. Since the shooting positions for the first image set and the second image set are the same, the images collected in the two flights are essentially the same and equal in number, except that at least some images in the second image set include
  • the foreground portion of the target environment.
  • the foreground portion of the target environment is first extracted by subtracting the image acquired twice by the aircraft.
  • The region where the difference is non-zero is the foreground portion of the target environment. Because there may be small errors between the images, connected-domain filtering may be applied to remove small spurious dynamic points in the difference map and obtain the foreground portion of the target environment.
  • the position of S i in the ith image may be determined according to the position of the pixel point corresponding to the foreground portion S i of the ith image in the second image set.
  • In this embodiment, the image set collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its position in each corresponding image are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images included in the set;
  • and finally the micro-motion map is generated. A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
  • the terminal acquires a set of images acquired by the aircraft along a specific trajectory for the target environment.
  • the aircraft flies once along a specific trajectory, and at the same time, the foreground portion of the target environment, that is, the photographic subject moves during the process of acquiring images by the aircraft.
  • The image set may be a set of images collected for the target environment according to a preset shooting strategy while the aircraft automatically flies along a preset specific trajectory; the image set may also be
  • a set of images collected for the target environment according to camera instructions input by the user while the aircraft flies along a specific trajectory corresponding to flight control instructions manually input by the user.
  • the collection of images includes a plurality of images and includes a foreground portion of the target environment and a background portion.
  • the terminal extracts, from the image set, a foreground portion of the target environment and a position of the foreground portion in a corresponding image.
  • all the images in the image set may include the foreground part of the target environment, or only part of the image may include the foreground part of the target environment.
  • Specifically, image recognition technology may be used to determine the third images in the image set that include the foreground portion of the target environment; then, for each third image, the foreground portion of the target environment in that image is extracted, and the position of the foreground portion in the third image is determined from the positions of its pixels.
  • the terminal splices a background portion of the target environment in at least part of the image of the image set to generate a panoramic view of a background portion of the target environment.
  • Specifically, the background portions of the target environment in the multiple images of the image set may be used to fill the blank region created in each image after the foreground portion of the target environment is removed, yielding, for each image, an image of the background portion of the target environment that contains no blank region. These images of the background portion without blank regions are then stitched to obtain a panorama of the background portion of the target environment without blank regions. Note that each image in the image set that does not include the foreground portion of the target environment is already an image of the background portion without a blank region.
  • Alternatively, only part of the blank regions in a panorama of the background portion that still contains blank regions may be filled, to obtain, for each frame of the micro-motion map,
  • a background panorama for that frame. Specifically, using the background portions of the multiple images in the image set, background filling is applied to the panorama containing blank regions everywhere except the blank area inside the location area corresponding to the i-th image of the image set, yielding the background panorama for the i-th frame of the micro-motion map.
  • In another implementation, the foreground portion of the target environment need not be removed from the images that contain it; instead, the background portions of the target environment in the multiple images of the image set
  • may directly cover the foreground portion of the target environment in each image, producing images of the background portion of the target environment without blank regions.
  • These images of the background portion without blank regions are then stitched to obtain a panorama of the background portion of the target environment without blank regions; one way to build such foreground-free images is sketched below.
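One hedged way to produce such foreground-free background images is a per-pixel temporal median over frames that have been aligned to a common reference, as sketched below; the alignment step itself and the source of the per-frame foreground masks are assumed to exist already.

```python
import numpy as np

def background_plate(aligned_images, foreground_masks):
    """Build a foreground-free image by filling the region occupied by the
    moving subject with background pixels taken from the other (aligned)
    images in the set.

    `aligned_images` are frames already warped into a common reference;
    `foreground_masks` mark the subject in each frame (255 = foreground)."""
    stack = np.stack(aligned_images).astype(np.float32)
    masks = np.stack([m > 0 for m in foreground_masks])
    # Mask out the subject, then take the per-pixel median over the frames
    # where that pixel shows background; this fills each blank region.
    stack[masks] = np.nan
    plate = np.nanmedian(stack, axis=0)
    # Any pixel covered by the subject in every frame stays blank (NaN) here.
    return np.nan_to_num(plate).astype(np.uint8)
```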
  • For example, the micro-motion effect to be generated is a person walking on a small road while the other objects are still. Since at least some images in the collected image set include the protagonist, the region corresponding to the protagonist can be removed from each image that includes the protagonist, yielding background-only images that contain blank regions; these
  • background-only images with blank regions and the other images that do not include the protagonist are then stitched into a panorama of the background portion that contains blank regions. Further, the blank regions of this panorama may be filled using the background portions of the multiple images in the image set to obtain a panorama of the background portion without blank regions.
  • Alternatively, only part of the blank regions in the panorama that contains blank regions may be filled; for example, if image i includes the person, then in that panorama everything within the location area corresponding to image i
  • except the blank area (that is, the area corresponding to the protagonist in image i) is filled, yielding the background panorama for the i-th frame of the micro-motion map.
  • the terminal generates a micromotion map.
  • Specifically, according to the positions of the foreground portions of the target environment in the respective images of the image set,
  • the foreground portions of the target environment are inserted into the panorama of the background portion; each panorama of the background portion into which a foreground portion has been inserted is one frame of the micro-motion map.
  • Alternatively, the foreground portion of the target environment in the i-th image of the image set is inserted into the blank area within the location area corresponding to the i-th image in the panorama of the background portion that contains blank regions, yielding
  • an i-th frame image of the micro-motion map that still contains blank areas; the blank areas of this i-th frame image are then background-filled using the background portions of at least two images in the image set to obtain
  • the i-th frame of the micro-motion map. After every frame of the micro-motion map is obtained, the frames are combined in an animation format to obtain the micro-motion map.
  • the animation format includes but is not limited to the GIF format.
  • Alternatively, the foreground portion of the target environment in the i-th image of the image set is inserted into
  • the blank area of the background panorama of the i-th frame of the micro-motion map, yielding the i-th frame of the micro-motion map.
  • Alternatively, if a panorama of the background portion of the target environment without blank regions has been obtained, the foreground portion of the target environment in the i-th image of the image set is inserted into that panorama at
  • the position corresponding to the i-th image, yielding the i-th frame of the micro-motion map; mapping that position into panorama coordinates is sketched below.
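Mapping the foreground's position from the i-th image into panorama coordinates is not spelled out above; a feature-matching homography, as in the following sketch, is one plausible way to do it (ORB features and RANSAC are illustrative choices, not the patent's stated method).

```python
import cv2
import numpy as np

def map_to_panorama(foreground_bbox, image_i, panorama):
    """Map the foreground's bounding box from the i-th image into panorama
    coordinates by estimating a homography between the two, so the subject
    can be inserted at the location area corresponding to that image."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(image_i, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    x0, y0, x1, y1 = foreground_bbox
    corners = np.float32([[x0, y0], [x1, y0], [x1, y1], [x0, y1]]).reshape(-1, 1, 2)
    # Corners of the foreground region expressed in panorama coordinates
    return cv2.perspectiveTransform(corners, H).reshape(-1, 2)
```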
  • the terminal may determine the foreground portion of the target environment according to the user's operation.
  • For example, the terminal may apply image recognition to the image set collected by the aircraft for the target environment
  • to identify all moving objects whose positions change within the image set, treat only the moving object selected by the user as the foreground portion of the target environment, and statically process the other moving objects as part of the background portion of the target environment.
  • In this case, the panorama may contain multiple regions occupied by the other moving objects; all or some of these regions may then be filled using the background portions of the multiple images in the image set, so that the panorama of the background portion looks cleaner.
  • a plurality of moving target objects may also be allowed in the generated micro-motion map, and the plurality of moving target objects are all used as foreground parts of the target environment.
  • The multiple moving targets may be processed together as a single foreground portion of the target environment, or they may be processed as separate foreground portions of the target environment.
  • For example, the captured scene is a person walking on a road while a car next to the person slowly follows and the other objects are still. After the image set collected for this scene is acquired and its images are analyzed, it can be determined that both the person and the car change position. If the user chooses to keep the person moving and the car still, only the person is treated as the foreground portion of the target environment, and the car is treated as part of the background portion.
  • Multiple instances of the car may then appear in the stitched panorama of the background portion, but they are the same vehicle; the terminal may keep an arbitrarily chosen one (or part of one) of these instances and, using
  • the background portions of the multiple images in the image set, background-fill the regions of the other instances, so that only one (or a few) stationary positions of the car appear in the generated micro-motion map while only the person walks. If the user chooses to keep both the person and the car moving, both are treated as foreground portions of the target environment; one way to keep only the user-selected object as foreground is sketched below.
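A small sketch of keeping only the user-selected moving object as the foreground; the tap-point interface and the mask representation are assumptions for illustration.

```python
def select_foreground(moving_masks, tap_xy):
    """Keep only the moving object the user selected (e.g. by tapping it in
    the app) as the foreground; all other moving objects are treated as
    background and will be filled or kept static.

    `moving_masks` is a list of binary masks, one per detected moving object;
    `tap_xy` is the user's selection point in image coordinates."""
    x, y = tap_xy
    selected = None
    background_objects = []
    for mask in moving_masks:
        if selected is None and mask[y, x] > 0:
            selected = mask                   # the user-chosen moving target
        else:
            background_objects.append(mask)   # statically processed objects
    return selected, background_objects
```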
  • In this embodiment, the image set collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its position in each corresponding image are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images included in the set;
  • and finally the micro-motion map is generated. A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
  • the image processing method and apparatus provided by the embodiments of the present invention may be applied to an intelligent terminal installed with an APP (application software), and the smart terminal may be selected as a smart phone, a tablet computer, or the like.
  • the smart terminal is communicably connected to at least one of a drone, a pan-tilt mounted on the drone, and an imaging device mounted on the pan-tilt.
  • the drone is communicatively coupled to the smart terminal. Specifically, the image including the target object captured by the imaging device can be transmitted back to the smart terminal through the wireless link.
  • The APP provides a micro-motion shooting mode in which the drone has a flight trajectory and/or shooting strategy that the user can select, or a fixed flight trajectory and/or shooting
  • strategy, or a flight trajectory and/or shooting strategy input by the user may be received through the APP.
  • the drone performs flight according to the flight trajectory in the mode, and/or controls the imaging device to perform imaging according to the shooting strategy in the mode.
  • Referring to FIG. 6, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, the image processing apparatus described in this embodiment includes:
  • the obtaining module 601 is configured to acquire a collection of images acquired by the aircraft for the target environment when flying along a specific trajectory, the image collection comprising a plurality of images.
  • the extracting module 602 is configured to extract a foreground portion of the target environment and a position of the foreground portion in the corresponding image from the image set.
  • the splicing module 603, configured to splice the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and
  • the synthesizing module 604 is configured to generate a micro-motion image, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image.
  • the multiple images in the image set are multiple photos collected by the aircraft, or multiple frames in the video captured by the aircraft.
  • the image set includes a first image set and a second image set; the first image set is a set of images acquired for the target environment when the aircraft flies along the first trajectory, The second set of images is a set of images acquired for the target environment when the aircraft is flying along a second trajectory; the first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
  • the first trajectory and the second trajectory are the same.
  • the first set of images and the second set of images are taken by the aircraft using the same shooting strategy in flight.
  • the shooting strategy includes at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  • Optionally, the extraction module 602 is configured to difference a first image in the second image set against the corresponding image in the first image set to obtain the foreground portion of the first image and the position of the foreground portion in the first image.
  • the splicing module 603 is specifically configured to splicing at least part of the images in the first image set to generate a panoramic view of a background portion of the target environment.
  • the synthesizing module 604 is specifically configured to insert a foreground portion of the first image into a panoramic view of the background portion according to a position of a foreground portion of the first image in the first image, to obtain a first a target image, the first target image being a frame image in the fine motion map.
  • Optionally, the image set is a set of images collected for the target environment while the aircraft flies once along the specific trajectory.
  • Optionally, the splicing module 603 is further configured to: cover the foreground portion of the target environment in each image with the background portion of the target environment from the multiple images in the image set to obtain images of the respective background portions; and
  • perform splicing on the images of the respective background portions to obtain a panorama of the background portion of the target environment.
  • the apparatus further includes: a determining module 605, configured to determine a foreground portion of the target environment according to a user operation.
  • Optionally, the splicing module 603 is further configured to use the background portion of the target environment in the multiple images included in the image set to fill the blank area created after the foreground portion of the target environment is removed from each image of the respective background portions.
  • Optionally, the synthesis module 604 is further configured to insert each foreground portion into the panorama of the background portion according to the position of that foreground portion in its image of the image set; each
  • panorama of the background portion into which a foreground portion has been inserted is one frame of the micro-motion map.
  • In this embodiment, the acquisition module 601 first acquires the image set collected by the aircraft for the target environment; the extraction module 602 then extracts, from the image set, the foreground portion of the target environment and its position in each corresponding image; the splicing module 603 generates,
  • from the images included in the image set, a panorama of the background portion of the target environment; and finally the synthesis module 604 generates the micro-motion map. A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device described in this embodiment includes: a processor 701, a communication interface 702, and a memory 703.
  • the processor 701, the communication interface 702, and the memory 703 can be connected by using a bus or other means.
  • the embodiment of the present application is exemplified by a bus connection.
  • The processor 701 may be a central processing unit (CPU), a network processor (NP), a graphics processing unit (GPU), or a combination of a CPU, GPU, and NP.
  • the processor 701 can also be a core for implementing communication identity binding in a multi-core CPU, a multi-core GPU, or a multi-core NP.
  • the processor 701 described above may be a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the above communication interface 702 can be used for transceiving information or signaling interactions, as well as receiving and transmitting signals.
  • The memory 703 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and a program required for at least one function (such as a text storage function or a location storage function); the data storage area may store data created according to the use of the device (such as image data or text data), and may include an application storage program, and so on. Further, the memory 703 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
  • the above memory 703 is also used to store program instructions.
  • When the processor 701 is a processor other than a hardware chip, it can invoke the program instructions stored in the memory 703 to implement the image processing method shown in the embodiments of the present application.
  • the processor 701 calls the program instructions stored in the memory 703 to perform the following steps:
  • acquiring a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images; extracting, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image; splicing the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and generating a micro-motion map, wherein each frame image in the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.
  • The method performed by the processor 701 in the embodiments of the present application is described from the perspective of the processor 701. It can be understood that, to perform the foregoing method, the processor 701 needs to cooperate with other hardware structures. The specific implementation process is not described or limited in detail in the embodiments of the present application.
  • the multiple images in the image set are multiple photos collected by the aircraft, or multiple frames in the video captured by the aircraft.
  • the image set includes a first image set and a second image set; the first image set is a set of images acquired for the target environment when the aircraft flies along the first trajectory, The second set of images is a set of images acquired for the target environment when the aircraft is flying along a second trajectory; the first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
  • the first trajectory and the second trajectory are the same.
  • the first set of images and the second set of images are taken by the aircraft using the same shooting strategy in flight.
  • the shooting strategy includes at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  • the processor 701 is specifically configured to: compare a first image in the second image set with a corresponding image in the first image set, to obtain a foreground portion of the first image and a location of the foreground portion in the first image.
  • the processor 701 is specifically configured to splicing at least part of the images in the first image set to generate a panoramic view of a background portion of the target environment.
  • Optionally, the processor 701 is specifically configured to insert the foreground portion of the first image into the panorama of the background portion according to the position of the foreground portion in the first image, to obtain a first target
  • image, the first target image being one frame of the micro-motion map.
  • the set of images is a collection of images acquired by the aircraft for the target environment along the particular trajectory.
  • Optionally, the processor 701 is further configured to: cover the foreground portion of the target environment in each image with the background portion of the target environment from the multiple images in the image set to obtain images of the respective background portions; and
  • perform splicing on the images of the respective background portions to obtain a panorama of the background portion of the target environment.
  • the processor 701 is further configured to determine a foreground portion of the target environment according to a user operation.
  • Optionally, the processor 701 is further configured to use the background portion of the target environment in the multiple images included in the image set
  • to fill the blank area created after the foreground portion of the target environment is removed from each image of the respective background portions.
  • Optionally, the processor 701 is further configured to insert each foreground portion into the panorama of the background portion according to the position of that foreground portion in its image of the image set; each panorama of the background portion into which a foreground portion has been inserted is one frame of the micro-motion map.
  • The processor 701, the communication interface 702, and the memory 703 described in the embodiments of the present invention may implement the implementations described in the first and second embodiments of the image processing method provided by the embodiments of the present invention,
  • and may also implement the implementation described in the embodiment of the image processing apparatus provided by the embodiments of the present invention; details are not described here again.
  • In this embodiment, the image set collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its position in each corresponding image are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images included in the set;
  • and finally the micro-motion map is generated. A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
  • the present invention also provides a computer readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the image processing method described in the above method embodiments.
  • the present invention also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method described in the above method embodiments.
  • the program can be stored in a computer readable storage medium, and the storage medium can include: Flash disk, Read-Only Memory (ROM), Random Access Memory (RAM), disk or optical disk.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)

Abstract

An image processing method and apparatus, and an electronic device. The method comprises: acquiring a set of images collected for a target environment while an aircraft is flying along a specific trajectory, wherein the set of images comprises a plurality of images; extracting, from the set of images, a foreground portion of the target environment and the respective positions of the foreground portion in the corresponding images; splicing background portions of the target environment in at least some images from the set of images to generate a panoramic image of the background portion of the target environment; and generating a micro-motion image, wherein each frame of the micro-motion image is synthesized according to the panoramic image of the background portion and the respective positions of the foreground portion in the corresponding images. By means of the embodiments of the present invention, a micro-motion image can be automatically generated according to a set of images collected by an aircraft, so that the synthesis efficiency of the micro-motion image is improved, and the automation and intelligence of micro-motion image synthesis are realized.

Description

Image processing method, apparatus, and electronic device

Technical Field

The present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and electronic device.

Background

To meet users' personalized needs and enrich the ways in which images can be presented, micro-motion photography (cinemagraph) has emerged. Micro-motion photography is a form of photography between photos and videos: the pictures it produces are presented as moving pictures in which a local area is dynamic while the remaining parts are static, achieving the artistic effect of highlighting the dynamic area. For example, one micro-motion map may show a road on which only the protagonist walks while all other people and objects are still, which highlights the protagonist.

In the prior art, micro-motion photography requires manually recording a video and then manually smearing a region of the recorded video; the smeared region is dynamic while the other regions are static, and after editing, the dynamic and static regions are saved together as an image file, which is the micro-motion map. However, this shooting procedure is complicated, the micro-motion map cannot be generated as soon as shooting is completed, and the user has to process the captured video manually; such manual processing is time-consuming and imprecise, so the synthesis efficiency of micro-motion maps is low.
Summary of the Invention

Embodiments of the present invention disclose an image processing method, apparatus, and electronic device that can automatically generate a micro-motion map and improve the synthesis efficiency of the micro-motion map.

A first aspect of the embodiments of the present invention discloses an image processing method, including:

acquiring a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images;

extracting, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image;

splicing the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and

generating a micro-motion map, where each frame of the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.

A second aspect of the embodiments of the present invention discloses an image processing apparatus, including:

an acquisition module, configured to acquire a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images;

an extraction module, configured to extract, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image;

a splicing module, configured to splice the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and

a synthesis module, configured to generate a micro-motion map, where each frame of the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.
A third aspect of the embodiments of the present invention discloses an electronic device, including a processor and a memory,

the memory being configured to store program instructions, and

the processor being configured to execute the program instructions stored in the memory; when the program instructions are executed, the processor is configured to:

acquire a set of images collected for a target environment while an aircraft flies along a specific trajectory, the image set comprising a plurality of images;

extract, from the image set, a foreground portion of the target environment and the position of the foreground portion in each corresponding image;

splice the background portion of the target environment in at least part of the images of the image set to generate a panorama of the background portion of the target environment; and

generate a micro-motion map, where each frame of the micro-motion map is synthesized according to the panorama of the background portion and the position of the foreground portion in the corresponding image.

A fourth aspect of the embodiments of the present invention discloses a computer program product; when the instructions in the computer program product are executed by a processor, the image processing method described above is performed.

A fifth aspect of the embodiments of the present invention discloses a storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image processing method described above.

In the embodiments of the present invention, the set of images collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its position in each corresponding image are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images included in the set; and finally the micro-motion map is generated. A micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of micro-motion maps and makes their synthesis automated and intelligent.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a first embodiment of an image processing method according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of an image processing process according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of another image processing process according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of still another image processing process according to an embodiment of the present invention;

FIG. 5 is a schematic flowchart of a second embodiment of an image processing method according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
DETAILED DESCRIPTION

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.

Referring to FIG. 1, which is a schematic flowchart of a first embodiment of an image processing method according to an embodiment of the present invention, the image processing method described in this embodiment includes, but is not limited to, the following steps:
S101. A terminal acquires a set of images collected by an aircraft for a target environment while the aircraft flies along a specific trajectory.
In this embodiment of the present invention, the target environment is a shooting area specified by the user and is used to collect the image material from which the micro-motion map is made. The image set includes a plurality of images of the target environment, and these images are correlated with one another: the set contains images captured consecutively by the aircraft for the target environment, and images captured at adjacent positions, or at adjacent gimbal angles, partially overlap one another. A panorama of the target environment, that is, an image of the entire area of the target environment, can therefore be obtained from the plurality of images in the set. The plurality of images may be a plurality of photos captured by the aircraft for the target environment, or a plurality of frames of a video captured by the aircraft for the target environment, which is not limited in this embodiment of the present invention.

In this embodiment of the present invention, the image set may include two image subsets, namely a first image set and a second image set. The first image set and the second image set may be the images collected for the target environment during two flights of the aircraft: the first image set is collected for the target environment when the aircraft flies along a first trajectory for the first time, and the second image set is collected for the target environment when the aircraft flies along a second trajectory for the second time.

In this embodiment of the present invention, the images collected by the aircraft for the target environment include a foreground portion and a background portion of the target environment. The foreground portion of the target environment is the target object that embodies the subject of the shot, that is, the moving target corresponding to the dynamic region to be highlighted in the micro-motion map; the background portion is everything in the target environment other than the foreground portion. None of the images in the first image set includes the foreground portion of the target environment, whereas at least some of the images in the second image set include the foreground portion; either only some, or all, of the images in the second image set may include it. It should be noted that, of the content present in an image of the second image set but absent from the corresponding image of the first image set, all of it may belong to the foreground portion of the target environment, or only part of it may belong to the foreground portion, in which case the remainder may belong to the background portion of the target environment.
In some feasible implementations, the first trajectory and the second trajectory are the same, that is, the flight route along which the aircraft collects the first image set is identical to the flight route along which it collects the second image set. Further, the first image set and the second image set are captured with the same shooting strategy during the two flights, where the shooting strategy includes at least one of a shooting position, a gimbal shooting angle, and a shooting frequency. The aircraft captures photos or video through a shooting device mounted on its gimbal, and the image sets may be collected by a single shooting device carried by the aircraft or by several shooting devices carried by the aircraft. It should be noted that the first trajectory and the second trajectory may also differ, for example a certain deviation between them may be allowed, and the shooting strategies of the two flights may also differ. The aircraft may fly twice along different trajectories but collect the first and second image sets with the same shooting strategy; it may fly twice along the same trajectory but collect them with different shooting strategies; or it may fly twice along different trajectories and collect them with different shooting strategies. Although the images of the first image set collected in these ways deviate to some extent from the images of the second image set, image recognition techniques can be used to extract the useful information from each image. Flying the same trajectory twice with the same shooting strategy is the preferred scheme of this embodiment of the present invention, and the following description is given for image sets collected with this preferred scheme.

In some feasible implementations, the first image set may be a set of images collected, according to a preset shooting strategy, for the background portion of the target environment while the aircraft flies along a preset route, so that the first image set includes only the background portion of the target environment; the second image set may be a set of images collected for the target environment, according to the same preset shooting strategy, while the aircraft flies along the preset route again, and the second image set includes both the foreground portion and the background portion of the target environment. For example, suppose the scene to be shot, that is, the target environment, is a car driving slowly along a road while all other objects around the car are stationary, and the generated micro-motion map is to show the car in motion. The car is then the foreground portion of the target environment, and the other objects around the car are the background portion. On the first flight the aircraft shoots the road and its surroundings while the car is not on the road, so the images of the resulting first image set do not include the car and the first image set contains only the background portion of the target environment. On the second flight the aircraft shoots the car and its surroundings while the car is driving along the road, so at least some of the images of the resulting second image set include the car, and the second image set contains both the background portion and the foreground portion of the target environment.

The first image set may also be a set of images collected for the background portion of the target environment in response to photographing instructions entered by the user while the aircraft performs a flight task according to flight control instructions entered manually by the user. While collecting the first image set, the aircraft records parameters such as its flight trajectory, the positions at which the images are captured, and the gimbal shooting angles. The second image set is then the set of images collected for the target environment according to these recorded parameters; specifically, the second image set is collected at the same shooting positions and with the same gimbal shooting angles as the first image set while the aircraft flies along the trajectory recorded when the first image set was collected.
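The embodiment does not prescribe any data format for these recorded parameters; the following Python sketch merely illustrates, under assumed field names, what a per-shot record for replaying the first flight might look like.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ShotRecord:
    position: Tuple[float, float, float]        # aircraft position when the shot was taken
    gimbal_angles: Tuple[float, float, float]   # gimbal pitch, yaw, roll at capture time
    timestamp: float                            # capture time, preserving the shooting frequency

def replay_plan(first_flight_records: List[ShotRecord]) -> List[ShotRecord]:
    # The second flight simply revisits the recorded positions with the same
    # gimbal attitude and timing, so the log of the first flight doubles as
    # the shooting plan for the second flight.
    return list(first_flight_records)
```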
In this way, because the shooting positions, gimbal shooting angles, and other parameters used by the aircraft to capture the images of the first image set are the same as those used to capture the images of the second image set, a first image in the second image set and its corresponding image in the first image set (for example, a second image) cover the same shooting area. The first image and the second image can be regarded as identical, except that the first image may additionally contain the foreground portion of the target environment, while the second image may show a little more of the background of the target environment than the first image does. Because the first image and the second image are captured at the same position and with the same gimbal shooting angle, the first image corresponds to the second image.

S102. The terminal extracts, from the image set, the foreground portion of the target environment and the positions of the foreground portion in the corresponding images.

In this embodiment of the present invention, because the first image set includes only the background portion of the target environment and the second image set includes the foreground portion, the foreground portion in a first image can be extracted by differencing the first image in the second image set against its corresponding second image in the first image set. Specifically, the part of the first image that differs from the second image may be extracted by subtracting the corresponding pixels of the two images; connected-component filtering is then applied to this differing part to filter out regions that do not belong to the foreground of the target environment, yielding the foreground portion of the target environment in the first image. During the differencing of the first image and the second image, the position of the foreground portion in the first image can be determined from the pixel coordinates of the differing part, and the corresponding position of that foreground portion in the second image can then be determined.
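As one possible, non-limiting realization of this differencing and connected-component filtering step, the following Python sketch (using OpenCV) assumes the corresponding images of the two sets are already aligned pixel-to-pixel, as is the case when the shooting positions and gimbal angles coincide; the function name and thresholds are illustrative assumptions rather than part of the disclosure.

```python
import cv2
import numpy as np

def extract_foreground(background_img, foreground_img, diff_thresh=30, min_area=500):
    """Return a binary mask of the moving subject and its bounding box.

    background_img: frame from the first (background-only) image set.
    foreground_img: corresponding frame from the second image set.
    """
    # Per-pixel absolute difference, collapsed to a single channel.
    diff = cv2.absdiff(foreground_img, background_img)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

    # Pixels that changed by more than the threshold are foreground candidates.
    _, mask = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY)

    # Connected-component filtering: drop small blobs caused by noise or
    # slight misalignment between the two flights.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    clean = np.zeros_like(mask)
    for i in range(1, num):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255

    # Position of the foreground in the image, as a bounding box (x0, y0, x1, y1).
    ys, xs = np.nonzero(clean)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
    return clean, bbox
```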
S103. The terminal stitches the background portions of the target environment in at least some of the images of the image set to generate a panorama of the background portion of the target environment.

In this embodiment of the present invention, because the first image set includes only the background portion of the target environment, the panorama of the background portion can be obtained by stitching only images from the first image set. Specifically, all the images of the first image set are stitched with an image stitching method preset on the terminal, and the overlapping regions of the stitched result are removed to obtain the panorama of the background portion of the target environment. Alternatively, only some of the images of the first image set are stitched with the preset image stitching method and the overlapping regions of the stitched result are removed to obtain the panorama of the background portion. In the latter case it is not necessary to stitch every image of the first image set, which improves the efficiency of stitching to some extent; although the resulting panorama may be missing part of the background, this is acceptable within this embodiment of the present invention as long as the missing background does not unduly affect the panorama of the background portion.
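The image stitching method preset on the terminal is not specified by the embodiment; as one possible sketch, OpenCV's high-level stitcher can assemble the background-only images into a panorama.

```python
import cv2

def stitch_background(background_images):
    """background_images: list of BGR frames from the background-only image set."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(background_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```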
In some feasible implementations, the gimbal shooting attitude with which the aircraft captured each image of the first image set may be used together with the image stitching method preset on the terminal to stitch at least some of the images of the first image set into the panorama of the background portion, where the gimbal shooting attitude includes a lateral angle, a longitudinal angle, a deflection angle, and the like. In this way, the position of each image of the first image set within the generated panorama can be determined from the gimbal shooting attitude at capture time, so that the images themselves do not have to be analyzed to determine their positions in the panorama, which further improves the efficiency of stitching.

S104. The terminal generates the micro-motion map.

In this embodiment of the present invention, each frame of the micro-motion map is synthesized from the panorama of the background portion and the position of the foreground portion of the target environment in the corresponding image. Specifically, for each first image in the second image set, a first image being an image of the second image set that includes the foreground portion of the target environment: first, the position of the foreground portion in the first image is used to determine the corresponding position of that foreground portion in the second image corresponding to the first image; then, that corresponding position in the second image is used to determine the region of the panorama of the background portion that corresponds to the foreground portion of the first image; finally, the foreground portion of the first image is inserted into a copy of the panorama of the background portion at that region, yielding a first target image, which is one frame of the micro-motion map to be generated. Proceeding in the same way, the foreground portions of the images of the second image set that include the foreground portion of the target environment are each inserted into a copy of the panorama of the background portion, yielding every frame of the micro-motion map to be generated. In each resulting frame of the micro-motion map, only one region corresponds to the foreground portion of the target environment, and the background of every frame is one and the same panorama of the target environment.
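A minimal composition sketch of this insertion step follows, assuming the pixel offset of each first image's field of view inside the background panorama is known (for example from the stitching transforms or the recorded gimbal pose); all names are illustrative.

```python
def compose_frame(panorama, frame_with_subject, subject_mask, offset):
    """Paste the masked subject from one captured frame into a copy of the panorama.

    panorama:           background panorama as a NumPy array (H_p x W_p x 3, BGR).
    frame_with_subject: frame from the second image set that contains the subject.
    subject_mask:       binary mask of the subject in that frame (H x W, 255 = subject).
    offset:             (off_x, off_y) of the frame's top-left corner inside the panorama.
    """
    off_x, off_y = offset
    h, w = subject_mask.shape
    out = panorama.copy()
    region = out[off_y:off_y + h, off_x:off_x + w]   # a view into the copy
    # Copy only subject pixels; everything else keeps the panorama's background.
    region[subject_mask > 0] = frame_with_subject[subject_mask > 0]
    return out
```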
For example, consider again the target environment in which a car drives slowly along a road while all other objects around it are stationary. A first image is an image that includes the car; suppose that in the first image the car is at a crossroads and covers the lower half of a mailbox. The car is then inserted into the panorama of the background portion of the target environment according to its position in the first image, that is, the car is inserted at the position of that crossroads in the panorama so that it again covers the lower half of the mailbox, yielding one frame of the micro-motion map. Here, the position in the panorama of the background portion that corresponds to the foreground portion of the first image is the position, within the second image as placed in the panorama, that corresponds to the foreground portion of the first image. After every frame of the micro-motion map has been obtained, the frames are combined in an animated-image format to obtain the micro-motion map, in which the foreground portion of the target environment moves while the background portion remains still. The animated-image format includes, but is not limited to, the Graphics Interchange Format (GIF).
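A small sketch of the final assembly into an animated GIF using Pillow; the library choice and frame timing are assumptions, since the embodiment only requires some animated-image format such as GIF.

```python
from PIL import Image

def save_micromotion_gif(frames_rgb, path="micromotion.gif", ms_per_frame=100):
    """frames_rgb: list of H x W x 3 uint8 arrays in RGB order (convert from BGR first)."""
    images = [Image.fromarray(f) for f in frames_rgb]
    images[0].save(path, save_all=True, append_images=images[1:],
                   duration=ms_per_frame, loop=0)   # loop=0 means loop forever
```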
For example, on its first pass the aircraft flies automatically along a specific trajectory and captures multiple images of the background portion of the target environment with a specific shooting strategy; alternatively, the aircraft flies according to flight control instructions entered by the user and captures multiple images of the background portion in response to shooting instructions entered by the user. If the flight is performed according to the user's flight control instructions, the flight trajectory, the positions at which images are captured, and the gimbal shooting angles at capture time are recorded. The captured images of the background portion of the target environment form the first image set {Ii}, i = 1..N, where i is a positive integer.

On its second pass the aircraft flies automatically along the trajectory of the first flight while the foreground portion of the target environment moves during the capture, and the aircraft captures images of the target environment at the shooting positions and gimbal shooting angles of the first pass. The captured images of the target environment form the second image set {I'i}, i = 1..N, at least some of which include the foreground portion of the target environment. Further, the images of the first image set are arranged in order of capture time and, likewise, the images of the second image set are arranged in order of capture time, so that image Ii corresponds to image I'i, that is, Ii and I'i cover the same shooting area. For example, suppose the desired micro-motion map shows a person walking along a path while everything else is still. On the first pass the aircraft shoots only the background portion of the target environment, that is, the path and its surroundings while the protagonist is not walking on it, yielding a first image set that does not include the foreground portion. On the second pass the aircraft shoots the path, the protagonist, and the surroundings while the protagonist walks along the path, yielding a second image set that includes the foreground portion. The protagonist is the foreground portion of the target environment, that is, the moving target in the generated micro-motion map. Because the shooting positions and gimbal shooting angles of the two passes are the same, the images captured on the two passes essentially coincide and their numbers are equal, except that at least some of the images of the second image set include the foreground portion of the target environment.

After the first image set and the second image set collected for the target environment have been acquired, the foreground portion of the target environment is first extracted by subtracting the images captured on the two passes. Referring to FIG. 2, the corresponding pixels of the i-th image of the second image set and the i-th image of the first image set are differenced to obtain the difference maps {Ki = Ii - I'i}, i = 1..N. The regions of a difference map whose pixels are non-zero are the foreground portion of the target environment; because small discrepancies may exist between the images, connected-component filtering is applied to the difference map to filter out small spurious moving points, yielding the foreground portions {Si}, i = 1..N, where Si is the foreground portion of the i-th image of the second image set. Further, the position of Si in the i-th image can be determined from the pixel coordinates of Si.

Referring to FIG. 3, at least some of the images of the first image set are then stitched with a preset image stitching algorithm, and the overlapping regions of the stitched result are removed, yielding the panorama P of the background portion of the target environment. Next, the foreground portion Si of the i-th image of the second image set is inserted at the corresponding position in a copy of the panorama of the background portion; referring to FIG. 4, because image I'i of the second image set corresponds to image Ii of the first image set and the panorama of the background portion is stitched from the images Ii of the first image set, Si is inserted at the position of Ii within P, yielding the i-th frame Ci of the micro-motion map. Repeating this yields every frame {Ci}, i = 1..N, of the micro-motion map. Finally, the frames {Ci}, i = 1..N, are combined in the GIF format to obtain the micro-motion map.
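Tying the worked example together, the following sketch reuses the helper functions sketched earlier (extract_foreground, stitch_background, compose_frame, save_micromotion_gif); the per-frame offsets inside the panorama P are assumed to be available, for example recovered during stitching or from the recorded gimbal poses.

```python
def build_micromotion(first_set, second_set, offsets, out_path="micromotion.gif"):
    """first_set:  background-only images {Ii};  second_set: images {I'i} with the subject;
    offsets: (off_x, off_y) of each Ii inside the stitched panorama P."""
    panorama = stitch_background(first_set)
    frames = []
    for bg, fg, off in zip(first_set, second_set, offsets):
        mask, _ = extract_foreground(bg, fg)                    # Si and its position
        frames.append(compose_frame(panorama, fg, mask, off))   # Ci
    # Frames are BGR (OpenCV convention); the GIF writer above expects RGB.
    save_micromotion_gif([f[:, :, ::-1] for f in frames], path=out_path)
```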
In this embodiment of the present invention, the image set collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its positions in the corresponding images are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images in the set; and the micro-motion map is finally generated. Because the micro-motion map can be generated automatically from the images collected by the aircraft, the efficiency of micro-motion map synthesis is improved and the synthesis is automated and intelligent.
Referring to FIG. 5, which is a schematic flowchart of a second embodiment of an image processing method according to an embodiment of the present invention, the image processing method described in this embodiment includes, but is not limited to, the following steps:

S501. A terminal acquires a set of images collected by an aircraft for a target environment during a single flight along a specific trajectory.

In this embodiment of the present invention, the aircraft flies along the specific trajectory once while the foreground portion of the target environment, that is, the photographic subject, moves during the capture. The image set may be collected for the target environment according to a preset shooting strategy while the aircraft automatically flies the preset specific trajectory, or it may be collected in response to photographing instructions entered by the user while the aircraft flies a specific trajectory corresponding to flight control instructions entered manually by the user. The image set includes a plurality of images and contains both the foreground portion and the background portion of the target environment.

S502. The terminal extracts, from the image set, the foreground portion of the target environment and the positions of the foreground portion in the corresponding images.

In this embodiment of the present invention, either all of the images in the set or only some of them may include the foreground portion of the target environment. Image recognition techniques may first be used to determine the third images, that is, the images of the set that include the foreground portion; then, for each third image, the foreground portion of the target environment is extracted and its position in the third image is determined from the coordinates of its pixels.

S503. The terminal stitches the background portions of the target environment in at least some of the images of the image set to generate a panorama of the background portion of the target environment.

In this embodiment of the present invention, because at least some of the images of the set include the foreground portion of the target environment, the foreground portion may first be removed from each image that contains it, for example cut out with a matting technique, yielding images that contain only the background portion of the target environment; in at least some of these images a blank region remains where the foreground portion was removed. The images of the background portion are then stitched to obtain a panorama of the background portion of the target environment that contains blank regions. It should be noted that each image of the set that does not include the foreground portion is already an image containing only the background portion of the target environment.

In some feasible implementations, the blank regions left in the background images by removing the foreground portion may first be filled using the background portions of the other images in the set, yielding background images without blank regions; these are then stitched to obtain a panorama of the background portion of the target environment without blank regions. It should be noted that each image of the set that does not include the foreground portion is already a background image without a blank region.
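The embodiment does not fix how the blank regions are filled from the other images of the set; the following sketch shows one possible fill, assuming the relevant frames (or crops) have already been registered into a common coordinate frame so that corresponding pixels line up. All names are illustrative.

```python
import numpy as np

def fill_from_other_frames(target, hole_mask, donor_frames, donor_masks):
    """Fill the blank region left by the removed subject with background pixels
    that are visible (not covered by the subject) in other, registered frames.

    target:       image with a hole (H x W x 3).
    hole_mask:    255 where the subject was removed from `target`.
    donor_frames: other frames of the set, registered to `target`.
    donor_masks:  the subject masks of those frames (255 = subject)."""
    out = target.copy()
    remaining = hole_mask > 0
    for donor, dmask in zip(donor_frames, donor_masks):
        usable = remaining & (dmask == 0)   # hole pixels where this donor shows background
        out[usable] = donor[usable]
        remaining = remaining & ~usable
        if not remaining.any():
            break
    return out, remaining                    # `remaining` marks pixels still unfilled
```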
In some feasible implementations, after the panorama of the background portion containing blank regions has been obtained, only some of its blank regions are filled, producing a dedicated background panorama for each frame of the micro-motion map. Specifically, using the background portions of the images in the set, every blank region of the panorama is filled except the blank region lying in the area that corresponds to the i-th image of the set, yielding the background panorama for the i-th frame of the micro-motion map.

In some feasible implementations, the foreground portion need not be removed from the images that contain it; instead, the background portions of other images in the set are used directly to cover the foreground portion in each image, yielding background images without blank regions. These background images are then stitched to obtain a panorama of the background portion of the target environment without blank regions.

For example, suppose again that the desired micro-motion map shows a person walking along a path while everything else is still. Because at least some of the collected images include the protagonist, the region corresponding to the protagonist can be removed from each such image, yielding background-only images that contain blank regions; these images, together with the images that do not include the protagonist, are stitched into a panorama of the background portion that contains blank regions. Further, the blank regions of this panorama can be filled using the background portions of the images in the set, yielding a panorama of the background portion without blank regions. Alternatively, only some of the blank regions are filled: for example, if image i includes the protagonist, every blank region of the panorama except the one in the area corresponding to image i (that is, the region corresponding to the protagonist in image i) is filled, yielding the background panorama for the i-th frame of the micro-motion map.

S504. The terminal generates the micro-motion map.

In this embodiment of the present invention, according to the positions of the foreground portion in the individual images of the set, the foreground portions are inserted into copies of the panorama of the background portion, and each panorama into which a foreground portion has been inserted is one frame of the micro-motion map. Specifically, the foreground portion of the i-th image of the set is inserted into the blank region, within the panorama containing blank regions, that lies in the area corresponding to the i-th image, yielding the i-th frame of the micro-motion map still containing blank regions; the remaining blank regions of this frame are then filled using the background portions of at least two images of the set, yielding the i-th frame of the micro-motion map. After every frame has been obtained, the frames are combined in an animated-image format to obtain the micro-motion map, where the format includes, but is not limited to, the GIF format.

In some feasible implementations, if the background panorama for the i-th frame has already been obtained, the foreground portion of the i-th image of the set is inserted into the blank region of that background panorama to yield the i-th frame of the micro-motion map. If a panorama of the background portion without blank regions has been obtained, the foreground portion of the i-th image of the set is inserted at the position corresponding to the i-th image in that panorama to yield the i-th frame of the micro-motion map.

In some feasible implementations, before the foreground portion of the target environment and its positions in the corresponding images are extracted from the image set, the terminal may determine the foreground portion of the target environment according to a user operation. When images are captured for the target environment, several moving objects may be present, while only one of them needs to be highlighted in the generated micro-motion map. In this case, after acquiring the image set collected by the aircraft for the target environment, the terminal may use image recognition techniques to identify all objects whose positions change across the image set, take only the target moving object selected by the user as the foreground portion of the target environment, and treat the other moving objects as part of the background portion, rendering them static. The panorama of the background portion may then contain regions belonging to several of these other moving objects, and all or some of those regions can be filled using the background portions of the images in the set so that the panorama of the background portion looks better.
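A small sketch of keeping only the user-selected moving object as the foreground: all moving regions are detected first (for example with the differencing step sketched earlier), and only the connected component containing the point the user tapped is kept. The tap coordinates and parameter names are assumptions made for illustration.

```python
import cv2
import numpy as np

def select_target_foreground(motion_mask, tap_xy):
    """motion_mask: binary mask (255) of all moving objects in one frame.
    tap_xy: (x, y) pixel the user tapped to choose the subject."""
    num, labels = cv2.connectedComponents(motion_mask, connectivity=8)
    x, y = tap_xy
    chosen = labels[y, x]
    if chosen == 0:                      # the tap landed on the static background
        return np.zeros_like(motion_mask)
    # Keep only the chosen object; other moving objects are treated as background.
    return np.where(labels == chosen, 255, 0).astype(np.uint8)
```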
In some feasible implementations, several moving target objects may also be allowed to appear in the generated micro-motion map, in which case all of them are taken as the foreground portion of the target environment. The multiple moving targets may be processed as a single foreground portion of the target environment, or as separate foreground portions; for the specific processing, refer to the descriptions in the method embodiments above, and details are not repeated here.

For example, suppose the scene being shot is a person walking along a road while a car slowly follows the protagonist and all other objects are stationary. After acquiring the image set collected for this scene and analyzing its images, the terminal can determine that both the person and the car change position. If the user chooses to keep the person moving and the car still, only the person is taken as the foreground portion of the target environment and the car is treated as part of the background portion. Several instances of the car then appear in the stitched panorama of the background portion even though they are the same car; the terminal may keep the region of any one car, or of some of the cars, and fill the regions of the other cars with the background portions of the images in the set, so that the generated micro-motion map shows only one car, or a few stationary cars, while only the person walks. If the user chooses to keep both the person and the car moving, both are processed as the foreground portion of the target environment.

In this embodiment of the present invention, the image set collected by the aircraft for the target environment is first acquired; the foreground portion of the target environment and its positions in the corresponding images are then extracted from the image set; a panorama of the background portion of the target environment is generated from the images in the set; and the micro-motion map is finally generated. Because the micro-motion map can be generated automatically from the images collected by the aircraft, the efficiency of micro-motion map synthesis is improved and the synthesis is automated and intelligent.

In some feasible implementations, the image processing method and apparatus provided by the embodiments of the present invention may be applied to a smart terminal on which an APP (application software) is installed; the smart terminal may be a smartphone, a tablet computer, or the like. To obtain the real-time video stream captured aerially by an unmanned aerial vehicle, the smart terminal is communicatively connected to at least one of the unmanned aerial vehicle, the gimbal mounted on it, and the imaging device mounted on the gimbal. In a specific implementation, the unmanned aerial vehicle is communicatively connected to the smart terminal, and the images of the target object captured by the imaging device can be transmitted back to the smart terminal over a wireless link.

In some feasible implementations, the APP provides a micro-motion shooting mode. In this mode the unmanned aerial vehicle has flight trajectories and/or shooting strategies that the user can choose from, or a fixed flight trajectory and/or shooting strategy, or the flight trajectory and/or shooting strategy entered by the user can be received through the APP. When the user selects this mode, the unmanned aerial vehicle flies along the trajectory of the mode, and/or the imaging device is controlled to shoot according to the shooting strategy of the mode.
Referring to FIG. 6, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, the image processing apparatus described in this embodiment includes:

an obtaining module 601, configured to acquire a set of images collected by an aircraft for a target environment while the aircraft flies along a specific trajectory, the image set including a plurality of images;

an extraction module 602, configured to extract, from the image set, the foreground portion of the target environment and the positions of the foreground portion in the corresponding images;

a stitching module 603, configured to stitch the background portions of the target environment in at least some of the images of the image set to generate a panorama of the background portion of the target environment; and

a synthesis module 604, configured to generate a micro-motion map, where each frame of the micro-motion map is synthesized from the panorama of the background portion and the position of the foreground portion in its corresponding image.

In this embodiment of the present invention, the plurality of images in the image set are a plurality of photos captured by the aircraft, or a plurality of frames of a video captured by the aircraft.

In some feasible implementations, the image set includes a first image set and a second image set; the first image set is collected for the target environment while the aircraft flies along a first trajectory, and the second image set is collected for the target environment while the aircraft flies along a second trajectory; the first image set does not include the foreground portion, and the second image set includes the foreground portion.

In some feasible implementations, the first trajectory and the second trajectory are the same, and the first image set and the second image set are captured in flight with the same shooting strategy, where the shooting strategy includes at least one of a shooting position, a gimbal shooting angle, and a shooting frequency.

The extraction module 602 is specifically configured to difference a first image in the second image set against the corresponding image of the first image in the first image set to obtain the foreground portion of the first image and the position of the foreground portion in the first image.

The stitching module 603 is specifically configured to stitch at least some of the images in the first image set to generate the panorama of the background portion of the target environment.

The synthesis module 604 is specifically configured to insert the foreground portion of the first image into the panorama of the background portion according to the position of the foreground portion in the first image to obtain a first target image, the first target image being one frame of the micro-motion map.
In some feasible implementations, the image set is a set of images collected for the target environment during a single flight of the aircraft along the specific trajectory.

The stitching module 603 is further configured to:

remove the foreground portion of the target environment from the images of the image set to obtain images of the background portion of the target environment; and

stitch the images of the background portion to obtain the panorama of the background portion of the target environment.

In some feasible implementations, the apparatus further includes a determining module 605, configured to determine the foreground portion of the target environment according to a user operation.

The stitching module 603 is further configured to fill, using the background portions of the target environment in the plurality of images of the image set, the blank regions produced in the images of the background portion by removing the foreground portion of the target environment.

The synthesis module 604 is further configured to insert the foreground portions into copies of the panorama of the background portion according to the positions of the foreground portion in the individual images of the image set, each panorama into which a foreground portion has been inserted being one frame of the micro-motion map.

It can be understood that the functions of the functional modules of the image processing apparatus provided by this embodiment of the present invention may be implemented according to the methods of the first and second embodiments of the image processing method described above; for the specific implementation, refer to the related descriptions of the method embodiments, and details are not repeated here.

In this embodiment of the present invention, the obtaining module 601 first acquires the image set collected by the aircraft for the target environment; the extraction module 602 then extracts, from the image set, the foreground portion of the target environment and its positions in the corresponding images; the stitching module 603 generates a panorama of the background portion of the target environment from the images in the set; and the synthesis module 604 finally generates the micro-motion map. Because the micro-motion map can be generated automatically from the images collected by the aircraft, the efficiency of micro-motion map synthesis is improved and the synthesis is automated and intelligent.
Referring to FIG. 7, which is a schematic structural diagram of an electronic device according to an embodiment of the present invention, the electronic device described in this embodiment includes a processor 701, a communication interface 702, and a memory 703, where the processor 701, the communication interface 702, and the memory 703 may be connected by a bus or in other ways; in the embodiments of the present application, a bus connection is taken as an example.

The processor 701 may be a central processing unit (CPU), a network processor (NP), a graphics processing unit (GPU), or a combination of a CPU, a GPU, and an NP. The processor 701 may also be a core, within a multi-core CPU, multi-core GPU, or multi-core NP, used to implement communication identity binding.

The processor 701 may also be a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.

The communication interface 702 may be used for the exchange of information or signaling and for the reception and transmission of signals. The memory 703 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the programs required for at least one function (for example, a text storage function or a position storage function), and the data storage area may store data created through use of the apparatus (for example, image data or text data) and may include application programs and the like. In addition, the memory 703 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

The memory 703 is also configured to store program instructions. When the processor 701 is not a hardware chip, it may invoke the program instructions stored in the memory 703 to implement the image processing method shown in the embodiments of the present application.

Specifically, the processor 701 invokes the program instructions stored in the memory 703 to perform the following steps:
获取飞行器沿特定轨迹飞行时针对目标环境采集的图像集合,所述图像集合包括多张图像;Acquiring a collection of images acquired for the target environment when the aircraft is flying along a particular trajectory, the collection of images comprising a plurality of images;
从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置;Extracting a foreground portion of the target environment and a position of the foreground portion in the corresponding image from the image set;
对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接,生成所述目标环境的背景部分的全景图;Splicing a background portion of the target environment in at least a portion of the image set to generate a panoramic view of a background portion of the target environment;
生成微动图,其中,所述微动图中的各帧图像是根据所述背景部分的全景图以及所述前景部分分别在对应图像中的位置合成的。A micro-motion map is generated, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image, respectively.
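A minimal end-to-end sketch of the four steps above is given below for illustration only. It is written in Python with OpenCV, assumes the two-pass capture mode described later (a background-only image set and a foreground-containing image set taken along the same trajectory with the same shooting strategy), and relies on the hypothetical helpers extract_foreground, stitch_background, and compose_frame sketched further below; it is not the claimed implementation.

```python
# Illustrative sketch only; extract_foreground, stitch_background and compose_frame
# are hypothetical helpers sketched later in this description.
def build_micro_motion_map(bg_images, fg_images):
    # bg_images: background-only shots; fg_images: shots containing the foreground.
    # Both sets are assumed to be captured along the same trajectory with the same
    # shooting strategy, so images at the same index share (approximately) one pose.
    panorama = stitch_background(bg_images)
    frames = []
    for bg, fg in zip(bg_images, fg_images):
        mask, bbox = extract_foreground(bg, fg)
        if bbox is not None:
            frames.append(compose_frame(panorama, fg, mask, bbox))
    return frames
```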
本申请实施例中处理器701执行的方法均从处理器701的角度来描述，可以理解的是，本申请实施例中处理器701要执行上述方法需要其他硬件结构的配合。本申请实施例对具体的实现过程不作详细描述和限制。The methods performed by the processor 701 in the embodiments of the present application are described from the perspective of the processor 701. It can be understood that the processor 701 requires the cooperation of other hardware structures to perform the foregoing methods. The specific implementation process is neither described in detail nor limited in the embodiments of the present application.
本发明实施例中,所述图像集合中的多张图像为所述飞行器采集的多张照片,或者为所述飞行器采集的视频中的多帧图像。In the embodiment of the present invention, the multiple images in the image set are multiple photos collected by the aircraft, or multiple frames in the video captured by the aircraft.
在一些可行的实施方式中,所述图像集合包括第一图像集合和第二图像集合;所述第一图像集合为所述飞行器沿第一轨迹飞行时针对所述目标环境采集的图像集合,所述第二图像集合为所述飞行器沿第二轨迹飞行时针对所述目标环境采集的图像集合;所述第一图像集合不包括所述前景部分,所述第二图像集合包括所述前景部分。In some possible implementations, the image set includes a first image set and a second image set; the first image set is a set of images acquired for the target environment when the aircraft flies along the first trajectory, The second set of images is a set of images acquired for the target environment when the aircraft is flying along a second trajectory; the first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
在一些可行的实施方式中,所述第一轨迹和所述第二轨迹相同。所述第一图像集合和所述第二图像集合是所述飞行器在飞行中采用相同的拍摄策略拍摄的。其中,所述拍摄策略包括以下至少一项:拍摄位置、云台拍摄角度、拍摄频率。In some possible implementations, the first trajectory and the second trajectory are the same. The first set of images and the second set of images are taken by the aircraft using the same shooting strategy in flight. The shooting strategy includes at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
在一些可行的实施方式中，上述处理器701，具体用于将所述第二图像集合中的第一图像，与所述第一图像在所述第一图像集合中的对应图像作差，得到所述第一图像的前景部分以及所述前景部分在所述第一图像中的位置。In some possible implementations, the processor 701 is specifically configured to take the difference between a first image in the second image set and the corresponding image of the first image in the first image set, to obtain the foreground portion of the first image and the position of the foreground portion in the first image.
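As a rough illustration of this differencing step (not the patented algorithm itself), the sketch below uses OpenCV to subtract a background-only image from the corresponding foreground-containing image taken from the same pose; the threshold value and the morphological cleanup are assumptions chosen only for the example.

```python
import cv2
import numpy as np

def extract_foreground(bg_img, fg_img, thresh=30):
    """Return a binary foreground mask and its bounding box (x, y, w, h), or None."""
    diff = cv2.absdiff(cv2.cvtColor(fg_img, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(bg_img, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Remove small speckles caused by noise or slight misalignment between the shots.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return mask, (x, y, w, h)
```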
上述处理器701,具体用于对所述第一图像集合中的至少部分图像进行拼接,生成所述目标环境的背景部分的全景图。The processor 701 is specifically configured to splicing at least part of the images in the first image set to generate a panoramic view of a background portion of the target environment.
上述处理器701，具体用于根据所述第一图像的前景部分在所述第一图像中的位置将所述第一图像的前景部分插入到所述背景部分的全景图中，得到第一目标图像，所述第一目标图像为所述微动图中的一帧图像。The processor 701 is specifically configured to insert the foreground portion of the first image into the panoramic view of the background portion according to the position of the foreground portion of the first image in the first image, to obtain a first target image, the first target image being a frame of image in the micro-motion map.
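A simplified sketch of this compositing step follows; it assumes the foreground mask and bounding box come from the differencing sketch above, and that the offset argument maps source-image coordinates into panorama coordinates (the patent does not fix a particular mapping, so the default of (0, 0) is an assumption for the example).

```python
def compose_frame(panorama, fg_img, mask, bbox, offset=(0, 0)):
    """Paste the masked foreground into a copy of the panorama; returns one frame."""
    frame = panorama.copy()
    x, y, w, h = bbox
    px, py = x + offset[0], y + offset[1]          # position inside the panorama
    roi = frame[py:py + h, px:px + w]              # view into the frame
    patch = fg_img[y:y + h, x:x + w]
    m = mask[y:y + h, x:x + w] > 0
    roi[m] = patch[m]                              # copy only foreground pixels
    return frame
```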
在一些可行的实施方式中，所述图像集合为所述飞行器沿所述特定轨迹飞行一次针对所述目标环境采集的图像集合。In some possible implementations, the image set is a set of images acquired for the target environment when the aircraft flies once along the particular trajectory.
上述处理器701,还用于:The processor 701 is further configured to:
将所述图像集合的各张图像中所述目标环境的前景部分去除,得到所述目标环境的背景部分的图像;Removing a foreground portion of the target environment in each image of the image collection to obtain an image of a background portion of the target environment;
对各张所述背景部分的图像进行拼接处理,得到所述目标环境的背景部分的全景图。A splicing process is performed on the images of the respective background portions to obtain a panoramic view of the background portion of the target environment.
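The patent does not mandate a particular stitching algorithm; as one possible illustration, the background images could be stitched with OpenCV's high-level Stitcher, as sketched below.

```python
import cv2

def stitch_background(background_images):
    """Stitch a list of background-only images into a single panorama."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(background_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```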
上述处理器701,还用于根据用户操作确定所述目标环境的前景部分。The processor 701 is further configured to determine a foreground portion of the target environment according to a user operation.
上述处理器701，还用于利用所述图像集合包括的多张图像中的所述目标环境的背景部分对各张所述背景部分的图像中去除所述目标环境的前景部分后产生的空白区域进行填充。The processor 701 is further configured to use the background portion of the target environment in the multiple images included in the image set to fill the blank area generated after the foreground portion of the target environment is removed from the image of each background portion.
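As a simplified illustration of this filling step, the sketch below copies the missing pixels from another image of the set in which the same region shows only background; it assumes the other image has already been aligned (registered) to the current one, which the patent leaves to the implementation.

```python
def fill_from_other_view(image, hole_mask, aligned_other_view):
    """Fill the hole left by the removed foreground with pixels from another view."""
    filled = image.copy()
    m = hole_mask > 0
    filled[m] = aligned_other_view[m]   # background pixels visible in the other view
    return filled
```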
上述处理器701，还用于根据所述前景部分分别在所述图像集合的各张图像中的位置，将所述前景部分分别插入到所述背景部分的全景图中，每一张插入所述前景部分的所述背景部分的全景图为所述微动图中的一帧图像。The processor 701 is further configured to insert the foreground portions into the panoramic view of the background portion according to the positions of the foreground portions in the respective images of the image set, where each panoramic view of the background portion into which a foreground portion is inserted is a frame of image in the micro-motion map.
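Once the frames are synthesized, they can be written out as an animated image; the sketch below uses Pillow to save a GIF, which is only one convenient container for the micro-motion map, and assumes the frames are BGR arrays as produced by OpenCV.

```python
import cv2
from PIL import Image

def save_micro_motion_map(frames_bgr, path="micro_motion.gif", frame_ms=100):
    """Write the synthesized frames as an animated GIF (one possible output format)."""
    imgs = [Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB)) for f in frames_bgr]
    imgs[0].save(path, save_all=True, append_images=imgs[1:],
                 duration=frame_ms, loop=0)
```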
具体实现中，本发明实施例中所描述的处理器701、通信接口702和存储器703可执行本发明实施例提供的一种图像处理方法的第一实施例和第二实施例中所描述的实现方式，也可执行本发明实施例提供的一种图像处理装置的实施例中所描述的实现方式，在此不再赘述。In a specific implementation, the processor 701, the communication interface 702, and the memory 703 described in the embodiments of the present invention may perform the implementations described in the first embodiment and the second embodiment of the image processing method provided by the embodiments of the present invention, and may also perform the implementation described in the embodiment of the image processing apparatus provided by the embodiments of the present invention; details are not described herein again.
本发明实施例中，首先获取飞行器针对目标环境采集的图像集合，然后从图像集合中提取出目标环境的前景部分以及前景部分分别在对应图像中的位置，根据图像集合包括的图像生成目标环境的背景部分的全景图，最后生成微动图，可以根据飞行器采集的图像集合自动生成微动图，从而可以提高微动图的合成效率，实现微动图合成的自动化以及智能化。In the embodiments of the present invention, the image set collected by the aircraft for the target environment is first acquired; then the foreground portion of the target environment and the positions of the foreground portion in the corresponding images are extracted from the image set, a panoramic view of the background portion of the target environment is generated according to the images included in the image set, and finally the micro-motion map is generated. The micro-motion map can thus be generated automatically from the image set collected by the aircraft, which improves the synthesis efficiency of the micro-motion map and realizes automated and intelligent micro-motion map synthesis.
本发明还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述方法实施例所述的图像处理方法。The present invention also provides a computer readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the image processing method described in the above method embodiments.
本发明还提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述方法实施例所述的图像处理方法。The present invention also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method described in the above method embodiments.
需要说明的是，对于前述的各个方法实施例，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本发明并不受所描述的动作顺序的限制，因为依据本发明，某一些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施例均属于优选实施例，所涉及的动作和模块并不一定是本发明所必须的。It should be noted that, for ease of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because, in accordance with the present invention, some steps may be performed in other orders or concurrently. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成，该程序可以存储于一计算机可读存储介质中，存储介质可以包括：闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。 A person of ordinary skill in the art can understand that all or part of the steps of the various methods in the foregoing embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
以上对本发明实施例所提供的一种图像处理方法、装置及电子设备进行了详细介绍，本文中应用了具体个例对本发明的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本发明的方法及其核心思想；同时，对于本领域的一般技术人员，依据本发明的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本发明的限制。 The image processing method, apparatus, and electronic device provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (42)

  1. 一种图像处理方法,其特征在于,所述方法包括:An image processing method, the method comprising:
    获取飞行器沿特定轨迹飞行时针对目标环境采集的图像集合,所述图像集合包括多张图像;Acquiring a collection of images acquired for the target environment when the aircraft is flying along a particular trajectory, the collection of images comprising a plurality of images;
    从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置;Extracting a foreground portion of the target environment and a position of the foreground portion in the corresponding image from the image set;
    对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接,生成所述目标环境的背景部分的全景图;Splicing a background portion of the target environment in at least a portion of the image set to generate a panoramic view of a background portion of the target environment;
    生成微动图,其中,所述微动图中的各帧图像是根据所述背景部分的全景图以及所述前景部分分别在对应图像中的位置合成的。A micro-motion map is generated, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image, respectively.
  2. 根据权利要求1所述的方法,其特征在于,所述图像集合中的多张图像为所述飞行器采集的多张照片,或者为所述飞行器采集的视频中的多帧图像。The method of claim 1 wherein the plurality of images in the set of images are multiple photos captured by the aircraft or multi-frame images in the video captured by the aircraft.
  3. 根据权利要求1所述的方法,其特征在于,所述图像集合包括第一图像集合和第二图像集合;The method of claim 1 wherein the set of images comprises a first set of images and a second set of images;
    所述第一图像集合为所述飞行器沿第一轨迹飞行时针对所述目标环境采集的图像集合，所述第二图像集合为所述飞行器沿第二轨迹飞行时针对所述目标环境采集的图像集合；The first set of images is a set of images acquired for the target environment when the aircraft flies along a first trajectory, and the second set of images is a set of images acquired for the target environment when the aircraft flies along a second trajectory;
    所述第一图像集合不包括所述前景部分,所述第二图像集合包括所述前景部分。The first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
  4. 根据权利要求3所述的方法,其特征在于,所述第一轨迹和所述第二轨迹相同。The method of claim 3 wherein said first trajectory and said second trajectory are the same.
  5. 根据权利要求4所述的方法,其特征在于,所述第一图像集合和所述第二图像集合是所述飞行器在飞行中采用相同的拍摄策略拍摄的。The method of claim 4 wherein said first set of images and said second set of images are taken by said aircraft in flight using the same shooting strategy.
  6. 根据权利要求5所述的方法，其特征在于，所述拍摄策略包括以下至少一项：拍摄位置、云台拍摄角度、拍摄频率。The method of claim 5, wherein the shooting strategy comprises at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  7. 根据权利要求3所述的方法,其特征在于,所述从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置,包括:The method according to claim 3, wherein the extracting the foreground portion of the target environment and the location of the foreground portion in the corresponding image from the image set comprises:
    将所述第二图像集合中的第一图像，与所述第一图像在所述第一图像集合中的对应图像作差，得到所述第一图像的前景部分以及所述前景部分在所述第一图像中的位置。Taking the difference between a first image in the second image set and the corresponding image of the first image in the first image set, to obtain the foreground portion of the first image and the position of the foreground portion in the first image.
  8. 根据权利要求3所述的方法,其特征在于,所述对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接,生成所述目标环境的背景部分的全景图,包括:The method according to claim 3, wherein the splicing the background portion of the target environment in at least part of the image of the image set to generate a panoramic view of the background portion of the target environment comprises:
    对所述第一图像集合中的至少部分图像进行拼接,生成所述目标环境的背景部分的全景图。Splicing at least a portion of the images in the first set of images to generate a panoramic view of a background portion of the target environment.
  9. 根据权利要求7所述的方法,其特征在于,所述生成微动图,包括:The method according to claim 7, wherein said generating a micromotion map comprises:
    根据所述第一图像的前景部分在所述第一图像中的位置将所述第一图像的前景部分插入到所述背景部分的全景图中，得到第一目标图像，所述第一目标图像为所述微动图中的一帧图像。Inserting the foreground portion of the first image into the panoramic view of the background portion according to the position of the foreground portion of the first image in the first image, to obtain a first target image, the first target image being a frame of image in the micro-motion map.
  10. 根据权利要求1所述的方法,其特征在于,所述图像集合为所述飞行器沿所述特定轨迹飞行一次针对所述目标环境采集的图像集合。The method of claim 1 wherein said set of images is a collection of images acquired by said aircraft along said particular trajectory for said target environment.
  11. 根据权利要求1或10所述的方法，其特征在于，所述对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接，生成所述目标环境的背景部分的全景图，包括：The method according to claim 1 or 10, wherein the splicing the background portion of the target environment in at least some images of the image set to generate a panoramic view of the background portion of the target environment comprises:
    将所述图像集合的各张图像中所述目标环境的前景部分去除,得到所述目标环境的背景部分的图像;Removing a foreground portion of the target environment in each image of the image collection to obtain an image of a background portion of the target environment;
    对各张所述背景部分的图像进行拼接处理，得到所述目标环境的背景部分的全景图。Performing a splicing process on the images of the respective background portions to obtain a panoramic view of the background portion of the target environment.
  12. 根据权利要求11所述的方法，其特征在于，所述从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置之前，所述方法还包括：The method according to claim 11, wherein before the extracting the foreground portion of the target environment and the positions of the foreground portion in the corresponding images from the image set, the method further comprises:
    根据用户操作确定所述目标环境的前景部分。The foreground portion of the target environment is determined based on user operations.
  13. 根据权利要求11所述的方法,其特征在于,所述对各张所述背景部分的图像进行拼接处理,包括:The method according to claim 11, wherein the splicing processing of the images of the respective background portions comprises:
    利用所述图像集合包括的多张图像中的所述目标环境的背景部分对各张所述背景部分的图像中去除所述目标环境的前景部分后产生的空白区域进行填充。Filling a blank area generated after removing the foreground portion of the target environment in the image of each of the background portions by using a background portion of the target environment in the plurality of images included in the image set.
  14. 根据权利要求11所述的方法,其特征在于,所述生成微动图,包括:The method according to claim 11, wherein said generating a micromotion map comprises:
    根据所述前景部分分别在所述图像集合的各张图像中的位置，将所述前景部分分别插入到所述背景部分的全景图中，每一张插入所述前景部分的所述背景部分的全景图为所述微动图中的一帧图像。Inserting the foreground portions into the panoramic view of the background portion according to the positions of the foreground portions in the respective images of the image set, each panoramic view of the background portion into which a foreground portion is inserted being a frame of image in the micro-motion map.
  15. 一种图像处理装置,其特征在于,所述装置包括:An image processing apparatus, characterized in that the apparatus comprises:
    获取模块,用于获取飞行器沿特定轨迹飞行时针对目标环境采集的图像集合,所述图像集合包括多张图像;An acquisition module, configured to acquire a collection of images acquired by the aircraft for a target environment when flying along a specific trajectory, the image collection comprising a plurality of images;
    提取模块,用于从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置;An extracting module, configured to extract, from the image set, a foreground portion of the target environment and a position of the foreground portion in a corresponding image;
    拼接模块,用于对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接,生成所述目标环境的背景部分的全景图;a splicing module, configured to splicing a background portion of the target environment in at least part of the image of the image set to generate a panoramic view of a background portion of the target environment;
    合成模块,用于生成微动图,其中,所述微动图中的各帧图像是根据所述背景部分的全景图以及所述前景部分分别在对应图像中的位置合成的。And a synthesizing module, configured to generate a micro-motion image, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image.
  16. 根据权利要求15所述的装置，其特征在于，所述图像集合中的多张图像为所述飞行器采集的多张照片，或者为所述飞行器采集的视频中的多帧图像。The apparatus of claim 15, wherein the multiple images in the image set are multiple photos captured by the aircraft, or multi-frame images in the video captured by the aircraft.
  17. 根据权利要求15所述的装置,其特征在于,所述图像集合包括第一图像集合和第二图像集合;The apparatus of claim 15 wherein said set of images comprises a first set of images and a second set of images;
    所述第一图像集合为所述飞行器沿第一轨迹飞行时针对所述目标环境采集的图像集合，所述第二图像集合为所述飞行器沿第二轨迹飞行时针对所述目标环境采集的图像集合；The first set of images is a set of images acquired for the target environment when the aircraft flies along a first trajectory, and the second set of images is a set of images acquired for the target environment when the aircraft flies along a second trajectory;
    所述第一图像集合不包括所述前景部分,所述第二图像集合包括所述前景部分。The first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
  18. 根据权利要求17所述的装置,其特征在于,所述第一轨迹和所述第二轨迹相同。The apparatus of claim 17 wherein said first trajectory and said second trajectory are the same.
  19. 根据权利要求18所述的装置,其特征在于,所述第一图像集合和所述第二图像集合是所述飞行器在飞行中采用相同的拍摄策略拍摄的。The apparatus of claim 18 wherein said first set of images and said second set of images are taken by said aircraft in flight using the same shooting strategy.
  20. 根据权利要求19所述的装置,其特征在于,所述拍摄策略包括以下至少一项:拍摄位置、云台拍摄角度、拍摄频率。The apparatus according to claim 19, wherein the shooting strategy comprises at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  21. 根据权利要求17所述的装置,其特征在于,The device of claim 17 wherein:
    所述提取模块，具体用于将所述第二图像集合中的第一图像，与所述第一图像在所述第一图像集合中的对应图像作差，得到所述第一图像的前景部分以及所述前景部分在所述第一图像中的位置。The extracting module is specifically configured to take the difference between a first image in the second image set and the corresponding image of the first image in the first image set, to obtain the foreground portion of the first image and the position of the foreground portion in the first image.
  22. 根据权利要求17所述的装置,其特征在于,The device of claim 17 wherein:
    所述拼接模块,具体用于对所述第一图像集合中的至少部分图像进行拼接,生成所述目标环境的背景部分的全景图。 The splicing module is specifically configured to splicing at least part of the images in the first image set to generate a panoramic view of a background portion of the target environment.
  23. 根据权利要求21所述的装置,其特征在于,The device according to claim 21, wherein
    所述合成模块，具体用于根据所述第一图像的前景部分在所述第一图像中的位置将所述第一图像的前景部分插入到所述背景部分的全景图中，得到第一目标图像，所述第一目标图像为所述微动图中的一帧图像。The synthesizing module is specifically configured to insert the foreground portion of the first image into the panoramic view of the background portion according to the position of the foreground portion of the first image in the first image, to obtain a first target image, the first target image being a frame of image in the micro-motion map.
  24. 根据权利要求15所述的装置,其特征在于,所述图像集合为所述飞行器沿所述特定轨迹飞行一次针对所述目标环境采集的图像集合。The apparatus of claim 15 wherein said set of images is a collection of images acquired by said aircraft along said particular trajectory for said target environment.
  25. 根据权利要求15或24所述的装置,其特征在于,所述拼接模块,还用于:The device according to claim 15 or 24, wherein the splicing module is further configured to:
    将所述图像集合的各张图像中所述目标环境的前景部分去除,得到所述目标环境的背景部分的图像;Removing a foreground portion of the target environment in each image of the image collection to obtain an image of a background portion of the target environment;
    对各张所述背景部分的图像进行拼接处理,得到所述目标环境的背景部分的全景图。A splicing process is performed on the images of the respective background portions to obtain a panoramic view of the background portion of the target environment.
  26. 根据权利要求25所述的装置,其特征在于,所述装置还包括:The device of claim 25, wherein the device further comprises:
    确定模块,用于根据用户操作确定所述目标环境的前景部分。A determining module is configured to determine a foreground portion of the target environment according to a user operation.
  27. 根据权利要求25所述的装置,其特征在于,The device according to claim 25, wherein
    所述拼接模块，还用于利用所述图像集合包括的多张图像中的所述目标环境的背景部分对各张所述背景部分的图像中去除所述目标环境的前景部分后产生的空白区域进行填充。The splicing module is further configured to use the background portion of the target environment in the multiple images included in the image set to fill the blank area generated after the foreground portion of the target environment is removed from the image of each background portion.
  28. 根据权利要求25所述的装置,其特征在于,The device according to claim 25, wherein
    所述合成模块，还用于根据所述前景部分分别在所述图像集合的各张图像中的位置，将所述前景部分分别插入到所述背景部分的全景图中，每一张插入所述前景部分的所述背景部分的全景图为所述微动图中的一帧图像。The synthesizing module is further configured to insert the foreground portions into the panoramic view of the background portion according to the positions of the foreground portions in the respective images of the image set, where each panoramic view of the background portion into which a foreground portion is inserted is a frame of image in the micro-motion map.
  29. 一种电子设备,其特征在于,包括:处理器和存储器, An electronic device, comprising: a processor and a memory,
    所述存储器,用于存储程序指令;The memory is configured to store program instructions;
    所述处理器,用于执行所述存储器存储的程序指令,当程序指令被执行时,所述处理器用于:The processor is configured to execute the program instructions stored by the memory, when the program instructions are executed, the processor is configured to:
    获取飞行器沿特定轨迹飞行时针对目标环境采集的图像集合,所述图像集合包括多张图像;Acquiring a collection of images acquired for the target environment when the aircraft is flying along a particular trajectory, the collection of images comprising a plurality of images;
    从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置;Extracting a foreground portion of the target environment and a position of the foreground portion in the corresponding image from the image set;
    对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接,生成所述目标环境的背景部分的全景图;Splicing a background portion of the target environment in at least a portion of the image set to generate a panoramic view of a background portion of the target environment;
    生成微动图,其中,所述微动图中的各帧图像是根据所述背景部分的全景图以及所述前景部分分别在对应图像中的位置合成的。A micro-motion map is generated, wherein each frame image in the micro-motion map is synthesized according to a panoramic view of the background portion and a position of the foreground portion in a corresponding image, respectively.
  30. 根据权利要求29所述的电子设备,其特征在于,所述图像集合中的多张图像为所述飞行器采集的多张照片,或者为所述飞行器采集的视频中的多帧图像。The electronic device according to claim 29, wherein the plurality of images in the image set are a plurality of photos collected by the aircraft or a multi-frame image in a video captured by the aircraft.
  31. 根据权利要求29所述的电子设备,其特征在于,所述图像集合包括第一图像集合和第二图像集合;The electronic device of claim 29, wherein the set of images comprises a first set of images and a second set of images;
    所述第一图像集合为所述飞行器沿第一轨迹飞行时针对所述目标环境采集的图像集合，所述第二图像集合为所述飞行器沿第二轨迹飞行时针对所述目标环境采集的图像集合；The first set of images is a set of images acquired for the target environment when the aircraft flies along a first trajectory, and the second set of images is a set of images acquired for the target environment when the aircraft flies along a second trajectory;
    所述第一图像集合不包括所述前景部分,所述第二图像集合包括所述前景部分。The first set of images does not include the foreground portion, and the second set of images includes the foreground portion.
  32. 根据权利要求31所述的电子设备,其特征在于,所述第一轨迹和所述第二轨迹相同。The electronic device of claim 31, wherein the first trajectory and the second trajectory are the same.
  33. 根据权利要求32所述的电子设备，其特征在于，所述第一图像集合和所述第二图像集合是所述飞行器在飞行中采用相同的拍摄策略拍摄的。The electronic device of claim 32, wherein the first set of images and the second set of images are taken by the aircraft using the same shooting strategy in flight.
  34. 根据权利要求33所述的电子设备,其特征在于,所述拍摄策略包括以下至少一项:拍摄位置、云台拍摄角度、拍摄频率。The electronic device according to claim 33, wherein the shooting strategy comprises at least one of the following: a shooting position, a pan/tilt shooting angle, and a shooting frequency.
  35. 根据权利要求31所述的电子设备，其特征在于，所述处理器从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置时，具体用于：The electronic device according to claim 31, wherein when the processor extracts the foreground portion of the target environment and the positions of the foreground portion in the corresponding images from the image set, the processor is specifically configured to:
    将所述第二图像集合中的第一图像，与所述第一图像在所述第一图像集合中的对应图像作差，得到所述第一图像的前景部分以及所述前景部分在所述第一图像中的位置。Taking the difference between a first image in the second image set and the corresponding image of the first image in the first image set, to obtain the foreground portion of the first image and the position of the foreground portion in the first image.
  36. 根据权利要求31所述的电子设备，其特征在于，所述处理器对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接，生成所述目标环境的背景部分的全景图时，具体用于：The electronic device according to claim 31, wherein when the processor splices the background portion of the target environment in at least some images of the image set to generate the panoramic view of the background portion of the target environment, the processor is specifically configured to:
    对所述第一图像集合中的至少部分图像进行拼接,生成所述目标环境的背景部分的全景图。Splicing at least a portion of the images in the first set of images to generate a panoramic view of a background portion of the target environment.
  37. 根据权利要求35所述的电子设备，其特征在于，所述处理器生成微动图时，具体用于：The electronic device according to claim 35, wherein when the processor generates the micro-motion map, the processor is specifically configured to:
    根据所述第一图像的前景部分在所述第一图像中的位置将所述第一图像的前景部分插入到所述背景部分的全景图中，得到第一目标图像，所述第一目标图像为所述微动图中的一帧图像。Inserting the foreground portion of the first image into the panoramic view of the background portion according to the position of the foreground portion of the first image in the first image, to obtain a first target image, the first target image being a frame of image in the micro-motion map.
  38. 根据权利要求29所述的电子设备,其特征在于,所述图像集合为所述飞行器沿所述特定轨迹飞行一次针对所述目标环境采集的图像集合。The electronic device of claim 29, wherein the set of images is a collection of images acquired by the aircraft for the target environment along the particular trajectory.
  39. 根据权利要求29或38所述的电子设备，其特征在于，所述处理器对所述图像集合的至少部分图像中所述目标环境的背景部分进行拼接，生成所述目标环境的背景部分的全景图时，具体用于：The electronic device according to claim 29 or 38, wherein when the processor splices the background portion of the target environment in at least some images of the image set to generate the panoramic view of the background portion of the target environment, the processor is specifically configured to:
    将所述图像集合的各张图像中所述目标环境的前景部分去除,得到所述目标环境的背景部分的图像;Removing a foreground portion of the target environment in each image of the image collection to obtain an image of a background portion of the target environment;
    对各张所述背景部分的图像进行拼接处理,得到所述目标环境的背景部分的全景图。A splicing process is performed on the images of the respective background portions to obtain a panoramic view of the background portion of the target environment.
  40. 根据权利要求39所述的电子设备，其特征在于，所述处理器从所述图像集合中提取出所述目标环境的前景部分以及所述前景部分分别在对应图像中的位置之前，还用于：The electronic device according to claim 39, wherein before the processor extracts the foreground portion of the target environment and the positions of the foreground portion in the corresponding images from the image set, the processor is further configured to:
    根据用户操作确定所述目标环境的前景部分。The foreground portion of the target environment is determined based on user operations.
  41. 根据权利要求39所述的电子设备,其特征在于,所述处理器对各张所述背景部分的图像进行拼接处理时,还用于:The electronic device according to claim 39, wherein when the processor performs splicing processing on the images of the respective background portions, the processor is further configured to:
    利用所述图像集合包括的多张图像中的所述目标环境的背景部分对各张所述背景部分的图像中去除所述目标环境的前景部分后产生的空白区域进行填充。Filling a blank area generated after removing the foreground portion of the target environment in the image of each of the background portions by using a background portion of the target environment in the plurality of images included in the image set.
  42. 根据权利要求39所述的电子设备，其特征在于，所述处理器生成微动图时，具体用于：The electronic device according to claim 39, wherein when the processor generates the micro-motion map, the processor is specifically configured to:
    根据所述前景部分分别在所述图像集合的各张图像中的位置，将所述前景部分分别插入到所述背景部分的全景图中，每一张插入所述前景部分的所述背景部分的全景图为所述微动图中的一帧图像。Inserting the foreground portions into the panoramic view of the background portion according to the positions of the foreground portions in the respective images of the image set, each panoramic view of the background portion into which a foreground portion is inserted being a frame of image in the micro-motion map.
PCT/CN2017/091245 2017-06-30 2017-06-30 Image processing method and apparatus, and electronic device WO2019000427A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/091245 WO2019000427A1 (en) 2017-06-30 2017-06-30 Image processing method and apparatus, and electronic device
CN201780004688.2A CN108521823A (en) 2017-06-30 2017-06-30 A kind of image processing method, device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091245 WO2019000427A1 (en) 2017-06-30 2017-06-30 Image processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2019000427A1 true WO2019000427A1 (en) 2019-01-03

Family

ID=63434363

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091245 WO2019000427A1 (en) 2017-06-30 2017-06-30 Image processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN108521823A (en)
WO (1) WO2019000427A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111238A (en) * 2019-04-24 2019-08-09 薄涛 Image processing method, device, equipment and its storage medium
CN110324663A (en) * 2019-07-01 2019-10-11 北京奇艺世纪科技有限公司 A kind of generation method of dynamic image, device, electronic equipment and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101946019B1 (en) * 2014-08-18 2019-04-22 삼성전자주식회사 Video processing apparatus for generating paranomic video and method thereof
CN104243819B (en) * 2014-08-29 2018-02-23 小米科技有限责任公司 Photo acquisition methods and device
CN105827946B (en) * 2015-11-26 2019-02-22 东莞市步步高通信软件有限公司 A kind of generation of panoramic picture and playback method and mobile terminal
CN106651923A (en) * 2016-12-13 2017-05-10 中山大学 Method and system for video image target detection and segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431616A (en) * 2007-11-06 2009-05-13 奥林巴斯映像株式会社 Image synthesis device and method
US20150077421A1 (en) * 2013-09-18 2015-03-19 Nokia Corporation Creating a cinemagraph
CN104023172A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Shooting method and shooting device of dynamic image
CN106572308A (en) * 2016-11-04 2017-04-19 宇龙计算机通信科技(深圳)有限公司 Method and system for synthesizing local dynamic graph

Also Published As

Publication number Publication date
CN108521823A (en) 2018-09-11

Similar Documents

Publication Publication Date Title
US11688034B2 (en) Virtual lens simulation for video and photo cropping
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
KR102013978B1 (en) Method and apparatus for fusion of images
CN101689292B (en) Banana codec
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
US20220385721A1 (en) 3d mesh generation on a server
JP6921686B2 (en) Generator, generation method, and program
JPWO2008126371A1 (en) Video composition method, video composition system
CN104660909A (en) Image acquisition method, image acquisition device and terminal
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
US10602064B2 (en) Photographing method and photographing device of unmanned aerial vehicle, unmanned aerial vehicle, and ground control device
CN105467741B (en) A kind of panorama photographic method and terminal
WO2019000427A1 (en) Image processing method and apparatus, and electronic device
CN109302561A (en) A kind of image capture method, terminal and storage medium
KR100926231B1 (en) Spatial information construction system and method using spherical video images
CN110036411B (en) Apparatus and method for generating electronic three-dimensional roaming environment
KR101603876B1 (en) Method for fabricating a panorama
KR102203109B1 (en) Method and apparatus of processing image based on artificial neural network
CN117014716A (en) Target tracking method and electronic equipment
US20160373493A1 (en) System and method for creating contents by collaborating between users
WO2023020190A1 (en) All-in-focus image synthesis method, storage medium and smart phone
CN112437253A (en) Video splicing method, device, system, computer equipment and storage medium
US20130286234A1 (en) Method and apparatus for remotely managing imaging
KR20180020187A (en) Method and system for generating content using panoramic image
KR102211760B1 (en) Apparatus and method for recommending image capture guide based on image taken in the past at certain place

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17915688

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17915688

Country of ref document: EP

Kind code of ref document: A1