CN108616702B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN108616702B
Authority
CN
China
Prior art keywords
target object
electronic device
target
electronic equipment
electronic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611178830.9A
Other languages
Chinese (zh)
Other versions
CN108616702A (en)
Inventor
侯伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninebot Beijing Technology Co Ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd filed Critical Ninebot Beijing Technology Co Ltd
Priority to CN201611178830.9A priority Critical patent/CN108616702B/en
Priority to PCT/CN2017/117151 priority patent/WO2018113657A1/en
Publication of CN108616702A publication Critical patent/CN108616702A/en
Application granted granted Critical
Publication of CN108616702B publication Critical patent/CN108616702B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and device. The method includes: acquiring the movement of a target object obtained by a plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the relative position between each electronic device and the target object unchanged; acquiring first images obtained by the plurality of electronic devices shooting the target object from different angles, so as to obtain a plurality of first images; and synthesizing the plurality of first images to obtain a stereoscopic image of the target object. The invention solves the technical problem in the prior art that a stereoscopic image of a moving object cannot be recorded.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
For a stationary object, a stereoscopic image can be recorded by placing cameras at fixed positions around the object; for a moving object, however, a stereoscopic image cannot be recorded in this manner.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide an image processing method and apparatus, which at least solve the technical problem in the prior art that a stereoscopic image of a moving object cannot be recorded.
According to one aspect of the embodiments of the present invention, an image processing method is provided, including: acquiring the movement of a target object obtained by a plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the relative position between each electronic device and the target object unchanged; acquiring first images obtained by the plurality of electronic devices shooting the target object from different angles, so as to obtain a plurality of first images; and synthesizing the plurality of first images to obtain a stereoscopic image of the target object.
Further, the plurality of electronic devices are N electronic devices, and acquiring the first images obtained by the plurality of electronic devices shooting the target object from different angles includes: the i-th electronic device identifying the target object through its image acquisition unit; and the i-th electronic device shooting the target object at an i-th angle to obtain a first image, where the i-th angle is the angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
Further, synthesizing the plurality of first images to obtain the stereoscopic image of the target object includes: acquiring the shooting time of each first image; and synthesizing first images having the same shooting time to obtain the stereoscopic image.
Further, synthesizing the first images having the same shooting time to obtain the stereoscopic image includes: removing the background portion of each first image to obtain background-removed first images; and synthesizing the background-removed first images having the same shooting time to obtain the stereoscopic image.
Further, acquiring the movement of the target object obtained by the electronic devices detecting the target object includes: a target electronic device among the plurality of electronic devices generating a first motion trajectory according to the movement of the target object, wherein the target electronic device moves along the first motion trajectory; the target electronic device calculating the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device; the target electronic device sending the relative positions to the electronic devices other than the target electronic device; and each electronic device other than the target electronic device generating its own motion trajectory according to the first motion trajectory and its relative position with respect to the target electronic device, and moving along the generated trajectory.
Further, before acquiring the movement of the target object obtained by the electronic devices detecting the target object, the method further includes: acquiring a standard time; and calibrating the clock of each of the plurality of electronic devices according to the standard time.
According to another aspect of the embodiments of the present invention, an image processing apparatus is also provided, including: a first acquisition unit, configured to acquire the movement of a target object obtained by a plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the relative position between each electronic device and the target object unchanged; a second acquisition unit, configured to acquire first images obtained by the plurality of electronic devices shooting the target object from different angles, so as to obtain a plurality of first images; and a synthesizing unit, configured to synthesize the plurality of first images to obtain a stereoscopic image of the target object.
Further, the plurality of electronic devices are N electronic devices, and the second acquisition unit includes: an i-th identification subunit, provided on the i-th electronic device and configured to identify the target object through the image acquisition unit of the i-th electronic device; and an i-th shooting subunit, provided on the i-th electronic device and configured to shoot the target object at an i-th angle to obtain a first image, where the i-th angle is the angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
Further, the synthesizing unit includes: an acquisition subunit, configured to acquire the shooting time of each first image; and a synthesizing subunit, configured to synthesize first images having the same shooting time to obtain the stereoscopic image.
Further, the synthesizing subunit includes: a removal module, configured to remove the background portion of each first image to obtain background-removed first images; and a synthesis module, configured to synthesize the background-removed first images having the same shooting time to obtain the stereoscopic image.
Further, the first acquisition unit includes: a generating subunit, provided on a target electronic device among the plurality of electronic devices and configured to generate a first motion trajectory according to the movement of the target object, wherein the target electronic device moves along the first motion trajectory; a calculating subunit, provided on the target electronic device and configured to calculate the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device; a sending subunit, provided on the target electronic device and configured to send the relative positions to the electronic devices other than the target electronic device; and a control subunit, provided on each electronic device other than the target electronic device and configured to generate a motion trajectory according to the first motion trajectory and that device's relative position with respect to the target electronic device, and to control the device to move along the generated trajectory.
Further, the apparatus further includes: a third acquisition unit, configured to acquire a standard time before the first acquisition unit acquires the movement of the target object obtained by the electronic devices detecting the target object; and a calibration unit, configured to calibrate the clock of each of the plurality of electronic devices according to the standard time.
In the embodiment of the present invention, the target object is in motion. The plurality of electronic devices detect the target object to obtain its movement, and each electronic device adjusts its own motion according to that movement so that the electronic devices remain stationary relative to the target object. Each electronic device shoots the target object from a different angle, yielding a plurality of images, and these images are synthesized into a stereoscopic image of the target object. This achieves the technical effect of recording a stereoscopic image of a moving object and solves the technical problem in the prior art that a stereoscopic image of a moving object cannot be recorded.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of an image processing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the positions of a plurality of electronic devices and a target object according to an embodiment of the invention;
FIG. 3 is a schematic diagram of motion trajectories of a plurality of electronic devices and a target object according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present invention, an embodiment of an image processing method is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system, such as by a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that given here.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The image processing method is applied to electronic devices each having a driving apparatus, the driving apparatus providing a driving force so that the electronic device can move; each electronic device is also provided with an image acquisition unit.
As shown in fig. 1, the method comprises the steps of:
step S102, obtaining the movement situation of the target object obtained by the electronic equipment detecting the target object, wherein the electronic equipment moves according to the movement situation of the target object so as to keep the relative positions of the electronic equipment and the target object unchanged.
Step S104, acquiring first images obtained by shooting the target object from different angles by the plurality of electronic devices to obtain a plurality of first images.
And step S106, synthesizing the plurality of first images to obtain a three-dimensional image of the target object.
The target object is a moving object and can move freely within a certain range.
The electronic device may be a robot. The electronic device may have a binocular camera for photographing.
The plurality of electronic devices move according to the movement of the target object to keep their positions relative to the target object unchanged; that is, each electronic device remains stationary relative to the target object.
Each electronic device has a different angle and orientation relative to the target object. Each electronic device shoots the target object to obtain a first image, so the plurality of electronic devices together obtain a plurality of first images, each recording the target object from a different angle.
A stereoscopic image of the target object can be obtained by synthesizing the plurality of first images; the stereoscopic image contains information about the target object from all directions.
In the embodiment of the present invention, the target object is in motion. The plurality of electronic devices detect the target object to obtain its movement, each electronic device adjusts its own motion according to that movement so that it remains stationary relative to the target object, each electronic device shoots the target object from a different angle to obtain a plurality of images, and these images are synthesized into a stereoscopic image of the target object.
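By way of illustration only, the following Python sketch walks through these three steps for three hypothetical devices held at fixed offsets around a target moving in a straight line; the function names, geometry, and trivial "synthesis" are illustrative assumptions rather than the patented implementation.

```python
# Hypothetical sketch of steps S102-S106; the geometry and helpers below are
# illustrative assumptions, not the patent's implementation.
import math

def follow_target(target_pos, angle_deg, radius=2.0):
    """Step S102: keep a device at a fixed offset (angle, radius) from the target."""
    a = math.radians(angle_deg)
    return (target_pos[0] + radius * math.cos(a),
            target_pos[1] + radius * math.sin(a))

def capture(device_pos, target_pos, t):
    """Step S104: stand-in for shooting a 'first image' (metadata only here)."""
    return {"time": t, "device_pos": device_pos, "target_pos": target_pos}

def synthesize(first_images):
    """Step S106: stand-in for synthesizing one stereoscopic frame from N views."""
    return {"time": first_images[0]["time"], "views": len(first_images)}

angles = [0, 120, 240]                      # three devices at different angles
for t in range(3):                          # the target moves along a line
    target_pos = (0.5 * t, 0.0)
    views = [capture(follow_target(target_pos, a), target_pos, t) for a in angles]
    print(synthesize(views))
```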
Optionally, the plurality of electronic devices are N electronic devices, and acquiring the first images obtained by the plurality of electronic devices shooting the target object from different angles includes: the i-th electronic device identifying the target object through its image acquisition unit; and the i-th electronic device shooting the target object at an i-th angle to obtain a first image, where the i-th angle is the angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
For example, fig. 2 shows three electronic devices, namely an electronic device E1, an electronic device E2, and an electronic device E3, and A is the target object. The target object A is moving, and the electronic devices E1, E2, and E3 adjust their own motion according to the movement of the target object A so as to keep their positions relative to the target object A unchanged.
The electronic devices E1, E2, and E3 each have a different angle and orientation relative to the target object. These three electronic devices each shoot the target object, obtaining three first images that record the target object from different angles; synthesizing the three first images yields a stereoscopic image of the target object A.
Optionally, synthesizing the plurality of first images to obtain the stereoscopic image of the target object includes: acquiring the shooting time of each first image; and synthesizing first images having the same shooting time to obtain the stereoscopic image.
Each electronic device captures a plurality of first images, which may be taken at different times. When an electronic device shoots an image, the shooting time is recorded, so the moment at which the image was taken can be determined by reading that time.
First images shot by different electronic devices at the same shooting time are synthesized to obtain a stereoscopic image of the target object.
For example, the electronic device E1 shown in fig. 2 captured a first image P11 at time T1, a first image P12 at time T2, and a first image P13 at time T3.
The electronic device E2 captured a first image P21 at time T1, a first image P22 at time T2, and a first image P23 at time T3.
The electronic device E3 captured a first image P31 at time T1, a first image P32 at time T2, and a first image P33 at time T3.
And synthesizing the first image P11, the first image P21 and the first image P31 which have the shooting time of T1 to obtain a stereoscopic image of the target object at the time of T1.
And synthesizing the first image P12, the first image P22 and the first image P32 which have the shooting time of T2 to obtain a stereoscopic image of the target object at the time of T2.
And synthesizing the first image P13, the first image P23 and the first image P33 which have the shooting time of T3 to obtain a stereoscopic image of the target object at the time of T3.
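To make the T1/T2/T3 example concrete, here is a small hypothetical Python sketch (not from the patent) that groups first images by their recorded shooting time and forms one stereoscopic frame per time; the image payloads are just labels standing in for real image data.

```python
# Hypothetical sketch: group first images by shooting time, then build
# one stereoscopic frame per time from all devices' views.
from collections import defaultdict

first_images = [
    {"device": "E1", "time": "T1", "image": "P11"}, {"device": "E1", "time": "T2", "image": "P12"},
    {"device": "E2", "time": "T1", "image": "P21"}, {"device": "E2", "time": "T2", "image": "P22"},
    {"device": "E3", "time": "T1", "image": "P31"}, {"device": "E3", "time": "T2", "image": "P32"},
]

groups = defaultdict(list)
for img in first_images:
    groups[img["time"]].append(img)           # group by recorded shooting time

for t, views in sorted(groups.items()):
    print(f"stereoscopic frame at {t}: " + ", ".join(v["image"] for v in views))
```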
Optionally, before acquiring the movement of the target object obtained by the electronic devices detecting the target object, the method further includes: acquiring a standard time; and calibrating the clock of each of the plurality of electronic devices according to the standard time.
The clocks in the electronic devices may or may not be accurate. If the devices' clocks are not synchronized, an error occurs when first images with the same nominal shooting time are synthesized: first images that were actually shot at different times are combined, producing an incorrect stereoscopic image.
To avoid this problem, the clock of each of the plurality of electronic devices is calibrated against the standard time so that the clocks of all the electronic devices are synchronized.
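How the clocks are calibrated is not specified in the patent; the sketch below assumes a simplified NTP-style offset estimation against the standard time source, purely as an illustration (the 0.25 s clock error is a made-up example).

```python
# Hedged sketch of clock calibration: estimate each device clock's offset from
# the standard time and subtract it.
import time

def estimate_offset(device_clock, standard_clock):
    """Estimate how far a device clock runs ahead of the standard time source."""
    t_request = standard_clock()      # standard time when the query is sent
    t_device = device_clock()         # the device's own timestamp
    t_reply = standard_clock()        # standard time when the reply arrives
    midpoint = (t_request + t_reply) / 2.0
    return t_device - midpoint        # positive: the device clock is fast

fast_clock = lambda: time.time() + 0.25        # hypothetical device clock, 0.25 s fast
offset = estimate_offset(fast_clock, time.time)
print(f"estimated offset: {offset:+.3f} s")
print(f"calibrated device time: {fast_clock() - offset:.3f}")
```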
Optionally, synthesizing the first images having the same shooting time to obtain the stereoscopic image includes: removing the background portion of each first image to obtain background-removed first images; and synthesizing the background-removed first images having the same shooting time to obtain the stereoscopic image.
Before the first images with the same shooting time are synthesized, the background portion of each first image can be removed and the background-removed images synthesized instead; this eliminates background interference and yields a clearer, more prominent stereoscopic image of the target object. Through recognition and interaction, the subject (the target object) is confirmed, the background is removed from the current 3D image, and the subject is cut out.
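The patent does not prescribe a background-removal algorithm; the following sketch assumes simple differencing against a known background image as one possible way to cut out the subject before synthesis, with synthetic data in place of real camera frames.

```python
# Hedged sketch of background removal by frame differencing; a real system
# would rely on recognizing the subject (target object) instead.
import numpy as np

def remove_background(first_image, background, threshold=25):
    """Keep only pixels that differ noticeably from the known background."""
    diff = np.abs(first_image.astype(int) - background.astype(int))
    mask = diff > threshold                          # True where the subject is
    return np.where(mask, first_image, 0).astype(np.uint8)

background = np.full((4, 4), 100, dtype=np.uint8)    # flat synthetic background
frame = background.copy()
frame[1:3, 1:3] = 200                                # the "target object"
print(remove_background(frame, background))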
The motion trajectories of the plurality of electronic devices can be generated in either of the following ways.
First method for generating the motion trajectories of the plurality of electronic devices:
Each electronic device can detect the motion of the target object in real time and, according to that motion and its own position relative to the target object, generate its own motion trajectory in real time, so that it remains stationary relative to the target object.
As shown in fig. 3, there are three electronic devices, namely an electronic device E1, an electronic device E2, and an electronic device E3, and A is the target object. The target object A is in motion, and the electronic devices E1, E2, and E3 move according to the motion of the target object A so as to keep their positions relative to the target object A unchanged. The electronic devices E1, E2, and E3 each detect the motion of the target object A in real time and generate their own motion trajectories according to that motion, so that they remain stationary relative to the target object A. The trajectory G0 is the motion trajectory of the target object A; the trajectory G1 is the motion trajectory of the electronic device E1; the trajectory G2 is the motion trajectory of the electronic device E2; and the trajectory G3 is the motion trajectory of the electronic device E3.
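As an illustration of this first method (with made-up coordinates, not patent data), the sketch below has a device observe the target's displacement between successive moments and mirror it, so the device-to-target offset, and hence a trajectory like G1 relative to G0, stays constant.

```python
# Hedged sketch of the first method: each device mirrors the target's observed
# displacement so that its offset from the target never changes.
def simulate_following(target_path, start_pos):
    device_pos = start_pos
    prev_target = target_path[0]
    trajectory = [device_pos]
    for target in target_path[1:]:
        dx = target[0] - prev_target[0]            # observed target displacement
        dy = target[1] - prev_target[1]
        device_pos = (device_pos[0] + dx, device_pos[1] + dy)   # mirror the motion
        trajectory.append(device_pos)
        prev_target = target
    return trajectory

target_path = [(0, 0), (1, 0), (2, 1), (3, 1)]     # e.g. trajectory G0 of target A
print(simulate_following(target_path, start_pos=(0, 2)))   # e.g. device E1 on G1
```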
Second method for generating the motion trajectories of the plurality of electronic devices:
A target electronic device among the plurality of electronic devices generates a first motion trajectory according to the movement of the target object and moves along the first motion trajectory; the target electronic device calculates the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device; the target electronic device sends the relative positions to the electronic devices other than the target electronic device; and each electronic device other than the target electronic device generates its own motion trajectory according to the first motion trajectory and its relative position with respect to the target electronic device, and moves along the generated trajectory.
The electronic devices have communication capability. One electronic device (the target electronic device) can generate a first motion trajectory according to the movement of the target object; when the target electronic device moves along the first motion trajectory, its position relative to the target object remains unchanged.
Each electronic device other than the target electronic device generates its own motion trajectory according to the first motion trajectory and its relative position with respect to the target electronic device, and moves along the generated trajectory so that its position relative to the target object remains unchanged.
In the second method, the target electronic device may be any one of the plurality of electronic devices, or it may be the one with the strongest computing capability.
Among the plurality of electronic devices, only the target electronic device needs to detect the motion of the target object; the other electronic devices do not, which greatly reduces their workload.
Since each electronic device is stationary relative to the target object, any two electronic devices are also stationary relative to each other; that is, the relative position of any two electronic devices remains unchanged at all times. The relative position of each of the other electronic devices with respect to the target electronic device may be detected and calculated by the target electronic device in real time, at any moment during the movement, or before the target object starts to move, or it may be input by the user.
Each of the other electronic devices then generates its motion trajectory from the first motion trajectory and this relative position, and moves along the generated trajectory.
Since each electronic device other than the target electronic device is stationary relative to the target electronic device, and the target electronic device is stationary relative to the target object, every one of the plurality of electronic devices is stationary relative to the target object.
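A corresponding sketch of the second method follows (again with illustrative numbers): only the target electronic device generates the first motion trajectory, and each other device derives its own trajectory by shifting that trajectory by its relative position to the target electronic device.

```python
# Hedged sketch of the second method: followers' trajectories are the leader's
# first motion trajectory shifted by their fixed relative positions.
first_trajectory = [(0, 0), (1, 0), (2, 1), (3, 1)]            # generated by the target device

relative_positions = {"E2": (-1.5, -1.0), "E3": (1.5, -1.0)}   # sent out by the target device

follower_trajectories = {
    name: [(x + dx, y + dy) for (x, y) in first_trajectory]
    for name, (dx, dy) in relative_positions.items()
}

for name, trajectory in follower_trajectories.items():
    print(name, trajectory)
```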
In the embodiment of the present invention, coordinated motion is achieved through ad hoc network communication among the plurality of electronic devices: the devices shoot 3D images while cooperatively following the target, and an omnidirectional 3D image is then synthesized.
The image processing method provided by the embodiment of the present invention is described below, taking robots as an example.
The multiple robots establish connections through a network or by other means so that information can be exchanged. Through this information exchange the robots confirm and adjust each other's positions and orientations; locating and adjusting the orientation may use, but is not limited to, visual, ultrasonic, or microwave techniques. The user triggers, or otherwise enables, the robots to jointly start the omnidirectional 3D image recording function. Each robot identifies the shot object through binocular vision and, combining its own position, communicates with the other robots to confirm the shot object and the relative position between the shot object and each robot. Image recording then starts: each robot identifies the shot object in its current binocular view, removes the background, and keeps information such as the recording time. When the object starts to move, its motion is perceived through binocular vision and the position change between the shot object and the robot is calculated; each robot keeps its position relative to the object as constant as possible by moving autonomously. Finally, the robots align the captured images by recording time and synthesize an omnidirectional 3D image (stereoscopic image).
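The robots above perceive the object and its position change through binocular vision. One standard way this can be done, shown below as a hedged sketch with made-up camera parameters, is to recover the distance to the object from the disparity between the two camera views (Z = f·B/d).

```python
# Hedged sketch: distance to the shot object from binocular disparity using
# Z = f * B / d; focal length, baseline, and disparities are illustrative.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance (m) given focal length (pixels), baseline (m), disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("object must be visible in both views with positive disparity")
    return focal_px * baseline_m / disparity_px

f, B = 800.0, 0.12                  # hypothetical binocular camera parameters
for d in (40.0, 32.0, 20.0):        # shrinking disparity: the object moves away
    print(f"disparity {d:5.1f} px -> distance {depth_from_disparity(f, B, d):.2f} m")
```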
In the embodiment of the present invention, a plurality of robots connect through a network or by other means to exchange information, signaling is transmitted among the robots so that they work cooperatively, and the robots synthesize 3D images shot from different angles at the same moment into an omnidirectional image, thereby realizing multi-robot cooperative omnidirectional dynamic 3D image recording.
An embodiment of the present invention also provides an image processing apparatus. The image processing apparatus can execute the image processing method described above, and the image processing method can be implemented by the image processing apparatus.
Fig. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus is applied to electronic devices each having a driving apparatus, the driving apparatus providing a driving force so that the electronic device can move; each electronic device is also provided with an image acquisition unit.
As shown in fig. 4, the apparatus includes a first acquisition unit 10, a second acquisition unit 20, and a synthesizing unit 30.
The first acquisition unit 10 is configured to acquire the movement of the target object obtained by the plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the position of each electronic device relative to the target object unchanged.
The second acquiring unit 20 is configured to acquire first images of a target object captured by the plurality of electronic devices from different angles, so as to obtain a plurality of first images.
The synthesizing unit 30 is configured to synthesize the plurality of first images to obtain a stereoscopic image of the target object.
Optionally, the plurality of electronic devices are N electronic devices, and the second acquisition unit 20 includes N identification subunits and N shooting subunits, namely the 1st to N-th identification subunits and the 1st to N-th shooting subunits. The i-th identification subunit is provided on the i-th electronic device and is configured to identify the target object through the image acquisition unit of the i-th electronic device. The i-th shooting subunit is provided on the i-th electronic device and is configured to shoot the target object at an i-th angle to obtain a first image, where the i-th angle is the angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
Optionally, the synthesizing unit 30 includes an acquisition subunit and a synthesizing subunit. The acquisition subunit is configured to acquire the shooting time of each first image. The synthesizing subunit is configured to synthesize first images having the same shooting time to obtain the stereoscopic image.
Optionally, the synthesizing subunit includes a removal module and a synthesis module. The removal module is configured to remove the background portion of each first image to obtain background-removed first images. The synthesis module is configured to synthesize the background-removed first images having the same shooting time to obtain the stereoscopic image.
Optionally, the first acquisition unit includes a generating subunit, a calculating subunit, a sending subunit, and a control subunit. The generating subunit is provided on a target electronic device among the plurality of electronic devices and is configured to generate a first motion trajectory according to the movement of the target object, wherein the target electronic device moves along the first motion trajectory. The calculating subunit is provided on the target electronic device and is configured to calculate the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device. The sending subunit is provided on the target electronic device and is configured to send the relative positions to the electronic devices other than the target electronic device. The control subunit is provided on each electronic device other than the target electronic device and is configured to generate a motion trajectory according to the first motion trajectory and that device's relative position with respect to the target electronic device, and to control the device to move along the generated trajectory.
Optionally, the apparatus further includes a third acquisition unit and a calibration unit. The third acquisition unit is configured to acquire a standard time before the first acquisition unit 10 acquires the movement of the target object obtained by the electronic devices detecting the target object. The calibration unit is configured to calibrate the clock of each of the plurality of electronic devices according to the standard time.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technical contents can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An image processing method applied to an electronic device having a driving device for providing a driving force to the electronic device to enable the electronic device to move, the electronic device further having an image capturing unit, the method comprising:
acquiring the movement of a target object obtained by a plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the relative position between each electronic device and the target object unchanged;
acquiring first images obtained by the plurality of electronic devices shooting the target object from different angles, so as to obtain a plurality of first images;
synthesizing the plurality of first images to obtain a stereoscopic image of the target object;
wherein the plurality of electronic devices moving according to the movement of the target object comprises: a target electronic device among the plurality of electronic devices generating a first motion trajectory according to the movement of the target object, wherein the target electronic device moves along the first motion trajectory; the target electronic device calculating the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device; the target electronic device sending the relative positions to the electronic devices other than the target electronic device; and each electronic device other than the target electronic device generating its own motion trajectory according to the first motion trajectory and its relative position with respect to the target electronic device, and moving along the generated trajectory;
the plurality of electronic devices can recognize the target object.
2. The method of claim 1, wherein the plurality of electronic devices is N electronic devices, and wherein obtaining the first images of the target object captured by the plurality of electronic devices from different angles comprises:
the i-th electronic device identifying the target object through an image acquisition unit of the i-th electronic device;
the i-th electronic device shooting the target object at an i-th angle to obtain the first image, wherein the i-th angle is an angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
3. The method of claim 1, wherein the synthesizing the plurality of first images to obtain the stereoscopic image of the target object comprises:
acquiring the shooting time of each first image;
and synthesizing the plurality of first images with the same shooting time to obtain the stereoscopic image.
4. The method according to claim 3, wherein the synthesizing of the plurality of first images with the same shooting time to obtain the stereoscopic image comprises:
removing a background part in the first image to obtain a first image with the background removed;
and synthesizing the plurality of background-removed first images with the same shooting time to obtain the stereoscopic image.
5. The method of claim 1, wherein prior to acquiring the movement of the target object resulting from the detection of the target object by the electronic device, the method further comprises:
acquiring standard time;
and calibrating the time of each electronic device in the plurality of electronic devices according to the standard time.
6. An image processing apparatus applied to an electronic device provided with a driving device for providing a driving force to the electronic device so that the electronic device can move, the electronic device further having an image capturing unit, the apparatus comprising:
a first acquisition unit, configured to acquire the movement of a target object obtained by a plurality of electronic devices detecting the target object, wherein the plurality of electronic devices move according to the movement of the target object so as to keep the relative position between each of the plurality of electronic devices and the target object unchanged;
a second acquisition unit, configured to acquire first images obtained by the plurality of electronic devices shooting the target object from different angles, so as to obtain a plurality of first images;
a synthesizing unit, configured to synthesize the plurality of first images to obtain a stereoscopic image of the target object;
wherein the plurality of electronic devices move according to the movement of the target object in the following manner: a target electronic device among the plurality of electronic devices generates a first motion trajectory according to the movement of the target object, wherein the target electronic device moves along the first motion trajectory; the target electronic device calculates the relative position, with respect to the target electronic device, of each electronic device other than the target electronic device; the target electronic device sends the relative positions to the electronic devices other than the target electronic device; and each electronic device other than the target electronic device generates its own motion trajectory according to the first motion trajectory and its relative position with respect to the target electronic device, and moves along the generated trajectory;
the plurality of electronic devices can recognize the target object.
7. The apparatus according to claim 6, wherein the plurality of electronic devices are N pieces of electronic devices, and the second acquisition unit includes:
an i-th identification subunit, provided on the i-th electronic device and configured to identify the target object through an image acquisition unit of the i-th electronic device;
an i-th shooting subunit, provided on the i-th electronic device and configured to shoot the target object at an i-th angle to obtain the first image, wherein the i-th angle is an angle of the i-th electronic device relative to the target object, i takes each value from 1 to N in turn, the 1st to N-th angles are N different angles, and N is an integer greater than 1.
8. The apparatus of claim 6, wherein the synthesis unit comprises:
an acquisition subunit, configured to acquire a shooting time of each of the first images;
and a synthesizing subunit, configured to synthesize the plurality of first images with the same shooting time to obtain the stereoscopic image.
9. The apparatus of claim 8, wherein the synthesis subunit comprises:
a removal module, configured to remove a background portion of each first image to obtain background-removed first images;
and a synthesis module, configured to synthesize the plurality of background-removed first images with the same shooting time to obtain the stereoscopic image.
10. The apparatus of claim 6, further comprising:
a third acquisition unit, configured to acquire a standard time before the first acquisition unit acquires the movement of the target object obtained by the electronic devices detecting the target object;
and a calibration unit, configured to calibrate the time of each of the plurality of electronic devices according to the standard time.
CN201611178830.9A 2016-12-19 2016-12-19 Image processing method and device Active CN108616702B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611178830.9A CN108616702B (en) 2016-12-19 2016-12-19 Image processing method and device
PCT/CN2017/117151 WO2018113657A1 (en) 2016-12-19 2017-12-19 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611178830.9A CN108616702B (en) 2016-12-19 2016-12-19 Image processing method and device

Publications (2)

Publication Number Publication Date
CN108616702A CN108616702A (en) 2018-10-02
CN108616702B true CN108616702B (en) 2021-02-12

Family

ID=62624532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611178830.9A Active CN108616702B (en) 2016-12-19 2016-12-19 Image processing method and device

Country Status (2)

Country Link
CN (1) CN108616702B (en)
WO (1) WO2018113657A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693506A (en) * 2020-12-15 2022-07-01 华为技术有限公司 Image processing method and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030035262A (en) * 2001-10-30 2003-05-09 거래주식회사 Method and apparatus for controlling image size
CN1976445A (en) * 2006-12-15 2007-06-06 上海鼎业电子科技有限公司 Multi-camera monitoring system based on three-dimensional video frequency dynamic tracking and tracking method thereof
CN102289145A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) camera equipment based on 3D panoramic vision
CN102665087A (en) * 2012-04-24 2012-09-12 浙江工业大学 Automatic shooting parameter adjusting system of three dimensional (3D) camera device
CN104751164A (en) * 2013-12-30 2015-07-01 鸿富锦精密工业(武汉)有限公司 Method and system for capturing movement trajectory of object
CN105245846A (en) * 2015-10-12 2016-01-13 西安斯凯智能科技有限公司 Multi-unmanned aerial vehicle cooperative tracking type shooting system and shooting method
CN105892668A (en) * 2016-04-01 2016-08-24 纳恩博(北京)科技有限公司 Equipment control method and device
CN105915784A (en) * 2016-04-01 2016-08-31 纳恩博(北京)科技有限公司 Information processing method and information processing device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1479046A4 (en) * 2002-02-27 2007-01-31 Geo Rae Co Ltd Method and system for displaying stereoscopic image
CN102903090B (en) * 2012-01-20 2015-11-25 李文松 Panoramic stereo image synthetic method, device, system and browsing apparatus
CN106162150B (en) * 2016-09-30 2018-08-17 维沃移动通信有限公司 A kind of photographic method and mobile terminal

Also Published As

Publication number Publication date
CN108616702A (en) 2018-10-02
WO2018113657A1 (en) 2018-06-28

Similar Documents

Publication Publication Date Title
US10789765B2 (en) Three-dimensional reconstruction method
JP4532856B2 (en) Position and orientation measurement method and apparatus
JP5260705B2 (en) 3D augmented reality provider
US20180192033A1 (en) Multi-view scene flow stitching
JP4160572B2 (en) Image processing apparatus and image processing method
EP1670237A2 (en) Matching un-synchronized image portions
US20190333269A1 (en) Three-dimensional reconstruction method, three-dimensional reconstruction apparatus, and generation method for generating three-dimensional model
KR20160140452A (en) Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product
US20120162220A1 (en) Three-dimensional model creation system
GB2557465A (en) Method for calibrating a camera and calibration system
KR101672732B1 (en) Apparatus and method for tracking object
CN104980651A (en) Image processing apparatus and control method
US8531505B2 (en) Imaging parameter acquisition apparatus, imaging parameter acquisition method and storage medium
JP2010239564A (en) Imaging apparatus, imaging control method, and program
CN110751685B (en) Depth information determination method, determination device, electronic device and vehicle
CN109495733B (en) Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof
KR20190063153A (en) System and method for simultaneous reconsttuction of initial 3d trajectory and velocity using single camera images
JP2018155664A (en) Imaging system, imaging control method, image processing device, and image processing program
CN108616702B (en) Image processing method and device
US11176353B2 (en) Three-dimensional dataset and two-dimensional image localization
CN109360277A (en) Virtual emulation display control method and device, storage medium and electronic device
US20160127617A1 (en) System for tracking the position of the shooting camera for shooting video films
JP2019062340A (en) Image shake correction apparatus and control method
JP4578653B2 (en) Depth image generation apparatus, depth image generation method, and computer-readable recording medium storing a program for causing a computer to execute the method
JP4469757B2 (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant