CN106815809B - Picture processing method and device - Google Patents

Picture processing method and device

Info

Publication number
CN106815809B
Authority
CN
China
Prior art keywords
target object
picture
image
images
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710207499.7A
Other languages
Chinese (zh)
Other versions
CN106815809A (en)
Inventor
董培
白天翔
徐霄
许枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201710207499.7A
Publication of CN106815809A
Application granted
Publication of CN106815809B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a picture processing method and a picture processing device. The picture processing method includes: acquiring at least two frames of images, and acquiring at least one target object in each frame of image; stitching the at least two frames of images using a first processing mode to obtain a stitched picture; determining, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and processing the first target object using a second processing mode to obtain a processed second target object; and fusing the second target object with the stitched picture.

Description

Picture processing method and device
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a picture processing method and device.
Background
With the development and popularization of intelligent terminal technology, image processing methods based on it have become increasingly common. Since the amount of information a single picture can carry is generally limited, in some application scenarios users want to stitch multiple pictures into one stitched picture and record or transmit it.
Existing picture stitching algorithms exploit the similarity of corresponding pixels in the overlapping portions of the pictures to form connections and transitions that are as natural as possible. In this process, the rotation, affine, and perspective relationships between adjacent pictures are taken into account and handled accordingly to achieve seamless stitching. However, the various stitching algorithms in the prior art still introduce various types of deformation and distortion, which degrades the stitching effect.
Disclosure of Invention
According to an aspect of the present invention, there is provided a picture processing method, including: acquiring at least two frames of images, and acquiring at least one target object in each frame of image; stitching the at least two frames of images using a first processing mode to obtain a stitched picture; determining, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and processing the first target object using a second processing mode to obtain a processed second target object; and fusing the second target object with the stitched picture.
According to another aspect of the present invention, there is provided a picture processing apparatus, including: an acquisition unit configured to acquire at least two frames of images and to acquire at least one target object in each frame of image; a stitching unit configured to stitch the at least two frames of images using a first processing mode to obtain a stitched picture; a processing unit configured to determine, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and to process the first target object using a second processing mode to obtain a processed second target object; and a fusion unit configured to fuse the second target object with the stitched picture.
According to the picture processing method and device provided by the invention, objects in different regions of the stitched picture can be processed differently with different processing modes, and the processed result is fused back into the stitched picture, thereby reducing the distortion of the stitched picture and achieving a seamless stitching effect.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1(a) and 1(b) are schematic diagrams of stitched pictures obtained by plane-projection stitching in different scenes;
Fig. 2(a) and 2(b) are schematic diagrams of stitched pictures obtained by spherical-projection stitching in different scenes;
Fig. 3 shows a flowchart of a picture processing method according to an embodiment of the invention;
Fig. 4 illustrates a specific example of a picture processing method according to an embodiment of the invention;
Fig. 5 shows a block diagram of a picture processing apparatus according to an embodiment of the invention; and
Fig. 6 shows a block diagram of a picture processing apparatus according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are some, but not all, of the embodiments of the invention.
With the growing computing power of smartphones and advances in camera technology, new photographing modes keep emerging. The panoramic selfie mode is a new shooting mode based on an ordinary front-facing smartphone camera: by holding and rotating the phone, multiple shots are triggered automatically, and a photo with a wide field of view is obtained through techniques such as picture stitching.
When the perspective relationships between adjacent pictures are taken into account and multiple input pictures are projected and fused using an existing picture stitching algorithm, several target projection surfaces can be chosen, such as the commonly used plane, sphere, and cone, and each choice introduces different types of deformation and distortion into the final stitched picture.
Fig. 1(a) and 1(b) show schematic diagrams of stitched pictures obtained by plane-projection stitching in different scenes. This projection preserves straight-line elements of the actual scene well: objects that appear straight to the naked eye, such as the lane lines and parking-space lines of the parking lot in fig. 1(a), remain straight after stitching. However, stretching deformation in the corner regions of the stitched picture tends to be more obvious; for example, the white car in the right edge region of fig. 1(a) and the human face in the lower left corner of fig. 1(b) are visibly elongated, which can make the stitching effect unsatisfactory when the total field of view is wide.
Fig. 2(a) and 2(b) show schematic diagrams of stitched pictures obtained by spherical-projection stitching in different scenes. This approach fuses the pictures onto a spherical target surface, so objects in the corner regions of the final stitched picture (e.g., the white car in the right edge region of fig. 2(a) and the human face in the lower left corner of fig. 2(b)) generally do not suffer the stretching seen with the plane-based projection of fig. 1(a) and 1(b). However, the lane lines and parking-space lines of the parking lot in fig. 2(a) cannot be kept as straight as with plane projection.
In view of the above problems, fig. 3 shows a flowchart of a picture processing method 300 according to an embodiment of the present invention. The method can be applied to an electronic device, for example any terminal that can capture and process pictures, such as a mobile phone, a PDA, a tablet computer, or a notebook computer, as well as portable, pocket, hand-held, computer-embedded, or vehicle-mounted devices.
As shown in fig. 3, the picture processing method 300 may include step S301: acquiring at least two frames of images, and acquiring at least one target object in each frame of image. The at least two frames of images can be two or more images to be stitched together; adjacent images may have overlapping portions so that the stitching algorithm can determine their relative positions and stitch them. For each of the at least two frames, the target object in that frame may be acquired. The purpose of acquiring the target objects is to process them further in subsequent picture processing steps, so that they are not excessively deformed in the final stitched picture, which would harm the stitching effect. The target object acquired from an image is therefore usually an important object or the main shooting subject in the image; for example, it may be a human face, or an object such as a vehicle or a building. In an embodiment of the invention, the target object may be acquired from each frame by image recognition; for example, a face image present in each frame may be acquired by face recognition. In another embodiment, the target object may be acquired from each frame by image tracking, e.g., a face image may be acquired by face tracking. In still another embodiment, the target object may be acquired by a combination of image detection and tracking. In yet another embodiment, when the at least two frames are acquired, depth information corresponding to them may also be acquired, so that the depth information of the image containing the target object can be used to distinguish the target object from other objects in different depth ranges and thereby select the target object. For example, a target object such as a human face usually lies in the foreground, so the face image can be separated from the background according to the depth information, yielding an accurate face image. The depth information can be obtained with a binocular camera; for example, when a user takes a selfie, the dual cameras of a smartphone can be used to obtain it. The above target objects and acquisition methods are merely examples; in practical applications of the invention, any target object and any acquisition method may be used, without limitation.
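As a concrete illustration of step S301, the following minimal sketch loads the input frames with OpenCV and detects face regions in each frame with a Haar-cascade detector. The file names and the choice of detector are assumptions for illustration only; as described above, the embodiment may equally use tracking or a detection-plus-tracking combination.

```python
import cv2

def acquire_frames_and_targets(paths):
    """Load the input frames and detect the target objects (faces) in each.

    Returns a list of (frame, face_boxes) pairs, where each face box is an
    (x, y, w, h) rectangle in the frame's pixel coordinates.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    results = []
    for path in paths:
        frame = cv2.imread(path)
        if frame is None:
            raise FileNotFoundError(path)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        results.append((frame, list(faces)))
    return results

# Hypothetical file names for the left, middle, and right frames of fig. 4.
frames_and_targets = acquire_frames_and_targets(
    ["left.jpg", "middle.jpg", "right.jpg"])
```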
In step S302, the at least two frames of images are stitched using a first processing mode to obtain a stitched picture. In this step, the at least two frames acquired in step S301 are stitched, and the first processing mode may be used for the stitching. Specifically, the first processing mode may be the aforementioned plane projection. When stitching the images by plane projection, a plane may be taken as the target projection surface, the at least two frames are projected onto that plane, and the overlapping regions between adjacent images are made to transition as smoothly as possible. After the images are stitched by plane projection, the stitched picture is obtained. Plane projection as the first processing mode is only an example; in practical applications of the invention, any processing mode may be used to obtain the stitched picture, for example spherical projection, conical projection, or other projection modes may also be used to project and stitch the images.
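The sketch below illustrates one way the first processing mode (plane projection) can be realized for two frames: ORB features are matched between the frames, a homography (a planar projective mapping) is estimated with RANSAC, and the second frame is warped into the plane of the first. This is a minimal illustration under those assumptions, not the patent's exact stitching algorithm; in particular, the overlap blending is reduced to a simple overwrite.

```python
import cv2
import numpy as np

def stitch_planar(img1, img2):
    """Stitch img2 onto the plane of img1 via a homography (plane projection)."""
    orb = cv2.ORB_create(2000)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Keep the 200 strongest matches from img2 (query) to img1 (train).
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    # Canvas width is a rough guess; a real implementation would compute
    # the exact bounds from the warped corners of img2.
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[0:h, 0:w] = img1  # crude overlap handling, for illustration only
    return canvas
```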
In step S303, a first target object corresponding to a specified region of the stitched picture is determined from the at least one target object, and the first target object is processed with a second processing mode to obtain a processed second target object.
In this step, considering that stitching the images with the first processing mode in step S302 may produce large distortion in some regions of the stitched picture, another processing mode may be selected in step S303 to process the target objects located in the distorted regions, so that the processed stitched picture has a better effect. Optionally, when the first processing mode is plane projection, the second processing mode may be selected to be, for example, spherical projection. Since plane projection tends to produce large distortion in the edge regions of the stitched picture, the specified region may then be the edge region of the stitched picture: its left and right edges, its upper and lower edges, or its periphery and/or four corners.
After the specified region of the stitched picture that needs the second processing mode is determined, the frame images acquired in step S301 and the position of each target object within each frame may be used to determine which target objects fall within the specified region of the stitched picture, and those target objects may be processed as first target objects with the second processing mode. For example, if a frame used for stitching is known to lie on the left side of the stitched picture and a target object lies at the left edge of that frame, it can be determined that the target object lies in the edge region of the stitched picture, and that target object may be processed with the second processing mode. When depth information corresponding to the images was also acquired in step S301, the depth information of the image containing the first target object may be used to distinguish the first target object from other objects in different depth-of-field ranges, so as to select the first target object. In practical applications, the specified region may be any region that needs the second processing mode, and the first and second processing modes may each be one of plane projection, spherical projection, cylindrical projection, conical projection, and the like; for example, the first processing mode may be spherical projection and, correspondingly, the second processing mode may be plane projection, in which case the specified region may also change according to the effect of the stitched picture.
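Where per-frame depth information is available, the selection of the first target object can be as simple as thresholding the depth map: pixels nearer than a foreground cutoff are kept, and everything else is masked out. The sketch below assumes a depth map aligned pixel-for-pixel with the frame and a hand-picked cutoff in millimetres; both are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def extract_foreground(frame, depth_mm, max_depth_mm=1200):
    """Keep only pixels nearer than max_depth_mm (the foreground target).

    frame:    H x W x 3 uint8 image containing the first target object.
    depth_mm: H x W depth map aligned with the frame, in millimetres.
    Returns the masked target image and its binary mask.
    """
    mask = (depth_mm < max_depth_mm).astype(np.uint8)
    target = frame * mask[..., None]  # zero out the background pixels
    return target, mask
```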
When the first target object in the specified region of the stitched picture is processed with the second processing mode, the separated first target object may be projected (or otherwise transformed) with the second processing mode to obtain a processed second target object. The second target object can be regarded as the first target object with the distortion further removed.
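One common form of the second processing mode is spherical re-projection of the extracted patch: for focal length f, a planar coordinate (x, y) measured from the image centre maps to (f·atan(x/f), f·y/√(x² + f²)). The sketch below applies this with cv2.remap, which expects the inverse mapping (for each output pixel, the planar coordinate to sample). The focal length is an assumed parameter; a value on the order of the patch width keeps the mapping well-behaved.

```python
import cv2
import numpy as np

def warp_spherical(patch, f):
    """Re-project a planar image patch onto a sphere with focal length f.

    cv2.remap needs, for each output (spherical) pixel (u, v), the planar
    source coordinate to sample, i.e. the inverse of the forward mapping:
        x = f * tan(u / f),  y = v * sqrt(x**2 + f**2) / f
    with (u, v) measured from the patch centre.
    """
    h, w = patch.shape[:2]
    u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    x = f * np.tan(u / f)
    y = v * np.sqrt(x ** 2 + f ** 2) / f
    map_x = (x + w / 2.0).astype(np.float32)
    map_y = (y + h / 2.0).astype(np.float32)
    return cv2.remap(patch, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# E.g., second_target = warp_spherical(first_target, f=first_target.shape[1])
```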
In step S304, the second target object is fused with the stitched picture. Since the size and shape of the second target object may have changed when the first target object was processed (e.g., projected), the second target object may be adaptively resized; the resized second target object then replaces the first target object in the stitched picture and is fused with it. During the final fusion, the fused edges may be softened as appropriate to enhance the fusion effect.
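A minimal way to realize the fusion with edge softening is feathered alpha blending: the binary mask of the re-projected target is blurred into a soft alpha matte, which then mixes the target into the stitched picture so the seam fades out gradually. The sketch assumes the resized second target object and its mask have already been placed on a canvas the same size as the stitched picture; the feather width is an illustrative parameter.

```python
import cv2
import numpy as np

def fuse_with_soft_edges(stitched, target, mask, feather_px=15):
    """Blend `target` into `stitched` where `mask` is 1, softening the seam.

    stitched, target: H x W x 3 uint8 canvases of identical size.
    mask:             H x W binary mask of the (already positioned) target.
    """
    k = 2 * feather_px + 1
    # Blur the hard mask into a soft alpha matte in [0, 1].
    alpha = cv2.GaussianBlur(mask.astype(np.float32), (k, k), 0)[..., None]
    fused = alpha * target.astype(np.float32) \
          + (1.0 - alpha) * stitched.astype(np.float32)
    return fused.astype(np.uint8)
```

When a more sophisticated blend is wanted, OpenCV's cv2.seamlessClone offers a gradient-domain alternative to this simple alpha matte.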
Fig. 4 illustrates a specific example of the picture processing method according to an embodiment of the invention. As shown in fig. 4, three images (the left, middle, and right images in the upper left corner of fig. 4) are captured with the front-facing dual camera of a smartphone, and the target object in each frame, a human face in this example, is acquired. The three images are stitched by plane projection (the first processing mode). The face image located in the edge region of the stitched picture is then identified from the relative position of each frame within the stitched picture and extracted using the depth information of the image, to serve as the first target object: as shown in the lower left corner of fig. 4, the face at the left edge of the stitched picture is extracted as the first target object, and spherical projection (the second processing mode) is applied to it to obtain the second target object shown in the lower middle of fig. 4. Fusing the spherically projected second target object into the stitched picture yields the result in the lower right corner of fig. 4. That is, the stitched picture in the lower right corner of fig. 4 is a fused image that combines the plane projection of the three images with the spherical projection of the face in the lower left corner. When the processed second target object is fused into the plane-projected stitched picture, it can be appropriately enlarged according to the area and position previously occupied by the first target object, and the fused edges are softened after fusion.
According to the picture processing method provided by the invention, objects in different regions of the stitched picture can be processed differently with different processing modes, and the processed result is fused back into the stitched picture, thereby reducing the distortion of the stitched picture and achieving a seamless stitching effect.
Next, a block diagram of a picture processing apparatus according to an embodiment of the present invention is described with reference to fig. 5. The apparatus can execute the picture processing method described above. Since its operation is substantially the same as the steps of the picture processing method described with reference to fig. 3, only a brief description is given here, and repeated details are omitted.
As shown in fig. 5, the picture processing apparatus 500 includes an acquisition unit 510, a splicing unit 520, a processing unit 530, and a fusion unit 540. It should be appreciated that fig. 5 only shows components relevant to an embodiment of the present invention, and other components are omitted, but this is merely illustrative, and the apparatus 500 may include other components as desired.
The electronic device in which the picture processing apparatus 500 of fig. 5 is located may be various terminals that can be used to take pictures and process pictures, such as a mobile phone, a PDA, a tablet computer, a notebook computer, and the like, and may also be a portable, pocket, hand-held, computer-embedded, or vehicle-mounted apparatus.
As shown in fig. 5, the acquisition unit 510 acquires at least two frames of images and acquires at least one target object in each frame of image. The at least two frames of images can be two or more images to be stitched together; adjacent images may have overlapping portions so that the stitching algorithm can determine their relative positions and stitch them. The acquisition unit 510 may acquire the target object in each frame. The purpose of acquiring the target objects is to process them further in subsequent steps, so that they are not excessively deformed in the final stitched picture, which would harm the stitching effect. The target object acquired by the acquisition unit 510 is therefore usually an important object or the main shooting subject in the image; for example, it may be a human face, or an object such as a vehicle or a building. In an embodiment of the invention, the acquisition unit 510 may acquire the target object from each frame by image recognition; for example, a face image present in each frame may be acquired by face recognition. In another embodiment, the acquisition unit 510 may acquire the target object by image tracking, e.g., a face image may be acquired by face tracking. In still another embodiment, the acquisition unit 510 may acquire the target object by a combination of image detection and tracking. In yet another embodiment, when acquiring the at least two frames, the acquisition unit 510 may also acquire their corresponding depth information, so that the depth information of the image containing the target object can be used to distinguish the target object from other objects in different depth ranges and thereby select the target object. For example, a target object such as a human face usually lies in the foreground, so the acquisition unit 510 can separate the face image from the background according to the depth information, yielding an accurate face image. The depth information can be obtained with a binocular camera; for example, when a user takes a selfie, the dual cameras of a smartphone can be used to obtain it. The above target objects and acquisition methods are merely examples; in practical applications of the invention, the acquisition unit 510 may use any target object and any acquisition method, without limitation.
The stitching unit 520 stitches the at least two frames of images using a first processing mode to obtain a stitched picture. The stitching unit 520 stitches the at least two frames acquired by the acquisition unit 510, and may use the first processing mode to do so. Specifically, the first processing mode may be the aforementioned plane projection. When stitching the images by plane projection, the stitching unit 520 may take a plane as the target projection surface and project the at least two frames onto that plane, so that the overlapping regions between adjacent images transition as smoothly as possible. After the stitching unit 520 stitches the images by plane projection, the stitched picture is obtained. Plane projection as the first processing mode is only an example; in practical applications of the invention, the stitching unit 520 may process the images with any processing mode to obtain the stitched picture, for example by projecting and stitching with spherical projection, conical projection, or other projection modes.
The processing unit 530 determines, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and processes the first target object with a second processing mode to obtain a processed second target object.
Considering that stitching the images with the first processing mode in the stitching unit 520 may produce large distortion in some regions of the stitched picture, the processing unit 530 may select another processing mode to process the target objects located in the distorted regions, so that the processed stitched picture has a better effect. Optionally, when the first processing mode is plane projection, the second processing mode may be selected to be, for example, spherical projection. Since plane projection tends to produce large distortion in the edge regions of the stitched picture, the specified region may then be the edge region of the stitched picture: its left and right edges, its upper and lower edges, or its periphery and/or four corners.
After determining the specified region of the stitched picture that needs the second processing mode, the processing unit 530 may use the frame images acquired by the acquisition unit 510 and the position of each target object within each frame to determine which target objects fall within the specified region of the stitched picture, and process those target objects as first target objects with the second processing mode. For example, if a frame used for stitching is known to lie on the left side of the stitched picture and a target object lies at the left edge of that frame, the processing unit 530 can determine that the target object lies in the edge region of the stitched picture and may process it with the second processing mode. When the acquisition unit 510 also acquires depth information corresponding to the images, the processing unit 530 may use the depth information of the image containing the first target object to distinguish the first target object from other objects in different depth-of-field ranges, so as to select the first target object. In practical applications, the specified region may be any region that needs the second processing mode, and the first and second processing modes may each be one of plane projection, spherical projection, cylindrical projection, conical projection, and the like; for example, the first processing mode may be spherical projection and, correspondingly, the second processing mode may be plane projection, in which case the specified region may also change according to the effect of the stitched picture.
When the processing unit 530 processes the first target object in the specified region of the stitched picture with the second processing mode, the separated first target object may be projected (or otherwise transformed) with the second processing mode to obtain a processed second target object. The second target object can be regarded as the first target object with the distortion further removed.
The fusion unit 540 fuses the second target object with the stitched picture. Since the size and shape of the second target object may have changed when the first target object was processed (e.g., projected), the fusion unit 540 may adaptively resize the second target object; the resized second target object then replaces the first target object in the stitched picture and is fused with it. During the final fusion, the fusion unit 540 may soften the fused edges as appropriate to enhance the fusion effect.
With the picture processing device provided by the invention, objects in different regions of the stitched picture can be processed differently with different processing modes, and the processed result is fused back into the stitched picture, thereby reducing the distortion of the stitched picture and achieving a seamless stitching effect.
Next, a block diagram of a picture processing device according to an embodiment of the present invention is described with reference to fig. 6. The picture processing device can execute the picture processing method described above. Since its operation is substantially the same as the steps of the picture processing method described with reference to fig. 3, only a brief description is given here, and repeated details are omitted.
The picture processing device 600 in fig. 6 may include one or more processors 610 and a memory 620. The picture processing device 600 may also include other components, such as an input unit and an output unit (not shown), interconnected by a bus system and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the picture processing device 600 shown in fig. 6 are only exemplary and not limiting; the picture processing device 600 may have other components and structures as needed.
The processor 610 is the control center of the device: it connects the various parts of the entire apparatus using various interfaces and lines, and performs the various functions of the picture processing device 600 and processes data by running or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby monitoring the picture processing device 600 as a whole. The processor 610 may include one or more processing cores. The processor 610 may also integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may instead not be integrated into the processor 610.
The memory 620 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium.
The processor 610 may execute the program instructions to implement the following steps: acquiring at least two frames of images, and acquiring at least one target object in each frame of image; stitching the at least two frames of images using a first processing mode to obtain a stitched picture; determining, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and processing the first target object using a second processing mode to obtain a processed second target object; and fusing the second target object with the stitched picture.
An input unit, not shown, may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connected device according to a predetermined program. The touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 610, and can receive and execute commands sent by the processor 610. Touch-sensitive surfaces may be implemented as resistive, capacitive, infrared, or surface-acoustic-wave types. The input unit may also comprise input devices other than a touch-sensitive surface, including, but not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The output unit may output various information, such as image information and application control information, to the outside (e.g., to a user). For example, the output unit may be a display unit operable to display information input by or provided to the user and the various graphical user interfaces of the picture processing device 600, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit may include a display panel, which may be configured in the form of an LCD (liquid crystal display), an OLED (organic light-emitting diode), or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 610 to determine the type of touch event, and the processor 610 then provides a corresponding visual output on the display panel according to that type. The touch-sensitive surface and the display panel may be implemented as two separate components for the input and output functions, or, in some embodiments, may be integrated to realize both.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific implementation of the picture processing method described above may refer to the corresponding description in the product embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not implemented.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A picture processing method, comprising:
acquiring at least two frames of images, and acquiring at least one target object in each frame of image;
stitching the at least two frames of images using a first processing mode to obtain a stitched picture;
determining, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and processing the first target object using a second processing mode to obtain a processed second target object; and
fusing the second target object with the stitched picture;
wherein the first processing mode and the second processing mode are each one of plane projection, spherical projection, cylindrical projection, and conical projection;
wherein the target object is acquired by a combination of image detection and image tracking;
wherein the acquiring of the at least two frames of images comprises: acquiring depth information corresponding to each of the at least two frames of images; and
wherein the determining, from the at least one target object, of the first target object corresponding to the specified region of the stitched picture comprises: distinguishing, using the depth information of the image in which the first target object is located, the first target object from other objects in different depth-of-field ranges, so as to select the first target object.
2. The method of claim 1, wherein the second processing mode is spherical projection or cylindrical projection.
3. The method of claim 1, wherein the specified region of the stitched picture is an edge region of the stitched picture.
4. The method of claim 1, wherein the fusing of the second target object with the stitched picture comprises:
adaptively resizing the second target object; and
replacing the first target object in the stitched picture with the resized second target object, and fusing the resized second target object with the stitched picture.
5. A picture processing apparatus, comprising:
an acquisition unit configured to acquire at least two frames of images and to acquire at least one target object in each frame of image;
a stitching unit configured to stitch the at least two frames of images using a first processing mode to obtain a stitched picture;
a processing unit configured to determine, from the at least one target object, a first target object corresponding to a specified region of the stitched picture, and to process the first target object using a second processing mode to obtain a processed second target object; and
a fusion unit configured to fuse the second target object with the stitched picture;
wherein the first processing mode and the second processing mode are each one of plane projection, spherical projection, cylindrical projection, and conical projection;
wherein the acquisition unit acquires the target object by a combination of image detection and image tracking;
wherein the acquisition unit acquires depth information corresponding to each of the at least two frames of images; and
wherein the processing unit distinguishes, using the depth information of the image in which the first target object is located, the first target object from other objects in different depth-of-field ranges, so as to select the first target object.
6. The apparatus of claim 5, wherein the second processing mode is spherical projection or cylindrical projection.
7. The apparatus of claim 5, wherein the specified region of the stitched picture is an edge region of the stitched picture.
8. The apparatus of claim 5, wherein the fusion unit adaptively resizes the second target object, replaces the first target object in the stitched picture with the resized second target object, and fuses the resized second target object with the stitched picture.
CN201710207499.7A 2017-03-31 2017-03-31 Picture processing method and device Active CN106815809B (en)

Priority Applications (1)

Application Number: CN201710207499.7A
Priority Date: 2017-03-31
Filing Date: 2017-03-31
Title: Picture processing method and device

Applications Claiming Priority (1)

Application Number: CN201710207499.7A
Priority Date: 2017-03-31
Filing Date: 2017-03-31
Title: Picture processing method and device

Publications (2)

Publication Number Publication Date
CN106815809A CN106815809A (en) 2017-06-09
CN106815809B (en) 2020-08-25

Family

ID=59116430

Family Applications (1)

Application Number: CN201710207499.7A (granted as CN106815809B, Active)
Title: Picture processing method and device

Country Status (1)

Country Link
CN (1) CN106815809B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369129B (en) * 2017-06-26 2020-01-21 深圳岚锋创视网络科技有限公司 Panoramic image splicing method and device and portable terminal
CN111414902A (en) * 2019-01-08 2020-07-14 北京京东尚科信息技术有限公司 Image annotation method and device
CN111105347B (en) * 2019-11-19 2020-11-13 贝壳找房(北京)科技有限公司 Method, device and storage medium for generating panoramic image with depth information
US11055835B2 (en) 2019-11-19 2021-07-06 Ke.com (Beijing) Technology, Co., Ltd. Method and device for generating virtual reality data
CN111124231B (en) * 2019-12-26 2021-02-12 维沃移动通信有限公司 Picture generation method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104599247A (en) * 2015-01-04 2015-05-06 深圳市腾讯计算机系统有限公司 Image correction method and device
CN106447602A (en) * 2016-08-31 2017-02-22 浙江大华技术股份有限公司 Image mosaic method and device
CN106506795A (en) * 2016-09-13 2017-03-15 努比亚技术有限公司 A kind of mobile terminal and image processing method


Also Published As

Publication number Publication date
CN106815809A (en) 2017-06-09

Similar Documents

Publication Publication Date Title
CN106815809B (en) Picture processing method and device
US11747958B2 (en) Information processing apparatus for responding to finger and hand operation inputs
US9998651B2 (en) Image processing apparatus and image processing method
CN107357540B (en) Display direction adjusting method and mobile terminal
KR102121592B1 (en) Method and apparatus for protecting eyesight
US10055081B2 (en) Enabling visual recognition of an enlarged image
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
US8135440B2 (en) System for using mobile communication terminal as pointer and method and medium thereof
CN106937055A (en) A kind of image processing method and mobile terminal
US10863077B2 (en) Image photographing method, apparatus, and terminal
US20150074573A1 (en) Information display device, information display method and information display program
US20150063785A1 (en) Method of overlappingly displaying visual object on video, storage medium, and electronic device
US20160334975A1 (en) Information processing device, non-transitory computer-readable recording medium storing an information processing program, and information processing method
CN107172347B (en) Photographing method and terminal
CN110795019B (en) Key recognition method and device for soft keyboard and storage medium
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
CN106981048B (en) Picture processing method and device
EP2939411B1 (en) Image capture
US20160350932A1 (en) Method and device for displaying image
US11770603B2 (en) Image display method having visual effect of increasing size of target image, mobile terminal, and computer-readable storage medium
US20180220066A1 (en) Electronic apparatus, operating method of electronic apparatus, and non-transitory computer-readable recording medium
CN110738185B (en) Form object identification method, form object identification device and storage medium
CN112749590A (en) Object detection method, device, computer equipment and computer readable storage medium
US9838615B2 (en) Image editing method and electronic device using the same
US20130236117A1 (en) Apparatus and method for providing blurred image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant