CN110300264B - Image processing method, image processing device, mobile terminal and storage medium


Info

Publication number
CN110300264B
CN110300264B (application CN201910579241.9A)
Authority
CN
China
Prior art keywords
target object
images
cameras
image
relative position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910579241.9A
Other languages
Chinese (zh)
Other versions
CN110300264A (en)
Inventor
杜鹏 (Du Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority claimed from CN201910579241.9A
Publication of CN110300264A
Application granted
Publication of CN110300264B
Current legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/80 Camera processing pipelines; Components thereof

Abstract

The application discloses an image processing method, an image processing device, a mobile terminal and a storage medium, and relates to the technical field of mobile terminals. The method is applied to the mobile terminal and comprises the following steps: the mobile terminal shoots a target scene through at least one camera of a plurality of cameras to obtain at least one first image; a target object is determined from the at least one first image and spatial information of the target object is acquired; the relative positional relationship of the plurality of cameras is adjusted based on the spatial information, and the target scene is shot through the plurality of cameras after the adjustment to obtain a plurality of second images; and image content compensation is performed on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. By capturing the target scene simultaneously from different viewing angles with multiple cameras whose positions have been adjusted according to the object's spatial information, the application obtains images at additional viewing angles, thereby improving image quality and display effect.

Description

Image processing method, image processing device, mobile terminal and storage medium
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and apparatus, a mobile terminal, and a storage medium.
Background
With the development of science and technology, mobile terminals have become one of the most common electronic products in daily life, and users often take pictures with them. However, current shooting can only acquire images at fixed angles: a user who wants images of an object from every viewing angle must photograph each angle separately, which is tedious to operate.
Disclosure of Invention
In view of the above problems, the present application proposes an image processing method, apparatus, mobile terminal, and storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to a mobile terminal, and the method includes: the mobile terminal shoots a target scene through at least one camera in the multiple cameras to obtain at least one first image; determining a target object from the at least one first image and acquiring spatial information of the target object; adjusting the relative position relation of the plurality of cameras based on the spatial information, and shooting the target scene through the plurality of cameras after the relative position relation is adjusted to obtain a plurality of second images; and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to a mobile terminal, and the apparatus includes: the shooting module is used for shooting a target scene through at least one camera in the plurality of cameras by the mobile terminal to obtain at least one first image; the determining module is used for determining a target object from the at least one first image and acquiring spatial information of the target object; the adjusting module is used for adjusting the relative position relation of the plurality of cameras based on the spatial information, and shooting the target scene through the plurality of cameras after the relative position relation is adjusted to obtain a plurality of second images; and the image content compensation module is used for performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
In a third aspect, an embodiment of the present application provides a mobile terminal, including a memory and a processor, where the memory is coupled to the processor, and the memory stores instructions, and the processor executes the above method when the instructions are executed by the processor.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The embodiment of the application provides an image processing method, an image processing device, a mobile terminal and a storage medium. The method is applied to the mobile terminal and comprises the following steps: the mobile terminal shoots a target scene through at least one camera of the plurality of cameras to obtain at least one first image; a target object is determined from the at least one first image and spatial information of the target object is acquired; the relative positional relationship of the plurality of cameras is adjusted based on the spatial information, and the target scene is shot through the plurality of cameras after the adjustment to obtain a plurality of second images; and image content compensation is performed on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. By capturing the target scene simultaneously from different viewing angles with multiple cameras whose positions have been adjusted according to the object's spatial information, the application obtains images at additional viewing angles, thereby improving image quality and display effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 illustrates a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of another mobile terminal provided in an embodiment of the present application;
FIG. 3 is a flow chart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a further image processing method provided by an embodiment of the present application;
FIG. 5 is a flow chart illustrating a further image processing method provided by an embodiment of the present application;
fig. 6 is a schematic diagram illustrating an operation of a mobile terminal according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating still another operation of a mobile terminal according to an embodiment of the present application;
fig. 8 shows a block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 9 is a block diagram of a mobile terminal for executing an image processing method according to an embodiment of the present application;
fig. 10 illustrates a storage unit for storing or carrying program codes for implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, the photographing function has become a standard feature of most mobile terminals. Users carry their mobile terminals with them and record memorable moments through them; moreover, with the rapid development of mobile terminal intelligence, many users want novel and distinctive pictures to share, and their requirements on picture quality and photographing modes are increasingly high. However, when a user currently wants pictures of a target object from every angle, the user must shoot each angle of the object and maintain a certain trajectory while shooting, which is cumbersome and presents a certain operational difficulty. Meanwhile, the user cannot easily judge during shooting how to obtain a satisfactory image, and must shoot many times to obtain one, which again makes the operation cumbersome.
In view of the above problems, the inventors have found and proposed an image processing method, an image processing apparatus, a mobile terminal, and a storage medium according to embodiments of the present application through long-term research, and obtain images at other viewing angles by simultaneously capturing images of a target scene at different viewing angles by a plurality of cameras adjusted according to spatial information of an object, thereby improving image quality and improving a display effect. The specific image processing method is described in detail in the following embodiments.
As shown in fig. 1, the mobile terminal 100 according to the present disclosure may include a plurality of cameras 140 and a terminal body 150. The plurality of cameras 140 are movably disposed on the terminal body 150 and have different viewing angles. The plurality of cameras 140 may all be front-facing cameras of the mobile terminal, that is, cameras located on the same side as the operation interface of the mobile terminal; or they may all be rear cameras, that is, cameras located on the side opposite the operation interface (as shown in fig. 1); the mobile terminal may further include cameras at other positions, which is not limited herein. Fig. 1 is a schematic diagram and does not limit the mobile terminal of the present application. Further, the positions of the plurality of cameras 140 may be moved according to the spatial information of the object; as shown in fig. 2, a camera 140 may be moved along the vertical direction. It is understood that the structure shown in fig. 2 is merely illustrative: a camera 140 may be moved along the vertical direction, along the horizontal direction, or along a specific direction, which is not limited herein.
Referring to fig. 3, fig. 3 is a flowchart illustrating an image processing method according to an embodiment of the present application. The image processing method is used for obtaining images of other visual angles by simultaneously shooting the target scene at different visual angles through the plurality of cameras adjusted according to the spatial information of the object, so that the image quality is improved, and the display effect is improved. In a specific embodiment, the image processing method is applied to the image processing apparatus 200 shown in fig. 8 and the mobile terminal 100 (fig. 9) configured with the image processing apparatus 200. The following will describe a specific process of this embodiment by taking a mobile terminal as an example, and it is understood that the mobile terminal applied in this embodiment may be an electronic device including a camera, such as a smart phone, a tablet computer, a wearable mobile terminal, and the like, and is not limited specifically herein. As will be described in detail with respect to the flow shown in fig. 3, the image processing method may specifically include the following steps:
step S110: and the mobile terminal shoots a target scene through at least one camera in the plurality of cameras to obtain at least one first image.
In some embodiments, the mobile terminal may shoot the target scene through at least one of the plurality of cameras. As one mode, the mobile terminal may start the camera to shoot the target scene when the user clicks an icon of the shooting software to enter the shooting interface, or after the user selects a preset shooting mode. The target scene is photographed by the at least one camera to obtain at least one first image, where a first image may be a captured image or a preview image, which is not limited herein.
Step S120: and determining a target object from the at least one first image, and acquiring spatial information of the target object.
In this embodiment, based on the obtained at least one first image, the mobile terminal may identify the at least one captured first image based on an image identification technology, where any one of the at least one first image may be selected for identification, or each of the at least one first image may be identified, and identify one or more objects in the target scene therefrom. Meanwhile, the target object may be determined from the one or more recognized objects, where the target object may be determined by a user operation, for example, the target object may be determined based on a click operation on the operation interface, or a focus position in each of the plurality of first images may be recognized, and an object corresponding to the focus position may be determined as the target object.
In some embodiments, once the target object is determined from the at least one first image, spatial information of the target object may be obtained. The spatial information may include at least size information, depth information, and position information of the target object, which is not limited herein. The spatial information may be obtained from one first image, or from two or more first images selected from the at least one first image. Specifically, as one way, taking depth information as an example, one first image may be selected and depth prediction performed on it based on a deep learning algorithm, thereby obtaining the depth information of the target object in that image. As another way, any two first images may be selected: the pixel points corresponding to the target object in the two images are found with a stereo matching algorithm, parallax information is calculated according to the triangulation principle, and the depth information of the target object is obtained by combining the parallax information with the parameters of the cameras.
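The stereo route just described — matching pixels across two first images, computing parallax, and combining it with camera parameters — reduces, for a rectified horizontal camera pair, to the standard triangulation relation Z = f·B/d. The sketch below illustrates only that relation; the focal length, baseline, and disparity values are illustrative assumptions, not figures from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by two horizontally offset, rectified cameras.

    Z = f * B / d, with f in pixels, baseline B in meters, disparity d in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 5 cm baseline, 20 px disparity -> 2.5 m
print(depth_from_disparity(1000.0, 0.05, 20.0))  # -> 2.5
```

A smaller disparity for the same camera pair implies a more distant point, which is why parallax shrinks as the target object moves away.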
Step S130: and adjusting the relative position relation of the plurality of cameras based on the spatial information, and shooting the target scene through the plurality of cameras after the relative position relation is adjusted to obtain a plurality of second images.
In this embodiment, based on the obtained spatial information of the target object, it can be determined whether the images of the target object captured by the multiple cameras in their current relative positional relationship meet a certain image quality. For example, it may be determined whether a captured image of the target object includes the complete contour of the target object, whether the target object in the image is clear, or whether the edge area of the target object is blurred, which is not limited herein.
Further, when the images of the target object captured by the multiple cameras do not satisfy a certain image quality, for example, the images of the target object captured by the multiple cameras do not include a complete contour of the target object, or the images of the target object captured by the multiple cameras are blurred, the relative position relationship of the multiple cameras may be adjusted based on the spatial information of the target object obtained from at least one first image, and the target scene may be simultaneously captured by the multiple cameras after the relative position is adjusted, so as to obtain multiple second images, where the image quality of the second images is better than that of the first images. Furthermore, the relative position relationship of the plurality of cameras is adjusted according to the spatial information, and the relative positions of the plurality of cameras can be adjusted, wherein the position of any one of the plurality of cameras can be fixed, the positions of the plurality of cameras other than the camera can be moved, and each of the plurality of cameras can also be moved. As an embodiment, the relative angles of the plurality of cameras may also be adjusted based on the spatial information of the target object, wherein the angle of each of the plurality of cameras may be adjusted, or the angles of some of the plurality of cameras may be selected and adjusted. As another embodiment, based on the spatial information of the target object, the relative positions and relative angles of the multiple cameras may be adjusted simultaneously, and the multiple cameras after adjusting the relative positional relationship may be used to capture the target scene simultaneously, so as to obtain multiple second images with better image quality.
Step S140: and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
In this embodiment, the mobile terminal obtains a plurality of second images by adjusting the relative positional relationship of the plurality of cameras and shooting the target scene simultaneously. These second images only include images of the target object at some viewing angles, namely the viewing angles corresponding to the plurality of cameras; they cannot cover all viewing angles of the target object. Therefore, images of the target object at viewing angles other than those corresponding to the plurality of second images can be simulated through a preset algorithm, that is, image content compensation is performed for those unseen viewing angles, so that images of the target object both at the viewing angles corresponding to the plurality of second images and at other viewing angles are obtained.
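The patent does not disclose the "preset algorithm" used to compensate image content at unseen viewing angles. As a deliberately naive placeholder, the sketch below simply cross-fades two captured views; a real view-synthesis implementation would warp pixels using the recovered depth before blending, so this is only an assumed toy stand-in.

```python
import numpy as np

def interpolate_view(img_a: np.ndarray, img_b: np.ndarray, alpha: float) -> np.ndarray:
    # Naive cross-fade between two captured viewpoints (alpha in [0, 1]).
    # A depth-aware method would reproject pixels first; the patent leaves
    # the compensation algorithm unspecified, so this is a placeholder only.
    return (1.0 - alpha) * img_a + alpha * img_b

left = np.zeros((2, 2))          # toy "second image" from one camera
right = np.full((2, 2), 100.0)   # toy "second image" from another camera
mid = interpolate_view(left, right, 0.5)
```

Sweeping `alpha` from 0 to 1 produces a sequence of intermediate images, mimicking viewing angles between the two physical cameras.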
In the image processing method provided by one embodiment of the application, the mobile terminal shoots a target scene through at least one camera in the multiple cameras to obtain at least one first image; determining a target object from at least one first image and acquiring spatial information of the target object; adjusting the relative position relation of the multiple cameras based on the spatial information, and shooting a target scene through the multiple cameras after the relative position relation is adjusted to obtain multiple second images; and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images. This application is through taking with different visual angles simultaneously a plurality of cameras after the spatial information according to the object is adjusted, obtains the image at other visual angles to improve image quality, promote the display effect.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to another embodiment of the present application. The image processing method is applied to the mobile terminal, and will be described in detail with respect to the flow shown in fig. 4, and the method may specifically include the following steps:
step S210: and the mobile terminal shoots a target scene through at least one camera in the plurality of cameras to obtain at least one first image.
Step S220: and determining a target object from the at least one first image, and acquiring spatial information of the target object.
For the detailed description of steps S210 to S220, refer to steps S110 to S120, which are not described herein again.
Step S230: and acquiring the relative position relation of the plurality of cameras.
In this embodiment, the relative positional relationship of the plurality of cameras may be acquired based on at least one first image captured by at least one of the plurality of cameras. Specifically, as a manner, any two first images may be selected from the at least one first image, a corresponding group of feature points may be selected from the two first images, the feature points may be feature points of the target object or feature points in images other than the target object, and the relative translation distance and the relative rotation matrix between two cameras corresponding to the two first images may be obtained through calculation based on coordinates of the group of feature points, so as to obtain the relative position relationship of the multiple cameras. Alternatively, the relative position between the multiple cameras (i.e. rotation matrix, translation distance of one camera relative to another camera) may be measured by camera calibration by acquiring internal parameters of each of the multiple cameras.
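The calibration-based alternative above — measuring the rotation matrix and translation of one camera relative to another from each camera's parameters — can be sketched with world-to-camera extrinsics. The convention (x_cam = R·x_world + t) and the sample poses are assumptions for illustration, not the patent's procedure.

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Pose of camera 2 expressed in camera 1's frame.

    Inputs are world-to-camera extrinsics (x_cam = R @ x_world + t),
    e.g. as obtained from camera calibration.
    """
    R_rel = R2 @ R1.T            # rotation of camera 2 relative to camera 1
    t_rel = t2 - R_rel @ t1      # translation of camera 2 relative to camera 1
    return R_rel, t_rel

# Two cameras with identical orientation, 5 cm apart along x:
I = np.eye(3)
R_rel, t_rel = relative_pose(I, np.zeros(3), I, np.array([-0.05, 0.0, 0.0]))
```

With both cameras sharing an orientation, the relative rotation is the identity and the relative translation is simply the difference of the two translation vectors.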
Step S240: and judging whether the spatial information is matched with the relative position relation.
In this embodiment, whether the relative positional relationship between the plurality of cameras needs to be adjusted may be determined by determining whether the spatial information of the target object matches the relative positional relationship between the plurality of cameras.
Further, in this embodiment, the spatial information may include size information and depth information, and the step S240 may include the steps of:
step S241: and acquiring the area of the target object obtained by shooting the target scene by the plurality of cameras under the relative position relation based on the size information and the depth information.
In this embodiment, the area of the target object obtained by the plurality of cameras shooting the target scene in a relative positional relationship may be acquired through the acquired size information and depth information of the target object. Specifically, according to the size information and the depth information of the target object, the area of the target object, which is obtained by shooting the target scene by the multiple cameras under the relative position relationship, can be calculated by the triangulation principle. It can be understood that when the size of the target object is small, the area of the target object obtained by the multiple cameras shooting the target scene in the relative position relationship may be relatively complete, and when the size of the target object is large, the area of the target object obtained by the multiple cameras shooting the target scene in the relative position relationship may not include a complete contour of the target object.
Step S242: and judging whether the area is not larger than a preset area.
In this embodiment, it may be determined whether an area of a target object obtained by shooting a target scene by a plurality of cameras in a relative position relationship is not larger than a preset area to determine whether spatial information of the target object matches the relative position relationship of the plurality of cameras. The preset area may be preset by a user, or may be set by a mobile terminal or a server based on the type of the target object, and the preset area may be an area that can include the entire contour of the target object, which is not limited herein.
Step S243: and when the area is not larger than the preset area, determining that the spatial information is not matched with the relative position relation.
In this embodiment, when the area is not larger than the preset area, it may be understood that the complete contour of the target object cannot be obtained based on the currently acquired area, that is, it is determined that the spatial information does not match the relative positional relationship.
As an embodiment, the number of feature points of the target object obtained by the plurality of cameras shooting the target scene in a relative positional relationship may be acquired based on the size information and the depth information of the target object. Further, whether the number of feature points of the target object obtained by shooting the target scene by the plurality of cameras in the relative position relationship is not more than a preset number can be judged. The preset number may be preset by a user, or may be set by a mobile terminal or a server based on the type of the target object, and the preset number may be the number of feature points capable of describing all features of the target object, or the number of feature points capable of describing basic features of the target object, which is not limited herein. When the number of the acquired feature points is not more than the preset number, it may be understood that the currently acquired feature points are not enough to describe the basic features or all the features of the target object, i.e., it is determined that the spatial information does not match the relative positional relationship.
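The two mismatch tests above (captured area not larger than a preset area; feature count not larger than a preset number) can be folded into a single predicate. This is a hedged sketch: the function name and all threshold values are illustrative assumptions, not quantities defined by the patent.

```python
def spatial_info_matches(captured_area: float, preset_area: float,
                         n_features: int, preset_count: int) -> bool:
    # Mismatch (per the embodiments above) when the captured area is not
    # larger than the preset area, OR the number of captured feature points
    # is not larger than the preset number; match only if both exceed.
    return captured_area > preset_area and n_features > preset_count
```

When this returns `False`, the spatial information does not match the relative positional relationship, and step S250's adjustment would be triggered.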
Step S250: and when the spatial information is not matched with the relative position relationship, adjusting the relative position relationship of the plurality of cameras.
In this embodiment, the spatial information does not match the relative positional relationship of the multiple cameras, and may be that the images of the target object captured by the multiple cameras in the relative positional relationship do not satisfy a certain image quality, for example, the captured images of the target object do not include a complete contour of the target object, or an edge area of the target object in the captured images of the target object is blurred; the area of the target object obtained by the multiple cameras shooting the target scene in the relative position relationship may not be larger than the preset area, or the number of the feature points of the target object obtained by the multiple cameras shooting the target scene in the relative position relationship may not be larger than the preset number, which is not limited herein.
Further, when the spatial information of the target object does not match the relative positional relationships of the plurality of cameras, the relative positional relationships of the plurality of cameras may be adjusted to obtain better image quality. Furthermore, the relative position relationship of the plurality of cameras is adjusted according to the spatial information, and the relative positions of the plurality of cameras can be adjusted, wherein the position of any one of the plurality of cameras can be fixed, the positions of the plurality of cameras other than the camera can be moved, and each of the plurality of cameras can also be moved. As an embodiment, the relative angles of the plurality of cameras may also be adjusted based on the spatial information of the target object, wherein the angle of each of the plurality of cameras may be adjusted, or the angles of some of the plurality of cameras may be selected and adjusted. As another embodiment, based on the spatial information of the target object, the relative positions and relative angles of the multiple cameras can be adjusted simultaneously to obtain better image quality.
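One simple, hypothetical trigger for moving the cameras is to compare the target object's width against the horizontal field a camera covers at the object's depth. The field-width formula below is standard pinhole geometry; its use as the adjustment criterion is an assumption for illustration, not the patent's stated method.

```python
import math

def field_width_m(depth_m: float, hfov_deg: float) -> float:
    # Horizontal extent covered by a camera at the given depth:
    # width = 2 * Z * tan(HFOV / 2)
    return 2.0 * depth_m * math.tan(math.radians(hfov_deg) / 2.0)

def camera_needs_moving(object_width_m: float, depth_m: float, hfov_deg: float) -> bool:
    # If the object is wider than the covered field, the cameras' relative
    # positions (or angles) must change to capture its complete contour.
    return object_width_m > field_width_m(depth_m, hfov_deg)
```

For example, a 90° horizontal field of view covers 2 m of width at 1 m depth, so a 2.5 m wide object at that depth would require repositioning.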
Step S260: and simultaneously shooting the target scene through the plurality of cameras after the relative position relation is adjusted to obtain a plurality of second images.
Step S270: and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
For the detailed description of steps S260 to S270, refer to steps S130 to S140, which are not described herein again.
In another embodiment of the image processing method provided by the present application, the mobile terminal shoots a target scene through at least one of the plurality of cameras to obtain at least one first image, determines a target object from the at least one first image, and obtains the spatial information of the target object. The mobile terminal then obtains the relative positional relationship of the plurality of cameras and judges whether the spatial information matches that relative positional relationship; when they do not match, the relative positional relationship of the plurality of cameras is adjusted. The target scene is then shot by the plurality of cameras after the relative positional relationship is adjusted to obtain a plurality of second images, and image content compensation is performed on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding viewing angles of the plurality of target object images include the viewing angles corresponding to the plurality of second images and viewing angles other than those. In this embodiment, by judging whether the spatial information matches the relative positional relationship of the plurality of cameras, it is determined whether the relative positional relationship needs adjustment, so that better image quality can be obtained.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an image processing method according to still another embodiment of the present application. The image processing method is applied to the mobile terminal, and will be described in detail with respect to the flow shown in fig. 5, and the method may specifically include the following steps:
step S310: and the mobile terminal shoots a target scene through at least one camera in the plurality of cameras to obtain at least one first image.
Step S320: and determining a target object from the at least one first image, and acquiring spatial information of the target object.
Step S330: and adjusting the relative position relation of the plurality of cameras based on the spatial information, and shooting the target scene through the plurality of cameras after the relative position relation is adjusted to obtain a plurality of second images.
For the detailed description of steps S310 to S330, refer to steps S110 to S130, which are not described herein again.
Step S340: and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
Further, in this embodiment, step S340 may include the following steps:
step S341A: and extracting partial characteristic parameters of the target object from the plurality of second images.
In this embodiment, the target scene is simultaneously photographed by the plurality of cameras after the relative positional relationship is adjusted to obtain a plurality of second images, and images at viewing angles other than those corresponding to the second images can be obtained by processing the plurality of second images. As one way, the mobile terminal may extract partial feature parameters of the target object from the second images, where the feature parameters may include brightness, edges, texture, color, and the like. To extract the partial feature parameters of the target object from the plurality of second images, the region where the target object is located may be positioned, the image of that region converted to grayscale, and the grayscale image divided into small connected regions; histograms of the gradient directions or edge orientations of the pixels within each connected region are then collected, and finally the histograms are combined to obtain a feature vector. As another way, based on a neural network, the images corresponding to the region where the target object is located in the plurality of second images may be input into the neural network, which outputs the partial feature parameters of the target object. The partial feature parameters of the target object may also be extracted in other ways based on the plurality of second images, which is not limited herein.
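The gradient-histogram extraction described above (gray the region, split it into cells, collect weighted orientation histograms, and concatenate them) is essentially a HOG-style descriptor. A minimal numpy-only sketch, with cell size and bin count chosen arbitrarily for illustration:

```python
import numpy as np

def hog_like_features(gray, cell=8, bins=9):
    """HOG-style feature vector for a grayscale region (2-D array).

    Illustrative sketch of the extraction described in the embodiment:
    per-cell histograms of gradient orientation, weighted by gradient
    magnitude, concatenated and L2-normalized.
    """
    gy, gx = np.gradient(gray.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # unsigned orientation in [0, pi)
    h, w = gray.shape
    feats = []
    for y in range(0, h - cell + 1, cell):            # tile into cells
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v                      # final feature vector
```

A real implementation would typically add block normalization and interpolation between bins; this sketch only shows the histogram-per-connected-region idea from the text.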
Step S342A: and identifying the object type of the target object according to the partial characteristic parameters, and inquiring the overall characteristic parameters of the target object based on the object type.
In this embodiment, partial feature parameters of the target object may be extracted based on the plurality of second images; to perform image content compensation on the target object, the overall feature parameters of the target object need to be obtained. Based on an image recognition technology, the mobile terminal may identify the object type of the target object by comparing the partial feature parameters, and may then query a corresponding server or the network for the overall feature parameters of the target object based on the object type. For example, when the target object is an automobile, partial feature parameters of the automobile, such as color and outline, may be extracted from the plurality of second images, and the type of the automobile identified from them; if the automobile is identified as an "XX"-brand automobile, the overall feature parameters of the automobile may be obtained by querying the official website of that brand.
Step S343A: and performing image content compensation on the target object based on the overall characteristic parameters to obtain a plurality of target object images.
In this embodiment, based on the plurality of second images and the partial feature parameters of the target object in the plurality of second images, the correspondence between the partial feature parameters of the target object and the second images may be determined; after the overall feature parameters of the target object are obtained, the target object images may be determined based on that correspondence and the overall feature parameters.
Further, in this embodiment, step S340 may further include the following steps:
step S341B: acquiring the parallax of the plurality of second images.
In this embodiment, any two of the plurality of second images may be obtained, the two second images identified to obtain the correspondence between them, and the parallax of the second images calculated based on the triangulation principle. Further, the matching cost of the second images may be calculated based on a stereo matching algorithm and cost aggregation performed; after cost aggregation is complete, the point with the optimal aggregated matching cost within a certain range is selected as the corresponding matching point, and the parallax corresponding to that matching point is the parallax of the second images. The specific parallax calculation method is not limited herein.
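As a concrete (non-limiting) instance of the matching-cost idea above, the sketch below computes per-pixel disparity between two rectified grayscale views by minimizing the sum of absolute differences (SAD) over a block — one of the simplest matching costs; production systems would add cost aggregation and sub-pixel refinement as the text notes. The block size and disparity range are illustrative:

```python
import numpy as np

def disparity_sad(left, right, block=5, max_disp=16):
    """Block-matching disparity: for each left-image pixel, scan max_disp
    candidate shifts in the right image and keep the shift whose block SAD
    (the matching cost) is minimal."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(int)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # shift with optimal cost
    return disp
```

On a textured pair where the right view is the left view shifted by a known amount, the recovered disparity equals that shift at interior pixels.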
Step S342B: and acquiring angle information of a part to be subjected to image content compensation in the target object based on different parallaxes of the plurality of second images.
In this embodiment, the mobile terminal may determine, based on the different parallaxes of the plurality of second images, the parallax of images of the target object at viewing angles other than those corresponding to the plurality of second images, and determine from that parallax the angle information of the portion of the target object to be subjected to image content compensation. Specifically, the coordinate information of a feature point of the target object in any two of the plurality of second images may be acquired and the corresponding angle information obtained from the coordinate information, yielding a correspondence between the angle information and the parallax of the two second images. From the parallaxes of the plurality of second images and the coordinate information of the feature points of the target object in those images, a plurality of such correspondences may be obtained, and a mapping table between angle information and parallax built from them. The parallax of the portion to be subjected to image content compensation is then simulated from the different parallaxes of the plurality of second images, and the angle information of that portion obtained from the mapping table.
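The mapping table described above can be sketched as a sorted lookup with linear interpolation between the observed (parallax, angle) correspondences; the linear-interpolation choice is an assumption for illustration, not something the embodiment specifies:

```python
import numpy as np

def build_angle_table(disparities, angles):
    """Pair observed disparities with their view angles and sort by
    disparity so new disparities can be mapped to angles by lookup."""
    order = np.argsort(disparities)
    return np.asarray(disparities, float)[order], np.asarray(angles, float)[order]

def angle_for_disparity(d, table):
    """Angle information for a simulated disparity d, interpolated from
    the mapping table (hypothetical interpolation scheme)."""
    disps, angs = table
    return float(np.interp(d, disps, angs))
```

For example, with correspondences (10, 5°), (20, 10°), (30, 15°), a simulated disparity of 25 maps to 12.5°.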
Step S343B: and matching corresponding image content based on the angle information and performing image content compensation on the target object based on the image content to obtain a plurality of target object images.
In this embodiment, based on the parallax of the second images and the angle information of the portion of the target object to be subjected to image content compensation, matching points may be searched for on the virtual view corresponding to the angle information, the pixel content corresponding to each matching point obtained, and that pixel content moved to the matching point, thereby obtaining the virtual view, that is, the target object image. As another way, the coordinate information of feature points of the target object in the plurality of second images may be acquired, and the coordinate information of the feature points to be subjected to image content compensation calculated from the obtained angle information, so that the image of the target object in the area to be compensated is completed according to the coordinate information of the feature points, thereby obtaining a plurality of target object images.
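The "move pixel content to the matching point" operation is a forward warp driven by disparity. A minimal sketch, assuming a single horizontal baseline and a scale factor `alpha` that positions the virtual viewpoint (both assumptions not stated in the embodiment); real view synthesis would also fill the disocclusion holes this leaves:

```python
import numpy as np

def synthesize_view(image, disparity, alpha):
    """Forward-warp each pixel horizontally by alpha * disparity to build
    a virtual view; returns the view and a mask of filled pixels."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disparity[y, x]))  # matching point
            if 0 <= nx < w:
                out[y, nx] = image[y, x]                  # move pixel content
                filled[y, nx] = True
    return out, filled
```

Pixels whose matching point falls outside the frame, or that nothing warps onto, remain unfilled — these are the regions the embodiment's compensation step must complete.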
Step S350: and generating a plurality of third images by synthesizing the plurality of target object images and the plurality of second images.
In this embodiment, a plurality of target object images may be obtained by performing image content compensation on the plurality of second images based on a preset algorithm, where the viewing angles corresponding to the target object images include the viewing angles corresponding to the plurality of second images and viewing angles other than those. In some embodiments, a target object image may be an image containing only the target object at a given angle, and the plurality of target object images may be combined with the plurality of second images to generate a plurality of third images. A third image may be generated by selecting any one of the plurality of second images, determining the area of that second image where the target object is located, and replacing the image of that area with the target object image. Further, the target object image may be overlaid on the image of the area where the target object is located in the second image, thereby generating the third image. The above synthesis methods are merely exemplary and not limiting.
In some embodiments, when any one of the plurality of target object images is combined with any one of the plurality of second images, the two images may first be processed. Specifically, features such as color, edges, and texture may be obtained for the edge region of the target object image and for the image at the corresponding position in the second image, the similarity between these features calculated, and whether the similarity is greater than a similarity threshold judged. When the similarity is greater than the threshold, the features of the edge region of the target object image and those of the image at the corresponding position in the second image are the same or similar, and the target object image and the second image may be synthesized directly. When the similarity is less than the threshold, feature parameters such as hue, saturation, contrast, and brightness of the edge region of the target object image and of the image at the corresponding position in the second image may be adjusted to increase their similarity, so that the target object image appears more natural and consistent in style within the second image.
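The similarity test and adjustment above can be sketched as follows. Cosine similarity and a mean-brightness shift are stand-ins chosen for illustration — the embodiment does not fix a particular similarity measure or adjustment:

```python
import numpy as np

def edge_similarity(patch_a, patch_b):
    """Cosine similarity between two flattened edge-region patches, used to
    decide whether the target object image blends naturally into the second
    image (hypothetical choice of measure)."""
    a = patch_a.ravel().astype(float)
    b = patch_b.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def harmonize(patch, reference):
    """Shift the mean brightness of `patch` toward `reference` — a crude
    stand-in for the hue/saturation/contrast/brightness adjustment the
    embodiment describes."""
    return patch + (reference.mean() - patch.mean())
```

Synthesis would then proceed directly when `edge_similarity` exceeds the threshold, and apply `harmonize` (or a richer adjustment) first when it does not.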
Step S360: displaying the plurality of third images.
In this embodiment, the mobile terminal may further display the generated plurality of third images, where the plurality of third images may be displayed on the same display interface or displayed in sequence. Further, any one of the plurality of third images may be displayed, and the other third images displayed in turn based on a switching operation. The switching operation may be rotating the mobile phone as shown in fig. 6, so that the other third images are displayed correspondingly by detecting the rotation angle of the phone; it may also be sliding on the operation interface of the mobile terminal as shown in fig. 7, thereby switching among the plurality of third images.
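One simple way to realize the rotation-driven switching is to map the detected rotation angle linearly onto the index of the third image to display. The angular span and the linear mapping are assumptions for illustration; the embodiment only requires that the rotation angle select among the images:

```python
def third_image_index(rotation_deg, count, span_deg=90.0):
    """Map a detected handset rotation angle onto an index into the list of
    `count` third images (hypothetical linear mapping over span_deg)."""
    frac = max(0.0, min(1.0, rotation_deg / span_deg))  # clamp to [0, 1]
    return min(count - 1, int(frac * count))
```

With five third images and a 90° span, 0° shows the first image, 45° the third, and 90° the last.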
In the image processing method provided by another embodiment of the present application, the mobile terminal shoots a target scene through at least one of a plurality of cameras to obtain at least one first image, determines a target object from the at least one first image, and obtains spatial information of the target object. The relative positional relationship of the plurality of cameras is adjusted based on the spatial information, and the target scene is shot through the plurality of cameras after the adjustment to obtain a plurality of second images. Image content compensation is then performed on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding viewing angles of the plurality of target object images include the viewing angles corresponding to the plurality of second images and viewing angles other than those. Finally, the plurality of target object images and the plurality of second images are synthesized to generate a plurality of third images, and the plurality of third images are displayed. Compared with the embodiment shown in fig. 3, this embodiment further combines the target object images and the second images to generate and display the third images, thereby improving the display effect.
Referring to fig. 8, fig. 8 is a block diagram illustrating an image processing apparatus 200 according to an embodiment of the present disclosure. The image processing apparatus 200 is applied to the mobile terminal described above. As will be explained below with respect to the block diagram of fig. 8, the image processing apparatus 200 includes: a photographing module 210, a determining module 220, an adjusting module 230, and an image content compensating module 240, wherein:
a shooting module 210, configured to shoot a target scene through at least one of the multiple cameras by the mobile terminal, so as to obtain at least one first image.
A determining module 220, configured to determine a target object from the at least one first image, and acquire spatial information of the target object.
An adjusting module 230, configured to adjust the relative position relationship of the multiple cameras based on the spatial information, and capture the target scene through the multiple cameras after adjusting the relative position relationship, so as to obtain multiple second images.
Further, the adjusting module 230 further includes: relative position obtains submodule, matching and judges submodule and first adjustment submodule, wherein:
and the relative position acquisition submodule is used for acquiring the relative position relation of the plurality of cameras.
And the matching judgment submodule is used for judging whether the spatial information is matched with the relative position relation.
Further, the matching judgment sub-module further includes: the device comprises an area acquisition unit, an area judgment unit and a determination unit, wherein:
and the area acquisition unit is used for acquiring the area of the target object obtained by shooting the target scene by the plurality of cameras under the relative position relation based on the size information and the depth information.
Further, the area acquisition unit further includes: a quantity acquisition subunit, wherein:
and the number obtaining subunit is configured to obtain, based on the size information and the depth information, the number of feature points of the target object obtained by shooting the target scene by the multiple cameras in the relative position relationship.
And the area judging unit is used for judging whether the area is not larger than a preset area or not.
Further, the area determination unit further includes: a quantity judging subunit, wherein:
and the number judgment subunit is used for judging whether the number of the feature points is not greater than a preset number.
And the determining unit is used for determining that the spatial information is not matched with the relative position relation when the area is not larger than the preset area.
Further, the determining unit further includes: determining a subunit, wherein:
and the determining subunit is configured to determine that the spatial information is not matched with the relative position relationship when the number of the feature points is not greater than the preset number.
And the first adjusting submodule is used for adjusting the relative position relation of the plurality of cameras when the spatial information is not matched with the relative position relation.
Further, the adjusting module 230 further includes: a second tuning submodule, wherein:
and the second adjusting submodule is used for adjusting the relative distance of the plurality of cameras and/or adjusting the relative angle of the plurality of cameras.
An image content compensation module 240, configured to perform image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, where the corresponding view angles of the plurality of target object images include view angles corresponding to the plurality of second images and view angles other than the view angles corresponding to the plurality of second images.
Further, the image content compensation module 240 includes: the system comprises an extraction submodule, a query submodule and a first compensation submodule, wherein:
and the extraction sub-module is used for extracting partial characteristic parameters of the target object from the plurality of second images.
And the query sub-module is used for identifying the object type of the target object according to the partial characteristic parameters and querying the overall characteristic parameters of the target object based on the object type.
And the first compensation submodule is used for carrying out image content compensation on the target object based on the overall characteristic parameters to obtain a plurality of target object images.
Further, the image content compensation module 240 further includes: the parallax error acquisition submodule, the angle information acquisition submodule and the second compensation submodule, wherein:
and the parallax acquisition sub-module is used for acquiring the parallaxes of the plurality of second images.
And the angle information acquisition submodule is used for acquiring the angle information of the part to be subjected to image content compensation in the target object based on different parallaxes of the plurality of second images.
And the second compensation submodule is used for matching corresponding image content based on the angle information and performing image content compensation on the target object based on the image content to obtain a plurality of target object images.
Further, the image processing apparatus 200 may further include: the display device comprises a synthesis module, a first display module and a second display module, wherein:
and the synthesis module is used for synthesizing the plurality of target object images and the plurality of second images to generate a plurality of third images.
And the first display module is used for displaying the plurality of third images.
And the second display module is used for displaying any one of the third images and rotationally displaying other third images in the third images based on switching operation.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 9, a block diagram of a mobile terminal 100 according to an embodiment of the present disclosure is shown. The mobile terminal 100 may be a smart phone, a tablet computer, an electronic book, or other mobile terminal capable of running an application. The mobile terminal 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a screen 130, a camera 140, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform the methods as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects the various components throughout the mobile terminal 100 using various interfaces and lines, and performs the various functions of the mobile terminal 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking the data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 110 but may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a stored-program area and a stored-data area, wherein the stored-program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored-data area may store data created by the terminal 100 in use, such as a phonebook, audio and video data, chat log data, and the like.
Further, the screen 130 may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The screen 130 is used to display information input by a user, information provided to the user, and various graphic user interfaces of the mobile terminal, which may be composed of graphics, text, icons, numbers, video, and any combination thereof.
The camera 140 may be fixedly disposed on the mobile terminal 100, may be slidably disposed on the mobile terminal 100, or may be rotatably disposed on the mobile terminal 100, which is not limited herein.
Referring to fig. 10, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer-readable storage medium 300 has stored therein program code that can be called by a processor to execute the methods described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a hard disk, or a ROM. Alternatively, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 310 may be compressed, for example, in a suitable form.
To sum up, according to the image processing method, the image processing apparatus, the mobile terminal, and the storage medium provided by the embodiments of the present application, the mobile terminal shoots a target scene through at least one camera of a plurality of cameras to obtain at least one first image; determines a target object from the at least one first image and acquires spatial information of the target object; adjusts the relative positional relationship of the multiple cameras based on the spatial information, and shoots the target scene through the multiple cameras after the adjustment to obtain multiple second images; and performs image content compensation on the target object based on a preset algorithm and the multiple second images to obtain multiple target object images. By shooting simultaneously with multiple cameras of different viewing angles after they have been adjusted according to the spatial information of the object, the present application obtains images at additional viewing angles, thereby improving image quality and enhancing the display effect.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (9)

1. An image processing method is applied to a mobile terminal, the mobile terminal comprises a terminal main body and a plurality of cameras, the cameras are movably arranged on the terminal main body, and the cameras have different visual angles, and the method comprises the following steps:
the mobile terminal shoots a target scene through at least one camera in the multiple cameras to obtain at least one first image;
determining a target object from the at least one first image, and acquiring spatial information of the target object, wherein the spatial information comprises size information and depth information;
acquiring relative position relations of the plurality of cameras;
acquiring the area of the target object obtained by shooting the target scene by the plurality of cameras under the relative position relation based on the size information and the depth information;
judging whether the area is not larger than a preset area or not;
when the area is not larger than the preset area, determining that the spatial information is not matched with the relative position relation;
when the spatial information is not matched with the relative position relation, adjusting the relative position relation of the plurality of cameras;
shooting the target scene through the plurality of cameras after the relative position relation is adjusted, and obtaining a plurality of second images;
and performing image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the corresponding visual angles of the plurality of target object images comprise the visual angles corresponding to the plurality of second images and the visual angles other than the visual angles corresponding to the plurality of second images.
2. The method according to claim 1, wherein the obtaining, based on the size information and the depth information, a region of the target object obtained by the plurality of cameras shooting the target scene in the relative positional relationship comprises:
acquiring the number of feature points of the target object, which are acquired by shooting the target scene by the plurality of cameras under the relative position relation, based on the size information and the depth information;
the judging whether the area is not larger than a preset area includes:
judging whether the number of the feature points is not more than a preset number or not;
and when the number of the feature points is not greater than the preset number, determining that the spatial information is not matched with the relative position relationship.
3. The method according to claim 1 or 2, wherein the adjusting the relative positional relationship of the plurality of cameras comprises:
adjusting relative distances of the plurality of cameras, and/or adjusting relative angles of the plurality of cameras.
4. The method according to claim 1 or 2, wherein after the performing image content compensation on the target object based on the preset algorithm and the plurality of second images to obtain the plurality of target object images, the method further comprises:
synthesizing the plurality of target object images and the plurality of second images to generate a plurality of third images; and
displaying the plurality of third images; or
displaying any one of the plurality of third images, and displaying the other third images in turn in response to a switching operation.
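The switching display of claim 4 can be modeled as cycling through the synthesized third images. A hypothetical sketch (the class and its interface are invented for illustration):

```python
# Minimal model of claim 4's display behavior: one third image is shown at
# a time, and each switching operation rotates to the next one.

class ImageCarousel:
    def __init__(self, third_images):
        self._images = list(third_images)
        self._index = 0

    @property
    def current(self):
        return self._images[self._index]

    def switch(self):
        # a switching operation advances to the next third image, wrapping
        self._index = (self._index + 1) % len(self._images)
        return self.current

carousel = ImageCarousel(["third_0", "third_1", "third_2"])
carousel.switch()  # now showing the second third image
```

The alternative branch of the claim simply displays all third images at once instead of one at a time.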
5. The method according to claim 1 or 2, wherein the performing image content compensation on the target object based on the preset algorithm and the plurality of second images to obtain the plurality of target object images comprises:
extracting partial characteristic parameters of the target object from the plurality of second images;
identifying the object type of the target object according to the partial characteristic parameters, and querying the overall characteristic parameters of the target object based on the object type;
and performing image content compensation on the target object based on the overall characteristic parameters to obtain the plurality of target object images.
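The type-lookup step of claim 5 can be sketched as a classification followed by a catalogue query. Everything below is invented for illustration: the catalogue entries, the trivial classifier, and the parameter names are not from the patent.

```python
# Claim-5 sketch: partial features observed in the second images identify
# an object type; the type's overall characteristic parameters (stored in
# a hypothetical catalogue) then supply the unseen geometry that image
# content compensation fills in.

CATALOGUE = {
    "mug": {"symmetry": "rotational", "faces": 1},
    "box": {"symmetry": "mirror", "faces": 6},
}

def classify(partial_features: dict) -> str:
    # trivial stand-in for a real classifier over extracted features
    return "box" if partial_features.get("corners", 0) >= 4 else "mug"

def overall_parameters(partial_features: dict) -> dict:
    return CATALOGUE[classify(partial_features)]
```

The design point is that compensation does not invent content freely: it constrains the extrapolated views with type-level priors about the whole object.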
6. The method according to claim 1 or 2, wherein the performing image content compensation on the target object based on the preset algorithm and the plurality of second images to obtain the plurality of target object images comprises:
acquiring the parallax of the plurality of second images;
acquiring, based on the different parallaxes of the plurality of second images, angle information of a portion of the target object to be subjected to image content compensation;
and matching corresponding image content based on the angle information, and compensating the target object with the image content to obtain the plurality of target object images.
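Claim 6 derives angle information from parallax. A minimal sketch using the standard stereo relation Z = f·B/d; the function signatures and the example numbers are assumptions, not the patented algorithm:

```python
# Claim-6 sketch: disparity (parallax) between two second images yields
# depth via the textbook stereo relation Z = focal * baseline / disparity,
# and depth plus lateral offset gives the viewing angle of the part of the
# target object that needs image content compensation.
import math

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

def viewing_angle_deg(x_offset_m: float, depth_m: float) -> float:
    # angle of the part relative to the cameras' optical axis
    return math.degrees(math.atan2(x_offset_m, depth_m))

z = depth_from_disparity(1000.0, 0.05, 25.0)  # 2.0 m
```

The recovered angle then indexes which stored or synthesized image content is matched in to compensate the unseen portion of the object.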
7. An image processing apparatus applied to a mobile terminal, the mobile terminal comprising a terminal main body and a plurality of cameras, the plurality of cameras being movably disposed on the terminal main body and having different viewing angles, the apparatus comprising:
a shooting module, configured to shoot a target scene through at least one of the plurality of cameras to obtain at least one first image;
a determining module, configured to determine a target object from the at least one first image and acquire spatial information of the target object, the spatial information comprising size information and depth information;
a relative position acquiring submodule, configured to acquire the relative positional relationship of the plurality of cameras;
an area acquiring unit, configured to acquire, based on the size information and the depth information, an area of the target object obtained by the plurality of cameras shooting the target scene in the relative positional relationship;
an area judging unit, configured to judge whether the area is no larger than a preset area;
a determining unit, configured to determine that the spatial information does not match the relative positional relationship when the area is no larger than the preset area;
a first adjusting submodule, configured to adjust the relative positional relationship of the plurality of cameras when the spatial information does not match the relative positional relationship;
an adjusting module, configured to shoot the target scene through the plurality of cameras after the relative positional relationship is adjusted, to obtain a plurality of second images;
and a content compensation module, configured to perform image content compensation on the target object based on a preset algorithm and the plurality of second images to obtain a plurality of target object images, wherein the viewing angles corresponding to the plurality of target object images include the viewing angles corresponding to the plurality of second images and viewing angles other than those corresponding to the plurality of second images.
8. A mobile terminal comprising a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon program code that can be invoked by a processor to perform the method of any one of claims 1 to 6.
CN201910579241.9A 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium Active CN110300264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579241.9A CN110300264B (en) 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium


Publications (2)

Publication Number Publication Date
CN110300264A CN110300264A (en) 2019-10-01
CN110300264B true CN110300264B (en) 2021-03-12

Family

ID=68029536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579241.9A Active CN110300264B (en) 2019-06-28 2019-06-28 Image processing method, image processing device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110300264B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110740264B (en) * 2019-10-31 2021-06-04 重庆工商职业学院 Intelligent camera data rapid acquisition system and acquisition method
CN110855891A (en) * 2019-11-26 2020-02-28 爱菲力斯(深圳)科技有限公司 Method and device for adjusting camera shooting angle based on human body posture and robot
CN111083377A (en) * 2019-12-30 2020-04-28 维沃移动通信有限公司 Shooting method and electronic equipment
CN111314610B (en) * 2020-02-26 2022-03-11 维沃移动通信有限公司 Control method and electronic equipment
CN112714255B (en) * 2020-12-30 2023-02-21 维沃移动通信(杭州)有限公司 Shooting method and device, electronic equipment and readable storage medium
CN112784081A (en) * 2021-01-22 2021-05-11 维沃移动通信(杭州)有限公司 Image display method and device and electronic equipment
CN113015094B (en) * 2021-02-10 2022-11-25 维沃移动通信有限公司 Equipment positioning method and device
CN113114975B (en) * 2021-04-07 2023-04-18 科大讯飞股份有限公司 Image splicing method and device, electronic equipment and storage medium
CN113141465A (en) * 2021-04-20 2021-07-20 云南云能科技有限公司 Unmanned aerial vehicle multilayer multi-view-angle crossed live-action image acquisition method and device
CN113220251B (en) * 2021-05-18 2024-04-09 北京达佳互联信息技术有限公司 Object display method, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101556697A (en) * 2008-04-10 2009-10-14 上海宝康电子控制工程有限公司 Method and system for motion target tracking based on rapid characteristic points
CN107507239A (en) * 2017-08-23 2017-12-22 维沃移动通信有限公司 A kind of image partition method and mobile terminal
CN108694729A (en) * 2017-04-07 2018-10-23 深圳市腾盛工业设备有限公司 Localization method, unit based on image detection and storage medium
CN109816702A (en) * 2019-01-18 2019-05-28 苏州矽典微智能科技有限公司 A kind of multiple target tracking device and method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1655573B1 (en) * 2003-08-13 2012-07-11 Kabushiki Kaisha TOPCON 3-dimensional measurement device and electronic storage medium
CN101651841B (en) * 2008-08-13 2011-12-07 华为技术有限公司 Method, system and equipment for realizing stereo video communication
CN101840146A (en) * 2010-04-20 2010-09-22 夏佳梁 Method and device for shooting stereo images by automatically correcting parallax error
CN104568003B (en) * 2014-12-29 2017-02-22 国家电网公司 Remote monitoring system and method for ice coating process of power transmission lines
KR102101438B1 (en) * 2015-01-29 2020-04-20 한국전자통신연구원 Multiple camera control apparatus and method for maintaining the position and size of the object in continuous service switching point
CN107527336B (en) * 2016-06-22 2021-02-19 北京疯景科技有限公司 Lens relative position calibration method and device
CN106228608A (en) * 2016-08-24 2016-12-14 集美大学 A kind of hand-held device for object quick reconfiguration based on binocular and method
KR101816449B1 (en) * 2016-10-20 2018-01-08 현대자동차주식회사 Apparatus for integrated adjusting multi view camera exposure, system having the same and method thereof
CN106507087B (en) * 2016-12-05 2019-01-15 宇龙计算机通信科技(深圳)有限公司 A kind of terminal imaging method and system

Also Published As

Publication number Publication date
CN110300264A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110300264B (en) Image processing method, image processing device, mobile terminal and storage medium
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN106899781B (en) Image processing method and electronic equipment
WO2015180659A1 (en) Image processing method and image processing device
CN109691080B (en) Image shooting method and device and terminal
CN110288534B (en) Image processing method, device, electronic equipment and storage medium
JP7387202B2 (en) 3D face model generation method, apparatus, computer device and computer program
CN111091590A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
CN110166680B (en) Device imaging method and device, storage medium and electronic device
CN111654624B (en) Shooting prompting method and device and electronic equipment
CN110266955B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112532881A (en) Image processing method and device and electronic equipment
CN110177216B (en) Image processing method, image processing device, mobile terminal and storage medium
CN109361850B (en) Image processing method, image processing device, terminal equipment and storage medium
CN109495778B (en) Film editing method, device and system
CN110312075B (en) Device imaging method and device, storage medium and electronic device
CN116341586B (en) Code scanning method, electronic equipment and storage medium
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114119701A (en) Image processing method and device
CN114387157A (en) Image processing method and device and computer readable storage medium
CN113763233A (en) Image processing method, server and photographing device
JP2016071496A (en) Information terminal device, method, and program
CN114143442B (en) Image blurring method, computer device, and computer-readable storage medium
CN107742275B (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant