CN110855875A - Method and device for acquiring background information of image

Info

Publication number
CN110855875A
Authority
CN
China
Prior art keywords: image, target object, background, background information, displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810950586.6A
Other languages
Chinese (zh)
Inventor
连园园
秦萍
陈浩广
覃广志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2018-08-20
Filing date: 2018-08-20
Publication date: 2020-02-28
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN201810950586.6A
Publication of CN110855875A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for acquiring background information of an image. The method comprises: displaying an image shot by a lens in a shooting interface; identifying features of a target object in the image based on a trained image segmentation model; and acquiring background information corresponding to the features of the target object based on a trained background design model, wherein the image segmentation model and the background design model are both deep convolutional neural network models. The invention solves the technical problem in the prior art that presetting background content is inefficient.

Description

Method and device for acquiring background information of image
Technical Field
The invention relates to the field of image processing, and in particular to a method and a device for acquiring background information of an image.
Background
The mobile phone has gradually become an indispensable tool in daily life, and mobile applications now reach into many areas of life beyond basic social functions. In photography, the mobile phone is gradually replacing the single-lens reflex camera and the professional video camera: it is not only the everyday photographing tool of ordinary users but, because it is portable and its parameters are easy to adjust, it is increasingly used by photographers as well. As camera pixel counts and imaging quality continue to improve, intelligent extension of the photographing process has become an important direction for improving the photographing function of the mobile phone. In the prior art, the mobile phone cannot automatically and intelligently design a background when taking a picture; the background content must be preset, which makes producing pictures with backgrounds inefficient.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for acquiring background information of an image, which at least solve the technical problem in the prior art that presetting background content is inefficient.
According to an aspect of an embodiment of the present invention, there is provided a method of acquiring background information of an image, including: displaying an image shot by a lens in a shooting interface; identifying features of a target object in the image based on a trained image segmentation model; and acquiring background information corresponding to the features of the target object based on a trained background design model, wherein the image segmentation model and the background design model are both deep convolutional neural network models.
Optionally, identifying features of the target object in the image based on the trained image segmentation model includes: segmenting the image based on the image segmentation model and identifying at least one target object contained in the image; and identifying features of each target object.
Optionally, the image segmentation model comprises at least an input layer, an encoding network, a decoding network and an output layer, and segmenting the image based on the image segmentation model and identifying at least one target object contained in the image comprises the following steps: the input layer acquires pre-stored images of different objects; the encoding network performs feature fusion on the images of the different objects to obtain a fused feature map, and restores the image using the fused feature map; the decoding network analyzes the restored image to obtain the class probability of each pixel of the image; the image is segmented based on the class probability of each pixel; features of each target object contained in the segmented image are identified; and the output layer outputs the identified features of the at least one target object.
Optionally, acquiring the background information corresponding to the features of the target object based on the trained background design model includes: obtaining pre-stored background training samples, wherein the background training samples comprise at least one of the following: pre-stored design images, network images and personalized images; and acquiring the corresponding background information from the background training samples based on the features of the target object in the image.
Optionally, during or after acquiring the background information corresponding to the features of the target object, the method further includes: collecting voice information; identifying a required material request from the voice information; searching based on the material request to obtain material information; and loading the material information into the background information.
Optionally, after acquiring the background information corresponding to the features of the target object, the method further includes: fusing the identified target object with the background information to generate new image information; and displaying the new image information in the shooting interface.
According to another aspect of the embodiments of the present invention, there is provided a method of acquiring background information of an image, including: displaying an image shot by a lens in a shooting interface; displaying a target object in the image, wherein features of the target object in the image are identified through a trained image segmentation model; and displaying background information corresponding to the target object, wherein the background information corresponding to the features of the target object is obtained through a trained background design model, and wherein the image segmentation model and the background design model are both deep convolutional neural network models.
Optionally, after displaying the background information corresponding to the target object, the method further includes: fusing the identified target object with the background information to generate new image information; and displaying the new image information in the shooting interface.
According to another aspect of the embodiments of the present invention, there is provided an apparatus for acquiring background information of an image, including: a first display module, configured to display an image shot by a lens in a shooting interface; a second display module, configured to display a target object in the image, wherein features of the target object in the image are identified through a trained image segmentation model; and a third display module, configured to display background information corresponding to the target object, wherein the background information corresponding to the features of the target object is obtained through a trained background design model, and wherein the image segmentation model and the background design model are both deep convolutional neural network models.
According to another aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to perform any one of the methods described above.
According to another aspect of the embodiments of the present invention, there is provided a processor configured to run a program, wherein the program, when run, performs any one of the methods described above.
In the embodiment of the invention, an image shot by a lens is displayed in a shooting interface; features of a target object in the image are identified based on a trained image segmentation model; and background information corresponding to the features of the target object is acquired based on a trained background design model, where the image segmentation model and the background design model are both deep convolutional neural network models. Generating a background that corresponds to the target object in the image through these two models achieves the technical effect of improving the efficiency of generating backgrounds and of producing pictures with backgrounds, and thereby solves the technical problem in the prior art that presetting background content is inefficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of obtaining background information for an image according to an embodiment of the invention;
FIG. 2 is a flow chart of another method of obtaining background information for an image according to an embodiment of the invention;
FIG. 3 is a schematic structural diagram of an apparatus for acquiring background information of an image according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a method for acquiring background information of an image. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described here.
FIG. 1 is a flowchart of a method for acquiring background information of an image according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
step S102, displaying the image shot by the lens in a shooting interface;
step S104, identifying the characteristics of the target object in the image based on the image segmentation model obtained by training;
step S106, acquiring background information corresponding to the characteristics of the target object based on the background design model obtained by training;
the image segmentation model and the background design model are both depth convolution neural network models.
Through the above steps, an image shot by the lens is displayed in the shooting interface; features of a target object in the image are identified based on the trained image segmentation model; and background information corresponding to the features of the target object is acquired based on the trained background design model, where both models are deep convolutional neural network models. Generating a background that corresponds to the target object in the image through these two models achieves the technical effect of improving the efficiency of generating backgrounds and of producing pictures with backgrounds, and thereby solves the technical problem in the prior art that presetting background content is inefficient.
The steps may be executed by a mobile terminal, a computer, a mobile phone, or the like; in this embodiment, a mobile phone is used for the description. The shooting interface may be a display interface that displays the photo after shooting is finished, or an editing interface in which a photo can be edited after being selected. The image may be a 2D image or a 3D image; the image type is determined by the imaging device of the executing device.
The image segmentation model is a machine-learnable recognition model, such as a convolutional neural network, obtained by training on multiple sets of data, for example until the model converges, so that it learns a mapping between input data and output data. Each set of data includes an image and the features of the target object in that image, for example a photograph and the feature information of the person in the photograph.
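To make this training setup concrete, the following is a minimal sketch, assuming PyTorch, a model that outputs per-pixel class logits, and a data loader yielding pairs of images and per-pixel labels. All names and hyperparameters here are illustrative assumptions; the patent only states that the models are trained on grouped data.

```python
# Minimal supervised training sketch, assuming PyTorch. The model,
# loader and hyperparameters are illustrative assumptions; the patent
# only states that training proceeds until the model converges.
import torch
import torch.nn as nn

def train_segmentation(model, loader, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # per-pixel class labels
    for _ in range(epochs):                  # or loop until convergence
        for images, labels in loader:        # images: (B,3,H,W), labels: (B,H,W)
            opt.zero_grad()
            logits = model(images)           # (B, num_classes, H, W)
            loss_fn(logits, labels).backward()
            opt.step()
    return model
```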
As an alternative embodiment, the image includes a target object portion and a non-target object portion. After the image segmentation model has recognized the target features, the non-target object portion may be removed from the image so that only the target object is retained. This not only reduces the data amount of the image and the memory it occupies, but also speeds up subsequent image processing and improves data processing efficiency. A background is then generated according to the features of the target object, and a new image is generated by combining the target object portion with the background. In another alternative embodiment, the non-target object portion is retained: a background is generated based on the features of the target object and overlaid on the non-target object portion to produce the new image. In this case the original image can be restored; even after the image with the background has been edited and saved, the background can be deleted to recover the original image.
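A minimal sketch of the two compositing variants just described follows, assuming a binary target mask produced by the segmentation step. The NumPy representation (H x W x 3 images, an H x W mask of zeros and ones) is an assumption for illustration, not the patent's specification.

```python
import numpy as np

def extract_target(image, mask):
    # Variant 1: keep only the target pixels; the non-target portion is
    # discarded, reducing the data amount of the stored image.
    return image * mask[..., None]

def composite(image, mask, background):
    # Overlay a generated background on the non-target portion.
    m = mask[..., None].astype(image.dtype)
    return image * m + background * (1 - m)

# Variant 2: if the source image is kept alongside the composite, the
# "delete the background, restore the original" operation amounts to
# returning to the retained source image.
```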
The background design model is likewise a machine-learnable recognition model, such as a convolutional neural network, obtained by training on multiple sets of data, for example until the model converges, so that it learns a mapping between input data and output data. Each set of data includes the features of a target object and the background information corresponding to those features. For example, when the target object is a person and the model recognizes that the person's expression is happy, the background design model outputs background information matching a pleasant atmosphere, such as decorative text reading 'happy today'. The corresponding background can then be generated from this background information, either by a background-generating program or by directly retrieving the background elements identified by the background information and composing the background from those elements.
The image segmentation model and the background design model are both deep convolutional neural network models. A deep convolutional neural network can identify more complex features and has stronger learning and recognition capabilities.
Optionally, identifying features of the target object in the image based on the trained image segmentation model includes: segmenting the image based on the image segmentation model and identifying at least one target object contained in the image; and identifying features of each target object.
When identifying the target objects in the image, identification can be performed step by step: one or more target objects are identified first, and then each target object is analyzed in turn, which reduces the computation per identification and lowers the system load. Alternatively, all target objects in the image can be identified at once, which reduces the number of identification passes, speeds up the identification process, and improves identification efficiency.
Optionally, the image segmentation model comprises at least an input layer, an encoding network, a decoding network and an output layer, and segmenting the image based on the image segmentation model and identifying at least one target object contained in the image comprises the following steps: the input layer acquires pre-stored images of different objects; the encoding network performs feature fusion on the images of the different objects to obtain a fused feature map, and restores the image using the fused feature map; the decoding network analyzes the restored image to obtain the class probability of each pixel of the image; the image is segmented based on the class probability of each pixel; features of each target object contained in the segmented image are identified; and the output layer outputs the identified features of the at least one target object.
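The following is a minimal PyTorch sketch of an encoder-decoder of the kind outlined above: the encoding network fuses features at reduced resolution, the decoding network restores the resolution, and the output yields a class score per pixel, from which the per-pixel class probability and the segmentation are taken. Channel counts and depth are assumptions, not the patent's.

```python
# Encoder-decoder sketch, assuming PyTorch; layer sizes are illustrative.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.encode = nn.Sequential(              # feature fusion, downsampling
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decode = nn.Sequential(              # restore spatial resolution
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):                         # x: (B, 3, H, W)
        return self.decode(self.encode(x))        # per-pixel class logits

# Per-pixel class probabilities and the resulting segmentation map:
# probs = EncoderDecoder(num_classes=2)(images).softmax(dim=1)
# seg_map = probs.argmax(dim=1)                   # (B, H, W) label map
```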
Optionally, acquiring the background information corresponding to the features of the target object based on the trained background design model includes: obtaining pre-stored background training samples, wherein the background training samples comprise at least one of the following: pre-stored design images, network images and personalized images; and acquiring the corresponding background information from the background training samples based on the features of the target object in the image.
Before background information corresponding to the features of the target object can be obtained from the background design model, the model must be built and trained. During training, pre-stored background training samples are obtained first; the samples may be stored in a database or on a cloud server. The background training samples include pre-stored design images, network images, personalized images, and the like. A design image is an image designed around a specific theme. A network image is an image taken from the network, for example a currently popular image. A personalized image may be an image created from a personal label, an image created by the user, and so on.
After training is complete, the background design model can determine, from the background training samples, the background information corresponding to the features of the target object.
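One plausible realisation of this lookup is sketched below as a nearest-neighbour match between a feature vector of the target object and feature vectors of the stored background samples. The cosine-similarity criterion is an assumption: the patent only states that the model determines the matching background information.

```python
import numpy as np

def nearest_background(target_feat, sample_feats, samples):
    """target_feat: (D,) feature vector of the target object;
    sample_feats: (N, D) features of the stored background samples;
    samples: the N background samples themselves."""
    a = target_feat / np.linalg.norm(target_feat)
    b = sample_feats / np.linalg.norm(sample_feats, axis=1, keepdims=True)
    return samples[int(np.argmax(b @ a))]    # most similar stored sample
```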
Optionally, during or after acquiring the background information corresponding to the features of the target object, the method further includes: collecting voice information; identifying a required material request from the voice information; searching based on the material request to obtain material information; and loading the material information into the background information.
After the background information is determined, a material request can be derived from the user's voice information. The material request may be an image request, an animated-image request, or an audio request. Material information is then determined from the material request, with each kind of material matching its request: an image request yields an image, an animated-image request yields an animated image, and an audio request yields audio. Finally, the material information is loaded into the background information.
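A pipeline sketch of this voice-driven material step follows. The speech recognizer and the material search are injected as callables, and the keyword parser and the dictionary-shaped background record are illustrative assumptions: the patent names the steps (collect voice information, identify the request, search, load) but not the APIs that implement them.

```python
MATERIAL_KINDS = ("image", "animated image", "audio")

def parse_material_request(text):
    # Identify the kind of material requested in the transcribed speech.
    return next((k for k in MATERIAL_KINDS if k in text.lower()), None)

def add_material_from_voice(transcribe, search, background):
    """transcribe() -> str and search(kind) -> material are injected
    stand-ins for the unspecified speech-recognition and search steps;
    background is assumed to be a dict with a 'materials' list."""
    text = transcribe()                        # collect voice information
    request = parse_material_request(text)     # identify the material request
    if request is not None:
        material = search(request)             # search for matching material
        background["materials"].append(material)  # load it into the background
    return background
```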
Optionally, after acquiring the background information corresponding to the features of the target object, the method further includes: fusing the identified target object with the background information to generate new image information; and displaying the new image information in the shooting interface.
FIG. 2 is a flowchart of another method for acquiring background information of an image according to an embodiment of the present invention. As shown in FIG. 2, the method includes the following steps:
step S202, displaying the image shot by the lens in a shooting interface;
step S204, displaying a target object in the image, wherein the characteristics of the target object in the image are recognized through the image segmentation model obtained through training;
step S206, displaying background information corresponding to the target object, wherein the background information corresponding to the characteristics of the target object is obtained through a background design model obtained through training;
the image segmentation model and the background design model are both depth convolution neural network models.
Through the above steps, an image shot by the lens can be displayed in the shooting interface; the target object in the image is displayed, its features having been identified by the trained image segmentation model; and the background information corresponding to the target object is displayed, that background information having been obtained from the features of the target object by the trained background design model, where both models are deep convolutional neural network models. Generating a background that corresponds to the target object in the image through these two models achieves the technical effect of improving the efficiency of generating backgrounds and of producing pictures with backgrounds, and thereby solves the technical problem in the prior art that presetting background content is inefficient.
Optionally, after displaying the background information corresponding to the target object, the method further includes: fusing the identified target object with the background information to generate new image information; and displaying the new image information in the shooting interface.
FIG. 3 is a schematic structural diagram of an apparatus for acquiring background information of an image according to an embodiment of the present invention. As shown in FIG. 3, the apparatus includes: a first display module 32, a second display module 34, and a third display module 36, which are described in detail below.
A first display module 32 is configured to display an image shot by a lens in a shooting interface. A second display module 34, connected to the first display module 32, is configured to display a target object in the image, wherein features of the target object in the image are identified through the trained image segmentation model. A third display module 36, connected to the second display module 34, is configured to display background information corresponding to the target object, wherein the background information corresponding to the features of the target object is obtained through a trained background design model. The image segmentation model and the background design model are both deep convolutional neural network models.
With the above apparatus, the first display module 32 displays the image shot by the lens in the shooting interface; the second display module 34 displays the target object in the image, the features of the target object having been identified by the trained image segmentation model; and the third display module 36 displays the background information corresponding to the target object, that background information having been obtained from the features of the target object by the trained background design model, where both models are deep convolutional neural network models. Generating a background that corresponds to the target object in the image through these two models achieves the technical effect of improving the efficiency of generating backgrounds and of producing pictures with backgrounds, and thereby solves the technical problem in the prior art that presetting background content is inefficient.
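To make the module boundaries of FIG. 3 explicit, the following is a hedged structural sketch; the class, the injected display callback and the callable models are all illustrative assumptions rather than the patent's implementation.

```python
class BackgroundInfoDevice:
    """Mirrors the three display modules of FIG. 3; 'show' is an
    assumed rendering callback, and the two models are callables."""
    def __init__(self, segment_model, design_model, show):
        self.segment_model = segment_model     # trained image segmentation model
        self.design_model = design_model       # trained background design model
        self.show = show

    def run(self, image):
        self.show(image)                       # first display module
        target = self.segment_model(image)     # second display module
        self.show(target)
        background = self.design_model(target) # third display module
        self.show(background)
        return target, background
```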
According to another aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to perform any one of the methods described above.
According to another aspect of the embodiments of the present invention, there is provided a processor configured to run a program, wherein the program, when run, performs any one of the methods described above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that a person of ordinary skill in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.

Claims (11)

1. A method of obtaining background information for an image, comprising:
displaying the image shot by the lens in a shooting interface;
identifying the characteristics of a target object in the image based on an image segmentation model obtained by training;
acquiring background information corresponding to the characteristics of the target object based on a background design model obtained by training;
wherein the image segmentation model and the background design model are both deep convolutional neural network models.
2. The method of claim 1, wherein identifying features of a target object in the image based on a trained image segmentation model comprises:
segmenting the image based on the image segmentation model, and identifying at least one target object contained in the image;
and identifying features of each target object.
3. The method of claim 2, wherein the image segmentation model comprises at least an input layer, an encoding network, a decoding network and an output layer, and wherein segmenting the image based on the image segmentation model and identifying at least one target object contained in the image comprises:
the input layer acquires pre-stored images of different objects;
the encoding network performs feature fusion on the images of the different objects to obtain a fused feature map, and restores the image using the fused feature map;
the decoding network analyzes the restored image to obtain the class probability of each pixel of the image;
segmenting the image based on a class probability of each pixel of the image;
identifying the characteristics of each target object contained in the segmented image;
the output layer outputs the identified features of the at least one target object.
4. The method according to any one of claims 1 to 3, wherein obtaining the background information corresponding to the features of the target object based on the trained background design model comprises:
obtaining a pre-stored background training sample, wherein the background training sample comprises at least one of the following: pre-stored design images, network images and personalized images;
and acquiring corresponding background information from the background training sample based on the characteristics of the target object in the image.
5. The method according to claim 1, wherein during or after acquiring the background information corresponding to the features of the target object, the method further comprises:
collecting voice information;
identifying a required material request from the voice information;
searching based on the material request to obtain material information;
and loading the material information into the background information.
6. The method of claim 1, wherein after obtaining the context information corresponding to the feature of the target object, the method further comprises:
fusing the identified target object with the background information to generate new image information;
and displaying the new image information in the shooting interface.
7. A method of obtaining background information for an image, comprising:
displaying the image shot by the lens in a shooting interface;
displaying a target object in the image, wherein the characteristics of the target object in the image are identified through a trained image segmentation model;
displaying background information corresponding to the target object, wherein the background information corresponding to the characteristics of the target object is obtained through a trained background design model;
wherein the image segmentation model and the background design model are both deep convolutional neural network models.
8. The method of claim 7, wherein after displaying the background information corresponding to the target object, the method further comprises:
fusing the identified target object with the background information to generate new image information;
and displaying the new image information in the shooting interface.
9. An apparatus for acquiring background information of an image, comprising:
the first display module is used for displaying the image shot by the lens in the shooting interface;
the second display module is used for displaying the target object in the image, wherein the characteristics of the target object in the image are identified through the trained image segmentation model;
the third display module is used for displaying the background information corresponding to the target object, wherein the background information corresponding to the characteristics of the target object is obtained through a trained background design model;
wherein the image segmentation model and the background design model are both deep convolutional neural network models.
10. A storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the method of any one of claims 1 to 6.
11. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 7 to 8.
CN201810950586.6A 2018-08-20 2018-08-20 Method and device for acquiring background information of image Pending CN110855875A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810950586.6A CN110855875A (en) 2018-08-20 2018-08-20 Method and device for acquiring background information of image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810950586.6A CN110855875A (en) 2018-08-20 2018-08-20 Method and device for acquiring background information of image

Publications (1)

Publication Number Publication Date
CN110855875A (en) 2020-02-28

Family

ID=69595375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810950586.6A Pending CN110855875A (en) 2018-08-20 2018-08-20 Method and device for acquiring background information of image

Country Status (1)

Country Link
CN (1) CN110855875A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752540A (en) * 2011-12-30 2012-10-24 新奥特(北京)视频技术有限公司 Automatic categorization method based on face recognition technology
CN104994310A (en) * 2015-06-11 2015-10-21 广东欧珀移动通信有限公司 Shooting method and mobile terminal
CN105160695A (en) * 2015-06-30 2015-12-16 广东欧珀移动通信有限公司 Picture processing method and mobile terminal
CN105869198A (en) * 2015-12-14 2016-08-17 乐视移动智能信息技术(北京)有限公司 Multimedia photograph generating method, apparatus and device, and mobile phone
CN106157363A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A kind of photographic method based on augmented reality, device and mobile terminal
US20180122114A1 (en) * 2016-08-19 2018-05-03 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for processing video image and electronic device
CN106778705A (en) * 2017-02-04 2017-05-31 中国科学院自动化研究所 A kind of pedestrian's individuality dividing method and device
CN107392933A (en) * 2017-07-12 2017-11-24 维沃移动通信有限公司 A kind of method and mobile terminal of image segmentation
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN107622518A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Picture synthetic method, device, equipment and storage medium
CN107798140A (en) * 2017-11-23 2018-03-13 北京神州泰岳软件股份有限公司 A kind of conversational system construction method, semantic controlled answer method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111933024A (en) * 2020-08-10 2020-11-13 张峻豪 Display device for visual transmission design
CN111933024B (en) * 2020-08-10 2021-02-02 张峻豪 Display device for visual transmission design

Similar Documents

Publication Publication Date Title
KR102416558B1 (en) Video data processing method, device and readable storage medium
CN110889855B (en) Certificate photo matting method and system based on end-to-end convolution neural network
CN111145308A (en) Paster obtaining method and device
CN105893412A (en) Image sharing method and apparatus
WO2016188185A1 (en) Photo processing method and apparatus
CN112598580B (en) Method and device for improving definition of portrait photo
CN111787354B (en) Video generation method and device
CN105812920B (en) Media information processing method and media information processing unit
US20100260438A1 (en) Image processing apparatus and medium storing image processing program
CN106231195A (en) A kind for the treatment of method and apparatus of taking pictures of intelligent terminal
CN103500220A (en) Method for recognizing persons in pictures
WO2024193061A1 (en) Image processing method and apparatus, computer readable storage medium, and electronic device
CN114782864B (en) Information processing method, device, computer equipment and storage medium
KR101672691B1 (en) Method and apparatus for generating emoticon in social network service platform
CN105580050A (en) Providing control points in images
CN111353965A (en) Image restoration method, device, terminal and storage medium
CN110855875A (en) Method and device for acquiring background information of image
CN117575891A (en) Image processing method and device and terminal equipment
CN114222995A (en) Image processing method and device and electronic equipment
CN113313635A (en) Image processing method, model training method, device and equipment
CN108600614B (en) Image processing method and device
CN114898244B (en) Information processing method, device, computer equipment and storage medium
CN112669424B (en) Expression animation generation method, device, equipment and storage medium
CN113012040B (en) Image processing method, image processing device, electronic equipment and storage medium
CN114445427A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200228