CN111368698B - Subject identification method, subject identification device, electronic device and medium - Google Patents
- Publication number: CN111368698B
- Application number: CN202010130132.1A
- Authority: CN (China)
- Prior art keywords: subject, target image, target, identification
- Prior art date: 2020-02-28
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V20/35 — Categorising the entire scene, e.g. birthday party or wedding scene (G — Physics; G06 — Computing; calculating or counting; G06V — Image or video recognition or understanding; G06V20/00 — Scenes; scene-specific elements)
- H04N23/61 — Control of cameras or camera modules based on recognised objects (H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television; H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof; H04N23/60 — Control of cameras or camera modules)
- H04N23/611 — Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/63 — Control of cameras or camera modules by using electronic viewfinders
Abstract
The application provides a subject identification method, a subject identification device, an electronic device and a medium. The subject identification method comprises the following steps: acquiring a target image, identifying each object presented in the target image, extracting features of each object, and determining a target subject of the target image from the objects according to the features of each object. By identifying each object in the target image and then determining the target subject from the objects' features, the method solves the technical problem in the related art that specific objects other than faces cannot be accurately identified during subject identification, improves the subject identification function of the imaging device, and thereby provides the user with a more intelligent photographing experience.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a subject identification method and apparatus, an electronic device, and a medium.
Background
With the development of terminal devices, more and more users are accustomed to shooting images or videos through imaging devices such as the cameras on electronic devices. After the electronic device acquires an image, subject detection often needs to be performed on it; once the subject is detected, a clearer image of the subject can be obtained.
Currently, when an electronic device shoots, the subject region is identified either by the user manually touching the screen or by defaulting to the center region of the picture; a more intelligent scheme detects the subject through the human face, or derives a subject region from saliency detection and a depth map. However, in some shooting scenes where animals or plants (such as cats, dogs, or flowers) should be the shooting subject, conventional subject recognition technology cannot recognize them, resulting in the technical problem of poorly shot images.
Disclosure of Invention
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present application provides a method for identifying a main body, including:
acquiring a target image;
identifying objects presented in the target image;
extracting the characteristics of each object;
determining a target subject of the target image from the objects according to the features of each object.
As a first possible implementation manner of the embodiment of the present application, after determining, according to the feature, the target subject of the target image from each object, the method further includes:
determining a type of the target subject;
and carrying out subject segmentation by adopting a subject segmentation network corresponding to the type so as to determine the display area of the target subject in the target image.
As a second possible implementation of the embodiments of the present application, the types include portrait type and non-portrait type.
As a third possible implementation manner of the embodiment of the present application, the feature is used to indicate a combination of one or more of: an object identifier, the distance between the object and the center point of the target image, the display-area ratio of the object, and whether the object displays a face.
As a fourth possible implementation manner of the embodiments of the present application, the extracting features of each object includes:
obtaining the object identifier and position frame of each object that the detection network identifies in the target image;
generating a grayscale map of each object according to the object identifier and position frame of each object, wherein each pixel value inside the position frame in the grayscale map is determined according to the object identifier, and each pixel value in the region outside the position frame is zero;
and taking the grayscale map of each object as the feature.
As a fifth possible implementation manner of the embodiment of the present application, before taking the grayscale map of each object as the feature, the method further includes:
performing face recognition on each object to obtain the position and the size of the face;
marking, in the grayscale map of each object, the position and size of the corresponding object's face with a set region frame.
As a sixth possible implementation manner of the embodiment of the present application, the determining, according to the characteristics of each object, the target subject of the target image from each object includes:
inputting the features of each object into a trained weighting network; wherein the weighting network has learned the mapping between feature values and subject probability;
and determining the target subject from the objects according to the subject probability of the objects output by the weighting network.
According to the subject identification method, a target image is acquired, each object presented in the target image is identified, features of each object are extracted, and the target subject of the target image is determined from the objects according to those features. By identifying each object in the target image and then determining the target subject from the objects' features, the method solves the technical problem in the related art that subject identification loses attention to specific objects other than faces, improves the subject identification function of the imaging device, and provides the user with a more intelligent photographing experience.
An embodiment of a second aspect of the present application proposes a subject identification device, including:
the acquisition module is used for acquiring a target image;
the identification module is used for identifying each object presented in the target image;
the extraction module is used for extracting the characteristics of each object;
and the first determining module is used for determining a target subject of the target image from each object according to the features of each object.
According to the subject identification device, a target image is acquired, each object presented in the target image is identified, features of each object are extracted, and the target subject of the target image is determined from the objects according to those features. By identifying each object in the target image and then determining the target subject from the objects' features, the device solves the technical problem in the related art that subject identification loses attention to specific objects other than faces, improves the subject identification function of the imaging device, and provides the user with a more intelligent photographing experience.
An embodiment of a third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the program to implement the method for identifying a subject according to the embodiment of the first aspect.
An embodiment of a fourth aspect of the present application proposes a non-transitory computer readable storage medium, on which a computer program is stored, which program, when executed by a processor, implements the subject identification method according to the embodiment of the first aspect.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a first subject identification method according to an embodiment of the present application;
fig. 2 is a flow chart of a second method for identifying a subject according to an embodiment of the present application;
fig. 3 is a flow chart of a third method for identifying a subject according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a subject identification device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes a subject identification method, apparatus, electronic device, and medium according to embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a first method for identifying a subject according to an embodiment of the present application.
The embodiments of the present application are described taking as an example the case where the subject identification method is configured in a subject identification device, and the subject identification device can be applied to any electronic device so that the electronic device can perform the subject identification function.
The electronic device may be a personal computer (Personal Computer, abbreviated as PC), a cloud device, a mobile device, etc., and the mobile device may be a hardware device with various operating systems and imaging devices, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, a vehicle-mounted device, etc.
As shown in fig. 1, the subject identification method includes the steps of:
step 101, acquiring a target image.
In this embodiment of the present application, the target image may be a preview image displayed on a photographing interface of the electronic device, or may be a partial image area in the preview image.
In this embodiment, while the imaging device is capturing images, a preview interface can be displayed in response to the user's shooting operation, so that the captured preview image is shown on the preview interface of the imaging device, and the user can clearly see the imaging effect of each frame during image processing.
It should be noted that, when the image sensors of the imaging device are different, the acquired target images are also different. For example, when the image sensor is an RGB sensor, the acquired target image is an RGB image; when the image sensor is a depth sensor, the acquired target image is a depth image, and so on. The target image in the embodiment of the present application is not limited to RGB images and depth images, but may be other types of images.
Optionally, to reduce the computation of subsequent subject identification, the target image may be scaled down to a smaller size after acquisition, such as 224×224; any other size may also be used, which is not limited herein. A minimal sketch of this step follows.
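As a rough illustration of this preprocessing step, the following is a minimal sketch in Python; the use of OpenCV and the 224×224 working size follow the example above, and neither is mandated by the application:

```python
import cv2  # assumed image library; any resize routine would do

def preprocess_target_image(path: str, size: int = 224):
    """Load a captured frame and shrink it to cut the cost of subject identification."""
    image = cv2.imread(path)  # HxWx3 uint8 array (BGR)
    if image is None:
        raise FileNotFoundError(path)
    # Downscale to the working resolution mentioned above (e.g. 224x224).
    return cv2.resize(image, (size, size), interpolation=cv2.INTER_AREA)
```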
Step 102, identifying each object presented in the target image.
The objects presented in the target image may be portraits, animals, plants, and the like, such as a human face, a cat, a dog, or a flower.
In the embodiment of the present application, after the target image is acquired, the target image may be input into the detection network, so as to identify and obtain each object presented in the target image.
As a possible implementation, the target image may be input into a detection network trained through deep learning, and each object presented in the target image is then determined according to the output of the network. For example, objects such as people, flowers, dogs, and cats presented in the target image can be detected.
The detection network is obtained by training on a large number of training images containing the various objects, so that the objects presented in an image can be accurately identified.
For example, the detection network may be a trained MobileNet-SSD model, and the objects presented in the target image can be identified based on the model's output.
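For illustration only, a sketch of this detection step using an off-the-shelf SSDLite/MobileNet detector from torchvision; the framework, model variant, and score threshold are assumptions, since the application only names a trained MobileNet-SSD model:

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large
from torchvision.transforms.functional import to_tensor

# Assumed stand-in for the trained MobileNet-SSD detection network
# (weights="DEFAULT" requires torchvision >= 0.13).
model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()

def detect_objects(image, score_thresh: float = 0.5):
    """Return (label, box) pairs for the objects presented in the image."""
    with torch.no_grad():
        out = model([to_tensor(image)])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = out["scores"] >= score_thresh
    return list(zip(out["labels"][keep].tolist(), out["boxes"][keep].tolist()))
```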
It should be noted that the categories of objects to be detected in the target image may be determined according to user requirements, and the set of object categories may be extended accordingly, for example with animals such as lions, tigers, dogs, bears, and snakes.
And 103, extracting the characteristics of each object.
The features of each object are used to indicate a combination of one or more of: the object identifier, the distance between the object and the center point of the target image, the display-area ratio of the object, and whether the object displays a face. The object identifier indicates the category of the object; the face an object displays is not limited to a human face, and may be the face of an animal, such as a cat face or a dog face.
In the embodiment of the present application, after each object presented in the target image is identified, a feature extraction method may be used to further extract features of each object.
Feature extraction methods include template-based methods, edge-based methods, grayscale-based methods, spatial-transformation-based methods, and the like. For the specific feature extraction process of each object, refer to the feature extraction methods in the related art, which are not repeated here.
Step 104, determining a target subject of the target image from each object according to the characteristics of each object.
In this embodiment, after the features of each object presented in the target image are extracted, the object with the highest saliency may be selected as the target subject according to those features, thereby determining the target subject of the target image from the objects.
As a possible implementation, after the features of each object in the target image are extracted, they may be input into a trained weighting network, and the target subject of the target image is then determined from the objects according to the subject probability that the network outputs for each object.
In this embodiment, the weighting network has learned the mapping between feature values and subject probability; therefore, after the features of each object are input into the trained weighting network, the subject probability of each object can be determined from the network's output.
As an example, assume that the objects identified in the target image are object 1, object 2, and object 3. After the extracted features of the three objects are input into the trained weighting network, the subject probabilities output by the network are 60%, 20%, and 20% respectively, so the target subject of the target image can be determined to be object 1.
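A minimal sketch of such a weighting network in Python/PyTorch; the layer sizes and the softmax-over-objects readout are assumptions, as the application only states that the network maps feature values to subject probabilities:

```python
import torch
import torch.nn as nn

class WeightingNet(nn.Module):
    """Scores each object's feature vector; a softmax over objects yields subject probabilities."""

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_objects, feat_dim) -> (num_objects,) probabilities summing to 1
        return torch.softmax(self.score(feats).squeeze(-1), dim=0)

# Three objects whose probabilities might come out as 0.6 / 0.2 / 0.2;
# the argmax then picks object 1 as the target subject.
net = WeightingNet()
probs = net(torch.randn(3, 16))
target_subject = int(probs.argmax())
```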
According to the subject identification method, a target image is acquired, each object presented in the target image is identified, features of each object are extracted, and the target subject of the target image is determined from the objects according to those features. By identifying each object in the target image and then determining the target subject from the objects' features, the method solves the technical problem in the related art that subject identification loses attention to specific objects other than faces, improves the subject identification function of the imaging device, and provides the user with a more intelligent photographing experience.
On the basis of the above embodiment, after the target subject of the target image is determined from the objects in step 104, the type of the target subject may further be determined, so that subject segmentation can be performed with a subject segmentation network corresponding to that type, thereby determining the display area of the target subject in the target image. This process is described in detail with reference to fig. 2, which is a schematic flow chart of a second subject identification method according to an embodiment of the present application.
As shown in fig. 2, after the step 104, the subject identification method may further include the steps of:
in step 201, the type of target subject is determined.
The types of the target subject include portrait type and non-portrait type.
In this embodiment, after the target subject of the target image is determined according to the features of each object presented in the target image, it is further determined whether the target subject is of the portrait type or the non-portrait type.
As one possible implementation, the target subject of the target image may be input into a trained type recognition model, and the type of the target subject determined from the model's output.
Step 202, performing subject segmentation using a subject segmentation network corresponding to the type, to determine the display area of the target subject in the target image.
In one possible case, if the determined type of the target subject is the portrait type, a subject segmentation network corresponding to the portrait type is used to segment the subject of the target image, to determine the display area of the target subject in the target image.
In another possible case, if the determined type of the target subject is a non-portrait type, for example, a flower, an animal, etc., a subject segmentation network corresponding to the non-portrait type is used to segment the subject of the target image, so as to determine a display area of the target subject in the target image.
It can be understood that when the type of the target subject is the portrait type, the corresponding subject segmentation network is finer; therefore, for different target subject types, the subject segmentation model corresponding to the type needs to be used for subject segmentation, as the sketch below illustrates.
According to the subject identification method, after the target subject of the target image is determined from the objects according to their features, the type of the target subject is determined, and subject segmentation is performed with a subject segmentation network corresponding to that type, thereby determining the display area of the target subject in the target image. Using subject segmentation networks matched to different subject types improves the fineness of subject segmentation.
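A sketch of this type-dependent dispatch; the function names, type labels, and placeholder mask outputs are assumptions, since the application does not name concrete segmentation architectures:

```python
from typing import Callable

import numpy as np

def portrait_segmentation(image: np.ndarray, box) -> np.ndarray:
    # Placeholder: a real portrait network would return a fine-grained person mask.
    return np.zeros(image.shape[:2], dtype=np.uint8)

def generic_segmentation(image: np.ndarray, box) -> np.ndarray:
    # Placeholder: coarser network for non-portrait subjects (flowers, animals, etc.).
    return np.zeros(image.shape[:2], dtype=np.uint8)

SEGMENTERS: dict[str, Callable] = {
    "portrait": portrait_segmentation,
    "non_portrait": generic_segmentation,
}

def segment_subject(image: np.ndarray, subject_box, subject_type: str) -> np.ndarray:
    """Dispatch to the subject segmentation network matching the target subject's type."""
    return SEGMENTERS[subject_type](image, subject_box)
```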
In addition to the above embodiments, in step 103 the grayscale map of each object may be used as the feature of each object. Specifically, the object identifier and position frame of each object identified in the target image by the detection network may be obtained, and a grayscale map of each object generated according to its object identifier and position frame, so that the grayscale map of each object serves as the feature. This process is described in detail below with reference to fig. 3, which is a schematic flow chart of a third subject identification method according to an embodiment of the present application.
As shown in fig. 3, the subject identification method may further include the following steps:
step 301, obtaining object identifiers of objects obtained by detecting network identification target images and position frames of the objects.
In this embodiment, after the target image is input into the detection network, the object identifier and the position frame of each object identified in the target image can be obtained from the network.
The position frame may be a region of interest (Region of Interest, abbreviated as ROI): in machine vision and image processing, a region to be processed that is outlined on the image with a box, circle, ellipse, irregular polygon, or the like is called an ROI.
Step 302, generating a grayscale map of the corresponding object according to the object identifier and the position frame of each object.
A grayscale map is also called a gray map: the logarithmic relationship between white and black is divided into several levels, called gray levels, and an image represented in gray levels is called a grayscale map.
In this embodiment, each pixel value inside the position frame in the grayscale map is determined according to the object identifier; that is, different types of objects in the target image correspond to different pixel values. Every pixel in the region outside the position frame takes the value zero.
In this embodiment, after the object identifier and position frame of each object identified by the detection network are obtained, the grayscale map of the corresponding object can be generated according to them.
As a possible implementation, after the object identifier and position frame of each object in the target image are identified, the corresponding object in each position frame may be subjected to graying processing to generate the grayscale map of the corresponding object.
For example, assuming that four categories of objects (human, cat, dog, and flower) are present in the target image, the detection network will generate four single-channel grayscale maps, each representing one object. The person's position frame is drawn on the first map, with pixel value 100 inside the frame and 0 outside; the cat's position frame is drawn on the second map, with pixel value 150 inside the frame and 0 outside.
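A sketch of this grayscale-map construction in NumPy; the values 100 for a person and 150 for a cat follow the example above, while the remaining identifier-to-value mapping is an assumption:

```python
import numpy as np

# Pixel value used inside each object's position frame (person=100 and cat=150
# follow the example above; the other values are assumed for illustration).
ID_TO_GRAY = {"person": 100, "cat": 150, "dog": 200, "flower": 250}

def object_gray_map(shape, label: str, box) -> np.ndarray:
    """Single-channel map: identifier-specific value inside the position frame, zero outside."""
    x1, y1, x2, y2 = (int(v) for v in box)
    gray = np.zeros(shape, dtype=np.uint8)   # zero everywhere outside the frame
    gray[y1:y2, x1:x2] = ID_TO_GRAY[label]   # identifier-determined value inside
    return gray

# One map per detected object, e.g. a person and a cat on a 224x224 image:
maps = [object_gray_map((224, 224), "person", (40, 30, 180, 210)),
        object_gray_map((224, 224), "cat", (10, 120, 90, 200))]
```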
Step 303, taking the grayscale map of each object as the feature.
In this embodiment, after the grayscale map of each object is generated from its object identifier and position frame, the grayscale maps can be used as the features, so that the target subject of the target image is determined from the objects according to their grayscale maps.
In one possible case, when an object displays a face, face recognition may further be performed on the object before its grayscale map is taken as a feature, to obtain the position and size of the face; in the grayscale map of that object, a set region frame is then used to mark the position and size of the corresponding object's face. In this way, the region corresponding to the subject in the target image can be rapidly located.
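Continuing the sketch above, the face region might be stamped into an object's grayscale map as follows; the marker value of 255 and the filled rectangle are assumptions, as the application only specifies that a set region frame marks the face's position and size:

```python
import numpy as np

def mark_face(gray: np.ndarray, face_box, marker_value: int = 255) -> np.ndarray:
    """Stamp a set region frame over the recognised face in the object's grayscale map."""
    x1, y1, x2, y2 = (int(v) for v in face_box)
    marked = gray.copy()
    marked[y1:y2, x1:x2] = marker_value  # assumed marker value for the face region
    return marked

# e.g. annotate the person's map from the previous sketch with its face location:
# maps[0] = mark_face(maps[0], face_box=(80, 45, 130, 100))
```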
According to the subject identification method, the object identifier and position frame of each object identified in the target image by the detection network are obtained, and the grayscale map of the corresponding object is generated from them and used as a feature. Because the grayscale map of each object is generated from its object identifier and position frame in the target image, the region corresponding to the subject in the target image can be rapidly located during subsequent subject identification.
In order to implement the above embodiment, the present application further proposes a subject identification device.
Fig. 4 is a schematic structural diagram of a main body recognition device according to an embodiment of the present application.
As shown in fig. 4, the subject identification device 400 may include: the acquisition module 410, the identification module 420, the extraction module 430, and the first determination module 440.
Wherein, the acquiring module 410 is configured to acquire a target image.
The identifying module 420 is configured to identify each object presented in the target image.
The extracting module 430 is configured to extract features of each object.
The first determining module 440 is configured to determine a target subject of the target image from each object according to the characteristics of each object.
As a possible case, the subject identifying device 400 may further include:
and the second determining module is used for determining the type of the target main body.
And the third determining module is used for performing subject segmentation by adopting a subject segmentation network corresponding to the type, so as to determine the display area of the target subject in the target image.
As another possible scenario, the types include portrait type and non-portrait type.
As another possible scenario, the features are used to indicate a combination of one or more of: the object identifier, the distance between the object and the center point of the target image, the display-area ratio of the object, and whether the object displays a face.
As another possible scenario, the extraction module 430 may also be used to:
acquiring the object identifier and position frame of each object that the detection network identifies in the target image;
generating a grayscale map of the corresponding object according to the object identifier and position frame of each object, wherein each pixel value inside the position frame in the grayscale map is determined according to the object identifier, and each pixel value in the region outside the position frame is zero;
taking the grayscale map of each object as the feature.
As another possible scenario, the extraction module 430 may also be used to:
performing face recognition on each object to obtain the position and the size of the face;
marking, in the grayscale map of each object, the position and size of the corresponding object's face with a set region frame.
As another possible scenario, the first determining module 440 may be further configured to:
inputting the features of each object into a trained weighting network; wherein the weighting network has learned the mapping between feature values and subject probability;
and determining a target subject from the objects according to the subject probability of the objects output by the weighting network.
It should be noted that the foregoing explanation of the embodiment of the subject identifying method is also applicable to the subject identifying device of this embodiment, and will not be repeated here.
According to the subject identification device, a target image is acquired, each object presented in the target image is identified, features of each object are extracted, and the target subject of the target image is determined from the objects according to those features. By identifying each object in the target image and then determining the target subject from the objects' features, the device solves the technical problem in the related art that subject identification loses attention to specific objects other than faces, improves the subject identification function of the imaging device, and provides the user with a more intelligent photographing experience.
In order to implement the above embodiment, the application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the subject identification method described in the above embodiment when executing the program.
In order to achieve the above-described embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the subject identification method described in the above-described embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.
Claims (8)
1. A method of identifying a subject, the method comprising:
acquiring a target image, wherein the target image is a preview image displayed on a photographing interface of electronic equipment;
identifying objects presented in the target image;
extracting features of each object, wherein the features are used to indicate a combination of a plurality of: the object identifier, the distance between the object and the center point of the target image, the display-area ratio of the object, and whether the object displays a face;
determining a target subject of the target image from each object according to the features of each object; wherein the determining the target subject of the target image from each object according to the features of each object comprises: inputting the features of each object into a trained weighting network, wherein the weighting network has learned the mapping between feature values and subject probability; and determining the target subject from the objects according to the subject probability of each object output by the weighting network.
2. The subject identification method as claimed in claim 1, wherein after determining the target subject of the target image from each object according to the features, the method further comprises:
determining a type of the target subject;
and carrying out subject segmentation by adopting a subject segmentation network corresponding to the type so as to determine the display area of the target subject in the target image.
3. The subject identification method of claim 2 wherein the types include portrait type and non-portrait type.
4. The subject identification method as claimed in any one of claims 1 to 3, wherein the extracting features of each object comprises:
obtaining the object identifier and position frame of each object that the detection network identifies in the target image;
generating a grayscale map of each object according to the object identifier and position frame of each object, wherein each pixel value inside the position frame in the grayscale map is determined according to the object identifier, and each pixel value in the region outside the position frame is zero;
and taking the grayscale map of each object as the feature.
5. The subject identification method of claim 4, wherein before taking the grayscale map of each object as the feature, the method further comprises:
performing face recognition on each object to obtain the position and the size of the face;
marking, in the grayscale map of each object, the position and size of the corresponding object's face with a set region frame.
6. A subject identification device, the device comprising:
the acquisition module is used for acquiring a target image, wherein the target image is a preview image displayed on a photographing interface of the electronic equipment;
the identification module is used for identifying each object presented in the target image;
the extraction module is used for extracting features of each object, wherein the features are used to indicate a combination of a plurality of: the object identifier, the distance between the object and the center point of the target image, the display-area ratio of the object, and whether the object displays a face;
a first determining module, configured to determine a target subject of the target image from each object according to the features of each object; wherein the determining the target subject of the target image from each object according to the features of each object comprises: inputting the features of each object into a trained weighting network, wherein the weighting network has learned the mapping between feature values and subject probability; and determining the target subject from the objects according to the subject probability of each object output by the weighting network.
7. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the subject identification method of any one of claims 1-5 when executing the program.
8. A non-transitory computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the subject identification method according to any one of claims 1-5.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010130132.1A | 2020-02-28 | 2020-02-28 | Subject identification method, subject identification device, electronic device and medium |
Publications (2)

Publication Number | Publication Date |
---|---|
CN111368698A | 2020-07-03 |
CN111368698B | 2024-01-12 |
Family
ID=71208340
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010130132.1A (granted as CN111368698B, active) | Subject identification method, subject identification device, electronic device and medium | 2020-02-28 | 2020-02-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111368698B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348778B (en) * | 2020-10-21 | 2023-10-27 | 深圳市优必选科技股份有限公司 | Object identification method, device, terminal equipment and storage medium |
CN112733650B (en) * | 2020-12-29 | 2024-05-07 | 深圳云天励飞技术股份有限公司 | Target face detection method and device, terminal equipment and storage medium |
CN112613570B (en) * | 2020-12-29 | 2024-06-11 | 深圳云天励飞技术股份有限公司 | Image detection method, image detection device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102625036A (en) * | 2011-01-25 | 2012-08-01 | 株式会社尼康 | Image processing apparatus, image capturing apparatus and recording medium |
CN108366203A (en) * | 2018-03-01 | 2018-08-03 | 北京金山安全软件有限公司 | Composition method, composition device, electronic equipment and storage medium |
CN108712609A (en) * | 2018-05-17 | 2018-10-26 | Oppo广东移动通信有限公司 | Focusing process method, apparatus, equipment and storage medium |
CN110086992A (en) * | 2019-04-29 | 2019-08-02 | 努比亚技术有限公司 | Filming control method, mobile terminal and the computer storage medium of mobile terminal |
CN110473185A (en) * | 2019-08-07 | 2019-11-19 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
WO2019233341A1 (en) * | 2018-06-08 | 2019-12-12 | Oppo广东移动通信有限公司 | Image processing method and apparatus, computer readable storage medium, and computer device |
CN110569854A (en) * | 2019-09-12 | 2019-12-13 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8542950B2 (en) * | 2009-06-02 | 2013-09-24 | Yahoo! Inc. | Finding iconic images |
- 2020-02-28: application CN202010130132.1A filed in CN; published as patent CN111368698B (active)
Legal Events

Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |